
 

Intuition, Incubation, and Insight:

Implicit Cognition in Problem Solving

 

Jennifer Dorfman

University of Memphis

 

Victor A. Shames

University of Arizona

 

John F. Kihlstrom

Yale University

 

Note: An edited version of this paper appeared in G. Underwood (Ed.), Implicit Cognition (Oxford: Oxford University Press, 1996).  

 

A number of isolated phenomena that usually make up the potpourri of topics grouped under thinking, such as functional fixity, the Einstellung effect, insight, incubation, and so on, must surely be included within the scope of a comprehensive theory. No extensive reanalysis of these phenomena from an information processing viewpoint has been carried out, but they speak to the same basic phenomena, so should yield to such an explanation (Newell & Simon, 1973, pp. 871-872).

Introspective analyses of human problem solving have often focused on the phenomena of intuition, incubation, and insight. The thinker senses that a problem is soluble (and perhaps what direction the solution will take), but fails to solve it on his or her first attempt; later, after a period in which he or she has been occupied with other concerns (or, perhaps, with nothing at all), the solution to the problem emerges full-blown into conscious awareness. These phenomena, which have long intrigued observers of problem solving, have also long eluded scientific analysis -- in part because they seem to implicate unconscious processes. The Gestalt psychologists, of course, featured insight in their theories of thinking and problem solving, but interest in intuition, incubation, and insight, among other mentalistic phenomena, declined during the dominance of behaviorism.

With the advent of the cognitive revolution in the 1960s, psychologists returned their attention to thinking, so that a large literature has developed on problems of categorization, reasoning, and judgment as well as problem solving per se. While our understanding of these aspects of thinking has advanced considerably over the past 35 years, most of this research has focused on the subject's performance on various problem-solving tasks, rather than the subject's experience during problem solving. In this chapter, we wish to revive a concern with problem-solving experience, in particular the experiences of intuition and incubation leading to insight, and to argue that recent work on implicit memory provides a model for examining the role of unconscious processes during problem solving.

 

The Stages of Thought

The popular emphasis on intuition, incubation, and insight is exemplified by Wallas's (1926) classic analysis of "a single achievement of thought" (p. 79). As is well known, Wallas decomposed problem solving into a set of stages. The preparation stage consists of the accumulation of knowledge and the mastery of the logical rules which govern the particular domain in which the problem resides. It also involves the adoption of a definite problem attitude, including the awareness that there is a problem to be solved, and the deliberate analysis of the problem itself. Sometimes, of course, the problem is solved at this point. This is often the case with routine problems, in which the systematic application of some algorithm will eventually arrive at the correct solution (e.g., Newell & Simon, 1973). Production of the answer must be followed by the verification stage, in which the solution is confirmed and refined (or shown to be incorrect).

On other occasions, however, this cognitive effort proves fruitless and the correct solution eludes the thinker. In these cases, Wallas argued, thinkers enter an incubation stage in which they no longer consciously think about the problem. Wallas (1926) actually distinguished between two forms of incubation: "the period of abstention may be spent either in conscious mental work on other problems, or in a relaxation from all conscious mental work" (p. 86). Wallas believed that there might be certain economies of thought achieved by leaving certain problems unfinished while working on others, but he also believed that solutions achieved by this approach suffered in depth and richness. In many cases of difficult and complex creative thought, he believed, deeper and richer solutions could be achieved by a suspension of conscious thought altogether, permitting "the free working of the unconscious or partially conscious processes of the mind" (p. 87).1 In either case, Wallas noted that the incubation period was often followed by the illumination stage, the "flash" (p. 93) in which the answer appears in the consciousness of the thinker. (This answer, too, is subject to verification.)

Wallas (1926) was quite certain that incubation involved "subconscious thought" (p. 87), and that the "instantaneous and unexpected" (p. 93) flash of illumination reflected the emergence of a previously unconscious thought into phenomenal awareness:

The Incubation stage covers two different things, of which the first is the negative fact that during Incubation we do not voluntarily or consciously think on a particular problem, and the second is the positive fact that a series of unconscious and involuntary (or foreconscious and forevoluntary) mental events may take place during that period (p. 86).

***

[T]he final "flash," or "click,"... is the culmination of a successful train of association, which may have lasted for an appreciable time, and which has probably been preceded by a series of tentative and unsuccessful trains (pp. 93-94).

***

[T]he evidence seems to show that both the unsuccessful trains of association, which might have led to the "flash" of success, and the final and successful train are normally either unconscious, or take place (with "risings" and "fallings" of consciousness as success seems to approach or retire), in that periphery or "fringe" of consciousness which surrounds the disk of full luminosity (p. 94).

Wallas used the term intimation to refer to "that moment in the Illumination stage when our fringe-consciousness of an association-train is in the state of rising consciousness which indicates that the fully conscious flash of success is coming" (p. 97). In other words, intimations are intuitions, in which thinkers know that the solution is forthcoming, even though they do not know what the solution is. This chapter argues that intuition and incubation reflect unconscious processing in problem solving, and that insight reflects the emergence of the solution into phenomenal awareness. Put another way, intuition and incubation are aspects of implicit thought, a facet of implicit cognition.

 

The Scope of Implicit Cognition

For many years, the notion that unconscious mental processes played a role in thinking and problem solving was widely accepted by nonpsychologists (e.g., Koestler, 1964), but viewed skeptically by researchers and theorists in the field.2 With respect to incubation, for example, Woodworth and Schlosberg (1954) suggested that "The obvious theory -- unconscious work, whether conceived as mental or as cerebral -- should be left as a residual hypothesis for adoption only if other, more testable hypotheses break down" (p. 840). More recently, however, there has been a shift toward wider acceptance of the idea of the psychological unconscious -- the idea that mental structures, processes, and states can influence experience, thought, and action outside of phenomenal awareness and voluntary control (Kihlstrom, 1984, 1987, 1990, 1994c).

To a great extent, the contemporary revival of interest in the psychological unconscious may be attributed to demonstrations of implicit expressions of memory in neurological patients and normal subjects (e.g., Cermak, Talbot, Chandler, & Wolbarst, 1985; Graf & Schacter, 1985; Graf, Squire, & Mandler, 1984; Shimamura & Squire, 1984; Schacter & Graf, 1986; Warrington & Weiskrantz, 1968). For example, the preservation of priming effects on such tasks as word-stem completion and lexical decision indicates that some memory of words presented during a study episode has been encoded and preserved, despite the subject's inability to remember the words themselves, or even the study task. Based on such results, Schacter (1987; Graf & Schacter, 1985) argued for a distinction between explicit and implicit expressions of memory (for complete reviews, see Lewandowsky, Dunn, & Kirsner, 1989; Schacter, 1987, 1992a, 1992b, 1995; Schacter, Chiu, & Ochsner, 1993; Roediger, 1990a, 1990b; Roediger & McDermott, 1993).3 Explicit memory refers to conscious recall or recognition of events, while implicit memory refers to any effect on experience, thought, and action that is attributable to past events, in the absence of conscious recollection of those events. Neurological patients suffering from bilateral damage to the medial temporal lobe (including the hippocampus) or the diencephalon (including the mammillary bodies) show a gross impairment of explicit memory (the amnesic syndrome) but preserved implicit memory (for a review, see Shimamura, 1994).

Similar dissociations between explicit and implicit memory can be observed in functional amnesias, including those induced by suggestions for posthypnotic amnesia (Dorfman & Kihlstrom, 1994; Kihlstrom, 1980; Kihlstrom, Tataryn, & Hoyt, 1993; Schacter & Kihlstrom, 1989; Kihlstrom & Schacter, 1995). Moreover, studies of intact, nonamnesic subjects indicate that experimental variables that affect explicit memory do not always affect implicit memory, and vice-versa (e.g., Graf & Mandler, 1984; Jacoby & Dallas, 1981; Tulving, Schacter, & Stark, 1982). For example, elaborative activity at the time of encoding has a major effect on explicit memory, but makes little impact on implicit memory. By the same token, a shift in modality at the time of presentation often affects implicit memory, but usually makes little difference to explicit memory. Thus, dissociations between explicit and implicit memory can be documented in a number of different ways.

A similar distinction can be drawn between explicit and implicit perception (Kihlstrom, Barnhardt, & Tataryn, 1992). By analogy to memory, explicit perception refers to conscious perception, as exemplified by the subject's ability to discern the presence, location, movement, and form (as well as other palpable attributes) of a stimulus. Implicit perception refers to any effect of a current stimulus event on experience, thought, and action, in the absence of conscious perception of that event. As in the domain of memory, it appears that explicit and implicit perception can be dissociated in a number of different ways. For example, implicit perception is familiar in the literature on so-called "subliminal" perception, in which a subject's behavior is shown to be influenced by stimulus events that are too weak or too brief to be consciously perceived (for reviews, see Bornstein & Pittman, 1992). It has long been known that priming effects similar to those observed in studies of implicit memory can be produced by stimuli that are presented for very brief durations, or masked by preceding or subsequent events (e.g., Forster & Davis, 1984; Fowler, Wolford, Slade, & Tassinary, 1981; Greenwald, Klinger, & Liu, 1989; Marcel, 1983). Subliminal perception appears to be analytically limited, in that there are restrictions on the amount of processing subliminal stimuli can receive (Greenwald, 1992), and on the amount of information that can be extracted from them. The same sorts of limits obtain in many cases of implicit memory as well. For example, implicit memory produced by degraded stimulus presentations is often limited to knowledge about perceptual structure rather than meaning (Tulving & Schacter, 1990). Similarly, subliminal priming is often limited to repetition effects, which can be mediated by perceptual rather than semantic representations of the stimuli. Despite their limitations, subliminal effects provide evidence for a distinction between the subjective threshold, the point at which a stimulus cannot be consciously perceived, and the objective threshold, the point at which all differential response to a stimulus disappears (Cheesman & Merikle, 1984, 1985, 1986; Merikle & Reingold, 1992). Semantic processing may be possible for stimuli presented near the subjective threshold, while only perceptual processing may be possible for stimuli presented near the objective threshold.

The construct of implicit perception also includes effects attributable to unperceived stimuli that are not themselves subliminal. For example, patients with damage to the striate cortex of the occipital lobe report a lack of visual experience in corresponding portions of their visual fields; yet, they are often able to make above-chance "guesses" about the properties of stimuli presented to their scotomas (Weiskrantz, 1986). This "blindsight" has a parallel in the behavior of patients with visual neglect resulting from unilateral lesions in the right temporoparietal regions of the cerebral cortex (Bisiach, 1992). These patients appear unresponsive to stimuli in the corresponding portions of the left (contralateral) visual field, but careful observation sometimes reveals that their test performance is in fact influenced by the neglected stimuli (e.g., Marshall & Halligan, 1988). Dissociations such as these are also familiar in the functional blindness and deafness associated with the conversion disorders, and in the parallel phenomena induced in normal subjects by hypnotic suggestion (e.g., Bryant & McConkey, 1989a, 1989b; for reviews see Kihlstrom, 1992, 1994b; Kihlstrom et al., 1992; Kihlstrom et al., 1993).

Evidence for implicit perception often comes from the same sorts of tasks, such as priming, that provide evidence for implicit memory. Although it could be argued on these grounds that implicit perception and implicit memory are essentially the same phenomena, there are important distinctions between them (Kihlstrom et al., 1993). Still, as in any taxonomy, there are borderline cases. For example, demonstrations of implicit memory for items presented during general anesthesia (for reviews, see Cork, Couture, & Kihlstrom, 1994; Kihlstrom, 1993b; Kihlstrom & Schacter, 1990) probably provide evidence for implicit perception as well, because the patients were (at least ostensibly) unconscious at the time the target events occurred.

Taken together, implicit memory and implicit perception exemplify the domain of the cognitive unconscious (Kihlstrom, 1987). However, there is evidence of implicit processing in other cognitive domains as well. For example, some lines of research provide evidence for implicit learning, by which subjects acquire new knowledge in the absence of any conscious intention to learn, and without awareness of what they have learned. In a paradigm refined by Reber (1967, 1993; see also Berry & Dienes, 1994; Lewicki, 1986), for instance, subjects are asked to memorize a list of letter strings that (unknown to them at the time) conform to the rules of an artificial grammar. They are then apprised of this fact, and asked to distinguish between new (unstudied) instances of grammatical strings and ungrammatical ones. Normal subjects can perform such a task with above-chance accuracy, even though they cannot specify the grammatical rules which govern their performance. Questions about the role of consciously accessible knowledge in such paradigms persist, but in principle the findings conform to the general model of the explicit-implicit distinction. Thus, implicit learning may be defined as a change in experience, thought, or action which is attributable to the acquisition of new semantic or procedural knowledge (as opposed to strictly episodic knowledge about past events), in the absence of conscious awareness of that knowledge.

Similarly, at least in principle, implicit thought (Kihlstrom, 1990) may be observed whenever the subject's experience, thought, and action are affected by a thought (e.g., image, judgment, conclusion, or solution to a problem) in the absence of conscious awareness of that thought. Implicit thought is a difficult concept, because the idea of thinking is part and parcel of our everyday notions of consciousness. For William James (1890), for example, unconscious thought was a contradiction in terms (for a discussion of this view, see Kihlstrom, 1984; Kihlstrom & Tobias, 1991). In a recent review, Lockhart and Blackburn (1994) presented evidence for the role of implicit memory and implicit learning in problem solving. In this chapter, we consider implicit thought as a category in and of itself, by reviewing experiments which bring into the laboratory the intuition and incubation experiences which Wallas (1926) found in problem solvers who are on the verge of insight.

 

The Role of Intuitions in Problem Solving

In philosophy, intuition refers to the immediate apprehension of an idea without any conscious analysis. In ordinary language, however, the term refers to the person's feeling that a decision, judgment, or solution is correct, in the absence of supporting evidence (Bowers, 1994). Thus, problem solvers' feelings of warmth reflect their belief that they are getting closer to a solution, even though they do not know what the solution is. Intuition, so defined, is a form of metacognition reflecting people's knowledge or beliefs about their cognitive states and processes (Flavell, 1979; for reviews, see Metcalfe & Shimamura, 1994; Nelson, 1992).

In some sense, the problem solver's intuitions and feelings of warmth are analogous to the feeling of knowing (FOK; Hart, 1965, 1967), the tip-of-the-tongue state (TOT; Brown & McNeill, 1966), and other phenomena within the domain of metamemory (Nelson & Narens, 1990). In the classic feeling-of-knowing paradigm, the subject is posed a question (e.g., "What is the capital of Somalia?"); if the answer is not forthcoming, the subject is asked to judge the likelihood that he or she will recognize the correct answer. The accuracy of these self-predictions is then assessed by means of an actual recognition test. In the same way, people can be presented with a difficult problem (i.e., one that is not immediately soluble), and asked to judge the likelihood that they will be able to solve it, or to generate feelings of warmth (FOWs) representing how close they think they are to the solution. The accuracy of these FOWs (and other, similar, intuitions) is then determined by whether the subject's attempt is successful.

One difference between metacognition in memory and problem solving is that while the mechanisms of FOKs are fairly well known, the mechanisms of FOWs are less clear. According to Nelson and his colleagues (Nelson, Gerler, & Narens, 1984), metamemory judgments like the FOK are mediated by two different sorts of processes, trace-access and inference. Trace access involves gaining partial access to information stored in a memory trace. Thus, while in the TOT state, subjects can provide accurate information about the orthographic and phonemic features of a word, without being able to produce the word itself. Inference involves the subjects' judgments about whether they might have acquired the critical information at some time in the past. For example, most people give low FOK judgments to Fidel Castro's telephone number, because they know that they have never been in a position to acquire that information. By contrast, people give higher FOK judgments for the White House telephone number, because they can remember many instances (political advertisements in the newspaper, for example) where they might have encountered that factoid.

Just as FOKs are intuitions about knowledge that has not yet been retrieved from memory, so the FOWs experienced in problem solving are intuitions about solutions that have not yet been achieved. A possible metacognitive mechanism for FOWs is suggested by the problem solving theory known as the General Problem Solver (GPS; Newell & Simon, 1973). According to Newell and Simon, the mental representation of a problem, or problem space, includes the initial state, the goal state, intermediate states (or subgoals) to be achieved on the way to the goal state, and operations by which one state is transformed into another. By means of difference reduction, for example, the problem solver engages in any activity that reduces the difference between the current state and the goal state. A more complicated strategy is means-end analysis, by which the solver analyzes the problem space into a set of differences (e.g., between subgoals and the final goal) and identifies and executes operations that will eliminate or reduce the most important differences between the current state and the goal state. The difference between the two techniques is that means-end analysis may sometimes temporarily increase the distance between the current state and the goal. Thus, the Hobbits and Orcs problem actually requires that the thinker undo some of the progress made in getting the creatures to the other side of the river (Thomas, 1974). Nevertheless, GPS assumes that the thinker has in short-term memory a representation of the distance between the current state and the goal state, and this representation is the basis for feelings of warmth:

We can now see how cues become available sequentially, and why, consequently, strategies of search that use the cues are possible. Each time a process is applied to an initial state, a new state with a new description is produced. If there are relations (known to or learnable by the problem solver) between characteristics of the state description and distance from the goal (i.e., the final description that represents the solution), these relations can be used to tell when the problem solver is getting "warmer" or "colder," hence, whether or not he should continue along a path defined by some sequence of processes (Newell et al., 1962/1979, p. 155).
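
To make the difference-reduction idea concrete, here is a minimal sketch in Python of a greedy difference-reduction search that reports a warmth signal after each step. The toy puzzle, the operators, and the particular warmth formula are our own invented illustration, not Newell and Simon's implementation; the sketch is only a schematic of the account just quoted.

```python
# A minimal sketch (not GPS itself) of difference reduction with a "warmth"
# signal. The toy puzzle, operators, and warmth formula are invented for
# illustration: transform START into GOAL using the operators below, and let
# warmth be the (inverted, scaled) distance between the current state and goal.

START, GOAL = 1, 37
OPERATORS = {                      # hypothetical operators, for illustration only
    "add 3":      lambda n: n + 3,
    "double":     lambda n: n * 2,
    "subtract 1": lambda n: n - 1,
}

def distance(state, goal=GOAL):
    return abs(goal - state)

def warmth(state, goal=GOAL):
    # Higher when the current state is closer to the goal (0..1 scale).
    return 1.0 / (1.0 + distance(state, goal))

def difference_reduction(state, max_steps=20):
    """Greedily apply whichever operator most reduces the distance to the
    goal, reporting warmth after each step."""
    for step in range(max_steps):
        if state == GOAL:
            print(f"solved in {step} steps")
            return state
        name, op = min(OPERATORS.items(),
                       key=lambda kv: distance(kv[1](state)))
        state = op(state)
        print(f"step {step + 1}: {name:>10} -> {state:3d}  warmth = {warmth(state):.2f}")
    return state

if __name__ == "__main__":
    difference_reduction(START)
```

On this account, warmth is simply a readout of the represented distance between the current state and the goal state.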

A clear implication of the GPS analysis of FOWs is that they should be relatively accurate predictors of problem-solving success. Unfortunately, in contrast to the relative accuracy of FOKs in memory-retrieval situations, intuitions about problem solving are not necessarily accurate. Metcalfe (1986a, 1986b; Metcalfe & Wiebe, 1987) has argued that FOWs are accurate when problems are solved through memory retrieval, but not when they are solved through insight processes. In other words, when subjects are presented with insight problems, their FOW judgments do not predict whether they will actually solve the problems. In fact, Metcalfe has consistently found that incremental warmth ratings over a trial are actually associated with errors in problem solving. When insight problems are correctly unraveled, FOWs tend to remain stable at low levels, until virtually the moment that the problem is solved. Thus, either subjects have no intuitions, or their intuitions are in error.

Metcalfe (1986a, 1986b; Metcalfe & Wiebe, 1987) interpreted this surprising outcome in terms of Gestalt accounts of problem solving (Luchins & Luchins, 1970; Scheerer, 1963; Wertheimer, 1959; for reviews, see Dominowski, 1981; Ellen, 1982; Ohlsson, 1984; Weisberg, 1992; Weisberg & Alba, 1981a, 1981b), which posit a fundamental discontinuity between the deliberate processing of information and the spontaneous restructuring of a problem. Some problems can be solved by memory retrieval, and in these cases FOWs accurately reflect the gradual accumulation of problem-relevant information. Other problems must be solved by restructuring, where metacognitive experiences may be lacking or even invalid. However, many problems seem to admit solution by either retrieval or restructuring. Thus, Metcalfe has classified anagrams as insight problems, on the basis of the finding that metacognitive experiences do not predict success of solution. However, such problems do not require insight for their solution, as the classic insight problems (Maier, 1930, 1931) seem to do. That is, anagrams can also be solved by a brute-force process of memory retrieval, in which the thinker rearranges letters systematically and checks each version against the mental lexicon.

In actual practice, it seems difficult to assign problems to insight and noninsight categories on a principled basis, and in the absence of objective criteria, a definition which states that insight problems are simply those in which intuitions are invalid seems almost tautological. In fact, Lockhart and Blackburn (1994) have defined insight as that component of problem solving which involves conceptual access, as opposed to procedural restructuring. By this criterion, anagrams are not pure insight problems, and may not qualify as insight problems at all. In any event, the requirement that true insight must be unaccompanied by valid metacognitions violates our frequent everyday experience -- phenomenological evidence that must also be taken seriously (Shames & Kihlstrom, 1994).

In what follows, we wish to use the term insight problem solving without entering the debate over what constitutes an "insight" problem, or "insight" problem solving. Rather, we wish to use the term insight in the same way we use the term intuition -- in its ordinary-language, folk-psychological sense, which places primary emphasis on the "aha!" experience (Simon, 1986/1989). As described by Simon (1986/1989, p. 483):

The "aha" phenomenon differs from other instances of problem solution... only in that the sudden solution is here preceded by a shorter or longer period during which the subject was unable to solve the problem, or even to seem to make progress toward its solution. The "aha" may occur while the subject is working on the problem, or after the problem has been put aside for some period of "incubation".

From the point of view of this chapter, insight occurs at the moment when the solution to a problem emerges into consciousness. What concerns us are the precursors to insight: the subject's intuitions, based on the activation of knowledge structures below the threshold for awareness; and the incubation period during which the unconscious becomes conscious.

 

Evidence for Intuitions in Insight Problem Solving

Some evidence for valid intuitions in insight problem solving comes from a series of studies reported by Bowers, Regehr, Balthazard, and Parker (1990; see also Bowers, Farvolden, & Mermigis, 1995), and based in turn on the Remote Associates Test (RAT) developed by Mednick (1962; Mednick & Mednick, 1967) for the assessment of creativity. Mednick defined creative thinking as the combination of ideas that were not previously associated with each other: "The more mutually remote the elements of the new combination, the more creative the process or solution" (Mednick, 1962, p. 221). These associations may occur in a number of ways: serendipitously, by means of similarity between the previously unassociated elements (e.g., words that rhyme but are semantically unrelated), or through mediation by a third idea. In order to distinguish creative associations from the products of mentally retarded or thought-disordered individuals, Mednick further required that these new ideas prove practically useful in some way. It was not enough that they be merely original.

It follows from Mednick's definition that creative individuals are disposed to generate remote associations -- literally to connect ideas that other people fail to see as related. Accordingly, Mednick (1962) developed a procedure in which subjects are given a set of three words each unrelated to the others in associative terms, and are asked to generate an associate which all have in common. Barring psychotically loose associations, the only answer is a word which is a remote associate of each of the test items. A favorite example is:

 

DEMOCRAT

GIRL

FAVOR.

(solution)

Bowers et al. (1990, Experiment 1) developed a variant on the RAT known as the "dyads of triads" (DOT) procedure, in which subjects were presented with two RAT-like items. Only one of these was soluble or coherent -- i.e., the three words did in fact have some distant associate in common. The other was considered insoluble or incoherent, because -- again barring psychotically loose associations -- the elements had no associate in common. An example is:

PLAYING

CREDIT

REPORT

STILL

PAGES

MUSIC

 (solution)

The subjects were asked to inspect the triads and generate the correct solution to the coherent one. After a few seconds, they were asked to indicate which triad was coherent. Of course, in the absence of an actual solution, this judgment represents a guess, hunch, or intuition on the part of subjects. Across five samples of college undergraduates, Bowers et al. (1990) found that subjects were able to choose which DOT triads were soluble at rates significantly greater than chance. The subjects were rarely certain of their choices, but greater accuracy was associated with higher confidence ratings. Thus, the subjects' intuitions about the DOT items were accurate more often than not. From Bowers's (1984, 1994) point of view, these choices are not arbitrary -- rather, they are informed guesses reflecting the processing of information outside of awareness.

Further analysis of the DOT test by Bowers et al. (1990) indicated that there are two types of coherent triads. In semantically convergent triads, the remote associate preserves a single meaning across the three elements. Thus, in the triad

GOAT

PASS

GREEN,

(solution)

the solution preserves a single meaning regardless of the element with which it is associated. On the other hand, some triads are semantically divergent, in that the remote associate has a different meaning in the context of each element (or, at least, the meaning of the associate in the context of one element is different from its meaning in the context of the other two). Thus, in the triad

STRIKE

SAME

TENNIS,

(solution)

the solution has a different meaning in association with each element. Bowers et al. (1990) found that the degree of semantic convergence was significantly correlated with the likelihood that subjects would correctly guess which DOT item was coherent.

Bowers et al. (1990, Experiment 2) also performed a conceptual replication of the DOT procedure in the nonverbal domain. For this purpose, they constructed a new test, the Waterloo Gestalt Closure Task (WGCT), consisting of pairs of gestalt closure figures similar to those developed by Street (1931), Mooney (1957), and Harshman (1974). Each item of the WGCT contained two components: the coherent portion was a fragmented representation of a familiar object; the incoherent portion was the same stimulus rendered meaningless by rotating and displacing the fragments. As in the DOT procedure, subjects were given a few seconds to generate the name of the coherent gestalt. Then they were asked to guess which figure was coherent. Again, these guesses were correct at rates significantly greater than chance. Even when subjects rated their confidence levels as zero, their guesses were still correct above chance levels.

Which aspects of the drawings contribute to these successful intuitions is not known at present. In a study reported informally by Bowers et al. (1995), subjects were presented with a coherent WGCT item or its incoherent counterpart, followed by a list of four words. When asked to indicate which word was the correct solution to the gestalt, the subjects were correct well above chance on coherent gestalts, compared to chance levels for incoherent ones. Thus, subjects are responding to more or less specific pictorial content in the stimulus.

Returning to the verbal domain, Bowers et al. (1990, Experiment 3a) obtained further evidence of the role of intuitions in an expansion of the RAT procedure known as the Accumulated Clues Task (ACT). The ACT items consisted of 15 words, all of which had a single associate in common. For the first 12 words, the associate was relatively remote, while for the last three words, the associate was relatively close. An example is:

TIMES

INCH

DEAL

CORNER

PEG

HEAD

FOOT

DANCE

PERSON

TOWN

MATH

FOUR

BLOCK

TABLE

BOX.

(solution)

In the experiment, the subjects saw the words one at a time, in cumulative sequence, and were asked to generate the one word which was an associate of all the words in the list. Of course, this was impossible on the first trial, because a word like TIMES has many associates. But relatively few of these are also associates of INCH, and even fewer are also associates of DEAL. Thus, evidence accumulates with each clue as to the correct answer. In the course of solving each item, the subjects were asked to generate at least one free association to each cue, and to indicate whether any of these associations appeared promising as solutions to the problem as a whole. Associates so designated may properly be called hunches. Analysis of these responses (Experiment 3b) showed that they progressively increased in associative closeness to the actual solution to the problem as the sequence proceeded. Contrary to the findings of Metcalfe (1986a, 1986b; Metcalfe & Wiebe, 1987), subjects rarely attained correct solutions suddenly, without any hunches or premonitions.

A quite different line of research, reported by Durso and his colleagues (Durso, Rea, & Dayton, 1994; see also Dayton, Durso, & Shepard, 1990), also provides evidence that subjects can have correct intuitions in the course of insight problem solving. They presented subjects with the following vignette, which the subjects were asked to explicate:

A man walks into a bar and asks for a glass of water. The bartender points a shotgun at the man. The man says "Thank you" and walks out.

(solution)

In order to help them solve the problem, the subjects were allowed to ask the experimenter a series of yes-no questions for up to two hours -- at the end of which half of them still had not solved the problem. Then they were asked to rate the relatedness of all possible pairings of 14 words related to the problem. Some of these targets were explicitly stated in the story or otherwise relevant to it (e.g., man, bartender), while others were implicit in the correct solution (e.g., surprise, remedy). The relatedness judgments were then processed by an algorithm, along the lines of those used in multidimensional scaling, which yields a graphical representation of the links among problem-related concepts in the subjects' minds -- literally a map of the problem space. The graphs of subjects who had solved the problem were completely different (r = .00) from those of subjects who had not done so. That is, concepts implicitly related to the solution were much more closely connected in the minds of those who had achieved insight into the problem than they were in those who had not. For example, the nonsolvers' graphs centered on explicit concepts, while the solvers' centered on implicit ones. This suggests that subjects' underlying cognitive structures changed as they achieved insight into the problem.
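
As an informal illustration of what such a map of the problem space might look like, the sketch below (in Python) derives a crude concept graph by keeping only those pairwise relatedness ratings that exceed a threshold, and then correlates two groups' rating profiles. The concept names come from the description above, but the numbers, the thresholding rule, and the correlation-of-ratings comparison are simplifications of our own devising; Durso et al. used a link-deriving algorithm along the lines of multidimensional scaling, not this procedure.

```python
# A simplified, hypothetical illustration -- not Durso et al.'s algorithm.
# Mean pairwise relatedness ratings are turned into a concept graph by
# keeping only links above a threshold; two groups' rating profiles are
# then correlated. All numbers below are invented for illustration.

from itertools import combinations
from statistics import correlation  # Python 3.10+

CONCEPTS = ["man", "bartender", "shotgun", "surprise", "remedy"]
PAIRS = list(combinations(CONCEPTS, 2))        # 10 concept pairs

# Hypothetical mean relatedness ratings (0-1), one per pair, in PAIRS order.
solver_ratings    = [0.4, 0.3, 0.8, 0.7, 0.5, 0.2, 0.3, 0.9, 0.6, 0.8]
nonsolver_ratings = [0.9, 0.7, 0.2, 0.1, 0.8, 0.3, 0.2, 0.1, 0.3, 0.2]

def concept_graph(ratings, threshold=0.5):
    """Keep only the links whose rated relatedness exceeds the threshold."""
    return {pair for pair, r in zip(PAIRS, ratings) if r > threshold}

print("solvers' map:   ", sorted(concept_graph(solver_ratings)))
print("nonsolvers' map:", sorted(concept_graph(nonsolver_ratings)))

# Agreement between the two groups' relatedness structures; Durso et al.
# report essentially zero correlation between solvers and nonsolvers.
print("r =", round(correlation(solver_ratings, nonsolver_ratings), 2))
```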

In a follow-up study, a new group of subjects rated the relatedness of concepts important to the solution, as well as control concepts, at several points in the course of problem solving. Analysis showed that the insight-related concepts were initially viewed as highly unrelated. Some time before the insight was achieved, the similarity of insight-related concepts increased significantly; another increase was observed when the problem was actually solved. This pattern of ratings indicates that during problem solving subjects gradually increase their focus on those concepts important to the solution. In other words, the subjects' intuitions concerning the insight-relevant concepts, and the relations among them, were affected by the ultimate solution to the problem, even though the subjects were not aware of the solution itself:

Like dynamite, the insightful solution explodes on the solver's cognitive landscape with breathtaking suddenness, but if one looks closely, a long fuse warns of the impending organization (Durso et al., 1994, p. 98).

 

The Role of Activation in Intuition and Incubation

Bowers et al. (1990, 1995) concluded from their research that intuitions in problem solving reflect the automatic and unconscious activation and integration of knowledge stored in memory (Anderson, 1983; Collins & Loftus, 1975; McClelland & Rumelhart, 1981; Meyer & Schvaneveldt, 1971). This suggestion bears a close resemblance to a spreading-activation view of intuition and incubation initially proposed by Yaniv and Meyer (1987) based on a study of retrieval from semantic memory (for a critique of spreading activation theory, see Ratcliff & McKoon, 1981, 1988, 1994). In their experiments, Yaniv and Meyer (1987) presented subjects with the definitions of uncommon words, and asked them to generate the word itself. An example is:

LARGE BRIGHT COLORED HANDKERCHIEF; BRIGHTLY COLORED SQUARE OF SILK MATERIAL WITH RED OR YELLOW SPOTS, USUALLY WORN AROUND THE NECK.

(solution)

After presentation of the definition, the subjects were asked to generate the word. If they could do so, they rated their confidence that they were correct; if they could not, they made TOT and FOK judgments. In either case, the trial ended with a lexical decision task in which the subject was presented with six items, including some English words and some legal nonwords; among the English words presented on each trial were the answer to the immediately preceding word-definition problem and a control item. For example:

SPENDING

DASCRIBE

BANDANNA

TRINSFER

ASTEROID

UMBRELLA.

The general finding was that subjects showed priming on the word targeted by the definition -- even when they were unable to generate the word itself. This was especially the case when subjects experienced the TOT state and gave high FOK ratings. Priming was correlated with the magnitude of feeling-of-knowing ratings of the target words, suggesting that the feeling-of-knowing ratings were based on priming-like effects. A second experiment, which extended the period between the definition and the lexical-decision task by four minutes (filled by other word-generation items), confirmed this finding. This result indicates that the priming detected in the first experiment can last long enough to contribute to incubation effects -- despite increased memory load created by the filler task (unfortunately, the design of the experiments did not permit an analysis for incubation effects themselves).

Yaniv and Meyer (1987) proposed a model of priming effects in semantic memory which offers a mechanism by which intuition and incubation can occur in problem solving. In their view, presentation of the definition activates relevant nodes in semantic memory. Activation then spreads from these nodes until it reaches a node representing the target and accumulates there to a level sufficient to bring the target into conscious awareness. These levels of activation can persist for a substantial period of time, despite the subject's engagement in other cognitive activities. Incubation effects reflect the accumulated influence of cues contained in the original statement of the problem, inferences generated by the subject's initial work on the problem, and new contextual cues processed during the ostensibly dormant period. But even before the threshold for conscious awareness is crossed, subthreshold levels of activation can sensitize the problem solver to new information pertinent to the solution. According to Yaniv and Meyer, this sensitization, revealed by priming effects in lexical decision, underlies FOKs and other metamemory judgments; we believe, with Bowers et al. (1990, 1995), that it underlies intuitions in problem solving as well.
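
A small sketch may make the proposed mechanism easier to picture. The Python fragment below is a toy version of this kind of spreading-activation account, not Yaniv and Meyer's model itself: the associative network, weights, threshold, decay constant, and reaction-time formula are all invented for illustration. Activation converging on the target stays below the threshold for conscious access, yet still shortens a simulated lexical-decision latency relative to a control word.

```python
# A toy sketch of subthreshold spreading activation -- not Yaniv and Meyer's
# model. Cues send activation through invented associative links; the target
# becomes consciously available only if its summed activation crosses a
# threshold, but subthreshold activation still speeds a later lexical decision.

NETWORK = {   # hypothetical links: cue -> {associate: weight}
    "handkerchief": {"bandanna": 0.35, "tissue": 0.30},
    "silk":         {"bandanna": 0.25, "scarf": 0.40},
    "neck":         {"bandanna": 0.30, "scarf": 0.35, "collar": 0.30},
}
THRESHOLD = 1.0   # activation needed for conscious access
DECAY = 0.9       # fraction of activation retained across a filled delay

def spread(cues):
    """Sum the activation each cue sends to its associates."""
    activation = {}
    for cue in cues:
        for node, weight in NETWORK.get(cue, {}).items():
            activation[node] = activation.get(node, 0.0) + weight
    return activation

def lexical_decision_rt(word, activation, base_rt=600.0, gain=150.0):
    """Baseline RT (ms) shortened in proportion to residual activation."""
    return base_rt - gain * activation.get(word, 0.0)

activation = spread(["handkerchief", "silk", "neck"])
target = max(activation, key=activation.get)                   # "bandanna"
print("conscious solution?", activation[target] >= THRESHOLD)  # False: stays implicit
delayed = {word: a * DECAY for word, a in activation.items()}  # after a filled delay
print("RT to target :", lexical_decision_rt("bandanna", delayed), "ms")
print("RT to control:", lexical_decision_rt("asteroid", delayed), "ms")
```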

In the present context, the most important point is Yaniv and Meyer's (1987) proposal that subjects are sensitive to, or influenced by, knowledge structures that are activated below the level required for conscious awareness.4 Similarly, Bowers et al. (1990, 1995) argued that subthreshold levels of activation, spreading from nodes representing the clues and converging on the node representing the solution, served as the basis for subjects' intuitions about which DOT and ACT problems were soluble and what the solutions were. A similar process, constructed within the framework of models of object-recognition (e.g., Biederman, 1987; Peterson, 1993; Tarr, 1995), could account for intuitions and insights on nonverbal problems such as those posed by the WGCT.

Unfortunately, the evidence for such a proposal is relatively indirect. In fact, accurate judgments of coherence on the DOT might be based on other kinds of processes. For example, coherent and incoherent test items might differ on a dimension of interclue relatedness. That is, the words PLAYING, CREDIT, and REPORT, which made up a coherent triad in their experiments, might be more closely associated with each other than are the words STILL, PAGES, and MUSIC, which comprised an incoherent one. If so, subjects could make accurate judgments of coherence by attending solely to the relations among cues, rather than picking up on the convergence of the activation on some item stored in semantic memory.

To test this hypothesis, Dorfman (1990, Normative Study) constructed RAT-like items consisting of two sets of six words (Dyads of Sextuples), one coherent and one incoherent. For example:

SCHOOL

CHAIR

JUMP

NOON

HEELS 

WIRE

KITTEN

SCOTCH

SALAD

BREEZE

MILE

INSURANCE

 (solution)

Dorfman then asked a group of subjects to rate (on a 0-5 scale) the semantic relatedness of the items in each sextuple. Another group of subjects actually solved the problems (the average subject solved 74% of them) and then made dichotomous judgments of solubility analogous to the coherence judgments of Bowers et al. (1990). The judgments of solubility were highly correlated (r = .89) with actual ease of solution. More important, the items of coherent sextuples were judged to be significantly more semantically related than those of incoherent ones (M = 3.39 vs. 1.90). This indicates that interclue relatedness is a potential clue to coherence. However, correlation analysis indicated that relatedness was only modestly related to either solubility judgments (r = .39) or actual solution accuracy (r = .29). In a series of subsequent experiments (Experiments 1 and 2, described below), relatedness continued to show only modest correlations with solution accuracy: rs = .29 and .32, respectively. Thus, while interclue relatedness may play some role in metacognitive judgments of coherence, other factors are also important.

In a more direct test of the activation hypothesis, Dorfman (1990, Experiments 1 and 2) examined the effects of external cues on metacognitive judgments in problem solving. In Experiment 1, subjects saw one clue for each of 40 sextuples (20 coherent and soluble, 20 incoherent and insoluble), and then generated a word that was associatively related to that clue. If this associate was in fact the correct solution to a coherent sextuple, that problem was eliminated from further consideration (the subjects were given no feedback about their performance). In the second phase, the subjects were given two such clues, generated a word that was associatively related to both, and made a judgment of the coherence of the item. Again, items that were correctly solved were eliminated from consideration. The problem-solving and coherence tasks were repeated for sets of three, four, five, and six clues.

In this experiment, subjects were more likely to solve coherent problems when they received a greater number of clues. Thus, subjects correctly solved 18% of problems in the initial phase of the experiment, when only one clue was given. This success was of course entirely adventitious, reflecting accidents of word association. They solved another 22% of the problems after they received a second clue, and yet another 20% after receiving the third clue. By the time all six clues had been given, the average subject had solved 84% of the problems. Thus, the incremental presentation of clues promoted discovery of insightful solutions. As in the research of Bowers et al. (1990), the subjects were able to discriminate coherent from incoherent problems, even though they were unable to provide the correct solutions. Thus, after the second trial, at which point approximately 40% of the items had been solved, the remaining soluble problems received average coherence ratings of 1.34, compared to 1.07 for the insoluble problems. The coherence ratings were especially high for soluble problems which were eventually solved, compared to those for which the subject failed to find the solution, Ms = 1.40 vs. 1.13, respectively.

Experiment 2 substituted warmth judgments for coherence ratings, and yielded comparable results. The subjects correctly solved 17% of problems in the initial phase of the experiment, when only one clue was given, another 26% after receiving the second clue, and yet another 20% after receiving the third clue; by the time all six clues had been given, the average subject had solved 91% of the problems. As in the first experiment, the subjects were able to discriminate coherent from incoherent problems, even though they were unable to provide the correct solutions. Thus, after the second trial, with 43% of the items solved, the remaining soluble problems received average warmth ratings of 1.38, compared to 1.09 for the insoluble problems. This time, however, the warmth ratings for soluble problems which were eventually solved were no greater than for those that remained unsolved; in fact, they were somewhat lower, Ms = 1.36 vs. 1.48, respectively. As in the study by Bowers et al. (1990), intuitions were related to solubility; but as in the research of Metcalfe (1986a, 1986b; Metcalfe & Wiebe, 1987), in this case warmth ratings did not predict the actual emergence of a solution.

In fact, viewed across the six phases of each experiment, the coherence and warmth judgments failed to show consistent signs of an incremental pattern. As a rule, subjects gave relatively low coherence or warmth ratings for unsolved problems, until the trial on which they solved them -- at which time, of course, the ratings showed a sharp increase. Thus, items solved in Phase 5 of Experiment 1, by which time subjects had seen five clues, received average coherence ratings of 3.67 on that trial; prior to this time, however, they had received ratings of 1.18 in Phase 2 (two clues), 1.34 in Phase 3 (three clues), and 1.02 in Phase 4 (four clues). In Experiment 2, items that were solved in Phase 5 received average warmth ratings of 3.98 in that phase, but ratings of 1.15, 0.99, and 0.44 in Phases 2-4, respectively.

Although Dorfman's (1990) Experiments 1 and 2 did not reveal the gradual buildup of coherence or warmth envisioned by Bowers et al. (1990, 1995), her findings do not necessarily require the sudden, Gestalt-like restructuring process described by Metcalfe (1986a, 1986b; Metcalfe & Wiebe, 1987). Discontinuous metacognitive experiences need not imply underlying discontinuities in the processing of information. Taken together with the research of Bowers et al. (1990), Dorfman's findings suggest that problems of this sort are solved by means of a gradual accrual of information from memory. The thinker's initial unsuccessful attempts to solve a problem activate information in memory, sensitizing him or her to encounters with additional clues (Yaniv & Meyer, 1987). Such encounters partially activate other, related memory traces, producing a build-up of underlying coherence which contributes to the intuition that the problem is soluble. Gradually coherence builds up further, to the point at which a novel recombination of information is possible. At this point, the thinker may have an explicit hypothesis or hunch as to the solution to the problem. When activation of underlying memory traces reaches the threshold needed for discovery of an insightful solution, intuition has led to insight. However, the gradual buildup of coherence may not generate progressively stronger intuitions. Metacognitions may be based on activation crossing a threshold, rather than on the absolute magnitude of activation. In this way, coherence could build up preconsciously, but intuitions would emerge only after coherence levels had reached the threshold of conscious awareness. Thus, intuitions could reflect activation processes that occur outside of conscious awareness, and perhaps outside of conscious control as well.
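
The threshold idea sketched in the preceding paragraph can be illustrated with a few lines of Python, using invented numbers: activation of the solution accrues smoothly as each clue arrives, but if reported warmth depends only on whether accumulated activation has crossed an awareness threshold, the ratings stay low and flat and then jump on the trial at which the solution becomes available, much like the pattern observed in Dorfman's experiments.

```python
# A minimal sketch, with invented numbers, of continuous accrual producing
# discontinuous reports: warmth is read out from whether activation has
# crossed an awareness threshold, not from the activation level itself.

CLUE_INCREMENTS = [0.12, 0.15, 0.18, 0.20, 0.25, 0.22]  # hypothetical gain per clue
AWARENESS_THRESHOLD = 0.75

activation = 0.0
for clue, gain in enumerate(CLUE_INCREMENTS, start=1):
    activation += gain                 # continuous, preconscious accrual of coherence
    if activation >= AWARENESS_THRESHOLD:
        warmth = 4.0                   # solution crosses into awareness: rating jumps
    else:
        warmth = 1.0                   # below threshold: ratings stay low and flat
    print(f"clue {clue}: activation = {activation:.2f}, reported warmth = {warmth}")
```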

 

Priming in Insight Problem Solving

Further evidence for the role of unconscious activation in insight problem solving was provided by a doctoral dissertation recently completed by Shames (1994). Shames noted that the intuitions of subjects on the DOT and similar tests, like the intimations described by Wallas (1926), resembled semantic priming effects of the sort familiar in the study of implicit memory -- except that it is the thought that is implicit, rather than any representation of current or past experience. Yaniv and Meyer (1987) had made the same argument, but their task involved retrieval from semantic memory. And while retrieval from semantic memory may be a problem in some sense, it does not require establishing (or finding) new associations or connections. Moreover, problem solving on such tasks is usually characterized by a much more mundane phenomenology than is problem solving accompanied by insight. Accordingly, Shames imported the RAT into Yaniv and Meyer's priming paradigm.

Shames's first experiment illustrates his basic paradigm. Subjects were presented with RAT-like items and were given five seconds to find a single associate which all three cues had in common. The subjects then indicated, by a simple yes/no response, whether they knew the answer. Under these testing conditions, the subjects' responses were negative on a majority of the trials; thus, the bulk of the triads went unsolved. Immediately after their yes/no response, six items were presented for lexical decision. As in Yaniv and Meyer (1987), these items included both words and nonwords, and the words included both the item targeted by the RAT problem and a control word. When response latencies to target and control items were compared, both item and subject analyses revealed a significant priming effect for unsolved items. That is, the subjects showed significantly shorter response latencies when making lexical decisions about targets primed by unsolved RAT items, compared to controls. Interestingly, the priming effect observed for solved RAT items was smaller and not significant.

Such a finding deserves replication, especially in light of the difference obtained between solved and unsolved problems. Accordingly, Shames (1994, Experiment 5) developed a larger set of triads and repeated his experiment. Again, he obtained a significant priming effect for unsolved triads, but this time the effect for solved triads was also significant.

One potential problem with Shames's original procedure is that triads are classified as solved or unsolved depending on subjects' self-reports of whether they knew the answer. This reliance on self-reports makes some researchers nervous. For example, subjects might have solved an RAT problem, but withheld this information from the experimenter. In an attempt to overcome this difficulty, Shames (1994, Experiment 6) classified RAT items as easy or hard based on the performance of a normative group. Hard items (which are coherent but relatively unlikely to be solved by subjects) produced significant priming on lexical decision, but easy items (which are also coherent, but relatively likely to be solved) did not.

At this point, the matter of whether solved and unsolved triads yield different levels of priming remains uncertain. Two subsequent replications of Experiment 1 by Shames (unpublished data) yielded priming for unsolved triads, but not for solved ones. Thus, the general trend of the available evidence is that unsolved problems show priming while solved ones do not. The issue is important, because some theorists (e.g., Merikle & Reingold, 1992) have argued that the best evidence for unconscious processes is provided by qualitative differences between explicit and implicit processes. The fact that unsolved items produce priming but solved items do not might be one such difference.

But why might unsolved triads produce more priming than solved ones? One possibility is suggested by Zeigarnik's (1927) classic study of memory and problem solving. As every schoolchild knows, Zeigarnik asked her subjects to engage in simple tasks, but interrupted half of these tasks before they could be completed. Later, on a test of incidental memory, the subjects tended to remember uncompleted tasks better than completed ones -- a phenomenon enshrined in psychological lore as the Zeigarnik effect (for reviews, see Butterfield, 1964; Weiner, 1966).5 Zeigarnik's explanation, in the tradition of Lewinian field theory, was that the brain systems involved in task performance remain activated until the task is completed -- creating a kind of cognitive tension. When a problem is solved, closure is achieved and the tension dissipates. In much the same way, persisting activation following failure in problem solving might be the basis of the semantic priming effects observed in Shames's experiments. In any event, the fact that semantic priming occurs on trials with unsolved triads is consistent with the hypothesis of implicit problem solving: there is an effect on experience, thought, or action of the solution to a problem, in the absence of conscious awareness of what that solution is.

One thing we know for sure is that the priming effects observed by Shames (1994) reflect actual problem-solving activity. Shames (1994, Experiment 2) simply asked subjects to study RAT items in a recognition memory paradigm. On each trial, the subjects were presented with an RAT item to memorize, followed by the lexical decision task. They were then presented with a single word and asked whether it had been part of the RAT item just presented. In contrast to the results of Experiments 1 and 5, where the RAT items were studied under a problem-solving set, the priming of RAT solutions was not significant. Thus, activation does not spread automatically from nodes representing RAT cues to a node representing their common associate; activation of solutions only occurs when the subject is operating under a particular problem-solving set.

Another line of evidence that automatic spreading activation is not enough to prime RAT solutions came from work on insoluble or incoherent RAT items. Shames (1994, Experiment 3) replaced one word in each RAT item with a misleading cue, unrelated to the target remote associate. Thus, two of the words converged on the solution, but the third did not. Under these circumstances, there was no priming effect on lexical decision. If the priming effect were due merely to activation automatically spreading from individual cue nodes, some priming should have been observed, although perhaps the degree of priming would not have been as large as when all three clues were valid. The fact that incoherent RAT items do not prime lexical decisions indicates that the priming reflects progress in problem solving. Taken together, Experiments 2 and 3 render it unlikely that the semantic priming observed by Shames is produced merely by the encoding of the individual words in the RAT item, rather than by any attempt to find an associate that all three have in common.

In Shames's experiments, subjects proceed to the lexical decision task after only five seconds of problem solving. Therefore, it is possible that the priming reflects the effect of solutions that emerged over the course of the lexical decision task; if this were the case, the observed priming would not reflect implicit problem solving at all, but rather explicit awareness of the solution to the RAT item at hand. Under such circumstances, however, subjects should be able to identify correct solutions to RAT items as such when they appear as targets for lexical decision. Accordingly, Shames (1994, Experiment 5) required subjects to make solution judgments rather than lexical decisions after presentation of the RAT items. RAT problems were followed by a series of six words and nonwords, as in the lexical decision task. As each of the six probes appeared, subjects were required to determine whether it was an associate of all three RAT cues. On average, it took subjects approximately one second longer to identify RAT targets as such than to identify them as words. Thus, it appears that the subjects do not have enough time during the lexical decision task to consciously identify items as RAT solutions.

The priming effect occurs during the incubation period, after the problem has been posed, but before the solution has arrived in consciousness. The priming effect uncovered by Shames (1994) may underlie the ability of subjects to judge which RAT items are coherent (Bowers et al., 1990, 1994). Both phenomena seem to reflect activity in the target node that remains beneath the threshold for conscious accessibility -- the "rising toward consciousness" of which Wallas (1926, p. 94) wrote. Still, further research is required to connect priming with intuitions.

 

The Problem of Incubation

As with intuition, the most convincing evidence for the role of incubation in problem solving is anecdotal. Woodworth and Schlosberg (1954), in an early and influential review, cited "a large mass of testimony from creative thinkers to the effect that laying aside a baffling problem for a while is often the only way to reach a satisfactory solution" (p. 838). Inventors, scientists, poets, and artists, they noted, routinely testify to the power of incubation (for dramatic and inspiring accounts in a variety of fields, see Koestler, 1964); they also tend to characterize incubation as an unconscious process. Campbell (1960) thought that incubation resulted from the random fusion of memory representations, an ongoing process which occurs unconsciously; when the new combination is relevant to the problem at hand, it emerges into consciousness as a creative insight -- a mental equivalent of the survival of the fittest.

However, there are other accounts of incubation which afford no role for unconscious processes (for reviews of the possibilities, see Perkins, 1981; Posner, 1973; Woodworth, 1938; Woodworth & Schlosberg, 1954). For example, prolonged thought might result in a kind of mental fatigue: after the incubation period, the thinker returns to the problem refreshed and more likely to achieve a solution. According to this view, no work at all, whether conscious or unconscious, is done on the problem during the incubation period. Alternatively, the incubation period merely affords an opportunity for further conscious work, activity which is simply forgotten after the problem has been solved -- perhaps because it occurred in brief, barely noticed spurts. Thus, what appears to the thinker as unconscious incubation is merely an after-the-fact illusion about what has gone on during the interval.

Perhaps the most popular theoretical account of incubation has been proposed by Simon (1966; see also Newell, Simon, & Shaw, 1962/1979, pp. 140-141; Simon, 1986/1989, pp. 484-485), based on an early version of GPS theory. The theory holds that at each point in problem solving, the current state is held in short-term memory, while the final state, and other subgoals yet to be achieved, are held in long-term memory. If the thinker is interrupted, or for other reasons turns attention away from the problem, the current state will be lost through decay or displacement. When the thinker returns to the problem, the problem space or subgoal hierarchy will likely have been altered by newly processed information. Thus, thinkers re-enter the problem space at a somewhat different point from the one at which they left it. This difference in vantage point, rather than any unconscious work, accounts for the effects of the incubation period. Incubation, on this view, is tantamount to a combination of forgetting and prompting. That is, the incubation period provides an opportunity to forget the misleading cues, but it also provides an opportunity for the person to encode more appropriate cues newly available in the environment. Woodworth (1938; Woodworth & Schlosberg, 1954) offered a similar hypothesis, involving the breaking of inappropriate problem-solving sets. According to this view, the incubation period affords an opportunity to forget (presumably by means of decay or interference) an inappropriate set or direction. Simon's (1966) theory adds to simple forgetting the influence of new information encoded during the incubation period.
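
Simon's account lends itself to a brief caricature in code. The sketch below is purely illustrative and is not an implementation of GPS or of any model in the sources cited here; the class, the method names, and the cue strings are assumptions introduced only to make the forgetting-plus-prompting idea concrete.

class Solver:
    def __init__(self, goal):
        self.goal = goal                 # the final state, retained in long-term memory
        self.short_term_cues = set()     # the current state and direction, held in short-term memory

    def work(self, cues):
        # Conscious work adopts whatever cues are at hand -- including misleading ones.
        self.short_term_cues |= set(cues)

    def interrupt(self):
        # An incubation interval: the contents of short-term memory decay or are displaced.
        self.short_term_cues.clear()

    def resume(self, environmental_cues):
        # On returning, the solver re-enters the problem space guided by the long-term
        # goal plus whatever cues the environment happens to supply now.
        self.short_term_cues = set(environmental_cues)
        return self.short_term_cues

solver = Solver(goal="find the remote associate")
solver.work(["misleading clue"])           # initial fixation
solver.interrupt()                         # the fixating cue is lost, not worked on
print(solver.resume(["helpful clue"]))     # {'helpful clue'}: a fresh entry point, with no unconscious work assumed

Note that nothing happens to the problem during interrupt(); on this account, the benefit of incubation comes entirely from what is lost from short-term memory and from whatever cues the environment supplies at resume().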

Theoretical explanations of incubation may be premature, in view of the fact that attempts to observe incubation under laboratory conditions have not been notably successful (for a review, see Olton, 1979). In pioneering research, Patrick (1935, 1937, 1938) observed artists, poets, and scientists as they discussed the content of their ongoing creative projects. She found that ideas recurred in different forms throughout the creative process, separated by periods in which other thoughts were present. Incubation, defined as the early appearance and disappearance of an idea that later appeared in the final product, was observed in the majority of artists' and poets' reports. However, Eindhoven and Vinacke (1952) found no evidence of a discrete incubation stage when they asked artists and nonartists to illustrate a poem under what they considered to be more naturalistic conditions. For example (p. 161):

[T]here was no sharply defined break between preparation and incubation or illumination. Incubation, itself, might be defined as thought about the problem, whether subconscious, or not, and thus would constitute an aspect of creation which persisted throughout the experiment. It probably continues, for instance, even while the subject is finishing his final sketch, at which time it may influence the modification of detail (verification).

 

From Eindhoven and Vinacke's point of view, Wallas's (1926) stages of thought occur concurrently rather than successively, and are better thought of as component processes of creativity.

More convincing evidence of incubation has been found in a number of recent experimental studies (e.g., Dreistadt, 1969; Fulgosi & Guilford, 1968, 1972; Kaplan, 1989; Mednick, Mednick, & Mednick, 1964; Murray & Denny, 1969; Peterson, 1974; Silveira, 1971), but other studies have either failed to replicate these positive findings or reported negative results (e.g., Gall & Mendelsohn, 1967; Olton, 1979; Olton & Johnson, 1976; Silveira, 1971). Moreover, a number of studies have been able to document incubation effects only following certain delays (e.g., Fulgosi & Guilford, 1968, 1972), for subjects of certain ability levels (e.g., Dominowski & Jenrick, 1972; Murray & Denny, 1969; Patrick, 1986), with certain kinds of filler tasks (e.g., Patrick, 1986), or when certain kinds of environmental cues were available (e.g., Dominowski & Jenrick, 1972; Dreistadt, 1969). Surveying the literature, Smith and Blankenship (1991) concluded that there was "neither a strong base of empirical support for the putative phenomenon of incubation nor a reliable method for observing the phenomenon in the laboratory" (p. 62).

Smith and Blankenship (1989, 1991; see also Smith, 1994) have recently introduced a method which promises to correct both of these problems. Their method involves presenting a problem along with an ostensible hint about the solution -- a hint which is actually misleading. In one variant on their method (Smith & Blankenship, 1989, Experiment 1), subjects were presented with a series of rebus puzzles (in which pictures, words, or other symbols represent the pronunciation of a word or phrase). For example:

TIMING TIM ING.

(Solution 9)

The first 15 rebuses were accompanied by helpful clues, in order to establish the subjects' set; however, the last five rebuses were accompanied by misleading clues. A sample misleading item is:

YOU JUST ME

Clue: Beside.

(Solution 10)

In another variant (Smith & Blankenship, 1991, Experiment 1), subjects received RAT items to solve, along with misleading associates. An example is:

LICK (tongue)

SPRINKLE (rain)

MINES (gold).

(Solution 11)

After an initial presentation of the problems, some subjects were retested after a filled or unfilled interval, this time without any clues; others were retested immediately. Across a number of experiments, subjects performed better if they had not been presented with the misleading clues. Where they had received the misinformation, however, the subjects performed better with an incubation interval than if they had been tested immediately, and longer incubation intervals produced both superior performance and increased forgetting of the misleading cues. Smith and Blankenship concluded that incubation occurs if subjects suffer from initial fixation in problem solving. Under these circumstances, as hypothesized by Simon (1966), the incubation period facilitates forgetting of the fixation, permitting subjects to restart problem-solving activity at a new, more useful point in the problem space. However, Smith and Blankenship did not claim that forgetting accounts for all incubation effects. Their method shows that the incubation period can be important for overcoming inappropriate sets, but it does not shed light on the role of any unconscious processes that might occur during incubation.

In the memory domain, a model for incubation effects may be found in the phenomenon of hypermnesia, or the growth of memory over time (for reviews, see Erdelyi, 1984; Kihlstrom & Barnhardt, 1993; Payne, 1987). Following a single study trial, repeated tests generally reveal that memory for any single item fluctuates over time: some items that are remembered on early trials are forgotten on later ones, and vice-versa (Tulving, 1964). Usually, intertrial loss exceeds intertrial gain, producing the classic forgetting curve of Ebbinghaus (1885). However, under some circumstances the reverse is true, resulting in a net gain in recall (e.g., Erdelyi & Becker, 1974). As noted by Mandler (1994), the recovery of previously unremembered items is analogous to the achievement of previously unattained solutions (see also Smith & Vela, 1991). The analogy is strengthened by evidence that longer intertest intervals, analogous to longer incubation periods, produce greater amounts of hypermnesia (Madigan, 1976, Experiment 2).

How this hypermnesia occurs has been a matter of some debate (Payne, 1987). Erdelyi (1988) has speculated that imaginal processing is responsible for the effect, while Roediger (1982) has proposed that hypermnesia occurs under any conditions that enhance initial recall levels (see also Madigan & O'Hara, 1992). Klein and his colleagues (Klein, Loftus, Kihlstrom, & Aseron, 1989) tested the role of encoding factors in hypermnesia, and found that elaborative processing promoted intertrial recovery, while organizational processing prevented intertrial forgetting. The fact that elaborative activity underlies the recovery of previously forgotten items is consistent with an activation account: elaboration fosters the spread of activation through the memory network, so that over trials items that were previously activated only to subthreshold levels cross threshold and become available to conscious recollection. This is very similar to the process proposed by Yaniv and Meyer (1987) to account for incubation in problem solving.

 

Autonomous and Interactive Activation in Incubation

But what kind of activation is involved in incubation? There are actually two kinds of activation effects implicated in Yaniv and Meyer's (1987) account of incubation. Autonomous activation occurs merely with the passage of time, in the absence of contact with environmental events. Interactive activation, by contrast, requires more than the passage of time: environmental cues must be presented which make contact with pre-existing knowledge structures. Time matters here chiefly because it provides an opportunity for the environment to supply new information to the thinker.
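
To make the distinction concrete, here is a toy sketch of the two kinds of activation. It is not a model proposed by Yaniv and Meyer, or by any of the other authors cited here; the node name, parameter values, and threshold are arbitrary assumptions, chosen only to show how gradual, time-based build-up and cue-driven boosts might combine to carry a solution node across a threshold for conscious access.

THRESHOLD = 1.0  # hypothetical level of activation needed for conscious access

class SolutionNode:
    def __init__(self, name, initial_activation=0.0):
        self.name = name
        self.activation = initial_activation  # subthreshold residue from earlier failed attempts

    def incubate(self, minutes, rate=0.01):
        # Autonomous activation: builds up with the mere passage of time.
        self.activation += rate * minutes
        return self.is_conscious()

    def encounter_cue(self, relatedness):
        # Interactive activation: a boost proportional to how strongly an
        # environmental cue makes contact with this node.
        self.activation += relatedness
        return self.is_conscious()

    def is_conscious(self):
        # The solution "comes to mind" only above threshold.
        return self.activation >= THRESHOLD

node = SolutionNode("SALT", initial_activation=0.4)  # left over from a failed solution attempt
print(node.incubate(30))         # False: 0.4 + 0.3 = 0.7, autonomous build-up alone falls short here
print(node.encounter_cue(0.35))  # True: the cue's boost brings the total to 1.05, over threshold

In this caricature, autonomous activation is the slow accrual inside incubate, and interactive activation is the boost from encounter_cue; neither may suffice alone, but together they can cross the threshold -- the pattern that the experiments described below were designed to tease apart.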

The classical view of unconscious incubation seems to implicate autonomous activation processes, inasmuch as it focuses on cognitive activity below the threshold of awareness. Thus, Poincaré (cited in Posner, 1973, p. 170) reported that after failing to solve some problems having to do with indeterminate ternary quadratic equations, he gave up in disgust and spent a few days at the seaside thinking of other things. One day, while walking on a bluff, it suddenly occurred to him that the equations which had earlier stumped him were identical with those of non-Euclidean geometry. However, interactive activation may also be important in incubation, if problem solving is particularly facilitated when new environmental cues are processed by a cognitive system that has been prepared for them by means of its own subthreshold activities. Thus, Gutenberg (cited in Koestler, 1964, pp. 122-123), while attending a wine harvest, realized that the steady pressure used to crush grapes might also be useful for imprinting letters. Associations of this kind happen frequently in problem solving, even without conscious awareness (Maier, 1930, 1931). It seems possible, then, that introspective accounts of insight might underestimate the importance of external stimuli in incubation.

The existing literature offers support for the role of both autonomous and interactive activation in incubation. As noted earlier, Smith and Blankenship (1989) found that a period of inactivity (e.g., sitting quietly or performing a music-perception task) facilitated the solution of verbal rebus problems. A period of inactivity alone, without the provision of additional external cues, sets the occasion for forgetting to occur; but it also provides an opportunity for autonomous activation to accumulate. Inactivity effects have also been found on the Consequences Test (Fulgosi & Guilford, 1968, 1972; Kaplan, 1989, Experiments 1-5), anagrams (Goldman, Wolters, & Winograd, 1992; Peterson, 1974), classical insight problems (Murray & Denny, 1969; Silveira, 1971), and social-relations problems (Kirkwood, 1984). By the same token, Dominowski and Jenrick (1972) found that inactivity alone was insufficient to produce incubation on a classical-style insight problem; however, the provision of external cues did facilitate performance, providing evidence for interactive activation. Similar results were obtained by Dreistadt (1969), again on an insight problem of the classical type, and by Kaplan (1989, Experiment 6), with a diverse set of puzzles and riddles.

At least one study has provided evidence for both autonomous and interactive activation in the same experiment. Mednick et al. (1964, Experiment 2) asked subjects to solve RAT problems; they were retested on unsolved items either immediately, or after a 24-hour delay. One group of delay subjects received no additional cues. However, two additional groups of delay subjects completed an analogy task in which some of the solutions were also solutions to an unsolved RAT item; one of these groups completed the analogies before the incubation period, the other afterwards. Subjects in the delay conditions performed better than those who were retested immediately, providing evidence for autonomous activation; but the effects of the incubation period were greater for those who also completed the RAT-relevant analogies, providing evidence for interactive activation. The subjects who completed the analogies were generally unaware of the connection between the analogies and the RAT problems.

A pair of studies by Dorfman (1990, Experiments 3 and 4) directly examined the role of autonomous and interactive activation in incubation effects. In Experiment 3, subjects were initially presented with sets of three items from the coherent sextuples described earlier. As before, they were asked to solve each problem and to make a coherence judgment. In the next phase, unsolved problems were repeated, either immediately or following delays of 5 or 15 minutes. In this phase, no additional cues were given. Then subjects received one, two or three additional clues in successive phases. The short and long intervals were filled with problems from other conditions.

As in Dorfman's (1990) other experiments, coherence ratings were low until just before the solution was achieved, at which point they increased markedly. Moreover, rated coherence was higher for problems which were ultimately solved than for those which remained unsolved (the absence of incoherent problems in this experiment makes it impossible to determine whether subjects could discriminate soluble from insoluble problems). There was no effect of the short or long incubation periods, taken alone, compared to the no-delay condition. That is, the subjects solved approximately the same number of new problems in each of the three delay conditions (No Delay, M = 1.47; Short Delay, M = 1.20; Long Delay, M = 1.33). However, the incubation period did interact with the cue conditions: provision of one or two additional cues resulted in more new solutions (Ms = 3.36) following the short delay than when there was no delay at all (M = 2.43). There was no additional advantage with three cues, however, so that by the time the subjects had received three additional cues there was no longer a difference between these groups. Moreover, there was no advantage for extra cues in the long-delay group. Although the results are somewhat puzzling in detail, the interaction indicates that any incubation observed in this experiment was due more to interactive than to autonomous activation. That is, incubation was only effective when the experimenter provided additional problem-relevant cues after the incubation period.

One potential problem with Experiment 3 was that the incubation period was filled with effortful activity -- viz., the subjects worked on some problems while others incubated. This methodological feature is common in incubation research, because experimenters want to make sure that subjects do not continue to work consciously on the incubating problem. The finding that subjects are more likely to solve problems after, say, ten minutes of thought than after five minutes is relatively uninteresting. The question, so far as incubation is concerned, is whether five minutes' worth of not thinking adds anything to problem solving. Nevertheless, the filled-interval procedure may be less than optimal for studying incubation, because it may create interference which obscures or dampens the effect. Accordingly, Dorfman's (1990) Experiment 4 repeated the basic conditions of Experiment 3, except that the incubation period was filled with unrelated arithmetic problems; she also added an intermediate-delay condition. Thus, the subjects' minds were still occupied, but with a task that was less likely to interfere with solving the sextuple problems.

Again, coherence ratings remained low until just before the problem was solved, and coherence ratings were higher for solved compared to unsolved problems. This time, however, more problems were solved after the long delay (M = 5.30) than after no delay (M = 3.40), or after delays of short (M = 4.50) or moderate (M = 3.20) intervals -- even before additional cues were given. There was no additional advantage provided by extra cues. Thus, these findings yielded evidence of autonomous rather than interactive activation.

The main difference between Dorfman's (1990) Experiments 3 and 4 concerned the task imposed during the incubation interval. In Experiment 3, it consisted of problems of the same sort as those being incubated. In Experiment 4, it consisted of qualitatively different sorts of problems. Thus, it seems that incubation can be demonstrated when the filler task does not compete for cognitive resources with the problems being incubated. Apparently, the incubation period allows activation to spread to nodes related to the problem elements, including the target solution. This, in turn, increases the likelihood that the solution will occur to the subject the next time the problem comes up. When autonomous activation alone fails to yield a solution, interactive activation can make an additional contribution. Thus, activation during incubation sensitizes the subject to new information provided by the environment, as suggested by Yaniv and Meyer (1987); activation contributed by this new information then combines with that built up over the incubation interval to increase further the likelihood of discovering the correct solution.

Although external cues play an important role in problem solving, the importance of the incubation interval itself should not be ignored. In the past, theorists have tended to shy away from the popular construal of incubation as an unconscious process (e.g., Posner, 1973; Smith & Blankenship, 1991; Woodworth & Schlosberg, 1954). However, many of the mechanisms proposed as alternatives to unconscious processing do not seem to account for Dorfman's (1990) results. For example, mere recovery from fatigue can be ruled out on the grounds that the subjects were engaged in taxing mental tasks during the incubation interval. For the same reason, we can rule out the possibility that subjects continued to work consciously on the problems during the incubation interval. Campbell's (1960) notion of the random fusion of memory representations also seems unlikely: the activation process is not random, but rather is directed along specific associative pathways. Forgetting may play a role in incubation, as Simon (1966) and Smith (1994; Smith & Blankenship, 1989, 1991) have suggested, but the notion of activation building up from subthreshold levels and eventually (perhaps with additional stimulation from external cues) crossing the threshold of consciousness remains attractive.

 

Implicit Thought

Thought comes in many forms. Neisser (1967, p. 297) suggested that "Some thinking is deliberate, efficient, and obviously goal-directed; it is usually experienced as self-controlled as well. Other mental activity is rich, chaotic, and inefficient; it tends to be experienced as involuntary, it just 'happens'." Neisser further suggested that the two forms of thought -- rational, constrained, logical, realistic, and secondary process on the one hand, and intuitive, creative, prelogical, autistic, and primary process on the other -- reflected serial and parallel processing, respectively. In serial processing, each mental operation is performed in sequence, while in parallel processing a number of operations are performed simultaneously. In current versions of the serial-parallel distinction, such as the parallel distributed processing framework of Rumelhart and McClelland (1986; McClelland & Rumelhart, 1986), parallel processes operate too quickly to be accessible to phenomenal awareness.

Another categorization of thought distinguishes between automatic and controlled processes (e.g., Anderson, 1982; Hasher & Zacks, 1979, 1984; LaBerge & Samuels, 1974; Logan, 1980; Posner & Snyder, 1975; Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977, 1984). In theory, automatic processes are inevitably engaged by particular inputs, regardless of the subject's intention -- although lately some attention has been given to conditional automaticity (Bargh, 1989, 1994). Moreover, the execution of automatic processes consumes few or no attentional resources. Thus, automatic processes operate outside of phenomenal awareness and voluntary control. They are unconscious in the strict sense of the word.

Until recently, the notion of unconscious thought was closely tied to the concepts of parallel processing, automaticity, and procedural knowledge more generally. It is almost an article of faith among cognitive psychologists that people lack introspective access to the procedures by which they perceive and remember objects and events, store and retrieve knowledge, think, reason, and solve problems (e.g., Nisbett & Wilson, 1977; Wilson & Stone, 1985). In large part, the goal of cognitive psychology is to explicate these unconscious procedures and processes (Barsalou, 1992). However, there has also been a parallel assumption, mostly unstated, that the declarative knowledge on which these processes operate is accessible to awareness. Put another way, we do not know how we think, but we do know what we think. Now, however, research on implicit perception, and especially on implicit memory, has made it clear that we do not always know what we think and know, either: activated mental representations of current and past experience can influence experience, thought, and action even though they are inaccessible to conscious awareness. By extension, it now seems possible to think of thoughts, images, and judgments which operate in the same way. The unconscious mental representations which underlie our intuitions, and which emerge into consciousness only after a period of incubation, are neither implicit percepts (because they are not representations of the present environment) nor implicit memories (because they are not representations of past experience). Call them implicit thoughts. While implicit perception and memory are well established, the study of implicit thought has only just begun.

 

Author Notes

Preparation of this paper was supported by a postdoctoral fellowship (MH-10042) from the National Institute of Mental Health to Jennifer Dorfman, a Minority Graduate Research Fellowship from the National Science Foundation to Victor A. Shames, and a research grant (MH-35856) from the National Institute of Mental Health to John F. Kihlstrom.

 

Footnotes

1.  The idea that unconscious mental processes are more creative than conscious ones is at the heart of many attempts to link creativity with hypnosis and other altered states of consciousness (Bowers & Bowers, 1979; Shames & Bowers, 1992).

2.  For historical surveys of the psychology of thinking and problem solving, see Dominowski and Bourne (1994) and Ericsson and Hastie (1994).

3.  Similar distinctions have been drawn by others in terms of memory with and without awareness, direct and indirect memory, or declarative and nondeclarative memory (e.g., Jacoby & Witherspoon, 1982; Johnson & Hasher, 1987; Richardson-Klavehn & Bjork, 1988; Squire, 1992; Squire & Knowlton, 1995); the differences among these taxonomic proposals need not concern us here.

Solution 1:  The remote associate is PARTY.

Solution 2:  The solution to the right-hand problem is CARD.

Solution 3:  The solution is MOUNTAIN.

Solution 4:  The solution is MATCH.

Solution 5:  The solution is SQUARE.

Solution 6:  The man was suffering from hiccups.

 

4.  Kihlstrom (1993a, 1993b, 1994a) has argued that superthreshold activation is necessary but not sufficient to support conscious awareness of a percept or memory. According to his view, conscious awareness requires that an activated knowledge structure must also make contact with a representation of the self concurrently active in working memory.

Solution 7:  The solution is BANDANNA.

Solution 8:  The solution to the right-hand problem is HIGH.

5.  Zeigarnik also noted that some subjects, who felt that the interrupted problems were beyond their capacity, remembered successful tasks better than unsuccessful ones -- a "reverse" Zeigarnik effect analogous to repression (for a discussion of this effect, see Kihlstrom & Hoyt, 1990).

Solution 9:  The solution is SPLIT-SECOND TIMING.

Solution 10:  The solution is JUST BETWEEN YOU AND ME.

Solution 11:  The solution is SALT.

 

References

Anderson, J.R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369-406.

Anderson, J.R. (1983). A spreading activation theory of memory. Journal of Verbal Learning & Verbal Behavior, 22, 261-295.

Bargh, J.A. (1989). Conditional automaticity: Varieties of automatic influence in social perception and cognition. In J.S. Uleman & J.A. Bargh (Eds.), Unintended thought (pp. 3-51). New York: Guilford.

Bargh, J.A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In R.S. Wyer & T.K. Srull (Eds.), Handbook of social cognition, 2nd Ed. (Vol. 1, pp. 1-40). Hillsdale, N.J.: Erlbaum.

Berry, D.C., & Dienes, Z. (Eds.). (1994). Implicit learning: Theoretical and empirical issues. Hillsdale, N.J.: Erlbaum.

Barsalou, L.W. (1992). Cognitive psychology: An overview for cognitive scientists. Hillsdale, N.J.: Erlbaum.

Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147.

Bisiach, E. (1992). Understanding consciousness: Clues from unilateral neglect and related disorders. In A.D. Milner & M.D. Rugg (Eds.), The neuropsychology of consciousness (pp. 113-137). London: Academic Press.

Bornstein, R.F., & Pittman, T.S. (1992). Perception without awareness: Cognitive, clinical, and social perspectives. New York: Guilford.

Bowers, K.S. (1984). On being unconsciously influenced and informed. In K.S. Bowers & D. Meichenbaum (Eds.), The unconscious reconsidered (pp. 227-272). New York: Wiley-Interscience.

Bowers, K.S. (1994). Intuition. In R.J. Sternberg (Ed.), Encyclopedia of intelligence (pp. 613-617). New York: Macmillan.

Bowers, K.S., Farvolden, P., & Mermigis, L. (1995). Intuitive antecedents of insight. In S.M. Smith, T.M. Ward, & R.A. Finke (Eds.), The creative cognition approach (pp. 27-52). Cambridge, Ma.: MIT Press.

Bowers, K. S., Regehr, G., Balthazard, C. G., & Parker, K. (1990). Intuition in the context of discovery. Cognitive Psychology, 22, 72-110.

Bowers, P. G., & Bowers, K. S. (1979). Hypnosis and creativity: A theoretical and empirical rapprochement. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (2nd ed., pp. 351-379). New York: Aldine.

Brown, R., & McNeill, D. (1966). The "tip of the tongue" phenomenon. Journal of Verbal Learning & Verbal Behavior, 5, 325-337.

Bryant, R.A., & McConkey, K.M. (1989a). Hypnotic blindness: A behavioral and experiential analysis. Journal of Abnormal Psychology, 98, 71-77.

Bryant, R.A., & McConkey, K.M. (1989b). Visual conversion disorder: A case analysis of the influence of visual information. Journal of Abnormal Psychology, 98, 326-329.

Butterfield, E.C. (1964). The interruption of tasks: Methodological, factual, and theoretical issues. Psychological Bulletin, 62, 309-322.

Campbell, D.T. (1960). Blind variation and selective retention in creative thought as in other knowledge processes. Psychological Review, 67, 380-400.

Cermak, L.S., Talbot, N., Chandler, K., & Wolbarst, L.R. (1985). The perceptual priming phenomenon in amnesia. Neuropsychologia, 23, 615-622.

Cheesman, J., & Merikle, P.M. (1984). Priming with and without awareness. Perception & Psychophysics, 36, 387-395.

Cheesman, J., & Merikle, P.M. (1985). Word recognition and consciousness. In D. Besner, T.G.Waller, & G.E. MacKinnon (Eds.), Reading research: Advances in theory and practice (Vol. 5, pp. 311-352). New York: Academic.

Cheesman, J., & Merikle, P.M. (1986). Distinguishing conscious from unconscious perceptual processes. Canadian Journal of Psychology, 40, 343-367.

Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82, 407-428.

Cork, R.L., Couture, L.J., & Kihlstrom, J.F. (1993). Memory and recall. In J.F. Biebuyck, C. Lynch, M. Maze, L.J. Saidman, T.L. Yaksh, & Zapol, W.M. (Eds.), Anesthesia: Biologic foundations (pp. xxx-xxx). New York: Raven Press.

Dayton, T., Durso, F.T., & Shepard, J.D. (1990). A measure of the knowledge reorganization underlying insight. In R.W. Schvaneveldt (Ed.), Pathfinder associative networks: Studies in knowledge organization (pp. 267-277). Norwood, N.J.: Ablex.

Dominowski, R.L. (1981). Comment on "An examination of the alleged role of 'fixation' in the solution of several 'insight' problems" by Weisberg and Alba. Journal of Experimental Psychology: General, 110, 199-203.

Dominowski, R.L., & Bourne, L.E. (1994). History of research on thinking and problem solving. In R.J. Sternberg (Ed.), Thinking and problem solving (pp. 1-35). San Diego: Academic Press.

Dominowski, R.L., & Jenrick, R. (1972). Effects of hints and interpolated activity on solution of an insight problem. Psychonomic Science, 26, 335-337.

Dorfman, J., & Kihlstrom, J.F. (1994). Implicit memory in posthypnotic amnesia. Manuscript in preparation.

Dorfman, J. (1990). Metacognitions and incubation effects in insight problem solving. Unpublished doctoral dissertation, University of California, San Diego.

Dreistadt, R. (1969). The use of analogies and incubation in obtaining insights in creative problem solving. Journal of Psychology, 71, 159-175.

Durso, F. T., Rea, C. B., & Dayton, T. (1994). Graph-theoretic confirmation of restructuring during insight. Psychological Science, 5, 94-98.

Eindhoven, J.E., & Vinacke, W.E. (1952). Creative processes in painting. Journal of General Psychology, 47, 139-164.

Ellen, P. (1982). Direction, past experience, and hints in creative problem solving: Reply to Weisberg and Alba. Journal of Experimental Psychology: General, 111, 316-325.

Erdelyi, M.H. (1984). The recovery of unconscious (inaccessible) memories: Laboratory studies of hypermnesia. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 18, pp. 95-127). New York: Academic Press.

Erdelyi, M.H., & Becker, J. (1974). Hypermnesia for pictures: Incremental memory for pictures but not words in multiple recall trials. Cognitive Psychology, 6, 159-171.

Ericsson, K.A., & Hastie, R. (1994). Contemporary approaches to the study of thinking and problem solving. In R.J. Sternberg (Ed.), Thinking and problem solving (pp. 1-79). San Diego: Academic Press.

Flavell, J. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Forster, K. I., & Davis, C. (1984). Repetition priming and frequency attenuation in lexical access. Journal of Experimental Psychology: Learning, Memory, & Cognition, 10, 680-698.

Fowler, C.A., Wolford, G., Slade, R., & Tassinary, L. (1981). Lexical access with and without awareness. Journal of Experimental Psychology: General, 110, 341-362.

Fulgosi, A., & Guilford, J.P. (1968). Short-term incubation in divergent production. American Journal of Psychology, 81, 241-246.

Fulgosi, A., & Guilford, J.P. (1972). A further investigation of short-term incubation. Acta Instituti Psychologici, 64-73, 67-70.

Gall, M., & Mendelsohn, G.A. (1967). Effects of facilitating techniques and subject/experimenter interactions on creative problem solving. Journal of Personality & Social Psychology, 5, 211-216.

Goldman, W.P., Wolters, N.C., & Winograd, E. (1992). A demonstration of incubation in anagram problem solving. Bulletin of the Psychonomic Society, 30, 36-38.

Graf, P., & Schacter, D.L. (1985). Implicit and explicit memory for new associations in normal subjects and amnesic patients. Journal of Experimental Psychology: Learning, Memory, & Cognition, 11, 501-518.

Graf, P., Squire, L.R., & Mandler, G. (1984). The information that amnesic patients do not forget. Journal of Experimental Psychology: Learning, Memory, & Cognition, 10, 164-178.

Greenwald, A.G. (1992). New Look 3: Unconscious cognition reclaimed. American Psychologist, 47, 766-779.

Greenwald, A.G., Klinger, M.R., & Liu, T.J. (1989). Unconscious processing of dichoptically masked words. Memory & Cognition, 17, 35-47.

Harshman, R.A. (1974). Harshman figures. Unpublished manuscript, University of Western Ontario.

Hart, J.T. (1965). Memory and the feeling-of-knowing experience. Journal of Educational Psychology, 56, 208-216.

Hart, J.T. (1967). Memory and the memory-monitoring process. Journal of Verbal Learning & Verbal Behavior, 6, 685-691.

Hasher, L., & Zacks, R.T. (1979). Automatic and effortful processes in memory. Journal of Experimental Psychology: General, 108, 356-388.

Hasher, L., & Zacks, R.T. (1984). Automatic processing of fundamental information: The case of frequency of occurrence. American Psychologist, 39, 1372-1388.

Jacoby, L.L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110, 306-340.

Jacoby, L.L., & Witherspoon, D. (1982). Remembering without awareness. Canadian Journal of Psychology, 36, 300-324.

James, W. (1890). Principles of psychology. 2 vols. New York: Holt.

Johnson, M.K., & Hasher, L. (1987). Human learning and memory. Annual Review of Psychology, 38, 631-668.

Kaplan, C.A. (1989). Hatching a theory of incubation: Does putting a problem aside really help? If so, why? Unpublished doctoral dissertation, Carnegie Mellon University.

Kihlstrom, J.F. (1980). Posthypnotic amnesia for recently learned material: Interactions with "episodic" and "semantic" memory. Cognitive Psychology, 12, 227-251.

Kihlstrom, J.F. (1984). Conscious, subconscious, unconscious: A cognitive perspective. In K.S. Bowers & D. Meichenbaum (Eds.), The unconscious reconsidered (pp. 149-211). New York: John Wiley & Sons.

Kihlstrom, J. F. (1987). The cognitive unconscious. Science, 237, 1445-1452.

Kihlstrom, J. F. (1990). The psychological unconscious. In L. A. Pervin (Ed.), Handbook of personality: Theory and Research (pp. 445-464). New York: The Guilford Press.

Kihlstrom, J.F. (1992). Dissociative and conversion disorders. In D.J. Stein & J. Young (Eds.), Cognitive science and clinical disorders (pp. 247-270). San Diego: Academic.

Kihlstrom, J. F. (1993a). The continuum of consciousness. Consciousness & Cognition, 2, 334-354.

Kihlstrom, J.F. (1993b). Implicit memory function during anesthesia. In P.S. Sebel, B. Bonke, & E. Winograd (Eds), Memory and Awareness in Anesthesia (pp. 10-30). New York: Prentice-Hall.

Kihlstrom, J.F. (1994a). Consciousness and me-ness. In J. Cohen & J. Schooler (Eds.), Scientific approaches to the question of consciousness (pp. xxx-xxx). Hillsdale, N.J.: Erlbaum.

Kihlstrom, J.F. (1994b). One hundred years of hysteria. In S.J. Lynn & J.W. Rhue (Eds.), Dissociation: Theoretical, Clinical, and Research Perspectives (pp. 365-394). New York: Guilford.

Kihlstrom, J.F. (1994c). The rediscovery of the unconscious. In J. Singer & H. Morowitz (Eds.), The mind, the brain, and complex adaptive systems (pp. xxx-xxx). Santa Fe Institute Studies in the Sciences of Complexity, Vol. 22. Reading, Ma.: Addison-Wesley.

Kihlstrom, J.F., & Barnhardt, T.M. (1993). The self-regulation of memory: For better and for worse, with and without hypnosis. In D.M. Wegner & J.W. Pennebaker (Eds.), Handbook of mental control (pp. 88-125). Englewood Cliffs, N.J.: Prentice Hall.

Kihlstrom, J.F., Barnhardt, T.M., & Tataryn, D.J. (1992). Implicit perception. In R.F. Bornstein & T.S. Pittman (Eds.), Perception without awareness (pp. 17-54). New York: Guilford.

Kihlstrom, J.F. & Hoyt, I.P. (1990). Repression, dissociation, and hypnosis. In J.L. Singer (Ed.), Repression and dissociation: Implications for personality theory, psychopathology, and health (pp 181-208). Chicago: University of Chicago Press.

Kihlstrom, J.F., & Schacter, D.L. (1990). Anaesthesia, amnesia, and the cognitive unconscious. In B. Bonke, W. Fitch, & K. Millar (Eds.), Memory and awareness during anaesthesia (pp. 22-44). Amsterdam: Swets & Zeitlinger.

Kihlstrom, J.F., Tataryn, D.J., & Hoyt, I.P. (1993). Dissociative disorders. In P.J. Sutker & H.E. Adams (Eds.), Comprehensive handbook of psychopathology, 2nd Ed (pp. 203-234). New York: Plenum.

Kihlstrom, J.F., & Tobias, B.A. (1991). Anosognosia, consciousness, and the self. In G.P. Prigatano & D.L. Schacter (Eds.), Awareness of deficit following brain injury: Theoretical and clinical aspects (pp. 198-222). New York: Oxford University Press.

Kirkwood, W.G. (1984). Effects of incubation sequences on communication and problem solving in small groups. Journal of Creative Behavior, 18, 45-61.

Klein, S.B., Loftus, J., Kihlstrom, J.F., & Aseron, R. (1989). The effects of item-specific and relational information on hypermnesic recall. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 1192-1197.

Koestler, A. (1964). The act of creation. New York: Macmillan.

LaBerge, D., & Samuels, S.J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 101-124.

Lewandowsky, S., Dunn, J.C., & Kirsner, K. (Eds.). (1989). Implicit memory: Theoretical issues. Hillsdale, N.J.: Erlbaum.

Lewicki, P. (1986). Nonconscious social information processing. Orlando, Fl.: Academic Press.

Lockhart, R.S., & Blackburn, A.B. (1993). Implicit processes in problem solving. In P. Graf & M.E.J. Masson (Eds.), Implicit memory: New directions in cognition, development, and neuropsychology (pp. 95-115). Hillsdale, N.J.: Erlbaum.

Logan, G.D. (1980). Attention and automaticity in Stroop and priming tasks: Theory and data. Cognitive Psychology, 12, 523-553.

Luchins, A.S., & Luchins, E.H. (1970). Wertheimer's seminars revisited: Problem solving and thinking. Vol. 3. Albany: State University of New York at Albany Faculty-Student Association.

Madigan, S. (1976). Recovery and reminiscence in item recall. Memory & Cognition, 4, 233-236.

Madigan, S., & O'Hara, R. (1992). Initial recall, reminiscence, and hypermnesia. Journal of Experimental Psychology: Learning, Memory, & Cognition, 18, 421-425.

Maier, N.R.F. (1930). Reasoning in humans: I. On direction. Journal of Comparative Psychology, 10, 115-143.

Maier, N.R.F. (1931). Reasoning in humans: II. The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology, 12, 181-194.

Mandler, G. (1994). Hypermnesia, incubation, and mind popping: On remembering without really trying. In C. Umiltà & M. Moscovitch (Eds.), Attention and performance XV: Conscious and nonconscious information processing (pp. 3-33). Cambridge, Ma.: MIT Press.

Marcel, A.J. (1983). Conscious and unconscious perception: Experiments on visual masking and word recognition. Cognitive Psychology, 15, 543-551.

Marshall, J.C., & Halligan, P.W. (1988). Blindsight and insight in visuo-spatial neglect. Nature, 336, 766-767.

McClelland, J.L., & Rumelhart, D.E. (1981). An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88, 375-407.

McClelland, J.L., Rumelhart, D.E., & the PDP Research Group (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2: Psychological and biological models. Cambridge, Ma.: MIT Press.

Mednick, M.T., Mednick, S.A., & Mednick, E.V. (1964). Incubation of creative performance and specific associative priming. Journal of Abnormal & Social Psychology, 69, 84-88.

Mednick, S. A. (1962). The associative basis of the creative process. Psychological Review, 69, 220-232.

Mednick, S. A., & Mednick, M. T. (1967). Examiner's manual: Remote Associates Test. Boston: Houghton Mifflin.

Merikle, P.M., & Reingold, E.M. (1992). Measuring unconscious perceptual processes. In R.F. Bornstein & T.S. Pittman (Eds.), Perception without awareness (pp. 55-80). New York: Guilford.

Metcalfe, J. (1986a). Feeling of knowing in memory and problem solving. Journal of Experimental Psychology: Learning, Memory, & Cognition, 12, 288-294.

Metcalfe, J. (1986b). Premonitions of insight predict impending error. Journal of Experimental Psychology: Learning, Memory, & Cognition, 12, 623-634.

Metcalfe, J., & Shimamura, A.P. (Eds.). (1994). Metacognition: Knowing about knowing. Cambridge, Ma.: MIT Press.

Metcalfe, J., & Wiebe, D. (1987). Intuition in insight and noninsight problem solving. Memory & Cognition, 15, 238-246.

Meyer, D. E., & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227-234.

Mooney, C.M. (1957). Closure as affected by viewing time and multiple visual fixations. Canadian Journal of Psychology, 11, 21-28.

Murray, H.G., & Denny, J.P. (1969). Interaction of ability level and interpolated activity (opportunity for incubation) in human problem solving. Psychological Reports, 24, 271-276.

Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.

Nelson, T.O. (Ed.). (1992). Metacognition: Core readings. Needham Heights, Ma.: Allyn & Bacon.

Nelson, T.O., Gerler, D., & Narens, L. (1984). Accuracy of feeling-of-knowing judgments for predicting perceptual identification and relearning. Journal of Experimental Psychology: General, 113, 282-300.

Nelson, T. O., & Narens, L. (1980). Norms of 300 general-information questions: Accuracy of recall, and feeling-of-knowing ratings. Journal of Verbal Learning & Verbal Behavior, 19, 338-368.

Newell, A., & Simon, H.A. (1973). Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall.

Newell, A., Simon, H. A., & Shaw, J. C. (1979). The process of creative thinking. In H. A. Simon (Ed.), Models of thought (pp. 144-174). New Haven: Yale University Press. Originally published, 1962.

Nisbett, R.E., & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

Ohlsson, S. (1984). Restructuring revisited: An information-processing theory of restructuring and insight. Scandinavian Journal of Psychology, 25, 117-129.

Olton, R. M. (1979). Experimental studies of incubation: Searching for the elusive. Journal of Creative Behavior, 13, 9-22.

Olton, R.M., & Johnson, D.M. (1976). Mechanisms of incubation in creative problem solving. American Journal of Psychology, 89, 617-630.

Patrick, A.S. (1986). The role of ability in creative "incubation". Personality & Individual Differences, 7, 169-174.

Patrick, C. (1935). Creative thought in poets. Archives of Psychology, #178.

Patrick, C. (1937). Creative thought in artists. Journal of Psychology, 4, 35-37.

Patrick, C. (1938). Scientific thought. Journal of Psychology, 5, 55-83.

Payne, D.G. (1987). Hypermnesia and reminiscence in recall: A historical and empirical review. Psychological Bulletin, 101, 5-27.

Perkins, D. N. (1981). The mind's best work. Cambridge, MA: Harvard University Press.

Peterson, C. (1974). Incubation effects in anagram solution. Bulletin of the Psychonomic Society, 3, 29-30.

Peterson, M.A. (1994). Object-recognition processes can and do operate before figure-ground organization. Current Directions in Psychological Science, 3, 105-111.

Posner, M. I. (1973). Cognition: An introduction. Glenview, IL: Scott, Foresman and Co.

Posner, M.I., & Snyder, C.R.R. (1975). Attention and cognitive control. In R.L. Solso (Ed.), Information processing and cognition. Hillsdale, N.J.: Lawrence Erlbaum.

Ratcliff, R., & McKoon, G. (1981). Does activation really spread? Psychological Review, 88, 454-462.

Ratcliff, R., & McKoon, G. (1988). A retrieval theory of priming in memory. Psychological Review, 95, 385-408.

Ratcliff, R., & McKoon, G. (1994). Retrieving information from memory: Spreading-activation theories versus compound-cue theories. Psychological Review, 101, 177-187.

Reber, A.S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning & Verbal Behavior, 6, 317-327.

Reber, A.S. (1993). Implicit learning and tacit knowledge: An essay on the cognitive unconscious. New York: Oxford University Press.

Richardson-Klavehn, A., & Bjork, R.A. (1988). Measures of memory. Annual Review of Psychology, 39, 475-543.

Roediger, H.L. (1982). Hypermnesia: The importance of recall time and asymptotic level of recall. Journal of Verbal Learning & Verbal Behavior, 21, 662-665.

Roediger, H. L. (1990a). Implicit memory: A commentary. Bulletin of the Psychonomic Society, 28, 373-380.

Roediger, H.L. (1990b). Implicit memory: Retention without remembering. American Psychologist, 45, 1043-1056.

Roediger, H.L., & McDermott, K.B. (1993). Implicit memory in normal human subjects. In F. Boller & J. Grafman (Eds.), Handbook of neuropsychology (Vol. 8, pp. 63-131). Amsterdam: Elsevier.

Roediger, H.L., Payne, D.G., Gillespie, G.L., & Lean, D.S. (1982). Hypermnesia as determined by level of recall. Journal of Verbal Learning & Verbal Behavior, 21, 635-655.

Rumelhart, D.E., McClelland, J.L., & the PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 1: Foundations. Cambridge, Ma.: MIT Press.

Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, & Cognition, 13, 501-518.

Schacter, D. L. (1992a). Consciousness and awareness in memory and amnesia: Critical issues. In A. D. Milner & M. D. Rugg (Eds.), The neuropsychology of consciousness (pp. 179-200). San Diego: Academic Press.

Schacter, D.L. (1992b). Understanding implicit memory: A cognitive neuroscience approach. American Psychologist, 47, 559-569.

Schacter, D.L. (1995). Implicit memory: A new frontier for cognitive neuroscience. In M. Gazzaniga (Ed.), The cognitive neurosciences (pp. 815-824). Cambridge, Ma.: MIT Press, 1995.

Schacter, D.L., Chiu, C.-Y.P., & Ochsner, K.N. (1993). Implicit memory: A selective review. Annual Review of Neuroscience, 16, 159-182.

Schacter, D.L., & Graf, P. (1986). Preserved learning in amnesic patients: Perspectives on research from direct priming. Journal of Clinical & Experimental Neuropsychology, 15, 3-12.

Scheerer, M. (1963). Problem-solving. Scientific American, 208, 118-128.

Schneider, W., & Shiffrin, R.M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1-66.

Shames, V.A. (1994). Is there such a thing as implicit problem-solving? Unpublished doctoral dissertation, University of Arizona.

Shames, V.A., & Bowers, P. G. (1992). Hypnosis and creativity. In E. Fromm & M. R. Nash (Eds.), Contemporary hypnosis research (pp. 334-363). New York: The Guilford Press.

Shames, V.A., & Kihlstrom, J.F. (1994). Respecting the phenomenology of human creativity. Behavioral & Brain Sciences, 17, 551-552.

Shiffrin, R.M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127-190.

Shiffrin, R.M., & Schneider, W. (1984). Automatic and controlled processing revisited. Psychological Review, 91, 269-276.

Shimamura, A.P. (1994). Neuropsychological analyses of implicit memory: History, methodology, and theoretical implications. In P. Graf & M.E.J. Masson (Eds.), Implicit memory: New directions in cognition, development, and neuropsychology (pp. 265-285). Hillsdale, N.J.: Erlbaum.

Shimamura, A.P., & Squire, L.R. (1984). Paired associate learning and priming effects in amnesia: A neuropsychological study. Journal of Experimental Psychology: General, 113, 556-570.

Silveira, J.M. (1971). Incubation: The effects of interruption timing and length on problem solution and quality of problem processing. Doctoral Dissertation, University of Oregon, 1971. Dissertation Abstracts International, 1972, 32, 5506b.

Simon, H.A. (1966). Scientific discovery and the psychology of problem solving. In R. Colodny (Ed.), Mind and cosmos (pp. 22-40). Pittsburgh: University of Pittsburgh Press.

Simon, H. A. (1989). The information-processing explanation of Gestalt phenomena. In H. A. Simon (Ed.), Models of thought (Vol. 2, pp. 481-493). New Haven: Yale University Press. Originally published 1986.

Smith, S.M. (1994). Getting into and out of mental ruts: A theory of fixation, incubation, and insight. In R. Sternberg & J. Davidson (Eds.), The nature of insight (pp. 229-251). Cambridge, Ma.: MIT Press.

Smith, S.M., & Blankenship, S.E. (1989). Incubation effects. Bulletin of the Psychonomic Society, 27, 311-314.

Smith, S.M., & Blankenship, S. E. (1991). Incubation and the persistence of fixation in problem solving. American Journal of Psychology, 104, 61-87.

Smith, S.M., & Vela, E. (1991). Incubated reminiscence effects. Memory & Cognition, 19, 168-176.

Squire, L.R. (1992). Declarative and nondeclarative memory: Multiple brain systems supporting learning and memory. Journal of Cognitive Neuroscience, 4, 232-243.

Squire, L.R., & Knowlton, B.J. (1995). Memory, hippocampus, and brain systems. In M. Gazzaniga (Ed.), The cognitive neurosciences (pp. 825-837). Cambridge, Ma.: MIT Press.

Street, R.F. (1931). A Gestalt completion test. Contributions to Education, Whole #481. New York: Columbia University Teachers College.

Tarr, M.J. (1995). Rotating objects to recognize them: A case study of the role of viewpoint dependency in the recognition of three-dimensional objects. Psychonomic Bulletin & Review, xx, xxx-xxx.

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257-269.

Tulving, E. (1964). Intratrial and intertrial retention: Notes toward a theory of free recall verbal learning. Psychological Review, 71, 219-237.

Tulving, E., & Schacter, D.L. (1990). Priming and human memory systems. Science, 247, 301-306.

Tulving, E., Schacter, D.L., & Stark, H. (1982). Priming effects in word-fragment completion are independent of recognition memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8, 336-342.

Wallas, G. (1926). The art of thought. New York: Franklin Watts.

Warrington, E.K., & Weiskrantz, L. (1968). New method for testing long-term retention with special reference to amnesic patients. Nature, 228, 628-630.

Weiner, B. (1966). Effects of motivation on the availability of memory traces. Psychological Bulletin, 65, 24-37.

Weisberg, R. W. (1992). Metacognition and insight during problem solving: Comment on Metcalfe. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 426-431.

Weisberg, R.W., & Alba, J.W. (1981a). An examination of the alleged role of "fixation" in the solution of several "insight" problems. Journal of Experimental Psychology: General, 110, 169-192.

Weisberg, R.W., & Alba, J.W. (1981b). Gestalt theory, insight, and past experience: Reply to Dominowski. Journal of Experimental Psychology: General, 110, 193-198.

Weiskrantz, L. (1986). Blindsight: A case study and implications. Oxford: Oxford University Press.

Wertheimer, M. (1959). Productive thinking. New York: Harper & Row.

Wilson, T.D., & Stone, J.I. (1985). Limitations of self-knowledge: More on telling more than we can know. Review of Personality & Social Psychology, 6, 167-183.

Woodworth, R.S. (1938). Experimental psychology. New York: Holt, Rinehart, & Winston.

Woodworth, R.S., & Schlosberg, H. (1954). Experimental psychology (Rev. Ed.). New York: Holt, Rinehart, & Winston.

Yaniv, I., & Meyer, D. E. (1987). Activation and metacognition of inaccessible stored information: Potential bases for incubation effects in problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 187-205.

Zeigarnik, B. (1927). Das Behalten erledigter und unerledigter Handlungen. Psychologische Forschung, 9, 1-85.

 
