
Associationism and Interference Theory

The work of Ebbinghaus and Calkins set the stage for the development of an empirical science of memory employing the verbal-learning paradigm in which subjects study word-like materials (like nonsense syllables) and then receive memory tests after a retention interval.

Note that the verbal-learning paradigm is a laboratory model of episodic memory, in which each list, and each item on each list, constitutes an episode in the subject's experience, more or less uniquely located in space and time.  The task of the subject is not to recognize the words as legal (lexical decision), or to respond with the meaning of the word, both of which would be semantic memory tasks.  The task of the subject is to remember what happened there and then.

Following Ebbinghaus, the measurement of memory was typically in terms of savings in relearning, and the point of the experiments was to gain control over the conditions of acquisition (encoding), retention (storage), and reproduction (retrieval).  
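To make the savings measure concrete, here is a minimal sketch in Python (the function name and the numbers are illustrative assumptions, not Ebbinghaus' own data):

    def savings_percent(original_trials: int, relearning_trials: int) -> float:
        """Savings in relearning: the percentage of the original learning
        effort that is spared when the same list is learned a second time."""
        return 100.0 * (original_trials - relearning_trials) / original_trials

    # Hypothetical example: a list first mastered in 20 trials is later
    # relearned to the same criterion in only 8 trials.
    print(savings_percent(20, 8))  # 60.0 -- i.e., 60% savings

The same formula applies when learning is measured in time or repetitions rather than trials to criterion.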

These early experiments were guided by a rudimentary theory of associationism, which had implications for what memories looked like, and how they were formed and retrieved.  Basically, the theory was that memories looked like associations between stimuli and responses, formed according to a principle of contiguity between events, and retrieved by following associative links from stimulus to response.  

But, as noted, Ebbinghaus' empirical results on remote and reverse associations had already challenged the prevailing theory, requiring revisions at least to the principle of association by contiguity.  It was at this point that the scientific study of memory really began to take off.

Note: The material that follows is largely based on Crowder (1976).

 

Time-Dependent Forgetting

[Figure: retention as a function of the retention interval]

Much of the early research was stimulated by Ebbinghaus' observation of the time-dependent nature of forgetting.  This had long been appreciated, of course, without benefit of formal experimentation, but Ebbinghaus determined the exact shape of the forgetting curve -- showing, somewhat counter-intuitively, that most forgetting occurs relatively early in the retention interval, and that the rate of forgetting subsequently diminishes.  But the real question for theory was: What happens during the retention interval to cause time-dependent forgetting?
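The shape of the curve can be illustrated with a toy calculation.  The following Python sketch assumes a power-law form with an arbitrary rate parameter -- an illustration of the curve's negatively accelerated shape, not Ebbinghaus' actual fitted equation:

    def retention(hours: float, rate: float = 0.3) -> float:
        """Illustrative power-law forgetting curve: the proportion retained
        falls steeply at first, then the rate of loss diminishes."""
        return (1.0 + hours) ** (-rate)

    # Delays roughly on the scale Ebbinghaus used (an hour to a month):
    for h in [0, 1, 9, 24, 48, 144, 744]:
        print(f"{h:>4} hr: {retention(h):.2f}")

Running this shows most of the simulated loss occurring within the first day, with the curve flattening out thereafter -- the qualitative pattern Ebbinghaus reported.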

 


Decay Theory (Not Fade Away/Don't Fade Away)

One obvious possibility is that memories fade with the passage of time, much as photographs do.  This was, in fact, an ancient theory of forgetting, whose roots go back to Aristotle and the wax-tablet metaphor: memories are like letters etched into wax; the wax softens with time, and the letters become indistinct.  The modern counterpart of the wax-tablet metaphor was Thorndike's Law of Use -- that associations are strengthened through use, and weakened through disuse.  This theory became known as the decay theory.  

Objections to decay theory were summarized by McGeoch (pronounced MacHew; 1932).

One was that memories don't always decay with time.  In a classic study by Ballard (1913) of children learning poetry, an initial study phase was followed by several tests in succession.  The result was an overall improvement in memory over trials, which Ballard termed reminiscence, in contrast with the oblivescence of forgetting.  

Ballard's work was largely ignored at first, but interest in the phenomenon was revived in the 1970s by Erdelyi, who renamed Ballard's phenomenon hypermnesia, in contrast with amnesia.  Erdelyi documented hypermnesia in a variant on the conventional verbal-learning paradigm in which subjects received a single study trial, followed by several retention tests without any further opportunity for study.  Some items dropped out of memory over time (intertrial forgetting), while other items dropped in (intertrial recovery).  Technically, reminiscence refers to the subsequent recovery of initially forgotten items.  In many experiments, intertrial recovery is equaled or exceeded by intertrial forgetting, resulting in the net loss of memory over time observed by Ebbinghaus.  But under some circumstances, intertrial recovery exceeds intertrial forgetting, resulting in a net gain of memory -- which is hypermnesia.  This obviously is inconsistent with a general principle of decay.

Moreover, it turns out that the mere passage of time is not the important factor.  In another classic experiment, Jenkins and Dallenbach (1924) studied the effects of sleep on memory.  Their subjects actually lived for a short time in the laboratory, memorizing lists of nonsense syllables at various times of the day.  Their memory was tested 1, 2, 4, or 8 hours after learning, with the retention interval filled with sleep or waking activity.  Controlling for the passage of time, J&D found that retention was better following sleep: there was some forgetting, but not very much.  These basic results were confirmed by Ekstrand (1967, 1972), who further determined that retention following sleep was even better if no dreaming occurred (i.e., NREM sleep).

Even without this empirical evidence, decay theory is unsatisfactory, because time is not a psychological variable.  It doesn't refer to any mental structure or process, and thus can't play a role in psychological theory.  The real question is what happens over time.  McGeoch offered an analogy to rust: metal rusts over time, but time doesn't produce rust.  Rather, rust is caused by oxidation, which is in turn a time-dependent process.  So, then, what psychological process, correlated with time, produces forgetting?


Retroactive Inhibition and Consolidation

The earliest proposal actually predates the evidence from reminiscence and sleep.  According to the theory of retroactive inhibition (RI) offered by Müller and Pilzecker (1900), events occurring during the retention interval cause forgetting by interfering with the consolidation of newly formed memories.

The retroactive inhibition paradigm consists of three phases (see the schematic below).  First, the experimental group learns an original list, List A.  Second, the experimental group learns an interpolated list, List B, while a control group rests.  Third, both groups attempt to recall the original List A.

The general finding is that memory is poorer following the interpolated learning. 
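Here is that design rendered as a small Python data structure (my own summary of the standard three-phase paradigm, not M&P's notation):

    # The classic retroactive-inhibition design, phase by phase.
    design = {
        "experimental": ["learn List A", "learn List B (interpolated)", "recall List A"],
        "control":      ["learn List A", "rest (unfilled interval)",    "recall List A"],
    }

    # RI is measured as the control group's recall advantage in Phase 3.
    for group, phases in design.items():
        print(f"{group:>12}: " + " -> ".join(phases))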

This, in turn, is explained in terms of consolidation theory.  According to M&P, memories are acquired through learning experiences, as byproducts of perceptual activity.  But the memory trace is not completely formed at the moment of perception.  Rather, the new trace remains in an active state of perseveration following perception.  Later, the trace is permanently fixated in a state of consolidation.  RI occurs because the interpolated task disrupts perseveration, so that there is nothing left to consolidate.  Thus, the disruption of consolidation represents a permanent memory loss: a storage failure caused by the failure to complete the encoding process.  

A considerable amount of evidence supported consolidation theory.


Response Competition

But there were also anomalous results.

So apparently there is something else going on instead of, or in addition to, consolidation failure.

McGeoch (1942) offered an alternative theory based on response competition.  In his view, forgetting was not a failure of storage, but rather a failure of retrieval.  And the retrieval failure occurs because of competition by unwanted memories -- a competition that is based on similarity.

Thus, both retroactive and proactive inhibition can be explained by interference.

Note the technical distinction between retroactive and proactive inhibition and retroactive and proactive interference: inhibition refers to the empirical effect -- impairment of memory for one list by another learned afterward (retroactive) or beforehand (proactive) -- while interference refers to the theoretical process of competition held to produce that effect.

In practice, however, people often use inhibition and interference interchangeably.


Interference Theory

[Figure: the four paradigms of paired-associate verbal learning]

With interference theory, paired-associate learning replaced serial learning as the paradigm of choice for studying human learning and memory, because it allows the investigator to vary the cue and target items independently, yielding four paradigms for verbal learning:

A-B, A-B: the same pairs are learned again (the savings paradigm).
A-B, A-D: the old cues are paired with new targets.
A-B, C-B: new cues are paired with the old targets.
A-B, C-D: both cues and targets are new (a baseline condition).
Note that the emerging paradigms offered parallels to the study of animal learning, which was classically analyzed in terms of associations between stimuli and responses: the cue term plays the role of the stimulus, and the target term the role of the response.

Both paradigms fit the behaviorist avoidance of mentalistic contents: memories can't be publicly observed, but learned behaviors can be.  At this point, the study of memory was transformed into the study of verbal learning, which was also analyzed in terms of stimulus-response (S-R) learning theory.  For a long time, the primary scientific journal devoted to studies of memory (other than the Journal of Experimental Psychology itself) was the Journal of Verbal Learning and Verbal Behavior -- which only recently was renamed the Journal of Memory and Language.

[Figure: the Osgood Transfer Surface]

A vast amount of new research employing the various paradigms of verbal-learning research was summarized by Osgood (1949) in terms of the Osgood Transfer Surface, which plotted the amount of transfer as a function of the amount of similarity between stimulus and response terms. 



According to the transfer surface, interference was greatest when the cues were the same, but the targets were different.


The Three- and Two-Factor Theories of Forgetting

The initial development of interference theory culminated in the three-factor theory of forgetting proposed by McGeoch (1942).

McGeoch's original interference theory was predicated on the independence hypothesis, which held that traces of each list are formed independently of each other, and exist together in memory storage -- in other words, that the interpolated list does not somehow destroy the original.  In this view, interference does not necessarily result in overt intrusions.  Rather, one set of memories merely dominates the other at the time of reproduction, making the reproduction of the target list more difficult without actually weakening the target memories.  Thus, interference theory locates forgetting at the time of retrieval.  Everything learned remains intact in memory storage, and the interpolated learning doesn't displace the original learning.

[Figure: Melton & Irwin (1940) -- RI and overt intrusions as a function of degree of interpolated learning]

The independence hypothesis was challenged by Melton and Irwin (1940), who studied RI as a function of the degree of interpolated learning.  Their experiment involved the serial learning of a list of 18 nonsense syllables over a series of 5 acquisition trials.  Then, they gave subjects varying numbers of trials (5, 10, 20, or 40) of interpolated learning, and tested memory for the original list.  They found that RI increased dramatically with increased interpolated learning, as predicted.  But intrusions from the interpolated list progressively decreased.  Now, the independence hypothesis doesn't predict that overt intrusions have to appear.  But they are a symptom of response competition, and if response competition is increasing, intrusions shouldn't decrease.  So clearly something else is going on.  This "Factor X" (M&I's term) cannot be attributed to response competition, but rather reflects unlearning -- meaning that interpolated learning actually weakens traces of the original learning after all.

Unlearning is somewhat analogous to extinction in animal conditioning.  The original responses intrude during the interpolated learning, either overtly or covertly.  But as they are "incorrect" in the new context, they are not reinforced and may even be punished.  Therefore, the original learning gets extinguished.  Of course, continuing the analogy, extinguished responses are subject to spontaneous recovery.  And, as it happens, spontaneous recovery increases with the length of the "rest" interval following extinction trials.  Apparently, spontaneous recovery also occurs in RI, with better retention of the original list following longer retention intervals (represented, in this case, by larger numbers of trials of interpolated learning).

On the basis of these results, Melton and Irwin (1940) proposed a two-factor theory of forgetting, which held that forgetting is a function of two processes: response competition, as in McGeoch's account, and unlearning of the original associations -- their "Factor X".

Their theory led to a novel prediction, tested by Melton & Von Lackum (1941), who compared RI to PI just after learning of an interpolated list.  The idea was that PI involves only response competition, while RI involves both response competition and unlearning.  The result was that RI was more severe than PI, which underscores the role of unlearning as "Factor X".

The two-factor theory represented a big change in interference theory, because it implied that forgetting was not just a matter of retrieval failure, but also involved changes to the memories in storage.  But at the same time, it was clear that unlearning did not really entail a loss from storage -- otherwise, we would not observe the spontaneous recovery of the unlearned responses.

So, what is the nature of the unlearning?  It must be something like discrimination learning, where memory for the original list is preserved in storage, but somehow tagged as "not appropriate" in the current context.  This consideration led to the paradox of identity.  Consider the A-B, A-B paradigm, which is simply an experiment on savings in relearning.  If interference is a function of similarity, we should observe the maximum interference in this case.  But, of course, we don't observe any interference at all: because the two lists are identical, the "competing" responses are also the correct ones.

As a thought experiment, imagine asking the subject which list a given pair came from: we might predict considerable confusion.  But in this paradigm the inability to discriminate between the lists does no harm -- indeed, it is the essence of positive transfer.


Modifications to Free Recall

So now the question becomes: How do we get evidence that memory of the initial list is preserved?

One technique is called modified free recall (MFR).  In the A-B, A-D paradigm, we interrupt learning of the A-D list, present the cue A, and ask subjects to recall a single target -- either B or D, whichever comes to mind.  

Note that this isn't really free recall.  Paired-associate learning is, technically speaking, cued recall, in which the subject is presented with the cue A, and asked to produce the associated target, B or D.  This makes the point, which will be emphasized later, that all recall is cued recall.  The difference between free recall, cued recall, and recognition is the amount of information provided by the retrieval cue.  

 It turns out that the frequency of B responses declines with practice on the A-D list, which is analogous to extinction.

[Figures: two interpretations -- A-B loses strength vs. A-D grows stronger]

But MFR permits only one response to be given.  A-D may come to dominate recall because it is stronger than A-B, not because A-B has itself lost strength.  

 

 

Accordingly, Barnes and Underwood (1959) introduced modified modified free recall (MMFR), in which subjects are presented with the stimulus term A, and then may recall either B or D or both.  When they did this, the result was that the frequency of B responses again declined with practice on the A-D list.  So unlearning does occur after all.  And, for that matter, unlearning is probably more powerful than response competition -- which is why more RI is accounted for by "Factor X" than by intrusions.

Unlearning, then, is a factor in RI -- which raises the question of whether all forgetting occurs by virtue of RI.

Underwood (1957) argued that this clearly can't be the case.  He reviewed the literature on a special verbal-learning paradigm in which subjects memorized a single serial list of nonsense syllables, and then received a retention test 24 hours later.  The salient result was that about 75% of the list was forgotten.  This can't have been produced by RI.  RI is a function of similarity, and college students simply don't encounter enough nonsense material over 24 hours to produce RI effects of this magnitude.  Rather, Underwood argued that this forgetting must be a function of PI.  

[Figure: the build-up of proactive inhibition -- recall declines as the number of previously learned lists increases]

In a further analysis, Underwood examined recall of a serial list as a function of the number of lists learned previously.




Underwood had no problem accounting for a 75% loss when many lists had been learned previously.  But how to account for the 25% loss when there had been no additional learning?  In this case, there was no RI from interpolated lists, and no PI from prior lists.

Underwood argued that the source of PI is prior learning outside the laboratory.  In order to learn a list of CVC nonsense syllables, subjects must unlearn many prior verbal habits.  For example, in order to learn BOT, they have to unlearn highly familiar English words like BAT, BET, BIT, and BUT (note that, today, BOT might not count anymore as a nonsense syllable in the first place).  This unlearning, being analogous to extinction, is subject to spontaneous recovery.  And it's the spontaneous recovery of the previously acquired verbal habits that interferes with memory for the list.  This interference is proactive, as it represents the influence of previously acquired verbal habits.

Underwood's discovery of the importance of PI represents the culmination of the interference theory of forgetting, which at this point (1957) appeared to provide a complete account of learning and forgetting.

However, there's still a problem.  Underwood's theory held that forgetting is caused by the recovery of memories, and that the greatest source of interference is proactive -- either from lists previously learned during the experimental session or from prior language habits.  This recovery postulate is cute, but ultimately unsatisfactory, because recovery is something that occurs spontaneously with time -- and interference theory began by rejecting decay theory on precisely the same grounds.  You can't reject decay while accepting recovery.  Instead, you have to make spontaneous recovery something other than a primitive postulate -- you have to get actual evidence for it.

This evidence has proved hard to come by.

In further experiments using the MMFR procedure, subjects studied A-B, then A-D, and recall was tested at varying retention intervals, with subjects instructed to recall both targets associated with A.  In these experiments, there was a general failure to observe the spontaneous recovery of A-B.

This failure was explained by Postman, Stark, and Fraser (1968) in terms of response-set suppression.  In their view, the subject has learned A-B, and now must learn A-D.  While learning A-D, A-B is gradually unlearned.  On the later test, A-B spontaneously recovers, so that the subject actually remembers both A-B and A-D.  The subject also discriminates between the two lists, but suppresses the entire set of A-B responses.  Therefore, the A-B items don't appear in recall, and there is no evidence of spontaneous recovery.  

According to this view, spontaneous recovery has, in fact, occurred -- there's just no evidence of it.  Obviously, this is not a satisfactory resolution.  We need actual evidence of response-set suppression.

This evidence was provided by Postman et al. (1968) with a further modification of MMFR, which they called modified MMFR.  In this procedure, the experimenter provides the A cue, and also the D response, and the subject must only provide the B response.  This modification is intended to release suppression of the A-B responses.  Comparison conditions included standard MMFR, in which subjects gave both B and D responses, and standard recall of the A-B list.  The retention tests were conducted both immediately after learning the A-D list and after a delay.  The results were as predicted: providing the D responses released the suppression of the A-B responses, revealing the spontaneous recovery that standard MMFR had concealed.

So, in the end, interference theory was able to provide a fairly satisfactory account of forgetting, at least in its own terms.  Interference does occur, both retroactively and proactively.  But it also became clear that interference is the least of it.  Interference is supposed to be a passive, involuntary process in which memory for one list gets in the way of memory for another.  But response-set suppression makes it clear that the subject actually has a great deal of control over the process.  After spontaneous recovery has occurred, he can simply ignore the interfering items.  This self-regulation of memory is not what the interference theorists had in mind.

Accordingly, it was pretty evident that interference theory was getting very complicated.  By 1968, as the cognitive revolution in psychology was gaining momentum, it was clear that interference theory was about to collapse under its own weight.


The Fan Effect

[Figure: method of Anderson's (1974) fan-effect experiments]

Still, there's no question that interference occurs.  A contemporary spin on the concept has been provided by Anderson's (1974) studies of the fan effect.  In these experiments, a subject memorizes a series of sentences such as The doctor is in the bank and The lawyer is in the park.  There were 1-3 professions associated with each location, and 1-3 locations associated with each profession.  Memory was then tested via a sentence-verification procedure -- essentially a recognition test -- in which the subject is presented with sentences such as The doctor is in the bank and The doctor is in the park, and must indicate whether the sentence had been memorized or not.  Because the sentences had been well memorized, not just studied, this is pretty easy -- in fact, performance is typically perfect -- which is why the dependent variable in question is response latency, not accuracy.

[Figure: verification latency as a function of fan size]

Anderson's experiments varied the number of facts associated with each profession and location, and response latency increased as a function of the number of associated facts.
Thus, even when memory is perfect, in terms of accuracy, interference still occurs.  Memory for one fact slows retrieval of memory for other facts, and this is a kind of forgetting.
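To make the logic concrete, here is a toy Python model of the fan effect.  The linear latency function and its parameters are illustrative assumptions, not Anderson's fitted values; only the definition of "fan" -- the number of studied facts in which a concept participates -- follows the experiments described above:

    from collections import Counter
    from itertools import chain

    # A miniature study set of profession-location facts, in the style
    # of Anderson (1974).
    facts = [
        ("doctor", "bank"),
        ("doctor", "park"),
        ("lawyer", "park"),
        ("fireman", "park"),
    ]

    # The "fan" of a concept is the number of studied facts containing it.
    fan = Counter(chain.from_iterable(facts))

    def predicted_latency(person: str, place: str) -> float:
        """Toy linear model: verification time (in ms) grows with the
        combined fan of the probe sentence's two concepts."""
        base_ms, slope_ms = 900.0, 70.0  # illustrative parameters
        return base_ms + slope_ms * (fan[person] + fan[place])

    print(predicted_latency("doctor", "bank"))  # combined fan 3 -> faster
    print(predicted_latency("lawyer", "park"))  # combined fan 4 -> slower

The design choice here is just to make the interference visible in the arithmetic: every additional fact "fanning" off a concept adds to the predicted verification time, even though accuracy remains perfect.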


Toward the Cognitive Revolution

At the same time, it became clear that the dream of Ebbinghaus had come to naught.  He began memory research from the standpoint of associationism, and the idea that the mind is a tabula rasa written on by experience -- experience that formed associations between ideas.  He invented nonsense syllables to simulate this tabula rasa -- completely new stimulus materials, with respect to which the mind is, effectively, a blank slate.

But it turns out that the mind isn't a blank slate -- at least so far as Ebbinghaus' nonsense syllables are concerned.  Early on, Glaze (1928) had pointed out that "nonsense syllables" actually differed in meaningfulness, and meaningfulness facilitates verbal learning.  Subjects can often relate nonsense syllables to prior knowledge.  

On a personal note, I received my PhD from the University of Pennsylvania, where the Department of Psychology had a tradition of bestowing a nonsense syllable upon each of its new graduates (eventually they ran out of nonsense syllables, and turned to pseudo-words).  Mine was TUL, which I immediately committed to memory by relating it to Endel Tulving, a prominent memory researcher who had indeed been influential in my early research (and later research, too, for that matter).  TUL, ostensibly a meaningless CVC combination, actually held a lot of meaning -- for me. 

So you can't treat the mind as a blank slate, and treating it as if it is one -- by, for example, employing nonsense syllables as stimulus materials -- probably distorts our understanding of learning and memory.  

In 1968, interference theory was exhausted, and theory and research in human learning and memory needed a new approach.  Fortunately, just such an approach was right around the corner.  It came as part of the "cognitive revolution" in psychology, which began in 1948 or 1956 (depending on your point of view; I lean toward 1956), but really began to gain momentum in the mid-1960s.  The cognitive revolution completely altered our theories of memory, and also our methods for studying the subject.

 

This page last modified 10/16/2008.