Sherlock Holmes had very definite ideas about forgetting and its causes. In Arthur Conan Doyle’s story A Study in Scarlet, Holmes gives Watson the following stern lecture:
I consider a man’s brain is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best jumbled up with a lot of other things so that he has difficulty in laying his hands upon it…. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend on it, there comes a time when for every addition of knowledge you forget something you knew before.
As we’ll see, Holmes was invoking a fundamental principle of the Interference Theory of Forgetting, some 50 years before the theory was actually introduced. He was also suggesting an interpretation of what forgetting might consist of: difficulty in finding or locating information, rather than the actual loss of that information. This possibility was also raised by Hermann Ebbinghaus in 1885
in the opening paragraph of On Memory, the first systematic treatment of memory and forgetting:
Mental states of every kind—sensations, feelings, images—which were at one time present in consciousness and then have disappeared from it—have not with their disappearance absolutely ceased to exist. Although the inwardly turned mind may no longer be able to find them, nevertheless they have not been destroyed and annulled, but in a certain manner they continue to exist, stored up, so to speak, in the memory.
This is a provocative statement, appearing to suggest that most or all of what we consciously experience is stored and maintained in memory, even though we may not have conscious access to those memory records. Oddly, Ebbinghaus did not pursue this idea in On Memory, nor is there any evidence that Sherlock Holmes actually tested his hypotheses about memory. But these two quotes identify some of the basic problems that memory researchers grappled with over the next 100 years.
The time course of memory was first described in detail by Ebbinghaus over 100 years ago. He started with these general observations:
Left to itself every mental content gradually loses its capacity for being revived, or at least suffers loss in this regard under the influence of time. Facts crammed at examination time soon vanish, if they were not sufficiently grounded by other study and later subjected to a sufficient review. But even a thing so early and deeply founded as one’s mother tongue is noticeably impaired if not used for several years.
Ebbinghaus then went on to study the details of this process. How quickly are memories lost? Is there a steady loss until all of what has been learned has been lost? What does the curve of forgetting over time look like?
The general answer to this last question appears in Figure 4-1. These data come from an experiment in which subjects had to learn and remember a long list of words. Half of the subjects heard the list just once and the other half heard the list three times. The subjects were then tested for retention of what they’d learned, either right away or at intervals of one to four days. (These are usually referred to as forgetting curves, even though they actually show how much was retained.) There is a large drop in memory performance initially, followed by further but smaller losses as the retention interval gets longer. The rate at which retention loss occurs is greatest early on but diminishes as time goes by. These basic facts about forgetting hold across shorter and longer time scales, across the nature of the to-be-remembered experiences, and across the kind of person doing the remembering.

FIGURE 4-1 Forgetting of memorized materials over time.
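The shape of such a curve (a steep initial drop that then flattens out) is often approximated in the memory literature by a power function of time. The sketch below illustrates only that general shape; the functional form and the parameter values are assumptions for illustration, not the actual data behind Figure 4-1.

```python
# Hypothetical power-law forgetting curve: R(t) = a * (t + 1) ** -b,
# where a is retention immediately after learning and b is the decay
# rate. Both parameter values here are invented for illustration.

def retention(t_days, a=0.9, b=0.5):
    """Fraction of the material retained t_days after learning."""
    return a * (t_days + 1) ** -b

# The loss between day 0 and day 1 exceeds the loss between day 3
# and day 4: the rate of forgetting is greatest early on.
early_loss = retention(0) - retention(1)
late_loss = retention(3) - retention(4)
assert early_loss > late_loss
```

Whether real forgetting curves are better fit by power functions, exponentials, or something else is itself a research question; the only property this sketch relies on is the decelerating loss just described.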
The practical importance of these simple facts about the forgetting curve is apparent in the results of an experiment in which a group of factory and office workers were trained in the procedures for cardiopulmonary resuscitation until they had mastered the required skills. Three years later—a period during which few of the workers had any occasion to use the skills—retention tests showed that the skills and knowledge had deteriorated to about
15 percent of what they originally were, and the deterioration followed the classic curve of forgetting from three to 36 months.
John Anderson and colleagues at Carnegie-Mellon University have developed an interesting idea about why forgetting is sometimes useful or adaptive. The idea is basically that the longer information is stored in memory without being accessed or used, the less likely it is that losing that information will have any bad consequences. If you have not called on your once-upon-a-time knowledge of Latin declensions or your storehouse of information about baseball in the past five years, it is even less likely that you will in the next five years. Maintaining a large number of memory traces over long time periods has biological costs, which might be greater than the costs of allowing some traces to deteriorate.
Of all the things that influence whether or not you can successfully remember something when you need to, there’s one that stands out: how well that “something” was stored in memory to start with. This is referred to as the influence of degree of original learning, and it has been demonstrated experimentally many times. As Figure 4-1 shows, subjects who had three learning trials recalled more at each retention interval than subjects who had one learning trial. You are probably not very surprised by this outcome. However, consider another result of this experiment that also appears in Figure 4-1: The rate of information loss over time appears to be the same for the two degrees of original learning (that is, the two curves seem to have the same slope).
Interestingly, this principle also seems to apply to “slow learners” and “fast learners.” Test a group of 60 people in the word-learning experiment, giving everyone the same amount of time to commit the material to memory. Now divide the group into a subgroup of the 30 people with the highest scores on an immediate memory test and another subgroup of the 30 people with the lowest scores. Then retest them at different retention intervals
from one to four weeks. Again, to no one’s surprise, the fast learners will remember more than the slow learners—but both groups will have forgotten about the same amount. This surprising outcome has been found in comparisons of children and adults, younger adults and the elderly, schizophrenics and normal people, and people with and people without brain damage of various kinds. It is true that children, schizophrenics, and the brain-damaged learn much more slowly than do their respective comparison groups, but they do not appear to forget what they’ve learned any more rapidly.
Here’s another interesting variation on this theme. Imagine another word learning experiment in which all of the subjects learned a list of 30 words perfectly—100 percent correct at the end of the learning phase. Of course, some people would require only a few learning trials to get to the point where they could remember all 30 words, while others would need many more trials. What would you find when you tested retention one month later? You would find that there isn’t much, if any, difference in the amount recalled by the fast learners versus the slow learners.
In general, if you equate the initial learning for different people, you typically find that their later retention is about equal. As far as individual differences in memory are concerned, this suggests that the differences reside in the encoding or learning phase of the memory process. People with “good memories” store information faster or more efficiently than people with “bad memories,” but fast and slow learners do not seem to differ in how well their memory systems resist the forces of forgetting.
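The pattern described above, in which fast and slow learners differ in their starting level but not in their rate of loss, can be caricatured with made-up numbers. Everything here, including the simple linear loss rule, is an illustrative assumption rather than the actual data from Figure 4-1.

```python
# Two hypothetical groups with the same weekly loss (same slope) but
# different degrees of original learning. All numbers are invented.

def retained(initial, loss_per_week, weeks):
    """Words still recallable after `weeks`, assuming a constant weekly loss."""
    return max(initial - loss_per_week * weeks, 0)

fast = [retained(24, 3, w) for w in range(5)]  # fast learners start at 24 words
slow = [retained(12, 3, w) for w in range(5)]  # slow learners start at 12 words

# Fast learners recall more at every interval, but the gap between the
# groups stays constant: both forget about the same amount per week.
assert all(f >= s for f, s in zip(fast, slow))
```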
It has been known for many years that distributed practice in learning leads to better retention of the learned material than does massed practice. In a classic demonstration of this principle, Geoffrey Keppel at the University of California at Berkeley had two groups of subjects study a set of word pairs (such as tree–kitchen, bell–dream, and mark–shadow) so that they could recall the second member of each pair when given the first word as a cue (a time-honored method known as paired-associate learning). Both groups were given eight learning trials, but how these trials were arranged over time was different (see Table 4-1). Subjects in the massed practice group experienced all eight trials on the same day. Subjects in the distributed practice group experienced two learning trials per day, spread out over four days. All subjects were then given a test of retention one day or eight days after the last day of learning. The results were very interesting, particularly for those of us who use a cramming approach to studying. At the one-day retention interval, there was only a slight advantage for the distributed practice group. (This, of course, reinforces our cramming habit.) But at the eight-day retention interval, there was a very large difference (80 percent versus 30 percent) in favor of the distributed practice group, a difference of considerable practical significance.

TABLE 4-1 The Effects of Massed Practice and Distributed Practice

|                      | Monday     | Tuesday    | Wednesday  | Thursday     | Friday or one week later |
| Massed practice      | —          | —          | —          | Eight trials | Retention test           |
| Distributed practice | Two trials | Two trials | Two trials | Two trials   | Retention test           |
What caused the difference? One explanation is that in the particular kind of learning task that Keppel used, the distributed practice subjects (but not the massed practice subjects) found out over the course of the experiment that some of the word pairs they had learned on Monday were forgotten by Tuesday and that this material needed additional study and strengthening.
The difference in the effectiveness of massed and distributed learning procedures is by no means limited to an artificial laboratory task involving forming associations between arbitrary word pairs. It is important for the more general issue of long-term maintenance of knowledge and skills. In a nine-year experiment Harry Bahrick had four heroic subjects learn sets of 50 English–foreign language word pairs with 13 or 26 practice sessions occurring at two-, four-, or eight-week intervals. Widely spaced sessions (eight-week intervals) led to much better retention over periods of up to five years than more closely spaced practice sessions (two-week intervals). In fact, 13 training sessions spaced at 56-day intervals were as good as 26 training sessions at 14-day intervals! Bahrick and other researchers argue that findings such as these need to be taken seriously by administrators of training programs, including the rather expensive one called education. It now seems possible to design instruction and training in ways that will maximize the retention and maintenance of skills and knowledge.
What causes the decline of memory performance shown in Figure 4-1? Does it reflect an actual loss of information from memory or a decrease in how readily a person can access information that is still preserved in memory? Some answers to these questions, as well as a number of important facts about remembering and forgetting, can be illustrated by the following simple example. Suppose you’ve been a subject in a psychology experiment in which the experimenter has shown you a list of 40 common English words, one every five seconds, while you rated the pleasantness of each word’s meaning. A few minutes after you completed this task, the experimenter told you that he wanted to test your memory of the words in the list, using a variety of testing procedures. The standard names for some of these procedures and the procedures themselves appear in Table 4-2.
Start with Free Recall 1. What this kind of memory test will typically show is that your recall is quite imperfect. For example, by the end of a 10-minute recall period you may have come up with only 16 words. Your rate of recalling new words slows from minute to minute, and at the end of the recall period the curve is far below the maximum of 40 words.
TABLE 4-2 Probing Episodic Memory

| Test Method   | Instructions |
| Free Recall 1 | “Recall as many words as you can in any order they come to mind. You will have 10 minutes for this test.” |
| Free Recall 2 | “Try to recall the words again, including the ones you’ve already remembered as well as any new ones that come to mind.” |
| Cued Recall 1 | “One word that you haven’t recalled was the name of an animal.” |
| Cued Recall 2 | “One word that you haven’t recalled rhymed with ‘bother.’” |
| Recognition   | “Here’s a list of 20 words. Half of them were in the list you saw; half of them weren’t. Which ones are the list words?” |
| Implicit test | “Pronounce each of these words as fast as you can.” |
But has this recall effort actually exhausted your memory? Is it true that there’s no more information in your memory store about what words were in the list? Almost certainly not. This basic fact about memory can easily be demonstrated by continuing the memory testing. The next type of test is simple: Just repeat the free recall test (Free Recall 2). What is typically found is that some words will be recalled in the second test that were not recalled in the first test. This is called hypermnesia, a kind of growth (not loss) of memory over time. How many new words will typically be recalled on the second attempt? Generally, a relatively small percentage of the remaining ones—for example, 4 out of 24, or 16 percent. By the way, a simple explanation of hypermnesia is that at the end of the first 10-minute recall period, the rate of recall is quite low, but it isn’t zero. If you simply keep trying to recall for another 10 minutes, you will keep recalling new words (though this can be a frustrating and headache-inducing task).
At this stage, suppose you’ve been able to remember 20 of the 40 words. How can we further test your memory for the remaining 20? One way is to provide retrieval cues. Table 4-2 shows two versions of such a test. These kinds of cues are generally effective when a free recall test—a memory search without cues—starts to
come up empty. In our example, let’s suppose that the retrieval cues lead to the production of six more words, so memory has now been demonstrated to exist for 26 of the 40 words.
We’ll try a recognition test next: we take 10 of the remaining 14 words, mix them with 10 new words that were not in the original list, and make up a random ordering of them; your task is to select the list words. Once again, your performance on this test will most likely show further evidence of memory for words that were not initially accessible to your recall attempts.
Finally, we could use an implicit test of memory. All of the tests we’ve considered so far are classified as explicit tests of memory—you have been told to try to remember some specific information. For an implicit test we could take the four list words we didn’t use for the recognition test, mix them with some new words, and have you try to pronounce each of them as quickly as possible. Notice that you’re not being asked to remember anything. The idea is that if there are still some traces of information left in your memory about these four words, then it will probably show up in faster pronouncing of these words than of the four new words that were not in the list.
There are many other techniques that could be used to probe your memory for the word list (hypnosis, for example). The main point should be clear enough: More information is typically available in your memory than is accessible at a given point in time, in a given “retrieval environment.” Cues for retrieval are crucial for some kinds of remembering, and many failures to remember may actually be cases of cue-dependent forgetting.
Once again, these conclusions and principles are not limited in their applicability to simple laboratory experiments with word lists or other artificial materials. They are demonstrably true for memory of life experiences over intervals of many years. In a study by Harry Bahrick and colleagues, “392 high school graduates were tested for memory of names and portraits selected from yearbooks. The retention interval since graduation varied from 2 weeks to 57 years.” For subjects in their seventies, free (unaided) recall of classmates’ names averaged about 20 percent. When they
were given graduation photos as retrieval cues, they were able to recall about 50 percent of the names. And when they were given a recognition test, they averaged about 80 percent correct—50 years after graduation!
However effective various methods of prompting and cuing are, the basic fact remains that forgetting occurs. Once upon a time you could have recalled most of your classmates’ names, and you certainly would have been able to recognize all of them. What happened as time went by? Why can you no longer do this, and why do you now need memory prompts?
The ideas that memory decays as time goes by or that memory weakens with disuse have long been common answers to these questions, but there are problems with them as explanations. As the psychologist John McGeoch famously put it many years ago, “In time, iron may rust and men grow old, but the rusting and the aging are understood in terms of the chemical and other events which occur in time, not in terms of time itself.” McGeoch acknowledged that there were good reasons for thinking that over long periods of time, the biological basis of memory might deteriorate. His main argument against decay as a general explanation of forgetting was a simple and powerful one: Over a given period of time, the amount of forgetting that occurs can be increased or decreased by varying the nature of the events that occur during that time interval, particularly in ways that interfere with memory for a given event. The process is known as retroactive interference and is a basic and very general cause of forgetting.
Alan Baddeley demonstrated an interesting example of retroactive interference in everyday life events (see Table 4-3). He was able to arrange and control where people parked when making two successive visits to a clinic. The experiment went like this:
TABLE 4-3 An Experiment on Memory and Retroactive Interference

| Monday             | Wednesday          | Next Monday                       |
| Park in location A | Park in location B | “Where did you park last Monday?” |
| Park in location A | —                  | “Where did you park last Monday?” |
Some visitors came to the clinic on both Monday and Wednesday of a given week, while other visitors came only on Monday. The following Monday they were given a test of memory for where they had parked the previous Monday. The results of the experiment were clear: People who had visited the clinic twice had much poorer memory for this information than people who had visited the clinic only once. In other words, the experience of visiting the clinic on Wednesday somehow impeded remembering the Monday experience.
Retroactive interference can be created experimentally in many different ways. In one experiment with six-year-old children, the experimenters read either one or two stories to different groups of children and then tested their memory for the first story a week later. The children who had heard only one story were much better at answering questions about it than were the children who had heard two stories.
So it seems that Sherlock Holmes was right when he said that “there comes a time when for every addition of knowledge you forget something you knew before.” However, he did leave out something important: the influence of similarity.
In experiments of the kind considered here, the amount of forgetting caused by events that occur after the creation of memory for earlier events is typically much greater when the two sets of events are similar. In the experimental condition of Baddeley’s clinic visit experiment, the same situation (visiting the clinic) was associated with two different sets of information about parking. Under conditions like those in Baddeley’s experiment, this results in loss of access to memory about the events associated with the first experience.
It seems reasonable to conclude that retroactive interference is an important cause of forgetting life experiences in general. For example, we’ve seen in Bahrick’s research on memory of high school classmates’ names over periods of many years, that unaided recall of names was quite poor. According to the Interference Theory of Forgetting, the principles of retroactive interference and similarity explain this in terms of the very large amount of information about other names and other people entered into memory between the time the subjects left high school and the time their memories were tested many years later.
Interference Theory also identifies a second kind of process known as proactive interference. Baddeley’s clinic visit experiment included conditions that were intended to evaluate the importance of this second kind of interference, with the arrangement shown in Table 4-4. This experiment was designed to see what effects a prior experience (the Monday visit) would have on retention of a later experience (the Wednesday visit). The effect, it turns out, was a negative one: Subjects who visited the clinic only once (on Wednesday) were better at remembering where they had parked than subjects who had made a previous visit.
TABLE 4-4 An Experiment on Memory and Proactive Interference

| Monday             | Wednesday          | Next Monday                          |
| Park in location A | Park in location B | “Where did you park last Wednesday?” |
| —                  | Park in location B | “Where did you park last Wednesday?” |

It is not known exactly how the process of interference works. The classical interpretation of retroactive interference is that it results from associative unlearning. In Baddeley’s experiment we may suppose that information in memory about going to the clinic on Monday becomes associated with information about a parking location, and that this association is somehow weakened when another, later visit is associated with new information about parking location. This is perhaps something like the process of overwriting of information in a computer’s memory. Michael Watkins has described this as the cue overload principle: A retrieval cue such as “visiting the clinic” loses its ability to access a memory when there is too much information attached to it.
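Watkins’s cue overload principle can be caricatured in a few lines of code: if probing memory with a cue amounts to sampling among everything associated with that cue, then each additional association dilutes the chance of retrieving the one you want. This is only a toy model; the sampling rule and the names used here are assumptions for illustration, not Watkins’s formal proposal.

```python
# Toy model of cue overload: a cue retrieves one of its associated
# memories at random, so the chance of finding a particular target
# falls as more items pile up on the same cue.

cue_associations = {}

def associate(cue, memory):
    """Attach a memory to a retrieval cue."""
    cue_associations.setdefault(cue, []).append(memory)

def retrieval_probability(cue, target):
    """Chance of sampling `target` when probing memory with `cue`."""
    items = cue_associations.get(cue, [])
    return items.count(target) / len(items) if items else 0.0

associate("visiting the clinic", "parked in location A")
p_one_visit = retrieval_probability("visiting the clinic", "parked in location A")

associate("visiting the clinic", "parked in location B")
p_two_visits = retrieval_probability("visiting the clinic", "parked in location A")

assert p_one_visit == 1.0 and p_two_visits == 0.5  # the cue is now overloaded
```

The design choice is deliberate: nothing is erased from the store when the second memory is added; the first memory only becomes harder to reach through the shared cue, which matches the availability-versus-accessibility distinction drawn earlier.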
In 1940 John McGeoch proposed that some amount of ordinary forgetting occurs because of differences between the circumstances existing at the time a memory is formed (the learning context) and the circumstances existing at a later time when a person needs to access the memory (the retrieval context). The key idea here is that change of context is detrimental to memory retrieval. This has been demonstrated in many different settings. One is reinstatement of environmental context. Suppose you were able to return to your high school environment—to your old homeroom or cafeteria or gymnasium. Would this improve your ability to recall classmates’ names? Experiments have shown that the original context does aid retrieval of memories—sometimes. Probably the best-known example of this is Alan Baddeley’s experiment in which members of a diving club studied a list of words on land or while submerged and were later tested for word memory on land or while submerged. The important result of the experiment was that recall was better when the learning and testing environments matched, and recall was not as good when the environments had changed between learning and testing.
Many different kinds of experiments along these lines have been conducted. For example, it has been shown that congruency (sameness) of moods aids recollection. Research subjects who memorize material while in a negative mood state tend to recall the material better on a later test if they are again experiencing a negative mood. Happily, this is also true for positive moods.
The importance of context has also been found in studies of the effects of drugs on memory. Information learned in a drugged state is often better remembered when the person’s memory is later tested while in a drugged state than when the person is sober
(a change of state). The opposite also holds: Information learned in a sober state is remembered better when tested in a later sober state than in a drugged state. (Of course, the best state to be in when learning and when later remembering is the sober state.) Striking instances of this state-dependency effect can be seen in addiction. For example, rats that have developed tolerance to large doses of heroin in one environmental context will overdose and die if the same dose is given to them in a very different environment.
In general, reinstating the context of learning can lead to improved remembering. However, there seem to be limits to the effectiveness of context reinstatement in human learning. One such limit was shown convincingly by William Saufley at the University of California at Berkeley. In a series of 21 experiments in seven different courses, students attended lectures in a given classroom throughout a semester, and then half of the students took their tests in the same classroom while the other half took tests in a different room. This had no effect at all on test performance, possibly because the kind of learning that occurs in lectures (when any does occur) becomes “decontextualized”—that is, a person’s understanding and remembering of a calculus procedure is not tied in any important way to the physical context in which learning occurred.
Can people in a hypnotic trance recall a great deal of accurate information about events they’ve witnessed? Yes. Does hypnosis improve memory? Could we add it to the list of effective memory-recovering techniques we’ve considered? No. These questions and answers are a simple summary of current scientific understanding of the effects of hypnosis on memory retrieval. It’s important to understand why they’re not contradictory.
The use of hypnosis in witness questioning is well illustrated by an event in 1976 known as the Chowchilla (California) kidnapping, in which a school bus was taken over at gunpoint and hidden—bus, driver, and children—in an underground chamber. The
driver and children eventually escaped, and the police questioned the driver to obtain a description of the culprits. The driver was initially able to recall some details, including two digits of a license plate. Then he was hypnotized, which usually involves instructions such as the following:
Turn loose now, relax. Let a good, pleasant feeling come all across your body. Let every muscle and every nerve grow so loose and so limp and so relaxed. Arms limp now, just like a rag doll. That’s good. Now, send a pleasant wave of relaxation over your entire body, from the top of your head to the tips of your toes. Just let every muscle and nerve grow loose and limp and relaxed. You are feeling more relaxed with each easy breath that you take. Droopy, drowsy and sleepy. So calm and so relaxed. You’re relaxing more with each easy beat of your heart … with each easy breath that you take … with each sound that you hear.
The bus driver was then interviewed and was encouraged to try hard to recall and reexperience the original event. Under these conditions he did remember many more details, some of which were used to track down and arrest the culprit.
Is this proof of the efficacy of hypnosis? No, because there are problems with concluding from this incident that hypnosis improves retrieval from memory. One may have already occurred to you: hypermnesia! It’s common to be able to recall things on a second recall attempt that you didn’t remember on an earlier attempt. No hypnotic trance is needed for this to happen. The recovery of memory under hypnosis may be due to the instructions that encouraged the bus driver to spend more time trying to recall.
To demonstrate the special usefulness of hypnosis in memory retrieval, it must be shown that hypnosis promotes better recollection than the recollection that would occur without hypnosis. How should you go about determining what the special effects of hypnosis are on memory, if any? It’s actually pretty straightforward (see Table 4-5). For example, arrange for a group of test subjects to witness a staged event. Then, at some later point, hypnotize a random half of the subjects and compare the accuracy of their recall or recognition to the nonhypnotized subjects (the control group). Make sure that the only difference is that the experimental group is in a trance state and the other isn’t at the time of the memory test.

TABLE 4-5 How to Test the Effects of Hypnosis on Remembering

| Experimental Group | Witness event | Induce a trance state | Test memory |
| Control Group      | Witness event | No trance state       | Test memory |
The recall instructions, the behavior of the interviewer, the encouragement given to the witness, and the amount of time allowed for recall are all the same. Under these conditions, with these controls, you typically do not find any difference between the two groups in the amount recalled. In some studies the hypnotized subjects actually do worse in the sense that they make more recall errors (confabulations) than do the nonhypnotized subjects.
Marilyn Smith at the University of Toronto reviewed many studies along these lines, some of them laboratory studies and some of them done under highly realistic conditions. She arrived at this conclusion: “When proper control subjects are used and they attempt to recall the same material as hypnotized subjects, with relevant variables held constant, performances for the two groups typically do not differ.” The same conclusion seems to apply to the effects of a procedure called hypnotic age regression in which a person in a trance state is led to “regress” to an early time in their life (typically early childhood) and is encouraged to act as they did at that point in their lives. It is not difficult to get adults to act in a childish way—some of us do this occasionally without hypnosis—but the research on hypnotic age regression has not supported the sensational claims that were once made for it (that people could remember details of their fourth birthday party, or extensive details about a childhood illness, for example).
As Smith puts it, “there has developed among police and other investigative agencies an unshakeable belief that through the appropriate uses of hypnosis otherwise irretrievable memories may
be recalled.” Where does this belief in the efficacy of hypnosis come from? It’s probably based on a cognitive illusion. From the fact that hypnotized witnesses can recall large amounts of information when they’re hypnotized, it’s concluded that this happens because of the hypnosis. But, as we’ve already seen, they don’t recall any more than they would if they weren’t hypnotized. You need a control group!
Perhaps you’re thinking that this is an academic point. After all, a memory testing procedure that includes hypnosis does work. True enough, but hypnosis increases recall errors, the nature of hypnosis itself is not well understood, and there are equally effective and more scientifically respectable procedures available—such as the Cognitive Interview (see Box 4-1).
If you are in your twenties or thirties, depend on it: There will come a time (and it starts before “old age”) when you simply can’t rely on your memory the way you can now. The magazine or journal articles that you read last week and noted as interesting or important—you can’t assume, as you once did, that you will remember them or that they will come to mind when they should come to mind. When these things start to happen often enough, you’ll be experiencing what’s euphemistically called benign senescent forgetting, a general but not pathological decline in memory (see Box 4-2). Senescent forgetting is sufficiently benign that it can be made light of in jokes, and even rationalized as having a positive aspect, as in the adage “everything old is new again.”
Hebb’s experiences, as described in Box 4-2, are typical of normal aging. The decline in memory ability with age can be relatively minor, even into the seventies and later and is not to be confused with senile dementia such as Alzheimer’s disease—which, while far and away the most common form of senility, develops in considerably less than half of people in their early eighties.
BOX 4-1 Research demonstrating the important role of context in memory has led to an interesting and useful technology known as the Cognitive Interview, a procedure designed by Ronald Fisher, Edward Geiselman, and others to improve the results of witness interviews. The Cognitive Interview is based on Endel Tulving's Encoding Specificity Principle, which in most respects is an updated version of McGeoch's ideas about context and memory. This principle states that any information stored in memory along with information about some target event (a robbery, for example) can later aid recall of the target information. The Cognitive Interview consists of attempts by the interviewer to encourage the witness to think about (to mentally reinstate) the context of the event in question. It uses these specific procedures and instructions:

Reinstate the Context: Try to reinstate in your mind the context surrounding the incident. Think about what the surrounding environment looked like at the scene, such as rooms, the weather, any nearby people or objects. Also think about how you were feeling at the time, and think about your reactions to the incident.

Report Everything: Some people hold back information because they are not quite sure that the information is important. Please do not edit anything out of your report, even things you think may not be important.

Recall the Events in Different Orders: It is natural to go through the incident from beginning to end. However, you also should try to go through the events in reverse order. Or, try starting with the thing that impressed you the most in the incident and then go from there, working both forward in time and backward.

Change Perspectives: Try to recall the incident from different perspectives that you may have had, or adopt the perspectives of others who were present during the incident. For example, try to place yourself in the role of a prominent character in the incident and think about what he or she must have seen.
The Cognitive Interview has been found to be more effective than standard witness interview procedures and has been adopted by police and investigative agencies. The procedure is simple and inexpensive, requiring a training session of 30 minutes, and it is based on a solid science of memory.
BOX 4-2 Donald Hebb was a pioneering figure in the study of memory and brain processes. At the age of 47 he had a terrifying experience. He was reading a scientific article that was closely related to his own interests. He came upon a passage he thought was particularly important for his work and said to himself, "I must make a note of this." He then turned the page and found a penciled note about this passage in his own handwriting! He was shocked. He had never before forgotten anything that particularly interested him. He began to worry that perhaps he was experiencing early signs of senility.

As it happens, Hebb was extremely busy at the time. He was director of a new laboratory, had major research projects going, was writing extensively, and was chair of his department. His memory was simply overloaded. He slowed down a bit, cut back on administrative activities, and took more leisure time. As a result, his memory for what he was reading came back to its "normal, haphazard effectiveness."

Hebb reported this experience in an article, "On Watching Myself Get Old," written when he was 74. Although he felt he was by then experiencing some decline in his memory and thought patterns, he was still an active scientist and writer. Indeed, the editor of the magazine that published his article commented, "If Dr. Hebb's faculties continue to deteriorate in the manner he suggests, by the end of the next decade he may only be twice as bright and eloquent as the rest of us."
If we compare 75-year-olds with 35-year-olds today on tasks of memory and intelligence, the 35-year-olds will of course perform better, but this does not necessarily mean that all forms of memory and intelligence decline with age. Longitudinal studies in which the same individuals have been tested several times over their life span show that some forms of intelligence increase through the late seventies. Furthermore, people who are 60 today perform significantly better on memory tests than people who were tested at age 60 in 1942.
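The logic of that comparison can be made concrete with a toy simulation (every number here is invented purely for illustration): if each later-born cohort starts from a higher baseline, a cross-sectional comparison exaggerates the true age-related decline that a longitudinal design reveals.

```python
def simulated_score(age, birth_year):
    """Invented toy model: a mild true decline with age, plus a
    generational (cohort) gain for people born later."""
    cohort_baseline = 50 + 0.2 * (birth_year - 1900)  # later cohorts score higher
    aging_decline = 0.1 * max(0, age - 30)            # mild genuine decline
    return cohort_baseline - aging_decline

# Cross-sectional comparison in the year 2000: different people, different cohorts
young = simulated_score(35, 1965)
old = simulated_score(75, 1925)

# Longitudinal comparison: the same person (born 1925) tested at 35 and at 75
person_at_35 = simulated_score(35, 1925)
person_at_75 = simulated_score(75, 1925)

print(young - old)                  # cross-sectional gap: aging plus cohort effect
print(person_at_35 - person_at_75)  # true decline for one individual: much smaller
```

In this made-up model the cross-sectional gap (12 points) mixes genuine aging with the cohort difference; following one person across 40 years shows only a 4-point decline.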
What do we know about the normal course of memory change with increasing age? Beginning in 1988, Lars-Göran Nilsson and colleagues at Umeå University in Sweden conducted one of the most extensive studies yet of aging and memory. They gave a highly varied set of memory tests to groups of people from ages 35 to 80 (a cross-sectional comparison) and have repeated the testing on a yearly basis on these same people, adding new groups each year (a longitudinal comparison). The test battery included tests of immediate memory, working memory, memory for word lists, episodic-declarative memory, semantic memory, language use, and implicit memory. The study was notable for the extensive health information that was available as well.
One of the major findings of this study was that aging is associated with declines in almost any kind of test of explicit episodic memory for new information. (Older subjects also seemed more prone to false memory—see Chapter 6.) Moreover, these declines were not due to health problems. Age impairments showed up most dramatically in the learning of new material, that is, in trying to store new information in long-term memory. The episodic memory decline found with increasing age was not precipitous at any one age, and the extent of the decline was quite variable in any one age group. Interestingly, in the Swedish study, women’s episodic memory was better than men’s in all age groups, and this did not seem to be due to the women’s better vocabulary scores.
The good news from the research was that certain aspects of semantic memory, notably language comprehension, showed little or no decline up to at least age 75 when the influence of educational level was taken into account. Also, some kinds of implicit learning declined little, if at all, with increasing age. For example, both older and younger subjects displayed significant priming effects (benefits from previous exposure to an event).
Other research, however, seems to indicate that even semantic memory becomes problematic with increasing age. Possibly the most irritating and familiar problem is difficulty in finding a particular word, that is, in accessing one's "mental dictionary." This occurs at all ages but becomes more frequent as we age, and it also becomes harder to resolve by finally accessing the information.
What are the causes of aging-induced forgetting? There is fairly general agreement among those who specialize in aging and memory that the impairments observed in episodic memory tasks are to a considerable extent due to reduced speed of cognitive operations and reduced attentional resources, resulting in less information being encoded into memory. This suggests that age-related memory problems are most likely to occur when events are brief or occur in quick succession, or when attention must be divided or rapidly shifted. When elderly people are given a dual listening task, with a different message presented to each ear, they do not perform as well as younger individuals. Elderly people must make a greater effort to attend to the task, and their ability (or possibly their willingness to expend the effort) is somewhat reduced.
General cognitive slowing cannot be the whole story, though. For example, it does not seem to account for word-finding problems. Here we must consider the possibility that semantic memory is not so permanent after all and that actual degradation of semantic memory traces, or of the retrieval paths leading to them, or outright loss of information from semantic memory may occur with advancing age. This leads us to consider some of the biology of aging as it relates to memory and learning.
We’ll start with some general observations about the biology of aging. Life spans vary wildly in life forms. C. elegans, a little worm that lives in dirt and is a favorite of neuroscientists because it has only 302 neurons and 7,000 synapses, all of which have been identified, lives no longer than a month. At the other extreme, redwood trees live for thousands of years. Among animals, a type of clam called a quahog can live for more than 200 years, and lobsters can live for 100 years or more, as can sturgeon and turtles.
But these are the exceptions. Most animals live considerably less than the 100-year maximum life span of humans. The average human life span has, of course, increased dramatically in the past century thanks to improvements in medicine and nutrition. But the maximum human life span has always been about 100 years.
The fact that the maximum life span for humans has not increased despite better medicine, including elimination of many diseases, suggests that there may be built-in aging factors. For a long time it was thought that the organs were primarily responsible for aging; the heart, kidneys, and other organs simply wore out. It is now known that this is not the entire answer. Investigators grew cultures of normal body cells taken from people of different ages. Cells from a human embryo double about 50 times before they die, whereas cells taken from a middle-aged human divide only about 20 times before they die.
Is this control on cell aging in the DNA of the cell nucleus the “prime contractor,” or is it in the cell bodies outside the nucleus, the “subcontractors”? The investigators exchanged the nuclei in human embryo cells and adult cells, and found that whether a cell body was from an embryo cell or from an adult cell, the cell divided only about 20 times if the nucleus was from an adult. If the nucleus was from the embryo, the cell divided about 50 times. These experiments suggested that part of the aging process is genetic, or under the control of the DNA in the cell nucleus. The only kind of human cell that is immortal is the cancer cell.
A genetic time clock that regulates the number of times a human body cell divides cannot, however, be the whole explanation of the aging process. The most important cells in the human body, the neurons in the brain, never divide after birth, although new neurons are formed from stem cells throughout life. Therefore, resetting of the genetic aging clock in body cells would not solve the problem of possible deterioration of the brain.
For many years it was thought that normal aging is accompanied by substantial losses of neurons in the brain, particularly in the cerebral cortex. Indeed, classical studies counting the number
of neurons in particular regions led to estimates of something like a 50 percent loss of neurons in the neocortex by age 95. We now think that those findings were not correct but were instead due to artifacts in the methods used to count cells. New and much more accurate procedures for determining cell numbers were developed by Mark West in Denmark and others. These new studies indicate that many areas of the brain do not show significant loss of neurons in aging. Some parts of the brain do show neuron losses. Three examples are the acetylcholine-containing neurons in the basal forebrain, a region of the hippocampus, and the Purkinje neurons in the cerebellar cortex. The loss of Purkinje neurons may account for the fact that it is more difficult for elderly people to learn new motor skills; in other words, you can't teach an old dog new tricks.
All of this is not to say that there are no major changes in the brain as we age. A basic one is a slow but steady decline in brain weight that appears to begin around age 25. Fergus Craik at the University of Toronto has argued that basic changes in the central nervous system that occur in normal aging (declines in brain size, metabolism, blood flow, and neurotransmitters) may be the causes of general cognitive slowing and reduced attentional powers, and that impairments of these basic processes limit how well information gets registered and stored in memory. This fits with experimental demonstrations that older subjects, just like younger ones, retrieve successfully when steps are taken to promote effective processing of new information and when the retrieval environment provides good support, as in Bahrick's demonstration that elderly people were able to match names and faces over retention intervals of 50 years or more.
The role of sleep and dreaming in memory storage and retrieval has fascinated people for a long time. Why is it that our memories for dreams are so fleeting and fragmentary? Does sleep improve
memory? Does dreaming improve memory? Can we learn while asleep?
Sleep is an extraordinary mystery that we all take for granted. After nearly a century of research we still have no idea why sleep is necessary. Without sleep, animals die and normal humans go mad and, according to some reports, eventually die. As of this writing we have no understanding of why this is so. An early hypothesis was that sleep provided the body with rest so metabolic functions could recover after a period of waking activity. Actually, sleep is no better than reading a book as far as metabolic activity is concerned.
Virtually all animals with nervous systems are thought to sleep. We do know a great deal about the brain's control of the wake-sleep cycle, the so-called circadian rhythm. Humans and most other primates, and some birds of prey, rely heavily on vision and are active by day. Animals such as mice that serve as prey sensibly remain inactive by day. Unfortunately for mice, owls are active at night, hunting by acute hearing and night vision.
Patterns of sleep vary widely in animals. The opossum sleeps 19 hours out of every 24 and the giraffe sleeps only about 2. Some birds sleep with only one brain hemisphere at a time, with one eye closed and one eye open. It appears that dolphins also show a similar pattern of brain sleep; only one hemisphere sleeps at a time. Presumably if both hemispheres slept at the same time, birds might fall and dolphins drown.
When someone flies halfway around the world, it takes about a week to reverse the normal wake-sleep cycle to correspond to the new night and day. It used to be thought that the sleep cycle was controlled directly by the cycle of day and night. Indeed, exposure to bright light can speed this reversal. However, even people who have lived in caves with a constant low level of illumination for months show a normal wake-sleep cycle. Actually, they settle down to a 25-hour cycle rather than 24 hours.
The wake-sleep cycle is not directly controlled by the external light-dark (day-night) cycle but is influenced or entrained by it. There is a little group of neurons at the base of the brain just above the optic nerves from the eyes that serves as the master clock controlling our wake-sleep cycles. It has the rather forbidding name suprachiasmatic nucleus (SCN). Nerve cells there exhibit increased activity by day and decreased activity by night. They exhibit this circadian cycle independent of any control from other sources. The neurons have internal clocks. Some optic nerve fibers from the eyes project directly to the SCN neurons and entrain them to the external day-night cycle. But the wake-sleep cycle exists in the SCN neurons even if they are not activated by the optic nerve fibers.
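The entrainment idea can be sketched numerically. In this toy model (all parameters are illustrative, not physiological), the internal clock free-runs at about 25 hours, so without light input it drifts an hour per day; a daily light-driven phase correction, standing in for the optic-nerve input to the SCN, keeps the drift near zero.

```python
def run_clock(days, light_entrained, intrinsic_period=25.0):
    """Toy SCN model. 'phase' is the clock's drift (in hours) relative
    to the external 24-hour day-night cycle."""
    phase = 0.0
    for _ in range(days):
        phase += intrinsic_period - 24.0   # free-running drift: +1 hour per day
        if light_entrained:
            phase -= 0.8 * phase           # light input corrects most of the drift
    return phase

# Entrained clock: drift stays bounded near zero.
# Free-running clock (like a cave dweller under constant dim light):
# drift accumulates about an hour per day.
print(run_clock(30, True))
print(run_clock(30, False))
```

With light, the drift settles at a small constant offset; without it, a month of free-running leaves the clock about 30 hours out of step, which is the cave-dweller pattern described above.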
There is one other aspect to this intriguing story: the third eye. All vertebrates, including humans, have a "third eye," the pineal gland. In lower vertebrates it is just under the skull at the top of the brain and has photoreceptors directly responsive to light shining through the skull. In higher vertebrates, including humans, it is buried in the depths of the brain. It still has vestigial remnants of photoreceptors. However, a circuit of neurons connects the SCN with the pineal gland. When darkness falls, or rather when the SCN goes into its darkness-sleep mode, it causes the pineal gland to release the hormone melatonin, which helps induce sleep.
Melatonin is sold over the counter at drug stores and health food stores, and many people take it to help get over jet lag. Robert Sack, a physician at the Oregon Health Sciences University in Portland, is an expert on sleep and melatonin. He points out that if you take the hormone at a time when the pineal gland is already releasing it, namely when you normally go to sleep, it will not be effective. Instead, take it when you would like to sleep at your destination. Suppose you travel from Los Angeles to London. You need to advance your clock eight time zones from Los Angeles to London. Take melatonin at 3 p.m. Los Angeles time on the day of departure, which is 11 p.m. in London. On the next three days or so take the melatonin an hour or two earlier each day, by the London clock (10 p.m. on day 2, 9 p.m. on day 3). It should help. Interestingly, for most people it is easier to fly west than east. Since the natural SCN clock runs on a 25-hour cycle, it is easier to lengthen your day than to shorten it.
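The dose arithmetic is easiest to follow in a single clock frame: 3 p.m. in Los Angeles is 11 p.m. in London, and the later doses (10 p.m., 9 p.m.) each step an hour earlier by the London clock. A sketch of such a schedule follows; the helper function is hypothetical and purely illustrative, not a medical recommendation.

```python
from datetime import datetime, timedelta

def melatonin_schedule(first_dose, hours_earlier_per_day, days):
    """List one dose time per day, each shifted earlier than the
    previous day's dose to drag the body clock forward."""
    times = []
    dose = first_dose
    for _ in range(days):
        times.append(dose.strftime("%H:%M"))
        # next calendar day, but earlier on the clock
        dose = dose + timedelta(days=1) - timedelta(hours=hours_earlier_per_day)
    return times

# Day-of-departure dose at 11 p.m. London time, then one hour earlier
# on each of the next two days (matching the 10 p.m. / 9 p.m. example):
print(melatonin_schedule(datetime(2024, 6, 1, 23, 0), 1, 3))
```

This prints `['23:00', '22:00', '21:00']`, the one-hour-earlier-per-day pattern described above.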
Sleep occurs in two states: sleep with rapid eye movement (REM) and sleep with no rapid eye movement (NREM). REM sleep is deeper, and the eyes make rapid jerky movements behind the closed lids, as though the person is dreaming. Indeed, when people are awakened from REM sleep, they report typical dreams, including vivid and fantastic images and events. When awakened from NREM sleep, they also report dreams, but ones that are pale and have more the quality of daydreams. The patterns of EEG activity, the brain waves recorded from the surface of the scalp, also distinguish REM from NREM sleep. The NREM pattern consists of slow waves (anywhere from 1 to 12 hertz), which were earlier thought to be typical of all sleep. In the REM state, however, brain activity closely resembles the waking state, with low-voltage fast activity, much faster and more irregular than the slow waves of NREM sleep.
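The slow-versus-fast distinction can be illustrated with a toy classifier on simulated signals. This is emphatically not how sleep labs score stages (real scoring uses multiple channels, epochs, and expert criteria); it only sketches the frequency contrast just described, using a crude zero-crossing estimate of oscillation rate.

```python
import math

def dominant_rate(signal, sample_rate):
    """Estimate oscillation rate (Hz) from zero crossings: a pure sine
    crosses zero twice per cycle, so rate = crossings / 2 / duration."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0))
    duration = len(signal) / sample_rate
    return crossings / 2 / duration

def classify(signal, sample_rate, slow_wave_cutoff=12.0):
    """Crude label based on the 1-12 Hz slow-wave range quoted above."""
    if dominant_rate(signal, sample_rate) <= slow_wave_cutoff:
        return "NREM-like"
    return "REM/wake-like"

rate = 256  # samples per second
t = [i / rate for i in range(rate * 2)]               # two seconds of signal
slow = [math.sin(2 * math.pi * 2 * x) for x in t]     # 2 Hz slow wave
fast = [math.sin(2 * math.pi * 30 * x) for x in t]    # 30 Hz fast activity
print(classify(slow, rate))   # NREM-like
print(classify(fast, rate))   # REM/wake-like
```

The zero-crossing trick works here only because the toy signals are clean sine waves; real EEG would need spectral analysis.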
Do animals dream? Mammals all show alternating periods of NREM and REM sleep. If you watch a dog sleeping, you will see periods when its paws start twitching and it makes little growling sounds. If you look closely, you will see its eyes moving rapidly under the closed lids (REM sleep). The impression that the dog is dreaming, perhaps having a good chase, is compelling. This raises again the issue of consciousness. Dreaming, after all, is another form of awareness. To believe that consciousness or awareness is the exclusive province of humans is a very parochial view.
Memories of dreams are fleeting, but if awakened abruptly from REM sleep, people can give detailed descriptions of their dream experiences. What about people with severe amnesia like the patient HM? These people cannot remember their own experiences in the waking state. But according to one report, when awakened from REM sleep, they can remember and report their dream experiences.
Can a person learn while asleep? Suppose you are trying to learn French. One approach would be to play taped lessons to yourself at night while you sleep. (It’s been said that the best place to learn French is in bed, but that’s probably because of something other than the effects of sleep on memory.) There have been many studies of this approach, with varying results. Some researchers report enhanced learning following sleep exposure to lessons and others are less positive. An important qualification in many of these studies is that no measures were taken of whether the person was actually asleep or instead had been wakened to some degree by the taped message. One recent experiment on learning during sleep eliminated these problems by monitoring the electrical activity of the brain while word lists were read repeatedly to sleeping subjects, and making sure that the subjects remained in REM sleep. The results of the experiment were clear: There was no evidence for any kind of memory formation for events that occurred during sleep, in tests of either implicit or explicit memory.
Two remarks about this experiment and its findings: First, note the contrast with the results of the anesthesia experiment described in Chapter 1, in which implicit memory traces were apparently formed by one of the memory systems of the brains of unconscious subjects. It’s not obvious how to reconcile these two sets of findings, especially since patients under deep anesthesia are very much less responsive to external events than subjects who are merely sleeping. Second, keep in mind that this sleep learning experiment deals with memory for external events that occurred during sleep. Later in this chapter strong evidence is presented that sleep does, in fact, have major influences on memory for experiences that precede a period of sleeping.
Being repeatedly awakened before you would awake naturally would probably impair many cognitive functions. Depriving humans and animals of sleep definitely impairs performance on a number of learning tasks. This is particularly true if subjects are awakened repeatedly during REM sleep, leading to long-term REM deprivation. But is this due to lack of sleep or to stress? Being repeatedly awakened is a stressful situation. The impairment in learning is much greater when learning new tasks than when repeating older, well-learned tasks. Interestingly, deprivation of NREM sleep is much less disruptive than REM deprivation.
There is evidence from the animal research literature that rats show increased REM sleep when they are being trained on various tasks. However, REM sleep levels return to normal once the animals have mastered the tasks. In control studies, animals that are given tasks not involving learning do not show increases in REM sleep.
One clear result of learning and sleep research is that material learned just before a night's sleep is better retained the next morning than material learned in the morning and tested at the end of the day. Dramatic examples can be seen in the learning of motor skills. In one study, subjects practiced a complex sequential finger-tapping task and then either slept normally or stayed awake. Sleep after practice enhanced the speed of performance by about 34 percent compared with being awake after practice. Perhaps there is a lesson here for athletes.
A simple explanation of the effects of sleep on remembering is lack of interference. As we stressed in our discussion of forgetting, learning new material interferes with the memories of earlier learned material. But there is more to this story. Some remarkable discoveries by Matthew Wilson at the Massachusetts Institute of Technology and Bruce McNaughton at the University of Arizona suggest that during sleep the brain actually rehearses material learned during the day.
We take a step back in time to describe the discovery of "place" cells in the hippocampus by John O'Keefe in London some years ago. He recorded the activity of single nerve cells in the hippocampus of animals running through simple mazes. He found that a given neuron would reliably increase its discharge frequency (action potentials) at one and only one small place in the maze. Other neurons would code other places in the maze, so that by recording from a number of such neurons, the entire space of the maze could be coded. Wilson, McNaughton, and others developed ingenious methods for recording simultaneously from a number of single neurons in the hippocampus of the rat or mouse as the animals ran in mazes. The results were remarkable. By looking at the patterns of activity in the "place" neurons being studied, they could tell exactly where the animal was in the maze.
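The decoding idea can be sketched in a few lines: give each hypothetical cell a tuning curve peaked at its place field, then find the position whose predicted population firing best matches the observed rates. Real analyses use probabilistic decoders on spike trains, but the principle is the same; every number below is invented.

```python
import math

def place_cell_rate(position, field_center, peak_rate=20.0, width=0.1):
    """Hypothetical tuning curve: firing rate falls off with distance
    from the cell's preferred place (positions on a 0-1 track)."""
    return peak_rate * math.exp(-((position - field_center) ** 2) / (2 * width ** 2))

def decode_position(observed_rates, field_centers, resolution=200):
    """Pick the position whose predicted population rates best match
    the observed rates (least squares over a grid of candidates)."""
    best_pos, best_err = None, float("inf")
    for i in range(resolution + 1):
        pos = i / resolution
        err = sum((place_cell_rate(pos, c) - r) ** 2
                  for c, r in zip(field_centers, observed_rates))
        if err < best_err:
            best_pos, best_err = pos, err
    return best_pos

centers = [0.1, 0.3, 0.5, 0.7, 0.9]   # five cells tiling the track
true_pos = 0.42
rates = [place_cell_rate(true_pos, c) for c in centers]
print(decode_position(rates, centers))   # recovers a position near 0.42
```

With enough cells tiling the track, the population pattern pins down the animal's location, which is exactly what the multi-electrode recordings exploited.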
They went one step further and recorded the activity patterns of neurons when the animals were learning the maze and when they were asleep. The patterns of neuronal activity that occurred while the animals were learning the maze were repeated during episodes of REM sleep! The brain, at least the hippocampus, appeared to be rehearsing and consolidating during sleep what was learned that day.
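Replay analyses of this kind often ask whether the order in which the cells fire during sleep matches the order in which they fired in the maze. A minimal sketch using a rank correlation on invented firing times (a simplified stand-in for the actual pattern-matching statistics):

```python
def spearman(xs, ys):
    """Spearman rank correlation for sequences without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical firing times (seconds) of five place cells as the rat
# runs the maze, and the same cells' firing times during sleep:
awake_order = [0.1, 0.4, 0.9, 1.3, 1.8]          # cells fire in order A..E
sleep_replay = [2.01, 2.03, 2.06, 2.08, 2.10]    # same order, time-compressed
sleep_shuffled = [2.08, 2.01, 2.10, 2.03, 2.06]  # scrambled order

print(spearman(awake_order, sleep_replay))    # 1.0: perfect order match
print(spearman(awake_order, sleep_shuffled))  # near zero: no replay
```

A high rank correlation between the waking sequence and a sleep burst, far above what shuffled orderings produce, is the signature of replay.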
A human brain imaging study appears to support this idea of consolidation during sleep. People were first trained on a motor skill task. During REM sleep, some brain areas were more active in trained subjects than in untrained subjects. Further, the activated brain areas were the same areas that had been activated while the trained subjects were learning the task. Also, the trained subjects' performance on the task was improved when they were retested after sleep.
All of these studies certainly argue that memories are retrieved better following sleep than waking. There is a clear message here for students studying for exams. This work also suggests that dreaming may be a mechanism for rehearsing material that has been experienced during the day. On the other hand, the actual content of dreams often does not seem to have much relationship to daytime experiences, and we don’t yet have a science of the meaning of dreams.
Have you ever experienced hypnagogic images? These particularly vivid images are usually visual; we experience them just as we are falling asleep. They typically occur when people have engaged in novel physical or mental activities for extended periods of time. The existential philosopher Jean-Paul Sartre described them well:
A radical distinction must be drawn between the way a face appears in perception and the manner the same face appears in hypnagogic vision. In the former case something appears which is then identified as a face … consciousness must focus on the object….
In hypnagogic vision this discrepancy does not exist. There is no focusing. Suddenly knowledge appears as vivid as sensory manifestation: one becomes aware of being in the act of seeing a face.
It does seem to be the case that such vivid images reflect the day’s activities. Donald Hebb related the following:
A day in the woods or a day-long car trip after a sedentary winter sometimes has an extraordinarily vivid aftereffect. As I go to bed and shut my eyes—but not till then, though it may be hours since the conclusion of the special visual stimulation—a path through the bush or a winding highway begins to flow past me and continues to do so till sleep intervenes.
Hypnagogic images are very similar to dream images experienced during REM sleep—vivid and fantastic. But there is one important difference. In dreams the person dreaming is always present in the dream, usually as an actor. This is not the case in hypnagogic images; they are images without the presence of the “imager.”
Although exact data are lacking, it has been estimated that about 70 percent of adults report having experienced hypnagogic images. The actual incidence may be higher because some people are reluctant to admit to experiencing “visions” or hallucinations of any sort. Robert Stickgold and associates at Harvard Medical School were able to induce these images in normal subjects. The people were required to practice intensively a complex computer video game called Tetris. About 70 percent of people new to the
game reported vivid hypnagogic images of playing the game as they were falling asleep, and about half of the people expert in the game reported such images. In addition to normal subjects, people with severe medial temporal lobe amnesia like the patient HM were trained and tested on the game. These patients also reported and described hypnagogic images upon falling asleep. But when asked about the game when they were awake, they had no memory of it.
Severely amnesic subjects can also remember and report their dreams. Since they cannot remember normal waking experiences, this seems to imply that different brain systems may be involved in remembering waking experience on the one hand and dreaming and hypnagogic images on the other (see Chapter 5). Specifically, the hippocampal-medial temporal lobe system essential for remembering normal waking experience may not be involved in memories for dreams and hypnagogic images. Instead, these memories may involve association areas of the cerebral cortex. Recall that visual priming memory involves visual association areas. Stickgold and associates argue that the lack of hippocampal involvement could explain many of the properties of dreams: "Without the anchor of temporal and spatial associations found in hippocampal declarative memories, much of the bizarreness of dreams, including their discontinuities and uncertainties, would appear almost inevitable." The fact that our normal waking memories of most dreams are fleeting and fragmentary is consistent with this possibility.