Introduction to Memory Storage
Memory storage allows us to hold onto information for a very long duration of time—even a lifetime.
Compare different models of long-term and short-term memory storage
- The capacity of long-term memory storage is much greater than that of short-term memory; in theory, long-term memory can hold an infinite amount of information indefinitely. However, in reality, long-term memory is not permanent.
- In order to explain the recall process, a memory model must identify how an encoded memory can reside in storage for a prolonged period of time until it is accessed again.
- The multi-trace distributed memory model, the neural network model, and the dual-store memory search model each seek to explain how memories are stored in the brain.
- vector: In computational neuroscience, a list containing several values.
- encoding: The process of converting information into a construct that can be stored within the brain.
- matrix: In computational neuroscience, a list containing several vectors.
- retrieval: The cognitive process of bringing stored information into consciousness.
- working memory: The system that actively holds multiple pieces of information in the mind for execution of verbal and nonverbal tasks and makes them available for further information processing.
Memories are not stored as exact replicas of experiences; instead, they are modified and reconstructed during retrieval and recall. Memory storage is achieved through the process of encoding, through either short- or long-term memory. During the process of memory encoding, information is filtered and modified for storage in short-term memory. Information in short-term memory deteriorates constantly; however, if the information is deemed important or useful, it is transferred to long-term memory for extended storage. Because long-term memories must be held for indefinite periods of time, they are stored, or consolidated, in a way that optimizes space for other memories. As a result, long-term memory can hold much more information than short-term memory, but it may not be immediately accessible.
The way long-term memories are stored is similar to digital compression. This means that information is filed in a way that takes up the least amount of space, but in the process, details of the memory may be lost and not easily recovered. Because of this consolidation process, memories are more accurate the sooner they are retrieved after being stored. As the retention interval between encoding and retrieval of the memory lengthens, the accuracy of the memory decreases.
Short-Term Memory Storage
Short-term memory is the ability to hold information for a short duration of time (on the order of seconds). In the process of encoding, information enters the brain and can be quickly forgotten if it is not stored further in the short-term memory. George A. Miller suggested that the capacity of short-term memory storage is approximately seven items plus or minus two, but more recent research shows that this can vary depending on variables like the stored items' phonological properties. When several elements (such as digits, words, or pictures) are held in short-term memory simultaneously, their representations compete with each other for recall, or degrade each other. As a result, new content gradually pushes out older content, unless the older content is actively protected against interference by rehearsal or by directing attention to it.
Information in the short-term memory is readily accessible, but for only a short time. It continuously decays, so in the absence of rehearsal (keeping information in short-term memory by mentally repeating it) it can be forgotten.
Long-Term Memory Storage
In contrast to short-term memory, long-term memory is the ability to hold semantic information for a prolonged period of time. Items stored in short-term memory move to long-term memory through rehearsal, processing, and use. The capacity of long-term memory storage is much greater than that of short-term memory, and perhaps unlimited. However, the duration of long-term memories is not permanent; unless a memory is occasionally recalled, it may fail to be recalled on later occasions. This is known as forgetting.
Long-term memory storage can be affected by traumatic brain injury or lesions. Amnesia, a deficit in memory, can be caused by brain damage. Anterograde amnesia is the inability to store new memories; retrograde amnesia is the inability to retrieve old memories. These types of amnesia indicate that memory does have a storage process.
Models of Memory Storage
A variety of different memory models have been proposed to account for different types of recall. In order to explain the recall process, however, a memory model must identify how an encoded memory can reside in memory storage for a prolonged period of time until the memory is accessed again, during the recall process. Note that all models use the terminology of short-term and long-term memory to explain memory storage.
Multi-Trace Distributed Memory Model
The multi-trace distributed memory model suggests that the memories being encoded are converted to vectors (lists of values), with each value or “feature” in the vector representing a different attribute of the item to be encoded. These vectors are called memory traces. A single memory is distributed to multiple attributes, so that each attribute represents one aspect of the memory being encoded. These vectors are then added into the memory array or matrix (a list of vectors). In order to retrieve the memory for the recall process, one must cue the memory matrix with a specific probe. The memory matrix is constantly growing, with new traces being added in.
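The vector-and-matrix machinery described above can be illustrated with a minimal sketch. All feature values below are illustrative assumptions, not parameters from the model itself; the sketch simply shows traces stored as vectors in a growing matrix and a probe cueing the trace that responds most strongly.

```python
# Minimal sketch of the multi-trace distributed memory model.
# Each trace is a vector of feature values; the memory "matrix"
# is the list of all stored traces. A probe cues the matrix, and
# each trace responds in proportion to its similarity to the probe.

def similarity(probe, trace):
    """Dot product: how strongly a trace responds to a probe."""
    return sum(p * t for p, t in zip(probe, trace))

memory_matrix = []  # the constantly growing matrix of traces

def encode(trace):
    """Add a new trace (a vector of features) to the matrix."""
    memory_matrix.append(trace)

def retrieve(probe):
    """Return the stored trace that responds most strongly to the probe."""
    return max(memory_matrix, key=lambda t: similarity(probe, t))

# Encode three memories as feature vectors (values are made up).
encode([1.0, 0.0, 0.5])
encode([0.0, 1.0, 0.2])
encode([0.9, 0.1, 0.4])

# A probe sharing features with the first trace cues it best.
best = retrieve([1.0, 0.0, 0.5])
```

Note that retrieval here scans every stored trace, which is exactly the scaling concern raised against this model in the next section.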
Neural Network Model
The multi-trace model has two key limitations: an ever-growing matrix within human memory is implausible, and a computational search for a specific memory among the millions of traces such a matrix would contain is far beyond the scope of human recall. The neural network model overcomes these limitations while retaining the useful features of the multi-trace model.
The neural network model assumes that neurons form a highly interconnected network. Each neuron is characterized by an activation value (its current level of activity), and each connection between two neurons by a weight value (how strong the connection between them is). In this model, connections are formed in the process of memory storage, strengthened through use, and weakened through disuse.
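The lifecycle of a connection — formed at storage, strengthened through use, weakened through disuse — can be sketched as follows. The neuron names, starting strength, and boost/decay rates are all illustrative assumptions, not values from the model.

```python
# Sketch of connection weights in a simple neural network model.
# A connection is formed when a memory is stored, strengthened
# each time it is used, and weakened a little by disuse.

weights = {}  # (neuron_a, neuron_b) -> connection strength

def store(a, b, strength=0.5):
    """Form a connection between two neurons when a memory is stored."""
    weights[(a, b)] = strength

def use(a, b, boost=0.1):
    """Using a connection strengthens it."""
    weights[(a, b)] += boost

def decay(rate=0.05):
    """Disuse weakens every connection, but never below zero."""
    for pair in weights:
        weights[pair] = max(0.0, weights[pair] - rate)

store("cue", "memory")   # connection formed at strength 0.5
use("cue", "memory")     # strengthened to 0.6
decay()                  # weakened to 0.55
```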
Dual-Store Memory Search Model
The dual-store memory search model, now referred to as the search-of-associative-memory (SAM) model, remains one of the most influential computational models of memory. Two types of memory storage, short-term store and long-term store, are utilized in the SAM model. In the recall process, items residing in the short-term memory store will be recalled first, followed by items residing in the long-term store, where the probability of being recalled is proportional to the strength of the association present within the long-term store. Another type of memory storage, the semantic matrix, is used to explain the semantic effect associated with memory recall.
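The claim that recall probability is proportional to association strength can be made concrete with a small sketch. The items and strength values below are made up for illustration; the only point is that strongly associated items in the long-term store are sampled more often.

```python
import random

# Sketch of recall from the long-term store in a SAM-style model:
# an item's probability of being recalled is proportional to the
# strength of its association within the store.

def recall_one(strengths, rng):
    """Sample one item with probability proportional to its strength."""
    items = list(strengths)
    total = sum(strengths[i] for i in items)
    r = rng.uniform(0.0, total)
    cumulative = 0.0
    for item in items:
        cumulative += strengths[item]
        if r <= cumulative:
            return item
    return items[-1]

# Illustrative association strengths for items in the long-term store.
long_term_store = {"apple": 3.0, "river": 1.0, "chair": 0.5}

rng = random.Random(0)  # seeded so the simulation is reproducible
counts = {item: 0 for item in long_term_store}
for _ in range(1000):
    counts[recall_one(long_term_store, rng)] += 1
# The most strongly associated item is recalled most often.
```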
Network Models of Memory
According to network models of memory, the connections between neurons are the source of memories, and the strength of connections corresponds to how well a memory is stored.
Analyze the network model of memory storage
- Network models of memory storage emphasize the role of neural connections between memories stored in the brain.
- The basis of these theories is that neural networks connect and interact to store memories by modifying the strength of the connections between neural units.
- The parallel distributed processing (PDP) model posits that neural networks interact to store memory and that memory is created by modifying the strength of the connections between neural units.
- connectionism: Any of several fields of psychology that model brain processes in terms of interconnected networks.
Connectionism and Network Models
Network models of memory storage emphasize the role of connections between stored memories in the brain. The basis of these theories is that neural networks connect and interact to store memories by modifying the strength of the connections between neural units. In network theory, each connection is characterized by a weight value that indicates the strength of that particular connection. The stronger the connection, the easier a memory is to retrieve.
Network models are based on the concept of connectionism. Connectionism is an approach in cognitive science that models mental or behavioral phenomena as the emergent processes of interconnected networks that consist of simple units. Connectionism was introduced in the 1940s by Donald Hebb, whose principle is often summarized in the famous phrase, "Cells that fire together wire together." This is the key to understanding network models: neural units that are activated together strengthen the connections between themselves.
There are several types of network models in memory research. Some define the fundamental network unit as a piece of information. Others define the unit as a neuron. However, network models generally agree that memory is stored in neural networks and is strengthened or weakened based on the connections between neurons. Network models are not the only models of memory storage, but they do have a great deal of power when it comes to explaining how learning and memory work in the brain, so they are extremely important to understand.
Parallel Distributed Processing Model
The parallel distributed processing (PDP) model is an example of a network model of memory, and it is the prevailing connectionist approach today. PDP posits that memory is made up of neural networks that interact to store information. It is more of a metaphor than an actual biological theory, but it is very useful for understanding how neurons fire and wire with each other.
Taking its metaphors from the field of computer science, this model stresses the parallel nature of neural processing. “Parallel processing” is a computing term; unlike serial processing (performing one operation at a time), parallel processing allows hundreds of operations to be completed at once—in parallel. Under PDP, neural networks are thought to work in parallel to change neural connections to store memories. This theory also states that memory is stored by modifying the strength of connections between neural units. Neurons that fire together frequently (which occurs when a particular behavior or mental process is engaged many times) have stronger connections between them. If these neurons stop interacting, the memory’s strength weakens. This model emphasizes learning and other cognitive phenomena in the creation and storage of memory.
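The "fire together, wire together" idea at the heart of this account can be sketched as a Hebbian-style weight update: a connection grows in proportion to how strongly its two units are active at the same time. The activation values and learning rate below are illustrative assumptions, not parameters from the PDP literature.

```python
# Sketch of a Hebbian-style weight update in the spirit of PDP:
# units that are active together strengthen the connection
# between them; if one unit is silent, the weight is unchanged.

def hebbian_update(w, a_i, a_j, learning_rate=0.1):
    """Strengthen a weight in proportion to the units' co-activation."""
    return w + learning_rate * a_i * a_j

w = 0.0
# Repeated co-activation (both units active) strengthens the link,
# as when a behavior or mental process is engaged many times.
for _ in range(5):
    w = hebbian_update(w, 1.0, 1.0)  # w grows by 0.1 each time

# With one unit inactive, the connection is not reinforced.
w_after_silence = hebbian_update(w, 1.0, 0.0)
```

A fuller model would also decay unreinforced weights over time, matching the claim that a memory weakens when its neurons stop interacting.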