RAM
Random-access memory (RAM /ræm/) is a form of computer data storage that holds the program instructions and data currently in use, increasing the general speed of a system. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of the data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.
RAM contains multiplexing and demultiplexing circuitry that connects the data lines to the addressed storage location for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, and RAM devices often have multiple data lines; they are accordingly described as '8-bit' or '16-bit' devices, etc.
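As a rough illustration of one address selecting a whole word of bits, here is a minimal Python sketch; the class name, capacity and method names are invented for the example rather than taken from any real device:

# A toy model of an '8-bit' RAM device: each address selects one 8-bit
# word, the way decoding circuitry routes the data lines to the
# addressed storage location.
class Ram8Bit:
    def __init__(self, num_words):
        self.cells = [0] * num_words        # one 8-bit word per address

    def write(self, address, value):
        self.cells[address] = value & 0xFF  # mask to the 8 data lines

    def read(self, address):
        return self.cells[address]

ram = Ram8Bit(1024)          # a 1 KiB device: 1024 addresses x 8 bits
ram.write(0x2A, 0xC3)
assert ram.read(0x2A) == 0xC3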
In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory (such as DRAM memory modules), where stored information is lost if power is removed, although many efforts have been made to develop non-volatile RAM chips. Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash.
Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided.
The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented, in the Manchester Small-Scale Experimental Machine (SSEM) computer, which first successfully ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM was a testbed to demonstrate the reliability of the memory.
Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Robert Dennard invented dynamic random-access memory (DRAM) in 1968; this allowed a latch circuit of four or six transistors to be replaced by a single transistor per memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away.
Modern RAM comes in two main volatile types. The first, static RAM (SRAM), stores each bit in a small latch of transistors that holds its state as long as power is applied. The second, dynamic RAM (DRAM), is based around a capacitor: charging or discharging the capacitor stores a '1' or a '0' in the cell. However, the charge on this capacitor slowly leaks away, so it must be refreshed periodically. Because of this refresh process, DRAM uses more power, but it can achieve greater storage densities and lower unit costs than SRAM.
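The refresh requirement can be pictured with a small hedged simulation; every constant below (decay rate, threshold, refresh interval) is invented for illustration and does not describe a real device:

# Each DRAM cell's charge decays over time; a periodic refresh rewrites
# it before it falls below the threshold at which a stored '1' would be
# misread as a '0'.
DECAY_PER_MS = 0.9        # fraction of charge remaining after 1 ms
READ_THRESHOLD = 0.5      # below this, a '1' reads back as '0'
REFRESH_INTERVAL_MS = 4   # refresh period chosen for the example

charge = 1.0              # cell freshly written with a '1'
for ms in range(1, 17):
    charge *= DECAY_PER_MS
    if ms % REFRESH_INTERVAL_MS == 0:
        charge = 1.0      # refresh: sense the weakened value, rewrite fully
    assert charge > READ_THRESHOLD, "bit lost at %d ms" % ms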
Usually several memory cells share the same address. For example, a 4-bit 'wide' RAM chip has 4 memory cells for each address. Often the width of the memory and that of the microprocessor differ; for a 32-bit microprocessor, eight 4-bit RAM chips would be needed.
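The arithmetic behind that example, written out in Python:

# Eight 4-bit-wide chips are needed to fill a 32-bit processor bus.
bus_width = 32               # bits the microprocessor reads per access
chip_width = 4               # bits each RAM chip supplies per address
print(bus_width // chip_width)   # -> 8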
One can read and over-write data in RAM. Many computer systems have a memory hierarchy consisting of processor registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random-access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that access time to rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system; generally, the hierarchy is ordered by access time, with the fast CPU registers at the top and the slow hard drive at the bottom.
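That variability can be glimpsed even from a high-level language. The following Python sketch compares sequential and random traversal of the same array; the effect is only suggestive, since the interpreter adds overhead of its own, but the random order typically runs noticeably slower because it defeats the caches above DRAM in the hierarchy:

import random, time

N = 5_000_000
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)

def traverse(order):
    # Sum the array in the given visiting order, timing the walk.
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return time.perf_counter() - start

print("sequential:", traverse(seq_order))
print("random:    ", traverse(rand_order))   # usually slower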
In many modern personal computers, the RAM comes in easily upgraded modules, called memory modules or DRAM modules, about the size of a few sticks of chewing gum. These can be replaced quickly should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard drives, CD-ROM drives, and several other parts of the computer system.
Most modern operating systems employ a method of extending RAM capacity, known as "virtual memory". A portion of the computer's hard drive is set aside for a paging file or a scratch partition, and the combination of physical RAM and the paging file form the system's total memory. (For example, if a computer has 2 GB of RAM and a 1 GB page file, the operating system has 3 GB total memory available to it.) When the system runs low on physical memory, it can "swap" portions of RAM to the paging file to make room for new data, as well as to read previously swapped information back into RAM. Excessive use of this mechanism results in thrashing and generally hampers overall system performance, mainly because hard drives are far slower than RAM.
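The same split can be read off a live system; a minimal sketch, assuming the third-party psutil package is installed:

import psutil

ram = psutil.virtual_memory().total    # physical RAM, in bytes
swap = psutil.swap_memory().total      # paging file / swap, in bytes
print("total memory: %.1f GiB" % ((ram + swap) / 2**30))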
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster virtual drive, called a RAM disk. A RAM disk loses its stored data when the computer is shut down, unless the memory is arranged to have a standby battery source.
Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow for shorter access times. The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems.
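In outline, shadowing amounts to a one-time copy followed by redirected reads, as in this Python sketch; the data and names are invented for illustration:

# Stand-in for a slow ROM image read once at startup.
SLOW_ROM = bytes([0x12, 0x34, 0x56, 0x78])

shadow = bytearray(SLOW_ROM)   # one-time copy into fast read/write memory
# The ROM is then disabled and reads are served from the copy,
# which is typically write-protected afterwards:
value = shadow[2]
assert value == 0x56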
Since 2006, solid-state drives (based on flash memory) with capacities exceeding 256 gigabytes and performance far exceeding traditional disks have become available. This development has started to blur the distinction between traditional random-access memory and "disks", dramatically reducing the difference in performance.
The "memory wall" is the growing disparity of speed between the CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth across chip boundaries, also referred to as the bandwidth wall. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed improved at only 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.
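Compounding those two rates shows why the trend alarmed designers; a back-of-envelope calculation in Python:

# 15 compounding steps covering 1986-2000, at the growth rates above.
cpu, mem = 1.0, 1.0
for _ in range(15):
    cpu *= 1.55               # CPU speed: +55% per year
    mem *= 1.10               # memory speed: +10% per year
print("CPU grew %.0fx, memory %.1fx, gap %.0fx" % (cpu, mem, cpu / mem))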
A related concept is the processor-memory performance gap, which can be addressed by 3D integrated circuits that reduce the distance between logic and memory, which sit farther apart in a 2D chip. Memory subsystem design requires a focus on this gap, which widens over time. The main method of bridging it is the use of caches: small amounts of high-speed memory near the processor that hold recently used data and instructions, speeding up access when they are needed again. Multiple levels of caching have been developed to deal with the widening gap, and the performance of modern high-speed computers relies on evolving caching techniques. There can be up to a 53% difference between the growth rate of processor speed and the lagging growth rate of main-memory access speed.
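The benefit of multiple cache levels can be sketched with a toy simulation; the sizes and per-level costs below are invented for illustration, not real hardware parameters:

from collections import OrderedDict

def access(key, levels, memory_cost):
    # Try each cache level in order, paying that level's hit cost.
    cost = 0
    for cache, capacity, hit_cost in levels:
        cost += hit_cost
        if key in cache:
            cache.move_to_end(key)        # mark as most recently used
            return cost
    cost += memory_cost                   # missed everywhere: go to DRAM
    for cache, capacity, hit_cost in levels:
        cache[key] = True                 # fill each level on the way back
        if len(cache) > capacity:
            cache.popitem(last=False)     # evict least recently used
    return cost

l1 = (OrderedDict(), 4, 1)     # small, fast: 4 entries, 1-cycle hit
l2 = (OrderedDict(), 16, 10)   # larger, slower: 16 entries, 10-cycle hit
pattern = [0, 1, 2, 0, 1, 2, 3, 4, 0, 1] * 3    # repeated working set
total = sum(access(k, [l1, l2], 100) for k in pattern)
print("average cycles per access: %.1f" % (total / len(pattern)))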
RAM (random access memory) is the place in a computing device where the operating system (OS), application programs and data in current use are kept so they can be quickly reached by the device's processor. RAM is much faster to read from and write to than other kinds of storage in a computer, such as a hard disk drive (HDD), solid-state drive (SSD) or optical drive. Data remains in RAM as long as the computer is running. When the computer is turned off, RAM loses its data. When the computer is turned on again, the OS and other files are once again loaded into RAM, usually from an HDD or SSD.
You can compare RAM to a person's short-term memory and a hard disk to long-term memory. Short-term memory focuses on the work at hand but can only keep so many facts in view at one time. If short-term memory fills up, your brain is sometimes able to refresh it from facts stored in long-term memory. A computer works this way, too: if RAM fills up, the processor must continually go to the hard disk to overlay old data in RAM with new, slowing the computer's operation. Unlike a hard disk, which can become completely full of data and unable to accept any more, RAM never simply runs out, because old contents can be overwritten or swapped to disk; however, the combination of RAM and the storage backing it can be completely used up.
RAM is called random access because any storage location -- also known as a memory address -- can be accessed directly. Originally, the term distinguished regular core memory from offline memory, usually on magnetic tape in which an item of data could only be accessed by starting from the beginning of the tape and finding an address sequentially. RAM is organized and controlled in a way that enables data to be stored and retrieved directly to specific locations. Note that other forms of storage -- such as the hard disk and CD-ROM -- are also accessed directly or randomly, but the term random access is not applied to these forms of storage.
RAM started out asynchronous: the RAM chips ran on a different clock than the processor. This became a problem as processors grew more powerful and RAM could not keep up with their requests for data. In the early 1990s, clocks were synchronized with the introduction of synchronous dynamic random access memory (SDRAM). SDRAM soon reached its limit, since it transferred data only once per clock cycle, a single data rate. Around the year 2000, double data rate RAM (DDR RAM) was developed; it moves data twice per clock cycle, on both the rising and falling edges. The introduction of DDR RAM also shifted the meaning of the term SDRAM, which many sources now use for single data rate RAM.
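The difference is easy to quantify; a back-of-envelope comparison in Python, assuming a 64-bit-wide module at 100 MHz purely for illustration:

clock_hz = 100_000_000
bus_bytes = 8                        # 64-bit module
sdr = clock_hz * 1 * bus_bytes       # single data rate: one transfer/cycle
ddr = clock_hz * 2 * bus_bytes       # double data rate: both clock edges
print("SDR: %d MB/s, DDR: %d MB/s" % (sdr / 1e6, ddr / 1e6))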
RAM is small, both in physical size -- it's stored in microchips -- and in the amount of data it can hold. A typical laptop computer may come with 4 gigabytes of RAM, while a hard disk can hold 10 terabytes.
RAM comes in the form of discrete or separate microchips, and in modules that plug into slots in the computer's motherboard. These slots connect through a bus or set of electrical paths to the processor. The HDD, on the other hand, stores data on a magnetized surface that looks like a phonograph record, while the SSD stores data in memory chips that, unlike RAM, are not dependent on having power all the time and won't lose data once the power is turned off.
Most PCs allow users to increase the number of RAM modules to a certain limit. Having more RAM in your computer reduces the number of times the processor has to read data from the hard disk, an operation that takes much longer than reading data from RAM. RAM access time is in nanoseconds, while storage memory access time is in milliseconds.
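The scale of that gap, using illustrative round numbers rather than measurements of any specific device:

ram_ns = 50                   # ~50 ns DRAM access
disk_ms = 10                  # ~10 ms disk seek plus rotation
disk_ns = disk_ms * 1_000_000
print("disk is ~%dx slower" % (disk_ns // ram_ns))   # -> 200000x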