Friday, July 29, 2011

How Memory Works


The memory in your computer is an extremely powerful tool for storing and moving data. In this article we will look at how memory actually performs these operations.

Memory Theory - Just Get This if Nothing Else
A commonly used analogy for the performance difference between RAM and hard drive storage is that of a desk compared to a file cabinet. Your hard drive can be thought of as a file cabinet. It can hold a great deal of information and documents, but it often takes quite a while to retrieve what you're looking for inside of one. Memory, on the other hand, can be thought of as your desk. It can only hold a few key documents, but if you're constantly referring to the same few documents over and over, it makes more sense to have them within reach than to take them in and out of your file cabinet every few minutes. The time saved by having these useful documents at hand, rather than walking to the file cabinet every few minutes, can become very significant. The same holds true for your computer: being able to keep the most important data in your extremely fast computer memory can save a great deal of time while working.

Memory Cells
Each memory chip on a module contains millions of transistor-capacitor pairs, and each pair forms a memory cell that stores a single bit of data. The capacitor holds the bit's value, either a 0 or a 1, as an electrical charge, and the transistor acts as a switch that allows the memory circuitry to read or change the value held in the capacitor.

Random Access Memory Arrays/Grids
These transistors and capacitors are organized in the memory chip in what can be thought of as a grid, with rows and columns like a chessboard, but with millions or billions of squares. The most common type of computer memory, RAM (Random Access Memory), is called Random because any cell on the grid can be read or written directly, in roughly the same amount of time, regardless of its location. Being able to jump straight to any spot on the grid, rather than having to read through the stored data in sequence, makes it far faster than storage that must be searched from the beginning to find what it's looking for. For example, imagine that you are reading a book. Being able to place a bookmark and open directly to it is much quicker than having to re-read the book from the beginning every time you pick it up. This is why Random Access Memory is an optimal solution for storing data temporarily.
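The difference between jumping directly to a cell and scanning from the beginning can be sketched in a few lines of code. This is only a toy model (the list below stands in for the grid of cells; the sizes are illustrative):

```python
# Toy comparison of random access (RAM-like) vs. sequential scan
# (reading from the beginning every time).

data = list(range(1_000_000))  # stand-in for a grid of memory cells

def random_access(addr):
    """Jump straight to the target cell, like a row/column lookup."""
    return data[addr]

def sequential_access(addr):
    """Walk from the start until the target is reached, counting steps."""
    steps = 0
    for i, value in enumerate(data):
        steps += 1
        if i == addr:
            return value, steps
    raise IndexError(addr)

value, steps = sequential_access(750_000)
assert random_access(750_000) == value      # same data either way...
assert steps == 750_001                     # ...but the scan touched 750,001 cells
```

Both approaches return the same data; the difference is that the random access touched one cell while the scan touched three quarters of a million.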

Dynamic vs. Static RAM
The most common type of memory for use in computers is called DRAM (Dynamic Random Access Memory). DRAM is called Dynamic because the charge in each capacitor constantly leaks away, so every cell must be refreshed (read and rewritten) thousands of times per second to keep its data. Like all RAM, it is also volatile: unlike a hard drive, it loses all stored data whenever power is removed, such as when the computer is turned off. Another type of RAM, known as SRAM (Static Random Access Memory), maintains data as long as power is supplied to it and doesn't need to be refreshed like DRAM does. However, it is larger and much more expensive than DRAM, so it is only used in limited situations, such as cache memory, which will be discussed later.
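Why the refresh matters can be shown with a leaky-capacitor sketch. The leak rate and threshold below are made-up illustrative numbers, not real electrical values:

```python
# Minimal model of a DRAM cell: charge decays each tick, and a stored 1
# reads back as 0 once the charge falls below a threshold.

class DramCell:
    LEAK_PER_TICK = 0.2   # fraction of charge lost per tick (illustrative)
    THRESHOLD = 0.5       # below this, a stored 1 is no longer readable

    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge -= self.charge * self.LEAK_PER_TICK

    def read(self):
        return 1 if self.charge >= self.THRESHOLD else 0

    def refresh(self):
        """Read the bit and rewrite it at full charge."""
        self.charge = 1.0 if self.read() else 0.0

refreshed = DramCell(1)
for _ in range(10):
    refreshed.tick()
    refreshed.refresh()       # refreshed regularly: the 1 survives
assert refreshed.read() == 1

neglected = DramCell(1)
for _ in range(10):
    neglected.tick()          # never refreshed: the charge leaks away
assert neglected.read() == 0
```

An SRAM cell, by contrast, would hold its value indefinitely in this model as long as the loop keeps running, which is why it needs no refresh circuitry.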

Computer Memory
When a computer user wants to perform an action, such as writing a new document, several interactions must take place. First, the system's CPU (Central Processing Unit), which acts as the brain of the computer, calls upon the hard drive to load data relating to the program being accessed into the system's memory. The reason that program data is loaded into memory, rather than being accessed from the hard drive directly, is that the transfer speed of the hard drive is about 60,000 times slower than memory access speeds! The memory acts as a temporary holding cell for instructions and data relating to the program you're using. Once the necessary data is loaded into memory, the CPU can call directly upon the memory for the data it needs to perform operations, which allows the system to run smoothly and quickly.
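The 60,000x figure is easy to sanity-check with back-of-the-envelope numbers. The latencies below are rough, circa-2011 ballpark values chosen for illustration:

```python
# Rough latency comparison: a hard drive seek vs. a DRAM access.
# Both figures are illustrative order-of-magnitude estimates.

dram_access_ns = 100            # ~100 nanoseconds for a DRAM access
hdd_seek_ns = 6_000_000         # ~6 milliseconds for a hard drive seek

slowdown = hdd_seek_ns / dram_access_ns
print(f"A disk seek is roughly {slowdown:,.0f}x slower than a memory access")
```

With these estimates the ratio comes out to 60,000, which is why even one avoidable trip to the disk per operation would dominate everything else the CPU does.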

Other Memory Related Components
The data transfer between the CPU, hard drive and memory is coordinated by the chipset, which is a component on the motherboard that acts like a traffic controller between most of the major pieces of hardware in a system. It directs data where it needs to go and handles requests between different pieces of hardware. A part of the chipset is the memory controller, which specifically controls the data flowing between the memory and CPU.

The actual data transfer between the CPU and memory occurs along the system's FSB (front-side bus), which can be thought of as a highway for data. The bus is a collection of wires running between the CPU and memory along which data signals are sent. Even though the CPU performs the bulk of the calculations in a system, memory performance is extremely important since the CPU relies on data stored in the memory. This is why faster memory generally means better system performance overall.

Cache Memory
Besides the system memory, the CPU also uses a type of memory called cache memory. This is a very small amount of memory located either on or very close to the CPU, which supplies the CPU with the most-accessed data. It is made of SRAM, which is high-speed memory that does not need to refresh its contents regularly, as DRAM does. It is many times faster than normal memory but is very expensive to produce, which is why it is only used sparingly. Cache memory stores the data that is accessed the most regularly by the CPU so that the CPU doesn't have to wait for the slower system memory every time it wants to access data.

Cache memory operates on the "80/20" rule, which says that in general, 20% of the data in a system will be used 80% of the time. Cache memory attempts to store this top 20%, and when a certain piece of data is being accessed more often than an item currently in the cache, the cache drops the least-used data and stores the new piece of information. There are usually two levels of cache memory, Level 1 and Level 2. Level 1 cache is closest to the CPU, usually on the CPU itself, and is the fastest memory. Level 2 can be either on the CPU or the motherboard, and while still faster than memory modules, it is slightly slower than Level 1.
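The keep-the-hot-data-close idea can be sketched with a tiny software cache. Real CPU caches use hardware replacement policies; the sketch below uses least-recently-used (LRU) eviction, one common software approximation of "drop what you haven't needed lately":

```python
# Toy cache with LRU eviction in front of a slower "memory".

from collections import OrderedDict

class TinyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()        # address -> data, oldest first

    def access(self, addr, load_from_memory):
        if addr in self.entries:
            self.entries.move_to_end(addr)  # mark as recently used
            return self.entries[addr], "hit"
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least-recently-used
        self.entries[addr] = load_from_memory(addr)
        return self.entries[addr], "miss"

ram = {addr: addr * 2 for addr in range(100)}   # stand-in for system memory
cache = TinyCache(capacity=2)

_, r1 = cache.access(1, ram.__getitem__)   # miss: fetched from "memory"
_, r2 = cache.access(1, ram.__getitem__)   # hit: already in the cache
_, r3 = cache.access(2, ram.__getitem__)   # miss
_, r4 = cache.access(3, ram.__getitem__)   # miss: evicts address 1
_, r5 = cache.access(1, ram.__getitem__)   # miss again: 1 was evicted
assert (r1, r2, r3, r4, r5) == ("miss", "hit", "miss", "miss", "miss")
```

Every "hit" is a fast lookup that never touches the slow store; every "miss" pays the full memory cost, which is exactly the trade-off the 80/20 rule exploits.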

Memory Interleaving
An advanced feature of some motherboards in recent years is memory interleaving. Interleaving allows for more efficient data transfer from the memory by eliminating delays that would otherwise occur: whenever data is transferred from a memory bank, that bank needs about one clock cycle to recover before it can perform another operation. Interleaving alternates between banks of memory so that while one bank is recovering, the other can be transferring data, eliminating gaps in the data stream. Each individual delay sounds small, but over millions of transfers the savings can really boost system performance.
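The benefit of overlapping recovery with transfer can be counted directly. This is a deliberately simplified two-bank model using the one-cycle recovery figure from the paragraph above; real memory timings are more complicated:

```python
# Cycle counts for N transfers, with one recovery cycle after each
# transfer, comparing a single bank against two interleaved banks.

def cycles_single_bank(transfers):
    # 1 cycle to transfer + 1 cycle for the bank to recover, every time
    return transfers * 2

def cycles_interleaved(transfers):
    # two banks alternate: one recovers while the other transfers,
    # so a transfer completes every cycle
    return transfers

assert cycles_single_bank(1_000_000) == 2_000_000
assert cycles_interleaved(1_000_000) == 1_000_000
```

In this idealized model, interleaving halves the total cycle count; on real hardware the gain is smaller but comes from the same overlap.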
