I found this interesting read, which suggests that setting your CAS timings etc. more aggressively makes very little difference in real life.
The answer is: Processor cache, level 1 and 2.
The processor is served from cache the vast majority of the time (up to 99.9% in tight loops), without needing to access your system memory at all!
Here is a quote:
Why Caching Works
Cache is in some ways a really amazing technology. A 512 KB level 2 cache, caching 64 MB of system memory, can supply the information that the processor requests 90-95% of the time. Think about the ratios here: the level 2 cache is less than 1% of the size of the memory it is caching, but it is able to register a "hit" on over 90% of requests. That's pretty efficient, and is the reason why caching is so important.
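Just to check the ratio the article mentions, using its own figures (a quick back-of-the-envelope calculation, nothing more):

```python
# Size ratio of a 512 KB L2 cache to the 64 MB of system RAM it caches
cache_kb = 512
ram_kb = 64 * 1024  # 64 MB expressed in KB

ratio = cache_kb / ram_kb
print(f"Cache is {ratio:.2%} of RAM")  # 0.78% -- under 1%, as claimed
```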
The reason that this happens is due to a computer science principle called locality of reference. It states basically that even within very large programs with several megabytes of instructions, only small portions of this code generally get used at once. Programs tend to spend large periods of time working in one small area of the code, often performing the same work many times over and over with slightly different data, and then move to another area. This occurs because of "loops", which are what programs use to do work many times in rapid succession.
Just as one example (there are many), let's suppose you start up your word processor and open your favorite document. The word processor program at some point must read the file and then print on the screen the text it finds. This is done (in very simplified terms) using code similar to this:
Open document file.
Open screen window.
-For each character in the document:
-----Read the character.
-----Store the character into working memory.
-----Write the character to the window if the character is part of the first page.
Close the document file.
The loop is of course the three instructions that are done "for each character in the document". These instructions will be repeated many thousands of times, and there are hundreds or thousands of loops like these in the software you use. Every time you hit "page down" on your keyboard, the word processor must clear the screen, figure out which characters to display next, and then run a similar loop to copy them from memory to the screen. Several loops are used when you tell it to save the file to the hard disk.
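The simplified steps above can be sketched as actual code. This is only an illustration of the same loop, with a made-up assumption that the "first page" is 2,000 characters (roughly an 80x25 text screen):

```python
# A runnable sketch of the simplified word-processor loop described above.
# PAGE_SIZE is an assumption: the "first page" taken as 80x25 = 2000 characters.
PAGE_SIZE = 2000

def load_document(path):
    window = []          # stands in for "open screen window"
    working_memory = []  # characters stored as they are read

    with open(path) as doc:            # open document file
        while True:
            ch = doc.read(1)           # read the character
            if not ch:
                break
            working_memory.append(ch)  # store it into working memory
            if len(working_memory) <= PAGE_SIZE:
                window.append(ch)      # write it to the window (first page only)
    return window, working_memory      # file is closed when the "with" block ends
```

Each pass through the inner loop body is the three-instruction sequence (read, store, write) that gets repeated thousands of times.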
This example shows how caching improves performance when dealing with program code, but what about your data? Not surprisingly, access to data (your work files, etc.) is similarly repetitive. When you are using your word processor, how many times do you scroll up and down looking at the same text over and over, as you edit it? The system cache holds much of this information so that it can be loaded more quickly the second, third, and next times that it is needed.
How Caching Works
In the example in the previous section a loop was used to read characters from a file, store them in working memory, and then write them to the screen. The first time each of these instructions (read, store, write) is executed, it must be loaded from relatively slow system memory (assuming it is in memory, otherwise it must be read from the hard disk which is much, much slower even than the memory).
The cache is programmed (in hardware) to hold recently-accessed memory locations in case they are needed again. So each of these instructions will be saved in the cache after being loaded from memory the first time. The next time the processor wants to use the same instruction, it will check the cache first, see that the instruction it needs is there, and load it from cache instead of going to the slower system RAM. The number of instructions that can be buffered this way is a function of the size and design of the cache.
Let's suppose that our loop is going to process 1,000 characters and the cache is able to hold all three instructions in the loop (which sounds obvious, but isn't always, due to cache mapping techniques). This means that 999 of the 1,000 times these instructions are executed, they will be loaded from the cache, or 99.9% of the time. This is why caching is able to satisfy such a large percentage of requests for memory even though it has a capacity that is often less than 1% the size of the system RAM.
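You can see where that 99.9% figure comes from with a toy model. This is just a sketch, not real hardware: it counts instruction fetches for the three-instruction loop, assuming all three instructions fit in the cache:

```python
# Toy model of instruction caching: the loop body has three instructions,
# and each fetch checks the cache before going to system RAM.
loop_body = ["read", "store", "write"]

cache = set()   # holds recently fetched instructions (assume all three fit)
hits = misses = 0

for _ in range(1000):            # process 1,000 characters
    for instr in loop_body:      # fetch each instruction in the loop body
        if instr in cache:
            hits += 1            # already cached: fast fetch
        else:
            misses += 1          # first fetch: load from slow RAM...
            cache.add(instr)     # ...and keep a copy in the cache

print(hits, misses)                               # 2997 hits, 3 misses
print(f"hit rate: {hits / (hits + misses):.1%}")  # 99.9%
```

Only the very first fetch of each instruction misses; every later fetch hits, which is exactly the 999-out-of-1,000 figure in the quote.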
------------------END Quote---------------
Hope that was helpful...
Any comments?