
Too much RAM is bad?


BullGod1

New Member
Joined
Sep 9, 2011
I heard that, besides the fact that CPU cache is very expensive, another reason CPUs have so little of it is that it's not good to have too big an L1/L2 cache. According to the same source, this applies to RAM as well: having too much RAM is not good, and the ideal is to have only slightly more RAM than you actually need.

So is this true?

For example, if we have a normal workstation that never uses more than 3 GB of RAM, would 128 GB of RAM slow you down compared to having only 4 GB, even if the difference is only microseconds? Let's assume that in both cases we use two sticks with the same timings.

Thanks.
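For anyone who would rather measure this than guess, here is a rough pointer-chase latency sketch in C (Linux/POSIX, compile with gcc -O2; it takes a few seconds to run). The buffer size and step count are arbitrary picks, and the single random cycle is only there to defeat the hardware prefetcher. Run the same binary on the 4 GB box and the 128 GB box and compare the reported number.

CODE:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (64UL * 1024 * 1024)   /* 64M entries * 8 bytes = 512 MB working set (needs a 64-bit build) */
#define STEPS (50L * 1000 * 1000)    /* number of dependent loads to time */

int main(void)
{
    size_t *chain = malloc(N * sizeof *chain);
    if (!chain) { perror("malloc"); return 1; }

    /* Sattolo's algorithm: one big random cycle, so the prefetcher can't
     * guess the next address. rand() is crude, but fine for a sketch. */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    srand(1234);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long s = 0; s < STEPS; s++) p = chain[p];   /* dependent loads, one per step */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("average access: %.1f ns (ignore: %zu)\n", ns / STEPS, p);   /* printing p keeps the loop alive */

    free(chain);
    return 0;
}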
 
Assuming everything else is constant, no.

Who says more L1/L2 is bad? Yes it is expensive, but it is extremely fast. If there was a way to fit more on the chip in the same area without increasing cost, they'd do it.
 
I'm not sure that L1/L2 cache argument is correct - I think the limit is partly down to wanting the cache to run synchronously with the core and partly down to the limited space on the CPU die: the longer the circuit to the cache, the longer the delay in accessing it.

I think it would apply if using that many sticks forced a design change that increased access time because the furthest slots are much further from the CPU. But on the same motherboard, populating fewer slots or using less dense modules versus filling every slot with denser modules (assuming speed and timings are identical) shouldn't affect performance at all.

Although I could be wrong.
 

+1. Cache is expensive, both cost-wise and in chip real estate. They have to make trade-offs between cache and performance in other areas of the chip, while keeping costs low enough that people will actually buy them.

As for actual full system memory, not really. I've heard speculation about this, but from what I've seen it's inconclusive. If anything, having too little RAM will have a far larger impact than having too much.
 
The only reason too much RAM could be bad is if you're trying to OC: too many DIMMs can hinder overclocking, since you have more RAM chips and you're only as fast as your slowest one. But that's an extreme case...bottom line: no, too much RAM is a good thing.

As far as L1/L2/L3 goes...it's cost that limits how much of those you get. In an ideal world you would have a computer with nothing but L1 cache, but if your L1 cache were 1 TB it would have to be spread so far apart physically that it couldn't be L1: the speed of light would take too long to get back and forth at the clock frequencies we run. That could change in the future, but I doubt it.
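Some back-of-the-envelope numbers behind that speed-of-light point. The clock frequencies here are just examples, and using vacuum light speed is generous compared to real on-die signals:

CODE:
#include <stdio.h>

int main(void)
{
    const double c_mm_per_ns = 299.792458;   /* speed of light: ~300 mm per nanosecond */
    const double ghz[] = { 1.0, 3.0, 4.5 };  /* example clock frequencies */

    for (int i = 0; i < 3; i++) {
        double cycle_ns = 1.0 / ghz[i];                  /* one clock period in ns */
        double reach_mm = c_mm_per_ns * cycle_ns / 2.0;  /* farthest point reachable out-and-back in one cycle */
        printf("%.1f GHz: %.3f ns per cycle, round trip limited to ~%.0f mm\n",
               ghz[i], cycle_ns, reach_mm);
    }
    return 0;
}

Even at vacuum light speed, anything more than a few centimetres away can't answer within a single cycle at multi-GHz clocks, never mind a terabyte-sized bank of SRAM.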
 
You can never have too much RAM. It's better for future-proofing your parts in case you plan to upgrade in the near future.
 
For some PC application purposes cache is king: it's a real-time buffer for fast calculation. As for RAM, in the rare scenarios where parallel addressing matters, it can be beneficial to spread smaller RAM modules across as many slots as you can occupy.

Maybe with huge RAM volumes the page file management needs to be tinkered with; having a huge page file to write would leave it crunching away for longer than needed (see the sketch below).

In short, more is better in 90% of PC use scenarios.
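If you want to see whether your page file is oversized relative to your RAM, here is a Linux-only sketch of that check; it just reads /proc/meminfo, so Windows users would look at their pagefile settings instead, and the "larger than RAM" threshold is only a rule of thumb:

CODE:
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }

    char line[256];
    long mem_kb = 0, swap_kb = 0;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "MemTotal:", 9) == 0)   sscanf(line + 9,  "%ld", &mem_kb);
        if (strncmp(line, "SwapTotal:", 10) == 0) sscanf(line + 10, "%ld", &swap_kb);
    }
    fclose(f);

    printf("RAM:  %ld MB\nSwap: %ld MB\n", mem_kb / 1024, swap_kb / 1024);
    if (swap_kb > mem_kb)
        printf("Swap is larger than RAM - probably more page file than this box needs.\n");
    return 0;
}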
 
Unless you need DDR4, in which case you wasted your future-proofing on DDR3.

That's technically true, but DDR4 prices will be through the roof for the first few years after it releases, so unless the OP plans to be an early adopter, his effort won't be wasted by getting, say, 16 GB at the highest frequency his board supports.
 
It doesn't matter if you are an early adopter or a late one; when you upgrade your system to the next level there will be different RAM, whether in speed or DDR4 spec, so don't get more RAM than you need.
 
Historically, you could have too much cache. Now? I don't know.
The more you have, the larger the cache index is and the more the CPU has to sift through to find what it cached.
It still takes time to dig through the cache.
Now if it finds the thing it was looking for, it was worth it. If it doesn't, then it has wasted CPU cycles that could have been spent doing something else while waiting for the result to come back from RAM.

The L3 cache, controlled by the IMC, sits between these things. It allows the CPU core(s) to page very quickly through their small caches; if they can't find whatever it is, they send out to the IMC for it. The IMC can then deal with the interesting question of whether it exists in the (big) L3 cache or not, while the CPU cruises off to do something else until it hears back from the IMC.

RAM is a different story entirely.

That's a fairly serious simplification of course, but close enough.
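To make the "wasted cycles on a miss" point concrete, here is a toy model of that lookup in C. The hit rates and latencies are made-up illustrative numbers, not real figures for any CPU: each level you have to search adds its cost whether or not you eventually find the data there.

CODE:
#include <stdio.h>
#include <stdlib.h>

/* One cache level: a name, a made-up hit rate, and a made-up lookup cost. */
typedef struct { const char *name; double hit_rate; int cost_cycles; } level_t;

int main(void)
{
    level_t levels[] = {
        { "L1",  0.90,   4 },
        { "L2",  0.60,  12 },
        { "L3",  0.50,  40 },
        { "RAM", 1.00, 200 },   /* RAM always "hits" in this toy model */
    };
    const int n_levels = 4;
    const int accesses = 1000000;
    long long total_cycles = 0;

    srand(42);
    for (int a = 0; a < accesses; a++) {
        for (int l = 0; l < n_levels; l++) {
            total_cycles += levels[l].cost_cycles;           /* pay to search this level */
            if ((double)rand() / RAND_MAX < levels[l].hit_rate)
                break;                                       /* found it - stop descending */
        }
    }
    printf("average cost per access: %.1f cycles\n",
           (double)total_cycles / accesses);
    return 0;
}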

Future-proofing on DDR3 is only future-proofing if you don't intend to upgrade your CPU in the future.
 
The L3 cache (LLC) is not controlled by the IMC; it is controlled by the Ring Bus.

The IMC controls the system memory (DDR3).

So the CPU looks for data in L1 first, then L2, then L3, and then the memory controller is able to look up the data address. Nowadays the CPU is much smarter and can go straight from L1 or L2 to the memory controller.

Ring Bus
QUOTE:With Nehalem/Westmere all cores, whether dual, quad or six of them, had their own private path to the last level (L3) cache. That’s roughly 1000 wires per core. The problem with this approach is that it doesn’t work well as you scale up in things that need access to the L3 cache.

Sandy Bridge adds a GPU and video transcoding engine on-die that share the L3 cache. Rather than laying out another 2000 wires to the L3 cache Intel introduced a ring bus.

The System Agent

QUOTE:For some reason Intel stopped using the term un-core, instead in Sandy Bridge it’s called the System Agent.

The System Agent houses the traditional North Bridge. You get 16 PCIe 2.0 lanes that can be split into two x8s. There's a redesigned dual-channel DDR3 memory controller that finally restores memory latency to around Lynnfield levels (Clarkdale moved the memory controller off the CPU die and onto the GPU).

[Attached images: System Agent diagram, Ring Bus diagram, Nehalem architecture diagram]
 
Wanna bet on that?
Think every single L3 cache on the planet is connected to a ring bus?
Phenom L3 cache? Ring bus? :D
P4 based server CPUs, Ring bus?

The ring bus is used, on Nehalem and newer chips, to communicate with the L3. It is, however, a bus, not a controller. Your own slide says it's an interconnect. Interconnect != controller.

On the pretty pictures you posted, nothing is specifically shown to be in charge of the L3.

As a note, on Nehalem the L3 runs at the IMC frequency. Terribly suggestive, that.

Try again.
 
I had a flashback to the old Intel 430VX chipset, with its cacheable RAM limit of 64 megabytes. Adding more RAM than that could actually harm performance, because anything above the limit was never cached.
 
QUOTE:The L3 cache, controlled by the IMC, sits between these things.
The IMC does not control the L3; the IMC controls the DDR3 memory.

The System Agent contains the IMC, and the images above show what it does.

QUOTE:For some reason Intel stopped using the term un-core, instead in Sandy Bridge it’s called the System Agent.

The System Agent houses the traditional North Bridge. You get 16 PCIe 2.0 lanes that can be split into two x8s. There's a redesigned dual-channel DDR3 memory controller that finally restores memory latency to around Lynnfield levels.

Try again.:D
 
Guess what lives in the traditional northbridge? The memory controller! :D
Similarly, the "System Agent" houses the memory controller.
 
The only disadvantages of too much memory are cost and power usage. (And developers having access to it arguably makes them "lazier" and more likely to make apps that use it inefficiently, but that's not a system problem...)

If nothing else, use it for tmpfs.
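For example, on most Linux distros /dev/shm is already a tmpfs mount, so spare RAM can be used as fast scratch space without any setup. A minimal sketch, assuming /dev/shm exists and is RAM-backed on your system:

CODE:
#include <stdio.h>

int main(void)
{
    /* /dev/shm is normally a tmpfs (RAM-backed) mount, so this file never
     * touches the disk - handy scratch space when you have RAM to spare. */
    FILE *f = fopen("/dev/shm/scratch.txt", "w");
    if (!f) { perror("fopen /dev/shm/scratch.txt"); return 1; }
    fputs("fast scratch data\n", f);
    fclose(f);
    puts("wrote scratch file to RAM-backed tmpfs");
    return 0;
}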
 
I have 48 million TBs of memory in my computer. It also has a 208-core processor with 20K HT :thup:, therefore I win :facepalm: HEHE :rofl::rofl:

Just had to say that. Good morning everyone
 