
What's with these 24GB DDR5 sticks?


Niku-Sama

Member
Joined
Jan 13, 2005
I see these in the news lately, 24 gig sticks of RAM. What's the deal with these?
Are they having density problems, and are these a stopgap until better 32 gig sticks?
 
It looks like, because of the architecture, it's possible to add some more capacity at not much higher cost. These kits are supposed to cost more, but when you check offers from brands like Corsair, they list 48GB for only $15 more than 32GB. They have more relaxed timings, but I'm not really sure how they scale, as I've had no chance to test one yet.
I wonder why there are 24GB single sticks and not 32GB. I just don't get where the problem is, as 32GB is supposed to be the next step in IC density.
I also don't get who would really need it. Gamers are fine with ~20GB max (some use more than 16GB but rarely more than 25GB). In servers, you use ECC RDIMMs. So who would really take advantage of 48GB or 96GB desktop kits? Probably people will buy them thinking it's a good idea and believing what marketing says. There will still be some users who do much more on their desktops, but I guess they could simply go for 64GB or 128GB, as those kits have been available for longer.
 
We already had a discussion about the existence of 24GB modules in this thread:

From what I saw, they're priced between 16GB and 32GB modules of equivalent speed, so it provides another option for system builders to optimise if they want to.
 
Lots of good discussion/info in the other thread as well, but feel free to ask something more specific here if it hasn't been covered already @Niku-Sama . :)
 
If the software can't use the RAM, then the RAM is just a waste. For example, if you never pass 16GB, it doesn't matter whether you have 32GB or 192GB of RAM; it will perform the same (assuming the RAM clock and timings are the same). However, higher RAM capacity usually means more relaxed timings and/or lower frequency. If I'm right, 16GB and 24GB modules will perform much the same, but 48GB modules are already rated much lower.
CPU core counts don't change much, especially in desktop computers. It's just that new software and CPUs with many cores multitask better when the RAM is faster (more channels tend to go along with higher total RAM capacity). You won't see a real difference outside server-grade setups running many tasks at once.

The short answer: new modules change nothing regarding performance.
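The point above (capacity you never touch is idle capacity) can be put as a back-of-the-envelope check. A minimal sketch; `upgrade_helps` and the 0.9 headroom factor are my own hypothetical choices, not anything from this thread:

```python
def upgrade_helps(peak_usage_gb: float, current_gb: int, new_gb: int) -> bool:
    """True only if the workload actually presses against current capacity."""
    # Leave ~10% headroom for the OS and file cache (arbitrary assumption).
    headroom = 0.9
    return peak_usage_gb > current_gb * headroom and new_gb > current_gb

# A session peaking at 14GB gains nothing from a 32GB -> 64GB upgrade:
print(upgrade_helps(14, 32, 64))  # False
# One that regularly pins 31GB on a 32GB system might:
print(upgrade_helps(31, 32, 64))  # True
```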
 
Out on the web I'm seeing posts saying 2 GB of RAM per core is low, 4 GB is balanced, and 6 GB per core is on the high side.

Thanks for transparency.
 
I'm curious... what's the support for their assertions (what made that believable enough to ask - not saying it isn't believable, just wondering)?

I don't understand why that would be... and would think it would vary quite dramatically depending on the type of load and the processor used. My system has 32GB of RAM and 24 physical cores, which is way less than 2 GB/core. Then again, if you have a 64-core server with 384 GB of RAM, you're going to want more cores/threads/RAM to do the work, period.
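The 2/4/6 GB-per-core rule of thumb is simple arithmetic, so here's a sketch using the two systems mentioned above. The thresholds are just the quoted rule, not an established standard, and the function names are my own:

```python
def ram_per_core(total_gb: float, cores: int) -> float:
    """GB of RAM per physical core."""
    return total_gb / cores

def classify(gb_per_core: float) -> str:
    # Thresholds from the rule of thumb: ~2 low, ~4 balanced, ~6 high.
    if gb_per_core < 2:
        return "low"
    if gb_per_core <= 4:
        return "balanced"
    return "high"

# The 24-core / 32GB desktop mentioned above:
print(round(ram_per_core(32, 24), 2), classify(ram_per_core(32, 24)))  # 1.33 low
# The 64-core / 384GB server example:
print(ram_per_core(384, 64), classify(ram_per_core(384, 64)))          # 6.0 high
```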
 
For me, maybe just simple everyday use and latency questions.

No evidence from common benchmark comparisons makes me believe more than 32 GB is needed.
 
For me, maybe just simple everyday use and latency questions.
You see higher bandwidth in AIDA64 tests with the same kit/speed on higher core-count processors... not sure where it stops scaling, but I do recall seeing that. I don't believe it translates to actual performance increases, but I could be wrong (@Woomack).

No evidence from common benchmark comparisons makes me believe more than 32 GB is needed.
This response doesn't really answer my question... feels like an answer to a different question, lol. But yeah, I can agree with that from a home-user standpoint (as do many others in the other thread, lol). Most home users don't need more than 32GB for sure, and 16GB is now the 'base' for a 'gaming' machine, as you can eclipse that amount easily if you aren't paying attention (have lots going on and start a game, or play modded games). But servers can and do. I'd never heard that rule of thumb/lopsidedness before, though.
 
If the software can't use the RAM, then the RAM is just a waste. For example, if you never pass 16GB, it doesn't matter whether you have 32GB or 192GB of RAM; it will perform the same (assuming the RAM clock and timings are the same). However, higher RAM capacity usually means more relaxed timings and/or lower frequency. If I'm right, 16GB and 24GB modules will perform much the same, but 48GB modules are already rated much lower.
As I'm trying to work out in another thread, RAM configuration does play a part, big or small depending on the use case. While I don't have any, DDR5 8GB modules are considered trash tier; 16GB modules are the new entry level, and above that you may get some rank advantage at the same speed and primary timings. I have seen this on DDR4 modules, and I'm still testing/digesting the results. My laptop came with 1Rx16 modules, which really suck. In enthusiast desktop land, RAM is usually 1Rx8 or 2Rx8, with 2R giving an edge in some more RAM-intensive workloads. Running 1 or 2 DPC can also impact performance, again assuming the same clocks and timings.

So to the previous question: this isn't exactly about total capacity but about module capacity and configuration.

Out on the web I'm seeing posts saying 2 GB of RAM per core is low, 4 GB is balanced, and 6 GB per core is on the high side.
Who is saying that? Maybe in a specific context it might make some sense. In a more general sense, it's more like enough vs. not enough. For a gaming system, 8GB of system RAM is only OK for much older and less demanding games. 16GB is a nice spot to be at, although the latest demanding titles may be starting to push upwards on that, especially if you run a ton of stuff in the background at the same time. I feel that with the DDR5 generation the starting point should be 32GB, partly because of potential near-future demand from games, but also because 8GB DDR5 modules apparently suck. This is regardless of core count.
 
As I'm trying to work out in another thread, RAM configuration does play a part, big or small depending on the use case. While I don't have any, DDR5 8GB modules are considered trash tier; 16GB modules are the new entry level, and above that you may get some rank advantage at the same speed and primary timings. I have seen this on DDR4 modules, and I'm still testing/digesting the results. My laptop came with 1Rx16 modules, which really suck. In enthusiast desktop land, RAM is usually 1Rx8 or 2Rx8, with 2R giving an edge in some more RAM-intensive workloads. Running 1 or 2 DPC can also impact performance, again assuming the same clocks and timings.

So to the previous question: this isn't exactly about total capacity but about module capacity and configuration.

Current DDR5 configurations are only 1Rx8, 1Rx16, and 2Rx8 (at least I haven't seen any others in desktops). Barely a few brands released 8GB modules, and I don't think most of them are manufactured anymore, as DDR5 prices went down significantly (literally 50% in a year).
You can check what's on the QVL, as brands like Gigabyte have very long lists with module details. However, most kits on the QVL were never in stores. Sometimes I wonder why they even test them.
Laptops can have worse configurations, but I can't say much, as I haven't had a chance to test any DDR5 SODIMMs yet.

DPC impacts performance about the same as single/dual rank. However, I was comparing DDR5 single- and dual-rank kits, and there is barely any difference. In DDR4 it was significant in some tests. It's just that motherboard manufacturers won't guarantee 2 DPC setups above ~6000, while 1 DPC can go up to 8000+.
I'm trying to use 2-slot motherboards for tests, as they usually overclock better and support higher RAM clocks in general. 2-slot motherboards are only 1 DPC. I don't think I even tested 2 DPC on AMD or Z790. I tested it on Z690, but it was limited to 6400 with 2 sticks and 5600 with 4 sticks. As I remember, I had similar results at 5600 with 2 and 4 sticks.
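Those organization labels decode straight into capacity: a non-ECC DDR5 DIMM has a 64-bit data path, so an x8 module needs 8 chips per rank and an x16 module needs 4. A quick sketch of the arithmetic (`module_capacity_gb` is my own helper name, and ECC chips are ignored):

```python
def module_capacity_gb(ranks: int, device_width: int, density_gbit: int) -> float:
    """Capacity of a non-ECC DIMM from its organization (e.g. 2Rx8 = ranks=2, width=8)."""
    # 64-bit data bus / device width = chips per rank.
    devices_per_rank = 64 // device_width
    return ranks * devices_per_rank * density_gbit / 8  # Gbit -> GB

print(module_capacity_gb(1, 8, 16))   # 1Rx8, 16Gb ICs -> 16.0
print(module_capacity_gb(2, 8, 16))   # 2Rx8, 16Gb ICs -> 32.0
print(module_capacity_gb(1, 8, 24))   # 1Rx8, the newer 24Gb ICs -> 24.0
print(module_capacity_gb(1, 16, 16))  # 1Rx16, 16Gb ICs -> 8.0
```

This also answers the thread title: 24GB sticks exist because 24Gbit dies slot between the 16Gbit and (future) 32Gbit density steps.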
 
DPC impacts performance about the same as single/dual rank. However, I was comparing DDR5 single- and dual-rank kits, and there is barely any difference. In DDR4 it was significant in some tests. It's just that motherboard manufacturers won't guarantee 2 DPC setups above ~6000, while 1 DPC can go up to 8000+.
I still haven't gotten hands-on with DDR5 at all yet. With DDR4, 1R 2 DPC = 2R 1 DPC, or near enough. The difference with DDR5 is that 2 DPC seems to come with a much more severe max-clock penalty than DDR4, regardless of rank, so that may lean towards 2R 1 DPC for best performance. E.g., for Zen 4 the officially supported speeds are 5200 for 1 DPC and 3600 for 2 DPC. Can I assume 2R DDR5 modules probably start at 32GB? I still have no short-term plans to get any, so I'll see where the market goes.
 
I still haven't gotten hands-on with DDR5 at all yet. With DDR4, 1R 2 DPC = 2R 1 DPC, or near enough. The difference with DDR5 is that 2 DPC seems to come with a much more severe max-clock penalty than DDR4, regardless of rank, so that may lean towards 2R 1 DPC for best performance. E.g., for Zen 4 the officially supported speeds are 5200 for 1 DPC and 3600 for 2 DPC. Can I assume 2R DDR5 modules probably start at 32GB? I still have no short-term plans to get any, so I'll see where the market goes.

16GB is supposed to be single rank only. 32GB in most cases will be dual rank, but we may see single rank too. I have no idea what's up with 24GB modules, but probably 24GB will be single rank and 48GB dual rank.
As I said, I haven't seen any special difference between single and dual rank on DDR5. There is a G.Skill 64GB 6000 review on the front page with some results vs. single-rank kits.
I recently tested a Crucial 2x16GB 5600 kit, and now I'm on a 2x32GB 5200 kit with the same IC (both from a new, just-released series). The first is single rank and the second is dual rank. In AIDA64 and some other tests I've checked so far, I haven't seen any significant difference. I still have to check rendering and some more. Both also overclock up to 6200 CL32 on AMD (which seems amazing when the kits are rated 5200 CL42 and 5600 CL46).
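If those single/dual-rank guesses hold, the common capacities map cleanly onto x8 layouts with 16Gb and 24Gb dies (8 chips per rank on a 64-bit bus). A hypothetical mapping, just to illustrate; it is not a confirmed list of shipping configurations:

```python
# Hypothetical likely layouts for DDR5 UDIMM capacities, assuming x8 devices.
# Format: capacity_gb -> (organization, die density).
LIKELY_LAYOUT = {
    16: ("1Rx8", "16Gb"),  # 8 x 16Gb  = 16GB, single rank
    24: ("1Rx8", "24Gb"),  # 8 x 24Gb  = 24GB, single rank
    32: ("2Rx8", "16Gb"),  # 16 x 16Gb = 32GB, dual rank
    48: ("2Rx8", "24Gb"),  # 16 x 24Gb = 48GB, dual rank
}

for cap, (org, die) in LIKELY_LAYOUT.items():
    print(f"{cap}GB -> {org} with {die} dies")
```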
 
Two areas I've found to be quite memory sensitive are Prime95 and y-cruncher. These are known to do significant mixed reads/writes, which seem to benefit more from having more bank groups and/or ranks.

For games, Watch Dogs: Legion is the one that seems to scale most with RAM performance of the games I have. Most are more GPU limited until you get to silly settings.

Rendering-type benchmarks that I've tried don't seem very RAM sensitive at all. The Cinebench family and Blender seem to do practically nothing with RAM configuration.
 