
Ryzen2 rumoured to have up to 16 Cores

The rumoured 10-15% IPC increase is the more interesting thing; obviously we'll have to wait and see whether any of these rumours are true or not.

As for the core increases, I would expect them to increase the number of cores per CCX using the 7nm process. And yes, if this is in the article, I didn't read it all. Could mean some nice gaming improvements for, say, the 3600 chip, as there wouldn't be two CCX modules that need to communicate with each other.
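The single-CCX point can be sketched with a toy latency model. The latency figures below are made-up placeholders for illustration, not measured Infinity Fabric numbers:

```python
# Toy model: average core-to-core latency for different CCX layouts.
# intra_ns / inter_ns are assumed placeholder values, not real measurements.

def avg_core_to_core_latency(ccx_sizes, intra_ns, inter_ns):
    """Average latency over all ordered pairs of distinct cores,
    where cores in the same CCX communicate at intra_ns and cores
    in different CCXs pay the inter-CCX (fabric) penalty inter_ns."""
    total_cores = sum(ccx_sizes)
    intra_pairs = sum(size * (size - 1) for size in ccx_sizes)
    all_pairs = total_cores * (total_cores - 1)
    inter_pairs = all_pairs - intra_pairs
    return (intra_pairs * intra_ns + inter_pairs * inter_ns) / all_pairs

# Two 3-core CCXs (first-gen 1600-style) vs a hypothetical single 6-core CCX:
print(avg_core_to_core_latency([3, 3], intra_ns=40, inter_ns=110))  # 82.0
print(avg_core_to_core_latency([6], intra_ns=40, inter_ns=110))     # 40.0
```

With everything on one CCX there are simply no cross-CCX pairs left to pay the fabric penalty, which is the gaming-latency argument in a nutshell.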


 
I mentioned (speculated) something similar in another thread just yesterday. Good for AMD. Considering the boom in content creation, there could actually be a market for things like that if the pricing is right.
 
This is going to be very interesting next year/2020 when they get ready to release these chips. Dreaming of 8 Cores on a single CCX with rated speeds of 4.5 GHz (base clock) and 5.25 GHz (max boost) @ around $250 and no need to upgrade my MB.
 

There was a video on YouTube; I can't remember who it was. But basically it pointed out that a motherboard manufacturer had mentioned in its promotional material that the B450 and X470 boards were compatible with 8 cores and up. They took that to mean that anyone with first gen Ryzen equipment probably won't be able to use the 16 core processors. Obviously it's all speculation, so who knows.


 
I think AMD only promised socket compatibility to 2020.

They have stated 2020 and possibly beyond. So we know that for another 1.5 years minimum they plan to use the AM4 socket. I would imagine they will do Zen2 and Zen2+ on AM4 and then Zen3 will get a new socket in maybe 2022. I am clearly speculating of course, but Zen2 is supposed to be 2019/2020, and if Zen2+ follows a similar cadence to Zen+, they will launch it a year later in 2021... Only time will tell, but if they aren't neck and neck with Intel, or even beating them in gaming, by 2021 I would be quite surprised.
 
"Gaming" is a red herring, promoted by Intel. The current Ryzens lack nothing in "gaming" performance. The few frames per second difference in benchmarks is irrelevant in real life gaming. If your CPU is the difference in your gaming experience, then you need a better GPU.
 

I don't disagree with you, and personally have no gaming performance issues that I can attest to. However, this is technically the only field in which Ryzen is trailing the i5s and i7s, while dominating in the other fields. So why wouldn't AMD's goal be to take that field as theirs as well? That was my only point; it's something that would be fun to see. Also, a significant increase in single core performance would justify an upgrade for me; otherwise the multi-threaded power of this chip is more than enough for me, lol...
 
Gaming benchmarks are where Intel leads, though. It may be a marketing issue, but it's a non-event IRL. And AMD is not far off at all in IPC and single core performance. Intel's slim "lead" (where it exists) is almost entirely clock speed, which is going to hurt them if things don't change for them. Team Red doesn't seem to suffer from the "core count up, clock speed down" syndrome to the extent Intel does. A mainstream 16c, or even 12c, at 4 GHz will absolutely wreck anything Intel has for the mainstream now. And given Intel's recent philosophy of "DO SOMETHING! ANYTHING!" I don't see any near term answers to that problem from them.

Having said all that, how many people really need 16c/32t email performance? I'm sure software engineers will eventually wake up and exclaim "Look! A bazillion cores! I was looking for a way to spend a month coding a 750 KB, single function app!" but so far that segment has been more than a little reluctant to jump in with both feet. LOL
 

Help me here, if I'm CPU limited why do I get a faster GPU?

Not current any more, but I did try 980Ti SLI on my 1700 system, and let's just say the average FPS on that sucked, even when I overclocked the CPU. I was aiming to use it for high(er) fps 1080p gaming, but it was struggling to get much over 60fps. Moved the 980Ti SLI to my 6600k system, and it went up to the 100's.

I don't have a Ryzen 2000 to repeat similar with, but I saw a good improvement going from 6700k to 8086k with 1080Ti (CPUs stock). 6700k was varying between 60-120 fps depending on game loading. Once I moved the GPU over to the 8086k it was over 100 most of the time. Would a 2600X or 2700X have given similar results - I don't know.
 
I just meant as a general rule, if your gaming experience is riding on the CPU, there are other places to look. Like why would you be gaming at 1080p with a 1080Ti? I have no problem holding FPS over 100 with a 6700k and GTX 1070 at 1080p. You also went from 8c/16t to 4c/4t. Turning off HT (SMT, as AMD calls it) may have made a difference. And the IPC difference between those chips was greater than exists now. The vast majority of 1080p displays are 60 Hz, so the difference between 120 fps and 150 fps is...nothing in real life.

It's also game dependent. Was the difference between the 6700k and 8086k from cores or clock speed? Throw in the vagaries of SLI (very game dependent), and are we talking benchmarks or actual gaming experience?
 
Yes, simple answer: given a 1080Ti and a 2000 series Ryzen, you could expect 100s depending on the title. I run 1440p and a 1080 and haven't seen under 120 FPS yet, and more commonly have seen 144+, but the most CPU-intense thing I've played is COD WWII.
 
I never said I was gaming at 1080p on the 1080Ti, only on the 980Ti SLI. I've moved on to 1440p with the 1080Ti, where it is adequate; 4K is another story... Anyway, on the 1700, I can't recall if I tried turning off SMT or not, though I might have. That's more to correct for the OS's inability to schedule; if it could, it shouldn't matter. And if you were looking to get into high-fps gaming, 1080p is where it is most accessible. It doesn't have to be the majority. The difference even going from 60 to 100 is very noticeable, and that's where the 1700 failed to deliver. It was in a game, FFXIV remaining my main timesink. You could argue that, as an older title, it may not be so thread-optimised and therefore may benefit more from clock speed, but I'm not aware of any test data on it.
 
OK, I was asking for clarification. I didn't mean to sound like I was disputing your findings. Speaking for myself, I see zero difference between about 75 fps and anything higher. I still don't get gaming at 1080p with two 980Tis either, but that's a matter of preference and I don't need to "get" it. :D And I would have no clue as to what games are optimized for what setups without reading it from a trusted source or two. Everything I have read about Ryzen's actual gaming performance, with few individual exceptions, has been overwhelmingly on the side of Ryzen having no issues with in game performance. The only data I've seen otherwise has been game benchmarks. The overall verdict has been Ryzen gives up nothing in the gaming experience 90+% of the time. The other 10% has been largely anecdotal. I'm not convinced playing a game will allow someone to identify an Intel or Ryzen chip because they can "see" the 10-15% difference in frame rates over 100 fps.

A couple examples https://arstechnica.com/gadgets/201...s-ryzen-at-games-but-how-much-does-it-matter/
https://www.pcgamesn.com/best-cpu-for-gaming
 
The main reason I was trying 980Ti SLI at the time was that I happened to have it spare, and I knew a 980Ti by itself wasn't enough.

I am wondering, if AMD go through with "moar coars" for consumers, is there a point where software might benefit from SMT off as mentioned earlier? At least, for a lot of software that wasn't designed with so many threads in mind... it would help prevent a level of inefficiency.
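The "too many threads for the software" intuition can be sketched with Amdahl's law. The 80% parallel fraction below is an assumed illustrative figure, not a measurement of any real game or application:

```python
# Amdahl's-law sketch of why "moar coars" flattens out for software
# with a serial fraction. The parallel fraction is an assumed value.

def amdahl_speedup(cores, parallel_fraction):
    """Upper bound on speedup when only parallel_fraction of the work
    can be spread across the given number of cores; the rest is serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# An application that is 80% parallelisable:
for n in (4, 8, 16, 32):
    print(n, round(amdahl_speedup(n, 0.80), 2))
# 4  -> 2.5x, 8 -> 3.33x, 16 -> 4.0x, 32 -> 4.44x
```

Going from 16 to 32 threads barely moves the needle here, which is one reason extra SMT threads can end up idle, or even in the way, for software not written with that many threads in mind.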
 
Depends on how the CPU and software respond to the cache.

I run a 16c w/o HT because i simply dont need the threads.
 
Well, it would make sense; look at the first gen Ryzens and Threadrippers. My Ryzen+ is better than a first gen 1900X Threadripper, and now you've got 32-core Threadrippers. It wouldn't surprise me, especially when the current Threadripper has two dummy dies on it. They could easily fill it with two more dies, making it 64 cores / 128 threads. I think Intel is going to have a big problem if this is true.


CPU limitation is kinda meh to me. You could have a Core 2 Duo and a 1080Ti and still get better fps than a 2700X and an R9 290. It's just that the CPU will limit the GPU's full potential.
 