
Intel to release 22-core Xeon E5 v4 “Broadwell-EP” late in 2015

Yes! More cores in the enterprise market means it will eventually trickle down to enthusiast products. I hope BCLK overclocking makes a comeback with these chips.
 
If I'm right, then these chips, like the Haswell-based Xeons, will only have the 100MHz BCLK option, so at most ~5-7MHz of BCLK overclocking. If it were possible, I would buy an 8-core Xeon right now, as the price of the lowest version is about the same as a 5930K.
 
The little experience I have with Xeons on LGA2011-v3 is with the E5-2620. At the time I built that machine, I was considering replacing my own 5930K with a Xeon, since I don't really overclock anymore. But those plans stopped right after I got to know the Xeon.

Maybe I am doing something wrong, but I was unable to get the RAM beyond 1866MHz, even though it was Crucial 2133MHz RAM. That kinda put a dent in the plan, since I'm running 2800MHz in my own machine.

The motherboard was an MSI X99 SLI Plus.

I feel like something is lost, but of course if core count is all that matters, then a Xeon would surely be the better option.
 
If I ever win a lotto or lawsuit, I'd love to build a dual-socket workstation with a pair of these and tweak the system for the most efficiency. It makes much more sense to get only the cores you will need, at the highest core clocks: the highest turbo speed for the frequent single-threaded apps we will always need to run, but lots of computational power for multi-threaded work.
 
The main problem with dual socket vs. single for gaming/overclocking is that Xeons have lower base clocks than the i7s, so two 2.4GHz 8-core Xeons give you about as much performance as a single 5960X overclocked to 4.8GHz (2 × 8 × 2.4GHz = 38.4 core-GHz, the same as 8 × 4.8GHz). Even though the 5960X costs a lot, the overall platform cost will be much lower with a single socket.
There are also other limitations, like lower max BCLK, lower max memory clock, etc. I also doubt we will see any Xeon platforms that support overclocking like we did a couple of years ago.

@pierre, I'm not sure why it's not working at a higher memory clock. All the servers I was installing ran at 2133; at least, I was ordering only servers with 2133 memory, as the price was the same as for 1866.
 
A NUMA system for gaming is an interesting idea, but I'd imagine the OS would allocate all the game's processes/threads on one socket. I can't imagine it would be good for a real-time app like a game to be using non-local memory, because of the additional latency and the non-cacheable nature of non-local memory access. Has anyone ever tried gaming on a NUMA system?
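
If anyone wants to experiment, you don't have to trust the scheduler; you can force a game onto one socket yourself. A rough sketch of the idea, assuming Linux with libnuma installed (link with -lnuma) and treating node 0 as an arbitrary choice:

/* pin_node0.c: confine this process (and anything it execs) to one
 * NUMA node, so all threads run on, and allocate from, local memory.
 * Sketch only; node 0 is an arbitrary example. */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    /* run only on the CPUs of node 0 */
    if (numa_run_on_node(0) != 0) {
        perror("numa_run_on_node");
        return 1;
    }

    /* prefer node 0's memory for future allocations */
    numa_set_preferred(0);

    printf("pinned to node 0 of %d\n", numa_num_configured_nodes());
    /* exec the game here, e.g. execl("/path/to/game", "game", NULL) */
    return 0;
}

From the shell, numactl --cpunodebind=0 --membind=0 ./game should do the same thing without writing any code.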
 
Anyone who had an SR-2 and gamed on it has, as I believe it's NUMA. You are all about this NUMA thing, mags, lol.
 
Non-local memory can be cached. However, whenever one NUMA region writes to a cache line, it needs to follow an invalidation protocol to inform the other regions that, if they have that line cached, it should be invalidated.
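
To make the handshake concrete, here's a toy version of the idea in C. This is only a sketch of the state transitions (simplified from MESI); real hardware does this in the coherence fabric, not in software:

/* Toy invalidate-on-write coherence for one cache line. */
#include <stdio.h>

typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } line_state;

typedef struct { line_state state; } cache_line;

/* Before a core writes, every other copy of the line, local or on the
 * remote socket, must be invalidated: that's the "invalidation notice". */
static void write_line(cache_line *mine, cache_line *others[], int n)
{
    for (int i = 0; i < n; i++)
        if (others[i]->state != INVALID)
            others[i]->state = INVALID;
    mine->state = MODIFIED;   /* we now hold the only valid copy */
}

/* A read of a line someone else has MODIFIED forces the owner to write
 * the data back; both copies then drop to SHARED. */
static void read_line(cache_line *mine, cache_line *owner)
{
    if (owner->state == MODIFIED)
        owner->state = SHARED;
    mine->state = SHARED;
}

int main(void)
{
    cache_line a = { INVALID }, b = { INVALID };
    cache_line *peers_of_a[] = { &b };

    write_line(&a, peers_of_a, 1);   /* socket A writes: B's copy dies */
    read_line(&b, &a);               /* socket B reads: both SHARED    */
    printf("a=%d b=%d (0=M,1=E,2=S,3=I)\n", a.state, b.state);
    return 0;
}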
 
I know a guy over at 2cpu.com who games on a dual-socket Intel workstation with 36/72 cores/threads and has optimized everything for gaming. He also has two Titan X GPUs. I think he benches too, but mostly for his own hardware validation purposes.
 
OK, so I cache a copy of non-local memory in my local cache. How is the other CPU going to know I've cached it? It had better not be a write-back cache either; otherwise we have the overhead of implementing the MESI protocol over a QPI link, and maybe even across multiple levels of cache on a remote CPU.

- - - Updated - - -

But does the OS schedule the processes/threads from a game to run on one socket? Or both? I doubt they run on both sockets.
 
The other CPU doesn't know and doesn't really care. It just has to send a cache invalidation notice when it writes to that cache line (or before it writes, in the case of write-back).

I am pretty sure the caches on all modern Intel CPUs are write-back now. Multiple levels of cache don't matter; they are all in the same hierarchy, and they have to handle their internal invalidations all the time, even in non-NUMA setups.

Yes, cache coherence protocols have overheads. However, it's still many orders of magnitude better than not caching at all, since main memory is usually about 100x slower. You would have to REALLY screw up your cache coherence implementation to have a 100x overhead.
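
If you want to see that overhead for yourself, false sharing makes it easy to measure: two threads hammering counters on the same cache line force the line to ping-pong between cores, while padded counters don't. A rough sketch, assuming POSIX threads, GCC, and 64-byte lines (build with gcc -O2 -pthread):

/* pingpong.c: same-line vs. separate-line counters. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

/* both counters in one 64-byte line */
static struct { volatile unsigned long a, b; }
    __attribute__((aligned(64))) same_line;

/* pad so each counter gets its own line */
static struct { volatile unsigned long a; char pad[64];
                volatile unsigned long b; }
    __attribute__((aligned(64))) two_lines;

static void *bump(void *p)
{
    volatile unsigned long *c = p;
    for (unsigned long i = 0; i < ITERS; i++)
        (*c)++;
    return NULL;
}

static double run(volatile unsigned long *x, volatile unsigned long *y)
{
    pthread_t t1, t2;
    struct timespec s, e;

    clock_gettime(CLOCK_MONOTONIC, &s);
    pthread_create(&t1, NULL, bump, (void *)x);
    pthread_create(&t2, NULL, bump, (void *)y);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    clock_gettime(CLOCK_MONOTONIC, &e);

    return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
}

int main(void)
{
    printf("same cache line: %.2fs\n", run(&same_line.a, &same_line.b));
    printf("separate lines:  %.2fs\n", run(&two_lines.a, &two_lines.b));
    return 0;
}

On a single socket the same-line case is typically several times slower; on a dual-socket box it gets worse still, because every invalidation has to cross QPI.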
 