
SOLVED 4k Gaming rig CPU question


drkcyde

As the title of this thread explains, I'm putting together a 4K gaming rig. I'm a huge fan of Linus Tech Tips on YouTube, and he just posted a 4K gaming rig build guide.

After doing a bit of research, the mobo/CPU combo that best suited me for 4K gaming was the 4770K Haswell paired with a Gigabyte Z87X-UD3H mobo. I chose this because I favor the Haswell architecture and, with a bit of luck, might be able to get a solid 4 GHz+ overclock from a full-out water-cooling setup.

Linus chose the Intel Core i7 4930K unlocked six-core processor with Hyper-Threading, paired with an ASUS X79 Deluxe mobo. The only reason he gave in the video was that "he needed all dem cores".

Is Linus right in choosing the six-core setup for a 4K gaming display? Or is sticking with the tried-and-true Haswells a better option?

For those interested, the full YouTube video can be found here. I highly recommend watching his other build guides and subscribing to his channel if you haven't already.
 
For those of you reading and wondering about this post, I have discovered the answer.

The reason Linus chose his mobo/CPU combo was not actually the processor but the mobo. The 2011 mobos allow two graphics cards to run at x16 each, whereas the 1150 and 1155 boards only allow x16 and x8 with two cards installed. Driving a 4K display is far more dependent on GPU power than CPU power, and therefore (for a 4K build) a 2011 board is necessary to get the most out of your SLI graphics cards.
 
So, from what I gathered, Linus tripled the mobo/CPU cost for a ~2-3% gain in an SLI setup?

He's a fool to do that; we don't saturate PCIe enough to warrant the jump to 3.0 x16 at that cost.

Pro tip: save the money, go 4790K/Z97, and get two better GPUs. That'll get you WAY more than 3%.
 
Originally that's what I had thought, but to put into perspective how demanding 4K displays are, here's a great link benchmarking both solo and SLI x16 setups (as in both cards running at x16, not one at x16 and one at x8). The graphs show that even with two cards running at x16 in SLI, some games still won't pump out 60 fps.

** I have not been able to find a graph showing benchmarks for an x16 + x8 SLI setup on a 4K display. I would be VERY interested to see the difference.

http://www.tomshardware.com/reviews/pq321q-4k-gaming,3620-4.html

In fact, I just looked at every link on the first two Google pages that came up for the search "4k display benchmarks". Across both pages, no one has even attempted 4K gaming on anything but a socket 2011 board, and I guarantee it's because you NEED both cards to be running at x16 bandwidth to even consider playing 4K on ultra settings.

Also, thanks again to Tom's Hardware, I have found four Z87 socket 1150 boards that can run two SLI cards at x16/x0/x16/x0, but they are not in my color scheme! First-world problems.

http://www.tomshardware.com/reviews/z87-motherboard-three-way-sli,3703.html
 
The performance loss of PCIe x8 versus x16 is, as was already mentioned, around 1-2% on average. That was for a 680 and a 7970... no slouches.

The 770s in the graph choked, as the article said, because of VRAM. In a lot of cases there simply isn't enough horsepower to hit 60 fps with AA at 4K in the first place. Correlation is not causation... ;)

As far as Z87 and x16/x16 goes, that is done by a PLX chip to make lanes where there are none. This also tends to add a bit of latency versus S2011 and its native x16/x16 abilities.

I would be interested to see 4K SLI/CFX results and their effect on PCIe bandwidth, though... I will google it when not on mobile, as there are sites that have done it.
 
Can you explain the math to me though? I don't understand where the 1-3% difference is coming from, and this is how I currently see it.

PCIe 3.0 x16 link = 16,000 MB/s
PCIe 3.0 x8 link = 8,000 MB/s

2 x16 links = 32,000 MB/s
1 x16 link + 1 x8 link = 24,000 MB/s

Difference = 25% less total bandwidth for the x16 + x8 pair
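For reference, a quick back-of-envelope sketch (in Python) of where those per-link figures come from, assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding:

# Back-of-envelope PCIe 3.0 link bandwidth, per direction.
# 8 GT/s per lane with 128b/130b encoding is roughly 985 MB/s of payload per lane.
MB_PER_LANE = 8e9 * (128 / 130) / 8 / 1e6  # ~984.6 MB/s

for lanes in (16, 8):
    print(f"x{lanes}: ~{lanes * MB_PER_LANE:,.0f} MB/s")
# x16 -> ~15,754 MB/s and x8 -> ~7,877 MB/s, close to the 16,000 / 8,000 figures above.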
 
Both cards may sit in slots that are physically x16 and x8, but they are both running at x8 speeds.

Just get a 295X2 GPU or a 290X CrossFire setup and you should be fine, unless anyone else here has good evidence to prove me wrong.
 
Well, Z87 isn't x16 + x8, first off. It's x16, or x8 + x8, or x8 + x4 + x4, unless you have a PLX chip.

And raw bandwidth doesn't affect performance unless you are bandwidth limited. It's kind of like how 16 GB of memory is already overkill for gaming, so bumping your system up to 32 GB does nothing.
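A small sketch of how the CPU lanes divide up as cards are added, using only the splits quoted in this thread (exact options depend on the specific board):

# CPU PCIe 3.0 lane splits as cards are added, per the posts above.
SPLITS = {
    "Z87/Z97 (16 CPU lanes)": {1: "x16", 2: "x8/x8", 3: "x8/x4/x4"},
    "X79/S2011 (40 CPU lanes)": {2: "x16/x16"},
}
for platform, by_count in SPLITS.items():
    for cards, layout in by_count.items():
        print(f"{platform}: {cards} card(s) -> {layout}")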
 
Well said, supertrucker, I now understand this concept. How can you tell if your card is bandwidth limited?
Hoping earthdog or atminside can chime in on this one? I would guess that computer-part websites like Tiger Direct, Newegg, etc. wouldn't claim their mobos run cards on certain lanes if that weren't true. It seems like false advertising to say one mobo can run at x16/x16 and another at x16/x8 when they are both actually running at x8/x8.

Dunno if that's true or not but if it is that seems rather shady.
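On the "how can you tell if your card is bandwidth limited" question: the only real test is benchmarking the same card forced to x8 versus x16, but a first step is to check what link each card has actually negotiated. A rough sketch, assuming a Linux box with lspci available (run as root so the capability dump is visible):

# Report negotiated PCIe link speed/width for VGA devices via lspci.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

for block in out.split("\n\n"):
    if "VGA compatible controller" not in block:
        continue
    print(block.splitlines()[0])
    # LnkCap = what the card/slot supports, LnkSta = what was actually negotiated.
    cap = re.search(r"LnkCap:.*?Speed (\S+), Width (x\d+)", block)
    sta = re.search(r"LnkSta:.*?Speed (\S+), Width (x\d+)", block)
    if cap and sta:
        print(f"  capable of {cap.group(1)} {cap.group(2)}, running at {sta.group(1)} {sta.group(2)}")
    # Note: many GPUs drop to a slower link at idle, so check this under load.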
 
You do not 'add up' bandwidth. Each card uses its own lanes.

And +1 to the concept that adding more bandwidth will not help if the card is not using it. Try this analogy: which will have more flow, 1 GPM through a garden hose or 1 GPM through a firehose? Obviously the firehose has more POTENTIAL to provide flow, but if the faucet (GPU) is still only putting out 1 GPM, it doesn't matter what is attached to it. ;)
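To put rough numbers on that analogy (the per-card demand figure below is made up purely for illustration, not a measurement):

# Toy model of the hose analogy: the actual transfer is whichever is smaller,
# what the GPU asks for (the faucet) or what the link can carry (the hose).
LINK_MB_S = {"PCIe 3.0 x8": 7_900, "PCIe 3.0 x16": 15_800}  # approx payload rates
GPU_DEMAND_MB_S = 3_000  # hypothetical per-card load; NOT a measured figure

for link, capacity in LINK_MB_S.items():
    effective = min(GPU_DEMAND_MB_S, capacity)
    print(f"{link}: card moves ~{effective:,} MB/s of the {capacity:,} MB/s available")
# Both links end up moving the same ~3,000 MB/s; until demand exceeds the x8
# link, the wider link changes nothing, consistent with the 1-3% differences above.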
 
A Z97 board with a PLX chip can run x16/x16, and an X79 board can run a true x16/x16. You only see a 1-3% increase because the GPU uses its onboard memory for rendering, so the PCIe bus carrying data from system memory and the CPU to the video card is not the limiting factor at 3.0 x8 or 2.0 x16.
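A rough way to picture the PLX arrangement (this assumes the usual single-x16-uplink switch layout on those boards; treat it as an illustration, not a spec sheet):

# Simplified view of PLX 'x16/x16' versus native X79 x16/x16.
# Behind a PLX switch each card gets a full x16 link to the switch, but traffic
# to the CPU/system memory still funnels through one shared x16 uplink.
LANE_MB_S = 985  # approx PCIe 3.0 payload per lane, per direction

configs = {
    "X79 native x16/x16 (dedicated lanes per card)": 16,
    "Z87/Z97 + PLX 'x16/x16' (two cards share one x16 uplink)": 8,
}
for name, lanes in configs.items():
    print(f"{name}: ~{lanes * LANE_MB_S:,} MB/s per card to the CPU")
# Card-to-card (SLI) traffic can stay inside the switch at full x16, and the
# switch adds a little latency, as noted earlier in the thread.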

There are a lot of products in the PC world with advertising that leads the customer on about features which, most of the time, will only be helpful in the distant future.
 