
Is the 2600K better than the i7 960 for gaming?


Timmie3054 (Registered, joined Oct 3, 2010)
I had to choose between

Option 1
AMD Phenom II X6 1100T
Crosshair IV Formula

Option 2
i7 960
Rampage III

Option 3
i7 2600k
Maximus IV

I chose Option 3. Did I make the best choice for gaming?
I would be using the GTX 580 regardless of what option I chose.
 
Yes opt 3 > all. /end thread.

Personally, though, I think the Asus Maximus is a little overpriced, albeit a great board.
 
If you're playing at a GPU-bound resolution it wouldn't matter... if you spend more of your time encoding A/V, then yes, the 2600K over the i7 or AMD.
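Rough way to picture "GPU-bound" (a toy model with made-up numbers, not taken from any benchmark): each frame waits on whichever of the CPU or GPU takes longer, so once the GPU is the slow side, a faster CPU changes nothing.

```python
# Toy model: FPS is capped by whichever of CPU or GPU needs more time per frame.
# All numbers are invented for illustration.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

# GPU-bound (high resolution): the GPU needs 25 ms/frame either way.
print(fps(cpu_ms=10.0, gpu_ms=25.0))  # 40.0 FPS with the slower CPU
print(fps(cpu_ms=6.0, gpu_ms=25.0))   # 40.0 FPS with the faster CPU too

# CPU-bound (low resolution): the GPU only needs 5 ms/frame, so the CPU sets the cap.
print(fps(cpu_ms=10.0, gpu_ms=5.0))   # 100.0 FPS
print(fps(cpu_ms=6.0, gpu_ms=5.0))    # ~166.7 FPS; now the CPU upgrade shows
```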
 

(not ripping on you personally twice in a row I swear)

Actually, several YouTubers have done comparison tests where they take the same game, same memory, same clock rate, and same GPU (obviously if you're talking cross-platform, case A will be dual channel and case B will be triple channel, but at least we're working with the same brand and CAS latency).

Anyways, sorry for the incoherence, it's 3:30 am here. What they've found is that in many games a more efficient CPU has a direct impact on performance. For example, Timetolivecustoms on YouTube found that with the same mobo, same RAM, and same GPU, Dirt 2 got an FPS increase of nearly fifty percent when moving from a 950 to a 980X at identical base clock and CPU speed (4 GHz).

Another example is StarCraft II: with an identical graphics card and identical memory brand, speed, and timings, a 2600K will beat a 950 by about 20-25% in framerate. I keep finding reasons to regret buying a 1366 system. I play SC2 a lot :p.

2600K > i7 960 for gaming...and everything else.

It's even smaller and more efficient :rofl:

Kinda wish I had one.
 
if you spend more of your time encoding A/V, then yes, the 2600K over the i7 or AMD.

In heavily threaded apps I thought the AMD might be slightly quicker due to the extra cores, especially when overclocked. I'd still go with the Sandy Bridge though.
 

Every single comparison I've seen, for every single app, has the six-core AMD chip not keeping up with the 4-core Bloomfield chip and nowhere near the 4-core Sandy Bridge chip. The 2600K is definitely the best value for a high-performance CPU at the moment, IMO. There are 2 or 3 things where Gulftown beats it (barely), but a home user's not likely to do them, and they don't justify spending 300% of the money (roughly $999 for a 980X versus about $317 for a 2600K) IMO.
 

Sweet - that makes me feel a lot better. Maybe when Bulldozer comes out it'll wipe the smirk off my face.
 

Doubtful. If rumors and early data prove accurate... Bulldozer ~ Bloomfield < Sandy Bridge in clock-for-clock/core-for-core performance.

It might beat Sandy Bridge in massively threaded applications (the 8-core variants, anyway), but so far it looks like Sandy Bridge will be Queen until the 6- and 8-core Sandy chips come out, and those will be the new Kings :p
 

I wonder how long it's going to be before games become heavily threaded in the sense that they can use 6+ cores. Is software written to use only a fixed number of cores, or is some software written to use however many cores are available, just for future-proofing?

I've heard that video encoding apps tend to be more heavily threaded than games; why is that?
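For what it's worth, a rough sketch of both answers (my own illustration; the function names and numbers are invented, not from any real encoder or engine): well-written software queries the core count at runtime instead of hard-coding it, and encoding scales because each chunk of video can be compressed independently, while a game's simulation is a chain where each step needs the previous one.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def encode_chunk(chunk_id):
    # Stand-in for compressing one independent slice of video.
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    # Encoder-style work: chunks don't depend on each other, so it can use
    # however many cores the machine reports at runtime (no fixed limit).
    workers = os.cpu_count()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        encoded = list(pool.map(encode_chunk, range(32)))

    # Game-style work: each simulation step depends on the last, so beyond a
    # few dedicated threads (render, audio, physics) extra cores mostly idle.
    state = 0
    for step in range(32):
        state = state + step  # step N needs step N-1's result
```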
 
For example, Timetolivecustoms on YouTube found that with the same mobo, same RAM, and same GPU, Dirt 2 got an FPS increase of nearly fifty percent when moving from a 950 to a 980X at identical base clock and CPU speed (4 GHz).
Then they had to have been using an SLI or tri-SLI setup... there is no way with one GPU at a really high, GPU-bound resolution (like I said) that there will be a difference, much less a 50% increase. If they were using a lower resolution that is CPU-bound, then I could see that happening... give more detail on what you're trying to say, because it makes no sense that you would see a 50% increase at a GPU-bound resolution with a single-card setup.

*edit*
I might have been out of the loop for a while, but MIA's link shows exactly what I'm saying...
 
Yeah, lower resolutions are more affected by the CPU. If you have identical systems, one Bloomfield and one Sandy Bridge at equal clocks, tested at, I dunno, 2560x1600, there isn't going to be much of a difference. At lower, CPU-bound resolutions, though, there will be, for sure, a clear Sandy Bridge lead.

Also, SC2 is HIGHLY CPU dependent, of course SB will perform better. It doesn't take much more than a GTX 460 to run SC2 at Ultra.
 
Here are benchmark reviews of two games, one I'd never heard of till now and the other Crysis 2:
http://www.techspot.com/review/368-bulletstorm-performance/
http://www.techspot.com/review/367-crysis2-beta-performance/

Also, yes, we run into the situation knufire pointed out: some games will be more CPU-speed dependent and others more GPU dependent, so as far as the speed increase with Sandy goes, it just depends on the game.

I do want to add that even though new hardware is out, predicting the CPU speed needed is still the same, or close to it, as when I was mapping it out back when Core 2 and new GPUs were coming out. There's really no difference now versus back then...
 
Seems to prove what I'm saying too. On single and dual SLI, it pretty much showed SB leading by a couple (1-3) FPS in each test. The only big advantage was in the crazy triple-SLI setups, where the Bloomfield was bottlenecking...
 
Also, 1-3 FPS can be chalked up to variance in benchmarking... which is why I prefer sites that average the numbers from 3-4 runs of the same bench/resolution/settings.
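To illustrate with made-up numbers (nothing from the linked reviews): averaging several runs and looking at the spread shows whether a 1-3 FPS lead is signal or noise.

```python
# Invented FPS results for three runs of the same bench/resolution/settings.
runs_bloomfield = [61.8, 63.1, 62.4]
runs_sandy = [63.5, 62.9, 64.2]

def summarize(runs):
    mean = sum(runs) / len(runs)
    spread = max(runs) - min(runs)  # crude run-to-run variance
    return mean, spread

for name, runs in (("Bloomfield", runs_bloomfield), ("Sandy Bridge", runs_sandy)):
    mean, spread = summarize(runs)
    print(f"{name}: {mean:.1f} FPS average, {spread:.1f} FPS spread")

# If the gap between the two averages is about the size of the spreads,
# the "lead" is within benchmarking noise.
```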

Yes, it does... I was talking to the OP, who hasn't been back since he started the thread. ;)

Since you didn't quote anyone, I'm starting to think you're talking to yourself. :p
Well, the OP last logged in at 1 am today.
 