
FRONTPAGE EVGA GTX 750 Ti FTW Graphics Card Review


Overclockers.com

NVIDIA has another architecture release day for us! The Maxwell GPU core has arrived at Overclockers in the form of an EVGA GTX 750 Ti FTW. The Maxwell GPU core architecture promises great power efficiency, low power consumption, and performance that surpasses its predecessors. This particular card also comes with EVGA's ACX cooler, which has tested very well in the past on other EVGA video cards. So, let's find out if EVGA put this new technology to good use as we run the GTX 750 Ti FTW through our testing procedure.
... Return to article to continue reading.
 
Do you know if there will be any 4 GB versions of the card?
Can you do SLI with these? (I want 4, or a multi-GPU card equipped with them)
Performance per watt is incredible!

I'd love one/four :p

Also, can you try some yacoin mining using cudaminer? :D
 
Thanks for the review, looks like an awesome budget card :thup:

Do you know if there will be any 4 GB versions of the card?
Can you do SLI with these? (I want 4, or a multi-GPU card equipped with them)
Performance per watt is incredible!

How do I know you didn't read? :p

The first thing worth noting is the lack of SLI support on the GTX 750 Ti series of cards. It’s understandable that SLI wouldn’t be a feature of this card, as it’s probably not in the class of cards most would consider for that type of setup anyway. The card supports up to three concurrent displays through the available dual-link DVI, HDMI, and DisplayPort connections. As we mentioned earlier, EVGA opted to include a 6-pin PCI-E power connector. Adding this power connector is said to provide a 25 watt boost (30% increase) in power delivery, which should improve overclocking potential.

Pricing on Maxwell-based GPUs will run anywhere from $119 up to $169, depending on how much of a premium is put on partner cards' factory overclocks and proprietary coolers. Here is the breakdown of suggested reference design pricing NVIDIA passed along to us.
  • GTX 750 Reference – $119
  • GTX 750 Ti 1 GB Reference – $139
  • GTX 750 Ti 2 GB Reference – $149
 
This makes me happy for the 800 series.

Apparently another review site broke into the ~1420 MHz range with their 750 Ti.
Massive OC potential on these.
 
The 750 Ti has me interested for my HTPC FM2+ system because of the low power consumption while still being able to play games casually on the TV.

And no, there won't be a 4 GB version, unless Nvidia wants to sell them to dumb consumers (a 128-bit memory bus doesn't like anything over 2 GB AFAIK).
 
I highly doubt the lack of SLI has any effect on Ivan's intended purpose for the cards. The 2 GB limit is fine for GPU purposes, but some of these new exotic cryptos really like a lot of VRAM, even if it's slow. You never know, some manufacturer might put together a 4 GB version. Heck, they made a 4 GB GTX 450, for what purpose who knows.
 
Yacoin / high N-factor scrypt uses more memory per thread... each additional shader adds more memory requirements, ergo, low-shader-count (but faster per shader) cards will outperform big cores with small amounts of RAM at yacoin mining (rough numbers sketched after my results below).

R7 240 4 GB = 2.8 kH/s
7850 1 GB = 1.5 kH/s

(I can game on one card whilst mining on the other 3 too... :D)
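To put rough numbers on the memory point, here is a minimal sketch, assuming the textbook scrypt scratchpad of roughly 128 * r * N bytes per in-flight hash; the N value and thread counts are illustrative guesses, not cudaminer settings or measured yacoin parameters.

// Back-of-the-envelope sketch of why high N-factor coins want VRAM: the
// textbook scrypt scratchpad is about 128 * r * N bytes per hash instance,
// and the GPU needs one scratchpad per concurrent hashing thread.
// N and the thread counts below are illustrative assumptions only.
#include <cstdio>

int main() {
    const long long r = 1;             // scrypt-chacha coins typically use r = 1 (assumption)
    const long long N = 1LL << 14;     // example N at a higher N-factor (assumption)
    const long long scratchpad = 128 * r * N;   // bytes per in-flight hash

    const long long vram_heavy_threads = 2048;  // e.g. a 4 GB card with few shaders (assumption)
    const long long vram_light_threads = 512;   // e.g. a 1 GB card with many shaders (assumption)

    printf("scratchpad per thread: %lld KiB\n", scratchpad / 1024);
    printf("%lld concurrent threads need %lld MiB of VRAM\n",
           vram_heavy_threads, vram_heavy_threads * scratchpad / (1024 * 1024));
    printf("%lld concurrent threads need %lld MiB of VRAM\n",
           vram_light_threads, vram_light_threads * scratchpad / (1024 * 1024));
    return 0;
}

With a scratchpad that size, the card with more VRAM can keep more hashes in flight at once, which is the effect the hash rates above show.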
 
WAIT A MIN!!!!!!!!!!! Doesn't "Unified Virtual Memory" mean that if you SLI now, it will add up the RAM? :drool: I've been told this before by a few folks who seemed to know what they were talking about, unless they were wrong. This is the solution. Imagine 2 of these @ 4 GB and 2 GPUs. :clap:

:bump:


SOB!!! It can't SLI! :bang head I just wasted a lot of energy. Maybe it doesn't need an SLI adapter, like AMD?

This happens when I read the most recent comments even though I read the whole thread earlier and forgot about a few key things. lol
 
WAIT A MIN!!!!!!!!!!! Doesn't "Unified Virtual Memory" mean that if you SLI now, it will add up the RAM? :drool: I've been told this before by a few folks who seemed to know what they were talking about, unless they were wrong. This is the solution. Imagine 2 of these @ 4 GB and 2 GPUs. :clap:

Not sure about SLI and RAM, it may have changed, though I don't think so. However, even if it were additive, you still would not be changing the shader-to-RAM ratio, so it would have no performance improvement.
 
In SLI the VRAM is still mirrored between cards, so 2 GB on each of two cards is still 2 GB.

Thanks for the info guys. :)
 
Sorry, I should have been more accurate in my wording. From my understanding, from now on (with Maxwell), when you add more than one GPU in SLI, it will combine the VRAM, as opposed to the way it has been. That's what I am told by some people when you see Nvidia talk about "Unified Virtual Memory" in its roadmap. I didn't believe it at first either, but was convinced that's what it meant.
 
http://www.anandtech.com/show/7515/nvidia-announces-cuda-6-unified-memory-for-cuda

The end result as such isn’t necessarily a shift in what CUDA devices can do or their performance while doing it since the memory copies didn’t go away, but rather it further simplifies CUDA programming by removing the need for programmers to do it themselves. This in turn is intended to make CUDA programming more accessible to wider audiences that may not have been interested in doing their own memory management, or even just freeing up existing CUDA developers from having to do it in the future, speeding up code development.

I don't see where it talks about that... but it may have been a bit over my head on the first read, LOL!
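For what it's worth, that article is about CUDA programming (GPGPU compute code), not about SLI pooling VRAM for games. Here's a minimal sketch of the change the quote describes, using a made-up kernel; this is only to illustrate the quote, not code from NVIDIA or the article.

// Before CUDA 6 you would cudaMalloc() a device buffer and cudaMemcpy()
// data to it and back by hand. With Unified Memory the runtime migrates
// the data for you. The kernel is a made-up example.
#include <cstdio>

__global__ void add_one(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *data;

    // One allocation visible to both CPU and GPU; no explicit cudaMemcpy().
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 0.0f;   // touched on the host

    add_one<<<(n + 255) / 256, 256>>>(data, n);   // touched on the GPU
    cudaDeviceSynchronize();                      // wait before the host reads it again

    printf("data[0] = %f\n", data[0]);            // expect 1.000000
    cudaFree(data);
    return 0;
}

The copies between system RAM and VRAM still happen under the hood; the quote's point is only that the programmer no longer has to write them, which is a separate thing from two SLI'd cards presenting their VRAM as one pool.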
 