
RTX 40x0 talk


Intel says it beats the 780M currently, and yep, as usual, the next gen from a competitor will beat it.

I'd like to see FPS numbers instead of marketing.
Intel, as usual, compares their default TDP (not turbo) to AMD's TDP. The only problem is that the Ryzen 7840U goes up to 30W TDP, while the Core Ultra 7 165H goes up to 115W. Even if we count AMD's CPU and iGPU TDP separately (as they do, for some reason, on earlier chips), it's still less than 60W vs 115W for Intel. If we unlock the TDP on the 780M, it holds max clocks for much longer and performs better; this is why some last-generation laptops and mini PCs have 100W TDP options even though the CPUs have a ~60W TDP limit.
I'm not saying that the new Intel iGPUs are bad or worse than AMD's. I just see a general problem with comparing CPUs/GPUs when they're heavily limited by the computer's cooling or power design.
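For anyone who wants to check what power limits a given laptop actually enforces (rather than the TDP on the spec sheet), here's a rough Python sketch that reads the package limits from the Linux powercap interface. The intel-rapl paths and constraint names are the usual ones, but they can vary by platform and may need root to read, so treat it as a starting point rather than a definitive tool.

Code:
# Rough sketch: read the sustained/burst package power limits (PL1/PL2)
# from the Linux powercap interface. Paths follow the common intel-rapl
# layout but may differ per platform, so this is illustrative only.
from pathlib import Path

def read_power_limits(domain="intel-rapl:0"):
    base = Path("/sys/class/powercap") / domain
    limits = {}
    for constraint in sorted(base.glob("constraint_*_name")):
        idx = constraint.name.split("_")[1]
        name = constraint.read_text().strip()                      # e.g. long_term / short_term
        limit_uw = (base / f"constraint_{idx}_power_limit_uw").read_text().strip()
        limits[name] = int(limit_uw) / 1_000_000                   # microwatts -> watts
    return limits

if __name__ == "__main__":
    for name, watts in read_power_limits().items():
        print(f"{name}: {watts:.0f} W")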

The AMD 780M has been available in mobile computers for some months. Intel Core Ultra isn't available in anything yet, if I'm right. The 8000 series APUs will premiere soon too, but if I'm right, they will still use the 780M.

I remember when Intel was trying to prove their IGP was for gamers by showing a video actually made on discrete graphics. It wasn't even funny, as anyone who knew how the IGP performed also knew it was impossible to make it run that fast. It was just sad that a company like Intel would show fakes and attach official statements to them. Since then they have made huge improvements, but it's still far from graphics for modern games. AMD puts its IGPs into handheld consoles, where they work pretty well because of much better results at lower TDP. Intel IGPs are barely ever used for gaming, at least for anything more demanding than The Sims. Steam still shows Intel IGP as one of the most popular graphics options, but that's because of how popular simple games are. Now we could debate the definition of a gamer; when we say gaming, we generally think about more demanding titles and AAA games.

I don't follow that argument. If a 3060 is good enough to game on, why not a 4060? A problem I see on forums is some posters: they often talk about what should have been, and they're not actually playing games. The 3060 is at least comparable to current-gen consoles and will give a great experience in any modern game if appropriate settings are used. There is a bit of a PCMR mentality, but consoles are closer to low presets, and some PC gamers seem to resist considering anything below high. If the iGPU is good enough, then 2x that is still good enough.

I didn't say that the RTX 4060 isn't good for gaming. It's just not much better than the RTX 3060 (looking at user needs), and it gets hiccups in titles that use more VRAM when, for some reason, the RTX 3000 series does not (even when the cards have the same VRAM). I was reviewing the RTX 4060 8GB and RX 7600 8GB. Both were pretty bad at anything above 1080p. They're designed for 1080p, and neither manufacturer is hiding that, so it's no problem; both work well at 1080p. However, considering prices and what these cards have to offer, people much more often think about a secondhand RTX 3060 (which is 12GB), an RTX 3060 Ti/3070, or an Xbox/PS5, and often consider skipping the RTX 4000/RX 7000 series. Even on OCF we had threads about it, but more about one shelf higher, so mainly questions about the RTX 3070/4070 or RX 7800/6800 XT.
Most people don't buy only a graphics card, so an RTX 4050 would be part of a ~$1k PC, where for half the price you can get a console that gives a better experience. Those who upgrade older computers don't consider the RTX 4050 an upgrade, and the RTX 4060/RX 7600, with their 8x PCIe bus, are also not the best options for older computers.
 
I remember when Intel was trying to prove their IGP was for gamers by showing a video actually made on discrete graphics.
Was this a very long time ago? At least since they hired Ryan Shrout, roughly the last 5 years, they have been open about their testing methodology. During that era I recall people calling Intel out for using an Intel-optimised version of some benchmark, when it turned out Intel had gone out of their way to use an AMD-patched version, so it was as good as it got at the time.

it gets hiccups in titles that use more VRAM when, for some reason, the RTX 3000 series does not (even when the cards have the same VRAM).
Got an example of this? I saw the usual scenarios where settings were picked to make higher VRAM work well where lower didn't, but I don't recall seeing a claim that at the same VRAM the newer generation did worse. For the former case, pick appropriate settings and it is fine.
 
Was this a very long time ago? At least since they hired Ryan Shrout, roughly the last 5 years, they have been open about their testing methodology. During that era I recall people calling Intel out for using an Intel-optimised version of some benchmark, when it turned out Intel had gone out of their way to use an AMD-patched version, so it was as good as it got at the time.
I saw this at an Intel conference in my city, and the same was claimed globally back then. I don't remember exactly, but it was 10 years ago or something.

Got an example of this? I saw the usual scenarios where settings were picked to make higher VRAM work well where lower didn't, but I don't recall seeing a claim that at the same VRAM the newer generation did worse. For the former case, pick appropriate settings and it is fine.
The results are in my review on the front page. It was the same for the MSI RTX 4060 and later the Gigabyte RX 7600. Metro or Far Cry 6 had big problems once VRAM usage was near 8GB, and some other tests acted weird too. On older cards, when you hit max VRAM, the card got slower, but 20-30% slower. On these cards it was about 90% slower; literally, tests couldn't finish or ran at 1-5 FPS. I wasted a lot of time rerunning everything and reinstalling the OS because I thought something was wrong with it.

Here is one example. The RX 6600 XT and RTX 3070 are both 8GB cards. The RTX 3070 may be faster, but not ~6 times faster, and the RX 6600 XT is supposed to be the slower one.

[Attached chart: GBRX7600_t8.jpg]
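If anyone wants to verify whether it really is VRAM pressure causing that cliff, here's a rough Python sketch of what I'd run alongside the benchmark. It just polls nvidia-smi and logs memory use and GPU load to a CSV, so it only covers the NVIDIA cards; the interval and file name are arbitrary.

Code:
# Rough sketch: poll nvidia-smi once a second and log VRAM use alongside
# GPU load, so a frame-rate cliff can be lined up against memory pressure.
import csv
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=timestamp,memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader,nounits"]

def log_vram(outfile="vram_log.csv", interval=1.0):
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "mem_used_MiB", "mem_total_MiB", "gpu_util_pct"])
        while True:
            out = subprocess.check_output(QUERY, text=True)
            for row in out.strip().splitlines():                   # one row per GPU
                writer.writerow([field.strip() for field in row.split(",")])
            f.flush()
            time.sleep(interval)

if __name__ == "__main__":
    log_vram()   # Ctrl+C to stop; compare the CSV against the benchmark timeline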


I can't find the post right now, but we were talking about IGPs, the 780M and Intel IGPs. I said that no one had decided to use an Intel IGP in a handheld console, and it looks like MSI has decided to do exactly that right now:

I wonder about the power draw, as AMD-based consoles can barely last 2 hours on battery. The Intel chip is higher wattage, and there is more RAM and some other things. I don't think it's going in the right direction, but we will see.
 
I don't remember exactly, but it was 10 years ago or something.
They've cleaned up a lot in the last 5 years or so. Not saying perfect, but definitely better than that long ago.

It was the same for the MSI RTX 4060 and later the Gigabyte RX 7600. Metro or Far Cry 6 had big problems once VRAM usage was near 8GB, and some other tests acted weird too. On older cards, when you hit max VRAM, the card got slower, but 20-30% slower. On these cards it was about 90% slower; literally, tests couldn't finish or ran at 1-5 FPS. I wasted a lot of time rerunning everything and reinstalling the OS because I thought something was wrong with it.
I now have to wonder if this is in any way related to the video Kenrou linked yesterday. That was more AMD vs NV, but is there something more going on?

I can't find the post right now, but we were talking about IGPs, the 780M and Intel IGPs. I said that no one had decided to use an Intel IGP in a handheld console, and it looks like MSI has decided to do exactly that right now:

I wonder about the power draw, as AMD-based consoles can barely last 2 hours on battery. The Intel chip is higher wattage, and there is more RAM and some other things. I don't think it's going in the right direction, but we will see.
Do we have Windows-based game testing of Meteor Lake yet? Early Phoronix results suggested the perf/W of Meteor Lake's CPU cores was still not great vs AMD, but the GPU side turned the tables (at least maybe until the new APUs release). It may also be that power consumption can be better optimised for a gaming handheld than for a general-purpose laptop. A lot of perceived problems with Intel come from a more relaxed attitude to peak power limits, or the lack thereof.
 
Don't forget that Nvidia drivers have more overhead than AMD's; that will skew the testing as well...
 
Do we have Windows-based game testing of Meteor Lake yet? Early Phoronix results suggested the perf/W of Meteor Lake's CPU cores was still not great vs AMD, but the GPU side turned the tables (at least maybe until the new APUs release). It may also be that power consumption can be better optimised for a gaming handheld than for a general-purpose laptop. A lot of perceived problems with Intel come from a more relaxed attitude to peak power limits, or the lack thereof.

There are leaks around the web, but I don't think there is any proper comparison yet, not one forced by marketing.
Many mobile AMD CPUs have about half of Intel's TDP at max turbo/boost clocks. The IGPs are not much different between the two brands and should perform about the same (a 5 or even 10% difference is not really significant, as it can be made up with faster memory or drivers). I'm wondering mostly about battery life with Intel CPUs, and that's why I'm surprised MSI decided to use an Intel CPU for their handheld. All current handhelds have pathetic battery life, and MSI's specs suggest it will be even worse than ASUS or Lenovo.
In a couple of months the new Switch should be ready, and it's promised to deliver much higher performance. Here I only wonder if they stay with Nvidia and use a dedicated chip or move to AMD (somehow I doubt it will be Intel).
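Going back to the battery point, a quick back-of-envelope shows why ~2 hours is about the ceiling for these handhelds. The battery capacity and power figures in this little Python sketch are illustrative assumptions, not anyone's actual specs.

Code:
# Back-of-envelope battery life: hours ≈ battery Wh / average system W.
# All numbers below are illustrative placeholders, not real specs.
def battery_life_hours(battery_wh, package_w, rest_of_system_w):
    return battery_wh / (package_w + rest_of_system_w)

if __name__ == "__main__":
    for package_w in (15, 25, 35):                      # typical handheld power limits
        hours = battery_life_hours(battery_wh=50,       # ~50 Wh battery (assumed)
                                   package_w=package_w,
                                   rest_of_system_w=8)  # screen, RAM, SSD, fans (assumed)
        print(f"{package_w} W package -> ~{hours:.1f} h")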
 
Don't forget that Nvidia drivers have more overhead than AMD's; that will skew the testing as well...
I've not looked into it in depth since it seems like something that might affect the ultra-low end. Once you're mid range and up it doesn't seem so relevant.

Many mobile AMD CPUs have about half of Intel's TDP at max turbo/boost clocks. The IGPs are not much different between the two brands and should perform about the same (a 5 or even 10% difference is not really significant, as it can be made up with faster memory or drivers). I'm wondering mostly about battery life with Intel CPUs, and that's why I'm surprised MSI decided to use an Intel CPU for their handheld. All current handhelds have pathetic battery life, and MSI's specs suggest it will be even worse than ASUS or Lenovo.
Let's see the actual results before writing it off. I know the Meteor Lake launch last year was largely a paper one so they could say they released it last year, but I thought more people would have found one by now for more testing. Looking at one major UK seller, they have some Ultra 7 models listed as expected on the 22nd, so still a couple of weeks off.

In a couple of months the new Switch should be ready, and it's promised to deliver much higher performance. Here I only wonder if they stay with Nvidia and use a dedicated chip or move to AMD (somehow I doubt it will be Intel).
I've always viewed the Switch as its own niche. Its primary market seems to be people who buy it because they like Nintendo stuff, even if some cross-platform releases get ports to it. Games sell a system more than hardware specs alone. There has been speculation on which NV offering it will be based on; I've not seen any serious suggestions of it going elsewhere.
 
How so? Feels like something accounted for in the results already, no? Kind of 'is what it is'?
I don't know if it is or not since so few people talk about it (even though it's a thing, regardless of which GPUs it hits), so... don't know?
 
I don't know if it is or not since so few people talk about it (even though it's a thing, regardless of which GPUs it hits), so... don't know?
I was with Mack's line of thinking. I don't think it matters until you're in true budget land for CPUs. There are so many cores/threads available in most SKUs that (my guess) very few CPUs would be hindered by driver overhead. If you look at 14th gen, you're down in i3 land (4P+0E); i5s start from 6P+4E (+HT eventually).

Additionally, in all the results we've seen, be it high-end or potato CPUs, that overhead has been there. So while it's real, it's accounted for without mentioning it (to me)... until you get to truly potato or older parts with fewer cores/threads/IPC/clocks. How far down the product stack that line starts, and how steep the curve is, i.e. how fast a GPU it takes to set it off... good question!
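If anyone wants to poke at where that line actually starts, one crude way is to shrink the CPU underneath a running game and watch when the frame rate stops scaling. The Python sketch below pins the game process to progressively fewer cores via psutil; the process name is a placeholder and the whole thing is just an illustration of the idea, not a proper test methodology.

Code:
# Rough sketch: restrict a running game to a shrinking set of CPU cores to see
# roughly where driver/CPU overhead starts to cap the frame rate.
# Assumes psutil is installed; "game.exe" is a placeholder process name.
import time
import psutil

GAME_NAME = "game.exe"                                  # hypothetical process name

def find_game():
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == GAME_NAME:
            return proc
    raise RuntimeError(f"{GAME_NAME} is not running")

def shrink_cores(step_seconds=120):
    game = find_game()
    all_cores = list(range(psutil.cpu_count(logical=True)))
    for n in (len(all_cores), 8, 6, 4, 2):              # progressively fewer cores
        game.cpu_affinity(all_cores[:n])                # pin the game to this subset
        print(f"Pinned to {min(n, len(all_cores))} cores; note the in-game FPS now")
        time.sleep(step_seconds)                        # play/benchmark for a while

if __name__ == "__main__":
    shrink_cores()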
 
Still feels like price gouging...

[Attached images: 01.jpg, 02.jpg]


 
If it happens, I think this is on the better end of realistic expectations. The 4070S gets a nice uplift for the same MSRP as the 4070, although in the 9 months or so since the 4070 came out, its price has dropped ~10%. Still an improvement in perf/$ for those who care about that metric.

As a 4070 owner I'm OK with this. I've had use of it for ~9 months already, and there's always something new on the horizon. If I want to push 4K more, the Ti Super is potentially interesting.
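For anyone who likes the perf/$ framing, a tiny back-of-envelope makes the point. The ~15% uplift and the prices in this Python snippet are assumptions for illustration, not benchmark results.

Code:
# Back-of-envelope perf/$: new card at MSRP vs old card at a discounted street
# price. The uplift and prices are illustrative assumptions, not benchmarks.
def perf_per_dollar(relative_perf, price):
    return relative_perf / price

old = perf_per_dollar(relative_perf=1.00, price=600 * 0.9)   # 4070 ~10% under MSRP (assumed)
new = perf_per_dollar(relative_perf=1.15, price=600)         # 4070S at MSRP, ~15% faster (assumed)

print(f"perf/$ change: {new / old - 1:+.1%}")                # ~+3.5% with these numbers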
 
That was one of the pricing rumors, to be more competitive with AMD and cheaper than what they are 'replacing'.
 
The RTX 40 Super series is now official, including prices, performance claims, specs, and more! Here's all the RTX 40 Super information and some of my thoughts.


[Attached image: 01.jpg]
 
Is this you, or are you pulling it off the YT video? I keep thinking these are your YT vids because they aren't quoted, lol
:rofl: Sorry, it's a quote from the video. I thought you'd know when it's me by now; I'm never oh so classy in my writing :rofl:
 
Does anyone know if the new Super cards have a different power connector?
I found a picture of a Gigabyte 4070 Ti Super, and it has the 16-pin connector. I haven't seen a 4070 Super yet, so I don't know what it is using.

edit: Found pictures of the 4070 Super FE, and it has the 16-pin too.
 
"WARNING: fake 4090 hit the US market" ⇾ 4090 16gb laptop core on a 4080 PCB with a 4090 cooler, all with decent legit looking stickers, supposedly came from Amazon

 