
Why did Intel give up on development of i7-5775C type CPUs?

@mackerel
I'd hope you could use the 3070.
I hadn't forgotten about this but was distracted by other things last month. I just moved the 3070 into the 5775C system and am about to start testing once game updates finish downloading.

The 3070 barely fit in the case as it is so long. I'm also kinda on the low end of power recommendations with only a 550W PSU, so I hope the GPU will be ok with that under full load. At least the CPU isn't very thirsty.
 
It's a 220W card... I don't imagine your CPU taking up 280W. You'll be fine if the PSU can output its nameplate value.
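Rough numbers, if it helps. A minimal sketch of the power budget; all the wattages and the 30% transient margin below are assumed ballpark figures, not measurements:

```python
# Rough PSU headroom estimate for a 5775C + RTX 3070 system.
# All wattages are assumed ballpark figures, not measurements.
components = {
    "RTX 3070 (board power)": 220,
    "i7-5775C (package)": 65,
    "motherboard/RAM/SSD": 50,
    "fans/pump/peripherals": 20,
}

psu_rating = 550        # watts, nameplate
transient_margin = 1.3  # allow ~30% for short GPU power spikes

steady_draw = sum(components.values())
peak_estimate = steady_draw * transient_margin

print(f"Estimated steady draw: {steady_draw} W")
print(f"With transient margin: {peak_estimate:.0f} W of a {psu_rating} W PSU")
print("Headroom looks fine" if peak_estimate < psu_rating else "Cutting it close")
```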
 
If you remember back to its launch, some had problems with short-term power spikes causing PSU protection to trip even though the average power was fine. Nvidia is probably playing it safe by recommending a 650W unit for the 3070, perhaps also assuming a more heavily loaded system.

The game patch downloads will take forever. Seems like everything has tens of GB to update and my potato internet isn't helping.
 
I remember and still think you'll be ok game testing. :)

Absolutely, both NV and AMD play it safe with their PSU recommendations. Not everyone buys a high quality PSU; some buy trash that can't support its own ratings.
 
What type of RAM was the eDRAM on the 5775C? Since it was dual-ported (I believe it could read and write simultaneously), I'm guessing it wasn't DDR4 or DDR3. Wasn't the eDRAM a derivative of the eDRAM found on Intel's APU(?) for the original Xbox 360? Is the eDRAM an ASIC? Or a custom IC developed by Intel?
 
AFAIK, there are no published details about the eDRAM Intel used, except they claim it's two-way memory with 50GB/s each way (100GB/s max total). If you compare it with any other cache, you will see it's very slow. The whole idea was to provide an additional cache that is faster and has much lower latency than RAM, to cover RAM's delays (rough numbers in the sketch at the end of this post). In reality, it was almost as fast as RAM, so pointless.
Back then, Intel noticed it was slow and pointless, so it was discontinued quickly. I assume it was also limiting the CPU's frequency while next-gen chips were already much faster.
We may see it again in upcoming CPU generations, as manufacturing improvements now make a fast and large L4 cache feasible. Intel has confirmed it, but it's hard to say if it will be any good. We can also see new/upcoming CPUs with on-package LPDDR5 and other things like that. In a couple of years, we may see everything integrated; it's close to that already, and it will kill the whole PC idea. On the other hand, it will be required to shorten the traces and improve latencies and bandwidth.

You can read about LPCAMM modules, as they will be introduced soon. Later, there will probably be a new standard for desktop RAM.
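To put the "cover RAM's delays" idea above into numbers, here's a minimal average-access-time sketch; the latencies and hit rate are illustrative assumptions, not measured Crystalwell figures:

```python
# Average cost of an L3 miss, with and without an L4 cache.
# Latencies in nanoseconds and the hit rate are illustrative assumptions only.
l4_hit_ns   = 35     # assumed eDRAM/L4 hit latency
dram_ns     = 70     # assumed DRAM access latency
l4_hit_rate = 0.6    # assumed fraction of L3 misses served by the L4

# Without an L4, every L3 miss pays the full DRAM latency.
miss_cost_no_l4 = dram_ns

# With an L4, an L3 miss either hits the eDRAM or falls through to DRAM.
miss_cost_with_l4 = l4_hit_rate * l4_hit_ns + (1 - l4_hit_rate) * dram_ns

print(f"L3-miss cost without L4: {miss_cost_no_l4:.1f} ns")
print(f"L3-miss cost with L4:    {miss_cost_with_l4:.1f} ns")
# The benefit collapses if l4_hit_ns approaches dram_ns, which is the
# "almost as fast as RAM, so pointless" argument above.
```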
 
The whole idea was to provide an additional cache that is faster and has much lower latency than RAM, to cover RAM's delays. In reality, it was almost as fast as RAM, so pointless.
Back then, Intel noticed it was slow and pointless, so it was discontinued quickly. I assume it was also limiting the CPU's frequency while next-gen chips were already much faster.
It was FAR faster than any RAM of the era. For my Prime95-like workloads I was basically unconstrained by RAM bandwidth and purely core limited; I could run single-channel DDR3 and not lose any performance. It does however have the same limitations as any other cache: it is most effective for data sets that fit in it (see the sketch below), so it isn't universally applicable. If you look at iGPU performance of eDRAM-enabled CPUs, they were by far the fastest of their time too, and it proved useful in previous hwbot competitions. While it only ever made it to desktop with Broadwell, it existed for several generations in mobile, both before and after it.

I'm also not aware of any limitation to CPU speed because of its presence. If you compare Broadwell-C and Broadwell-E, they reach similar clocks with the latter not having that eDRAM. The limitation in that case was more likely that these CPUs were 1st gen 14nm and had some more work to do, which was improved from Skylake onwards.
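Here's roughly how I'd probe that working-set effect; a crude sketch only (Python/NumPy timings are noisy, and the 128 MB eDRAM capacity and ~6 MB L3 are the only Broadwell-specific numbers in it):

```python
# Crude streaming-read probe: time a pass over arrays of growing size.
# Once the working set exceeds a cache level (~6 MB L3, 128 MB eDRAM on
# the 5775C), throughput should drop toward raw DRAM speed.
import time
import numpy as np

for size_mb in (4, 32, 128, 512):
    a = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)
    t0 = time.perf_counter()
    for _ in range(5):
        a.sum()             # streaming read of the whole array
    elapsed = time.perf_counter() - t0
    gbps = 5 * a.nbytes / elapsed / 1e9
    print(f"{size_mb:4d} MB working set: ~{gbps:.1f} GB/s")
```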
 
How is that far faster when it was limited to 25/50GB/s, and DDR3 controllers (at the specified stock frequency) were limited to about 25GB/s? Higher-speed DDR3 kits could manage 30-40GB/s in dual channel. The fact it was caching with lower latency maybe made it faster, but the bandwidth wasn't really so much higher. The practical benefit of the L4 cache was barely visible back then. If it were so great, it wouldn't have ended up on only a few CPUs.

The L4 wasn't limiting Broadwell CPUs, as they were designed around it. It was limiting everything that was planned for the next gen; Intel clearly had no faster option in development. More complicated production, higher costs, and poor sales of Broadwell-C probably affected that too.
 
Fastest DDR3 I ever had was 2400, giving up to 38GB/s in dual channel. But most people didn't have that, as it was only really affordable when DDR3 was reaching end of life. 1600 was about the popular limit so we're around 25GB/s in dual channel.
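Those figures just fall out of the usual peak-bandwidth formula (channels × 64-bit bus × transfer rate); a quick check:

```python
# Theoretical peak DDR bandwidth: channels * 64-bit bus / 8 * MT/s.
def ddr_peak_gbs(mts, channels=2, bus_bits=64):
    return channels * bus_bits / 8 * mts * 1e6 / 1e9

for mts in (1600, 2400):
    print(f"DDR3-{mts} dual channel: {ddr_peak_gbs(mts):.1f} GB/s")
# DDR3-1600 dual channel -> 25.6 GB/s, DDR3-2400 -> 38.4 GB/s,
# versus the eDRAM's quoted 50 GB/s in each direction.
```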

Intel never intended to sell many Broadwell-C on desktop. They were very late, and I think Intel only launched them to say they had launched them. There were only two desktop models, which were essentially repackaged mobile offerings. Give or take a month, we're talking about it being current from around April, when Broadwell launched, to September the same year for Skylake. There are mobile Skylake CPUs with eDRAM, but as DDR4 (+LPDDR) got faster they dropped it.
 
In 2020 AnandTech did a retrospective of the i7-5775C; it consistently came in 4th place to mid-pack in 95th-percentile benchmarks, sometimes beating out 6700Ks and Ryzen 5 3600s while consistently beating contemporary 4790Ks:
https://www.anandtech.com/show/1619...ive-review-in-2020-is-edram-still-worth-it/14

The eDRAM, according to the AnandTech article, featured a serial interface to the CPU. This must've reduced the number of I/O pins required from the CPU to access it.
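As a back-of-envelope illustration of that pin-count point (the serial lane rate below is an assumed figure, not Intel's documented on-package link spec):

```python
# Back-of-envelope pins-per-bandwidth comparison.
# The 6.4 GT/s serial lane rate is an assumed illustrative value,
# not Intel's documented on-package link spec.
edram_gbs_per_dir = 50          # per the quoted eDRAM figure, each direction
serial_lane_gts   = 6.4         # assumed transfer rate per serial lane
serial_lanes      = edram_gbs_per_dir * 8 / serial_lane_gts

ddr3_data_pins = 64             # data pins for one DDR3 channel
ddr3_1600_gbs  = 64 / 8 * 1.6   # 12.8 GB/s peak per channel

print(f"Serial lanes for 50 GB/s: ~{serial_lanes:.0f} "
      f"({edram_gbs_per_dir / serial_lanes:.2f} GB/s per pin)")
print(f"64-pin DDR3-1600 data bus: {ddr3_1600_gbs:.1f} GB/s "
      f"({ddr3_1600_gbs / ddr3_data_pins:.2f} GB/s per pin)")
# A packetized serial link also folds address/command into the same lanes,
# whereas a DDR channel needs extra pins on top of its 64 data pins.
```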

A Reddit forum member had this to say about the i7-5775C:
A March 2022 video on YouTube regarding the i7-5775C shows that an i7-4790K at 4.9 GHz with 2200 MHz CL8 RAM is still 5-10% slower than an i7-5775C OCed to 4.3 GHz with the same RAM. There is also a Polish review from 2018, IIRC, that showed an i7-7700K OCed to 5.0 GHz was still slower in many games than the 5775C OCed to 4.2 GHz.

The dual-ported nature of the eDRAM is key here: DDR3, DDR4, and DDR5 share a single data bus, so you can read or write but not both at the same time. There are even turn-around time penalties for switching a DDR module from reads to writes. None of this applies to the eDRAM, which can read and write simultaneously.

AnandTech's 2021 CPU benchmarks still show the i7-5775C in the top third (maybe even the top quarter) in terms of performance.
 
lol now you're making me want to get the i7-5775C, even though I have a lower version. It's just not installed in the board yet; god, I'm such a slacker the older I get.

I found its Xeon counterpart. The only thing that sticks out to me is that the Xeon version uses a higher base multiplier than the i7, along with a higher boost clock.
 
The dual-ported nature of the eDRAM is key here: DDR3, DDR4, and DDR5 share a single data bus, so you can read or write but not both at the same time. There are even turn-around time penalties for switching a DDR module from reads to writes. None of this applies to the eDRAM, which can read and write simultaneously.
While I have read about this before, I never gave it much thought. I wonder if this is why it behaves so well in Prime95-like workloads. They can be approximated as roughly 50% reads and writes (actually slightly more reads, but close enough). This is also why I find 2R (dual rank) per channel to be incredibly performant vs 1R in this use case.
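A rough way to picture that mix is a STREAM-style "add" kernel: each element involves two reads and one write, so on a half-duplex DDR bus the stores keep stealing the bus from the loads, while a dual-ported cache can serve both directions at once. A minimal sketch (array sizes are assumed, chosen to spill well past L3):

```python
# Read-only vs. mixed read/write streaming through memory.
# Array sizes are assumed; pick them larger than L3/L4 to stress DRAM.
import time
import numpy as np

n = 16 * 1024 * 1024            # 16M doubles = 128 MB per array
a = np.ones(n)
b = np.ones(n)
c = np.empty(n)

t0 = time.perf_counter()
_ = a.sum() + b.sum()           # pure reads: two loads per element
read_time = time.perf_counter() - t0

t0 = time.perf_counter()
np.add(a, b, out=c)             # STREAM-style add: two loads + one store
mixed_time = time.perf_counter() - t0

print(f"read-only pass:  {read_time:.3f} s")
print(f"read+write pass: {mixed_time:.3f} s")
# On a half-duplex DDR bus the store traffic competes with the loads;
# a cache that can read and write at the same time hides more of that cost.
```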

I found its Xeon counterpart. The only thing that sticks out to me is that the Xeon version uses a higher base multiplier than the i7, along with a higher boost clock.
Also faster RAM support. I'm guessing they might be better-binned samples.
 