
latency: Xbone's DDR3 vs. PS4's GDDR5


magellan

Member
Joined
Jul 20, 2002
I've read that one of the PS4's deficiencies relative to the Xbone is that its GDDR5 has more latency than the DDR3 in the Xbone.

The GDDR5 is running at 1375MHz (5500MT/s effective) while the Xbone's DDR3 is running at 2133MT/s. If the memory clock period on the PS4 is 1/5.5GHz = 0.18ns and the
memory clock period on the Xbone is 1/2.133GHz = 0.47ns, then the timings (in cycles) on the GDDR5 would have to be more than twice as high as the Xbone's to result in more latency. Are the timings on GDDR5 really that slow?
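A quick back-of-the-envelope check of that arithmetic in C (clock periods taken from the effective rates, as above -- as far as I know, real CAS latencies are quoted in command-clock cycles, so treat this as illustrative only):

Code:
#include <stdio.h>

/* Clock period from the effective transfer rate, and the factor by
   which the GDDR5 timings would have to exceed the DDR3 timings
   before its absolute latency got worse. Caveat: actual CAS latencies
   are specified against the command clock, not the effective rate,
   so this is only a rough comparison of clock periods. */
int main(void)
{
    double ps4_gddr5_mtps  = 5500.0;   /* effective MT/s */
    double xbone_ddr3_mtps = 2133.0;   /* effective MT/s */

    double gddr5_period_ns = 1000.0 / ps4_gddr5_mtps;   /* ~0.18 ns */
    double ddr3_period_ns  = 1000.0 / xbone_ddr3_mtps;  /* ~0.47 ns */

    printf("GDDR5 period: %.2f ns\n", gddr5_period_ns);
    printf("DDR3 period : %.2f ns\n", ddr3_period_ns);
    printf("GDDR5 timings would need to be %.1fx the DDR3 timings to lose\n",
           ddr3_period_ns / gddr5_period_ns);
    return 0;
}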

What's hilarious is seeing the Xbone fanbois ranting and raving about the 32 MiB eSRAM that has less than 20 GiB/s more bandwidth than the PS4's 8 GiB of GDDR5.
 
No matter what latency there is ... both consoles are already slow and old. I just expect game developers to cut what they can in future titles so players won't see the kind of lag there was in some Xbox 360 games.
 
No matter what latency there is ... both consoles are already slow and old. I just expect game developers to cut what they can in future titles so players won't see the kind of lag there was in most Xbox 360 games.

and some games did lag pretty noticeably
 
All this controversy over the component that has the smallest impact on system performance. It's amusing.

Because of the UMA architecture of the two consoles, I'd imagine memory bandwidth is more relevant to them than it is to PCs with discrete video cards.

I've read articles suggesting the whole reason the Xbone is inferior in performance to the PS4 (i.e. it has to run some AAA game titles at lower resolutions than the PS4) is its inferior, DDR3-based UMA architecture. Some say the 32 MiB eSRAM is too small to make a difference, while others have said games just need to be optimized to make proper use of it.

When you think about it, the Xbone GPU is accessing texture data stored in memory running at 2133MT/s on a 256-bit bus that is shared with the CPU. Even my ATI 4870 had more dedicated bandwidth than that.
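For reference, the peak-bandwidth arithmetic (the 4870 figures are from memory -- 900MHz GDDR5, 3600MT/s effective, 256-bit -- so double-check them before quoting):

Code:
#include <stdio.h>

/* Peak bandwidth in GB/s = transfers per second * bus width in bytes. */
static double peak_gbps(double mtps, int bus_bits)
{
    return mtps * 1e6 * (bus_bits / 8.0) / 1e9;
}

int main(void)
{
    printf("Xbone DDR3-2133, 256-bit (shared) : %.1f GB/s\n", peak_gbps(2133.0, 256));
    printf("ATI 4870 GDDR5 3600MT/s, 256-bit  : %.1f GB/s\n", peak_gbps(3600.0, 256));
    return 0;
}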
 

I've bought neither console and may never buy either of them. I imagine most patrons of these forums fall in the "PC Master Race" category rather than the "Dirty Consoling Peasants" category. (Tongue in cheek - there are plenty of reasons a console is more convenient than a PC.) So I don't really have a horse in this race. However, I've got to say, I see more people gloating about the PS4's GDDR5 than about the XB1's ESRAM. I think you have it mostly the wrong way round when it comes to who crows about memory the most. And oddly enough, your post is actually an unprovoked piece of bragging about PS4 memory, so irony and all that. ;)

Anyway, what I was posting to say was that you've overlooked something in your post. Yes, the ESRAM in the XB1 may only be "less than 20GB/sec" faster (note - as an old C programmer, I say **** the bolloxed up and useless SI-ification of GB. It makes zero sense for a numbering system that uses Base-2 and was just pushed by marketing ****wads). However, two things on this. Firstly, 20GB/s is actually a pretty nice boost not to be turned down. Secondly, and the main thing you've overlooked, is that it can do this IN BOTH DIRECTIONS AT ONCE.

Unlike the DDR memory (which is most of what the XB1 has and all of what the PS4 has), the ESRAM can be read from and written to simultaneously. In the right sort of operational scenario, that's effectively double the bandwidth.

The ESRAM is quite small (32MB). What MS believe is that you can fit an entire useful set of operations in there and thus have that sub-set of operations run out of the much faster memory. For a worked example, suppose you can fit most of your character textures into the ESRAM (I believe that to be plausible, with only occasional swapping in and out).

Now in the normal case your process would be:

DDR --(texture data)-->GPU.

And that would be a very frequent operation as you drew and re-drew characters doing things.

Utilizing the ESRAM, it becomes

DDR --(texture data)--> ESRAM(once only) --(texture data)--> GPU

As you can see, once the data is loaded into the ESRAM, the entire character texturing process is using the significantly faster ESRAM. You kind of dismissed a 20% difference earlier; I do not. 20% can be quite significant, even though in this example it relates to just one sub-set of the process of creating each frame.

I'm not sure the above is a great example. How about Lighting Maps? These are actually pretty processor and memory intensive (everything is relative, of course). If you could get most of that into 32MB, then that's a great boost. You'd change the below process:

Code:
GDDR5@170GB/s
  --> CPU (does calculations)
    --> GDDR5 (CPU sends results back to GDDR5, then requests next data)
       --> CPU (does more calculations)
          --> GDDR5 (CPU sends results back to GDDR5, then requests next data)
             --> CPU (does more calculations, etc).
to

Code:
ESRAM@190GB/s
  -->CPU (does calculations)
    -->ESRAM (receives results)
    -->CPU (gets data and does calculations at same time as ESRAM receives results)
    -->ESRAM (receives results)
    -->CPU (gets data and does calculations at same time as ESRAM receives results, etc.)

When you're reading and writing at the same time, rather than waiting for a write to memory to finish so you can then read from it, your 20GB/s improvement becomes a 40GB/s improvement.
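To make that concrete, here's a toy model of serial vs. overlapped transfers; the numbers are made up and it assumes the best case of a perfectly balanced read/write mix:

Code:
#include <stdio.h>

/* Toy model: a pass that reads R bytes and writes W bytes through a
   port of bandwidth B costs (R+W)/B done one after the other, but
   only max(R,W)/B when reads and writes run in parallel. With R == W
   (the best case, chosen for illustration) effective bandwidth doubles. */
int main(void)
{
    double bw_gbs   = 190.0;  /* one-direction bandwidth, illustrative */
    double read_gb  = 1.0;    /* data read per unit of work            */
    double write_gb = 1.0;    /* data written per unit of work         */

    double serial_s  = (read_gb + write_gb) / bw_gbs;
    double overlap_s = ((read_gb > write_gb) ? read_gb : write_gb) / bw_gbs;

    printf("serial    : %.4f s (effective %.0f GB/s)\n",
           serial_s, (read_gb + write_gb) / serial_s);
    printf("overlapped: %.4f s (effective %.0f GB/s)\n",
           overlap_s, (read_gb + write_gb) / overlap_s);
    return 0;
}

In practice the mix is rarely an exact 50/50, so the real gain sits somewhere between those two figures.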

ESRAM is actually potentially very useful with the main concern being the size of it. If MS had doubled it to 64MB or up to 128MB, now that would be amazing and definitely have some effects. What it comes down to chiefly, is what can a game developer fit in that 32MB? A Lighting Map or a Shadow Map? Probably, I should say. Textures for main characters? Perhaps. Pushing it though. Still, the potential is there to get some significant real world benefits from it. In some circumstances significant benefits over the PS4's model.
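For a feel of what might fit in that 32MB, some ballpark buffer sizes (the formats below are picked purely for illustration; real games mix compressed and uncompressed formats):

Code:
#include <stdio.h>

/* Rough sizes of things a developer might park in 32MB of ESRAM,
   assuming 32 bits per pixel throughout (illustrative only). */
static double size_mib(int w, int h, int bytes_per_px)
{
    return (double)w * h * bytes_per_px / (1024.0 * 1024.0);
}

int main(void)
{
    printf("1080p colour buffer (RGBA8): %.1f MiB\n", size_mib(1920, 1080, 4));
    printf("1080p depth buffer (D32)   : %.1f MiB\n", size_mib(1920, 1080, 4));
    printf("2048x2048 shadow map (D32) : %.1f MiB\n", size_mib(2048, 2048, 4));
    printf("Budget                     : 32.0 MiB\n");
    return 0;
}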

What I'm really trying to get at though, is it's COMPLEX. Both consoles have so far seemed very similar to me. And they're both going to deliver comparable performance, I suspect. By far the largest factor in anyone's decision should rationally be the games available and what networks your friends are on. All we really know at present is the following:

(1) The PS4 in default state has a small memory speed advantage.
(2) The ESRAM added to the XB1 can add significant advantages which can reasonably be thought to offset (1) and quite possibly be an actual advantage over the PS4's set-up.
(3) Developers need to actively code to get these improvements.

What we don't know is how willing developers are to do (3) right now. As low-level programmers tend to be pretty smart people (modesty aside), they are usually pretty keen to take advantage of new toys and see what they can squeeze out of the metal. However, deadlines, legacy code and all that. It will take a while. Right now, PS4 has the advantage simply because it takes more thought to make use of the ESRAM. However, making blanket and dismissive statements about the memory approach of EITHER console is inappropriate.

It's a very complex area.

And both PS4 and XB1 are going to be nothing to gaming PCs, as always. ;)
 

SI introducing new units for base 2 makes perfect sense. The metric system is base 10; it shouldn't have the same prefixes that denote base 10 being used for base 2. These new units will also help avoid the age-old issue of hard drive manufacturers advertising their drives using base 10 metric prefixes that don't jibe w/the reported base 2 capacities in *ix or Windoze.

"Utilizing the ESRAM, it becomes

DDR --(texture data)--> ESRAM(once only) --(texture data)--> GPU"

There are games that regularly use more than 1 GiB of texture data and some games that use more than 2 GiB. Even if you have a dual-ported 32 MiB ESRAM, if you're storing texture data in it, there's going to be a lot of texture data thrashing -- texture data thrashing that ultimately relies on the weakest link of relatively slow DDR3 memory.
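To put a hypothetical number on it (the per-frame working set below is invented purely for illustration; the 68GB/s figure is the usually quoted peak of the Xbone's shared DDR3 bus):

Code:
#include <stdio.h>

/* If a frame's unique texture working set exceeds the ESRAM, the
   difference has to stream over the shared DDR3 bus whenever it
   changes. The 200MB working-set figure is made up for illustration. */
int main(void)
{
    double working_set_mb = 200.0;  /* hypothetical unique textures per frame */
    double esram_mb       = 32.0;
    double ddr3_gbs       = 68.0;   /* approx. peak of the shared DDR3 bus */

    double spill_mb      = working_set_mb - esram_mb;
    double ms_per_refill = spill_mb / 1024.0 / ddr3_gbs * 1000.0;

    printf("Spill over ESRAM: %.0f MB -> %.2f ms of DDR3 traffic per full refill\n",
           spill_mb, ms_per_refill);
    printf("(A 60fps frame budget is ~16.7 ms.)\n");
    return 0;
}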

There are already games that are forced to run in a lower resolution on the Xbone than the PS4.
 
SI introducing new units for base 2 makes perfect sense. The metric system is base 10; it shouldn't have the same prefixes that denote base 10 being used for base 2. These new units will also help avoid the age-old issue of hard drive manufacturers advertising their drives using base 10 metric prefixes that don't jibe w/the reported base 2 capacities in *ix or Windoze.

I'm old. There never was a problem with hard drives being advertised in normal MB, GB, etc. The typical buyer didn't need to know whether 1MB = 1,048,576 bytes or 1,000,000 bytes in order to see that a 120MB hard drive was larger than an 80MB hard drive when they were comparison shopping. How could it possibly matter to them? The metric versions were pushed not because there was confusion, but because they enabled marketing people to say their drive was 420MB against someone else's that was only 400MB (with a tiny bit of text at the bottom saying something that most people didn't understand). There wasn't any confusion UNTIL the metric versions started being pushed. And then suddenly you had advertised capacities that no longer matched up with what the computer actually says is in there, and the rest of us are beset by questions from confused lay people wanting to know why they don't appear to get what they paid for. Meanwhile you now have units floating around that are useless to the people who actually understand and work with them (such as myself) and yet give no advantage over the originals to the people who don't work with them.

What is indisputable is that prior to the metric push, anyone with the remotest involvement with actual memory knew what a MB was, and everyone else could just compare like for like, and there was no confusion anywhere barring a few seconds at the start of learning programming when someone learned what binary was and that therefore the prefixes had a different value in computer memory. And that after the marketing-led push (during which there were endless complaints about being cheated and mis-sold things), we have had fifteen years of confusion and argument, led mostly by non-programmers and non-engineers trying to change the meaning of a term used primarily by programmers and engineers. Which is precisely backwards.
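For anyone who hasn't seen where the "missing" gigabytes go, the arithmetic behind all those support questions is just this:

Code:
#include <stdio.h>

/* A drive sold as "500GB" (decimal) shows up as roughly 465 "GB"
   (really GiB) once the OS reports it in binary units. */
int main(void)
{
    double advertised_bytes = 500e9;                        /* 500 * 10^9     */
    double reported_gib     = advertised_bytes / (1 << 30); /* divide by 2^30 */

    printf("Advertised: 500 GB (decimal)\n");
    printf("Reported  : %.1f GiB (binary)\n", reported_gib);
    return 0;
}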

Anyway, tangent from an old C programmer over.

There are games that regularly use more than 1 GiB of texture data and some games that use more than 2 GiB. Even if you have a dual-ported 32 MiB ESRAM, if you're storing texture data in it, there's going to be a lot of texture data thrashing -- texture data thrashing that ultimately relies on the weakest link of relatively slow DDR3 memory.

I already wrote that the most likely cases for this would be light maps, shadow maps, etc. -- very intensive things that can be placed in 32MB. You skipped all those parts of my post and homed in on the one area that I explicitly said was illustrative. And even there, you misrepresented what I wrote. I didn't say "all of a game's texture data". That would be silly, as one is obviously not trying to cram an entire game's texture data into the ESRAM at once. Not sure how you could possibly respond to my comment with a reference to the sum of all texture data in an entire game.

Instead, what I explicitly wrote was that you could maybe fit in textures for some main or recurrent characters. A quick check now shows that a complete texture map for a character in the original Dragon Age game was typically around 12-13MB. So yes, you could probably get your main character into that, probably a small party, e.g. the four team members you have at any one time in Dragon Age. And of course if you have a lot of a generic enemy - dozens of orcs or zombies or something - you only need one copy of that texture in memory, and thus, as a developer, you make a substantial performance gain.

You haven't actually addressed what I wrote - you used words from what I wrote to form your own argument to defeat. Saying that a game may regularly have more than 1GB of texture in it in no way relates to whether you might be able to get significant gains out of storing textures in ESRAM. I don't care if the game has a dozen planets and spaceships you visit. If I can get the textures for my main character and the current level's aliens that I draw every single frame, that's a boost.

And I'll close this section by coming full circle and repeating what I started with, that the textures were an illustrative example. The real killer would be putting Lighting and Shadow Maps in there as these can be really processor intensive and for which the simultaneous read and write, as well as the higher bandwidth, could be a significant asset.

There are already games that are forced to run in a lower resolution on the Xbone than the PS4.

I'm aware of CoD. What are the other games that are lower resolution on XB1 than on PS4?

Anyway, you're blurring issues. Do you not think that the stronger GPU in the PS4 has a lot more to do with the different native resolution than the small DDR memory disparity? (If tone is lost by text, then the answer to the above is "of course the difference is overwhelmingly more to do with the GPU disparity".)

But of course we're arguing different things, as evidenced by your sudden shift into a different area of attack on the XB1. I'm arguing very simply that comparing the different memory approaches of the XB1 and the PS4 is very far from simple and actually a very interesting area with significant positives on both sides. And you (based on your skipping of what I actually wrote and your shift onto GPU ground) are more interested in trying to make the XB1 look worse than the PS4. The irony of your initial post being essentially 'haw haw - fanbois trying to put down the PS4 are wrong! PS4 is better' when in fact your post is actually the first instance here of someone making a partisan attack seems to have escaped you.

I find the XB1 memory approach more interesting and with clear growth potential. Whether XB1's "baseline is a bit slower but here's a tool you can use to get select parts much faster if you know what you're doing" will in the long run balance or beat the simple upfront approach of PS4's "you don't have to do anything, it's just faster than XB1's default already", who knows. MS are not idiots and they have people who know a lot about programming and a lot about hardware. For example, the XB1 can support DirectX 11.2+, which brings in (amongst other things) something called "Tiled Resources", which it implements at Tier 2. This essentially lets you overlap GPU memory with system memory so you can render things with far larger textures than might be expected. Now what does that sound like it might synergise extremely well with? Well, the ESRAM, obviously. ;) I find it highly unlikely XB1 engineers weren't talking with DirectX engineers.
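If I recall the D3D11.2 spec correctly, Tiled Resources work in fixed 64KB tiles, so the back-of-the-envelope for how much of a huge virtual texture could sit resident in ESRAM is simple (treat the tile size as an assumption, not gospel):

Code:
#include <stdio.h>

/* Tiled Resources use fixed-size tiles (64KB each, if memory serves),
   so 32MB of ESRAM could hold this many resident tiles of a much
   larger virtual texture. */
int main(void)
{
    int tile_kb  = 64;
    int esram_kb = 32 * 1024;

    printf("Resident 64KB tiles in 32MB of ESRAM: %d\n", esram_kb / tile_kb);
    return 0;
}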

That's why I find it misplaced to start waving around XB1's 900P vs. PS4's 1080P native in a game like it's a battle standard. Games on both consoles are going to look highly similar for the most part, and we'll see steady improvement on both as developers build libraries of tools and techniques to take advantage of what each platform is capable of. Especially things like Mantle, which will be a big thing - not only for the performance implications but for the increased synergy between PC development and console development.

So I guess in conclusion, I've argued for two main points against what you wrote:

(1) The difference in memory approaches has gains and losses on both sides and is not going to be a determining factor in any way. Both consoles are going to be very close in terms of graphics quality and speed.
(2) You have fundamental misconceptions about how the ESRAM will be used and the advantages that it can bring. It's actually pretty cool and has some great potential once developers start using it.
 

WRT your argument about the metric system, why should the metric system prefixes be re-defined to suit the demands of engineers and programmers? Why should the commonplace use of the base 10 metric system (outside of the USA) be re-defined?

A concrete example of the Xbone's deficiency is "misplaced"? Why don't you tell that to the developers of CoD or BF4? Did they not try hard enough? Are they incompetent? Why do both CoD and BF4 run at higher resolutions on the PS4 than the Xbone? Is running at a lower resolution an indication of the Xbone's equality w/the PS4?

This is to say nothing about the other issues seen in Xbone vs. PS4 comparisons, particularly in BF4:

1. No AA
2. Forced black textures
3. Missing ambient occlusion

But I'm not sure about the Xbone's specs any more, because I seem to remember reading it had 2 GiB of dedicated GDDR VRAM, which, if true, would make my texture memory argument irrelevant.

BTW, your double-the-bandwidth argument for dual-ported memory is in error. You don't get double the bandwidth, period.
A full-duplex gigabit NIC doesn't give you 2 gigabits of bandwidth -- ever.
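A quick sketch of why (the per-direction figure below is a generic 100GB/s, not either console's real number): total throughput is capped by the busier direction, so you only ever see 2x when the traffic is split exactly 50/50 between reads and writes.

Code:
#include <stdio.h>

/* With a per-direction peak of B, combined throughput is limited by
   the busier direction: effective = B / max(read_fraction, write_fraction).
   It only reaches 2*B when traffic is split exactly 50/50. */
int main(void)
{
    double peak_gbs    = 100.0;  /* generic per-direction peak, for illustration */
    double fractions[] = { 0.5, 0.7, 0.9, 1.0 };

    for (int i = 0; i < 4; i++) {
        double f      = fractions[i];
        double busier = (f > 1.0 - f) ? f : 1.0 - f;
        printf("reads are %.0f%% of traffic -> effective %.0f GB/s\n",
               f * 100.0, peak_gbs / busier);
    }
    return 0;
}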
 
What's it matter? Both consoles are already darkening textures just to keep decent framerates in current games. Five years down the road they're just gonna keep "optimizing."
 