
Why are m.2 SSDs so disappointing in real world performance?

No, I am talking about PCIe x4 M.2 NVMe drives vs. SATA 6 Gb/s SSDs. For example, the 850 EVO SATA SSD vs. the 960 Pro NVMe M.2 drive. All the tests I've seen show little to no difference in real-world performance for application loading times.

Like I said, STR was never the bottleneck. You're not going to see another reduction in seek times like we did from spinning rust to SSD without some kind of breakthrough in physics. And at a certain point, even seek times mean nothing, because the CPU still needs time to do its part of the work.
 

Sure, but SSDs don't have seek times, so their performance in loading a program is mostly limited by their read speed. Thus it stands to reason that if an SSD with a read speed of 500 MB/s takes 10 seconds to load a program, then an M.2 drive with a read speed of 2000 MB/s should take 2.5 s to load that same program. But that's not happening.
 
SSDs still have an access time. Look at the quoted random read speeds; that's where spending more money buys more speed than the sequential figures suggest.
 
There are seek times...

Also, loads are random reads, not sequential, so while sequential reads may be 2000 MB/s, random reads are much lower.
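To see why that linear scaling doesn't show up, here's a rough sketch of a load-time model. The 500 MB/s and 2000 MB/s sequential figures come from the posts above; the random-read rates, the sequential/random split, and the CPU time are hypothetical numbers purely for illustration.

```python
# Toy model of a program load: part of the data streams sequentially, part is
# scattered small random reads, plus some fixed CPU work (decompression,
# shader compilation, etc.). All the workload numbers are made up.

def load_time(total_mb, random_fraction, seq_mbps, rand_mbps, cpu_seconds):
    seq_mb = total_mb * (1 - random_fraction)
    rand_mb = total_mb * random_fraction
    return seq_mb / seq_mbps + rand_mb / rand_mbps + cpu_seconds

# Hypothetical 2000 MB game, half of it read as small random blocks, 3 s of CPU work.
sata = load_time(2000, 0.5, seq_mbps=500,  rand_mbps=30, cpu_seconds=3)
nvme = load_time(2000, 0.5, seq_mbps=2000, rand_mbps=50, cpu_seconds=3)

print(f"SATA SSD: {sata:.1f} s   NVMe: {nvme:.1f} s")
# ~38 s vs ~24 s: 4x the sequential bandwidth buys nowhere near a 4x faster
# load, because random reads and CPU time dominate.
```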
 
Just to illustrate the random read rates: in my laptop I have a Crucial MX300 525GB, bought primarily because it was cheap for the capacity at the time. It is M.2 form factor but SATA interface, rated at 92k random read IOPS. In my latest system I went higher end with a Samsung 960 Evo 500GB, which is rated at 330k IOPS at QD32 and 14k at QD1. Crucial doesn't split them out, but presumably their figure is the high-QD test condition, as that's the best case for them. So in a heavily threaded random read environment the Samsung could be over 3 times faster than the Crucial. The sequential read gap is even bigger, at 530 and 3200 MB/s respectively. Game load times are noticeably reduced over my main system, which also uses a SATA SSD, although I can't be sure which one (either a Crucial MX200 or a Sandisk Plus/Ultra). I can't be sure if it is due to random or sequential, but it does make a difference. I have run my own benchmarks on them, but they're at home so I can't look them up right now.
 
Well, guess what? I have a Samsung 960 Pro M.2 NVMe currently set up as a boot drive, and a Samsung 850 EVO 2.5" SATA due to be delivered tomorrow or the next day. Maybe I should do a review/comparison article?

But, if you're only buying the NVMe drives to cut down game loading times, I think you'll be disappointed. That's like buying a Corvette to get to work quicker while driving through rush hour traffic. Oops, you picked the wrong lane to be in and now you're stuck behind a smelly trash truck and a Prius just passed you. But, take that Vette out on the open highway... zoom, that Prius is getting smaller and smaller in the rearview mirror.

Instead of game load times, let's try comparing those same NVMe and SATA SSD drives when you are doing video editing, producing, and rendering. Bet you'll see mighty impressive results.
 
Games, like most applications, use many small files and rely on random transfers, not sequential. Most NVMe SSDs only have high sequential transfers, while random performance is barely better. You can get the same effect by setting up RAID 0 on a couple of SATA SSDs.
Because everything is cached, read bandwidth counts much more than write. Fast RAM can speed up storage a bit.
In general the fastest NVMe SSDs have about 50 MB/s random 4K read (at home, IOPS and deep queues mean nothing). The fastest SATA SSDs have about 40 MB/s random 4K read. A typical new-generation TLC SATA SSD has about 30-34 MB/s. That is the difference which really matters in most applications.
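Taking those 4K QD1 figures at face value, a quick sketch of what they mean for a small-file-heavy load (the 300 MB workload size is an assumption for illustration):

```python
# Rough 4K QD1 random-read rates quoted above (approximate, drive-dependent).
rates_mbps = {"fastest NVMe": 50, "fastest SATA": 40, "typical TLC SATA": 32}

workload_mb = 300  # hypothetical amount of small, scattered file data

for name, rate in rates_mbps.items():
    print(f"{name:>17}: {workload_mb / rate:4.1f} s")
# ~6.0 s vs ~7.5 s vs ~9.4 s -- a gap of seconds, not the several-fold
# difference the sequential spec-sheet numbers suggest.
```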

Where can you use an NVMe SSD? In server/workstation applications, databases, etc. We have actually talked about it many times on OCF and there were many posts with results, just maybe on somewhat older SSDs.

Simply put, in most cases it doesn't matter which SSD you have; what matters is that it's an SSD. Also, buying an SSD because the system boots faster seems pointless. The difference is maybe 3-4 seconds at most between SSDs, and since most system files are cached the drive is barely touched after boot, so it can really only speed up additional applications.

Run a benchmark like PCMark 8 or 10 and you will see how big the difference is between SSDs in popular applications and 3D tests, which cover most games. I will only say that I have done many SSD reviews, and in most cases PCMark showed a +/- 10% difference between SATA and PCIe SSDs.
 
(at home, IOPS and deep queues mean nothing)

Agreed that in home use scenarios we're unlikely to hit queue depths deep enough to reach peak random figures.

I never looked into it in detail, but I thought there was a relation between IOPS and random transfer rates: because the transfers are 4k, you multiply to get the effective speed. I haven't actually tested this though... will be going home soon, will see if I can dig out my benchmarks.
 
Not really. IOPS in some NVMe SSDs are above 300k, while in SATA SSDs the best case is about 100k, yet the difference, as I said, is about 50 vs 40 MB/s in 4K reads. IOPS depend on the controller, cache, etc. Weaker controllers can still reach the same maximum bandwidth, but IOPS will be lower and high-queue bandwidth will also be lower. There are many possible combinations. Sometimes TLC SSDs do better in sequential bandwidth and simple operations, while MLC will beat them in more complicated random and high-queue operations.

Btw, in servers a couple of years ago there were almost only SLC SSDs. Now almost all are MLC. I guess they will move to TLC when the process is improved and durability is higher. Right now there are TLC SSDs which have higher endurance than last-gen MLC or even new MLC. The Crucial MX300 is TLC and has higher endurance than the MX200 or BX300 MLC.
 
Here's some bench numbers I took previously:

Sandisk Ultra II 960GB (SATA 2.5") TLC
CrystalDiskMark Reads: Seq Q32T1 558 MB/s, 4k Q32T1 384 MB/s, 4k 31.2 MB/s

Crucial MX200 1000GB (SATA 2.5") MLC
CrystalDiskMark Reads: Seq Q32T1 556 MB/s, 4k Q32T1 286 MB/s, 4k 29.3 MB/s

Above two are my bulk storage SSDs in my main system. Next up are M.2 drives used as boot drives.

Samsung PM951 256GB (M.2 NVMe) TLC
CrystalDiskMark Reads: Seq Q32T1 1596 MB/s, 4k Q32T1 717 MB/s, 4k 45.8 MB/s

Samsung SM951 512GB (M.2 AHCI) MLC
CrystalDiskMark Reads: Seq Q32T1 2181 MB/s, 4k Q32T1 573 MB/s, 4k 37.9 MB/s

Samsung 960 Evo 500GB (M.2 NVMe) TLC 48 layer
CrystalDiskMark Reads: Seq Q32T1 2889 MB/s, 4k Q32T1 837 MB/s, 4k 51.2 MB/s

Crucial MX300 525GB (M.2 SATA) TLC 32 layer
CrystalDiskMark Reads: Seq Q32T1 525 MB/s, 4k Q32T1 334 MB/s, 4k 29.3 MB/s

And for fun, a hard disk:
Hitachi 7K1000 1000GB (SATA 2.5") 7200rpm
CrystalDiskMark Reads: Seq Q32T1 113 MB/s, 4k Q32T1 0.65 MB/s, 4k 0.27 MB/s


For single-thread random reads, SATA seems to be in the 30 MB/s ballpark. The NVMe SSDs are around 50% higher than that. The heavily queued speeds at least suggest there is little penalty when multiple programs do random disk access at the same time... it isn't linear scaling, but it is still better than pure single-thread random.

As for the scaling between IOPS and transfer speeds: going back to the MX300 525GB rated at 92k, multiplying by 4k gives 368 MB/s, not far from my measured 334 above (single run, other variables may apply). Similarly for the 960 Evo 500GB, 330k and 14k at QD32 and QD1 respectively would be equivalent to 1320 MB/s and 56 MB/s. Maybe a bit short on the former, but the latter is not far off. The difference is likely down to the exact methodology used. I only used CDM for convenience as I have run it previously.
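That conversion is just IOPS times the transfer size; a quick sketch with the same rated figures (using 4 KiB blocks, whereas the round-number arithmetic above used 4,000 bytes, hence the slightly different totals):

```python
# IOPS x block size = approximate random-read throughput.
def iops_to_mbps(iops, block_bytes=4096):
    return iops * block_bytes / 1_000_000  # bytes/s -> MB/s

print(iops_to_mbps(92_000))   # MX300 525GB rated IOPS -> ~377 MB/s (measured 334)
print(iops_to_mbps(330_000))  # 960 Evo 500GB at QD32  -> ~1352 MB/s
print(iops_to_mbps(14_000))   # 960 Evo 500GB at QD1   -> ~57 MB/s (measured 51.2)
```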

The hard disk... poor hard disk. In single-thread random reads it is 100x slower than a SATA SSD. I think this is why I find HDs almost painful to use now. The lab PCs at work are still on HDs and I hate them... I've agreed in principle to get one upgraded to an SSD, but at the speed our work IT moves, it'll probably be obsolete before I get it.

In case anyone isn't familiar with M.2 AHCI, it was a short-lived format bridging between SATA and NVMe. It uses the same AHCI protocol as SATA, but without the SATA bandwidth limitation. It doesn't have the latency optimisations of NVMe, so it may hurt random performance a bit. I got mine just before NVMe was widely available.
 
...which again illustrates precisely what people have been saying throughout this entire thread. NVMe offers a nice boost in SUSTAINED read/write performance over a SATA SSD, but when you're loading a few 30 MB blocks at a time there's just no opportunity for the format to strut its stuff. Gamers, and people who use their systems primarily for web browsing and other light tasks, would be better served deploying their resources elsewhere, as they're unlikely to see much benefit from switching to NVMe drives.
 
Yep, and then there's hotrodding overclocking fools that MUST have the fastest of everything. Which wouldn't be so bad except they have to brag about it too.

[attached benchmark screenshot]


I see there is one benchmark where SATA and NVMe are about the same.
 
lol ^^ ...and me with my crappy 2400/1400 nvme drive. ...I feel so ashamed. :D

...course I can almost buy two of em for the price of your drive. mwahahaha
 
Right again. That's why I've got a Samsung 850 EVO 2.5" SATA being shipped right now. It was half the price of my 960 Pro M.2.
 
I have been tempted to go the other way and get 1-2 more 850 EVOs for some RAID 0 action.

Not a bad choice if you already have one 850 EVO. Starting from scratch I'd favor a single 1TB NVMe over 2x500GB SATA drives if you can find the NVMe drive on sale. Even the cheapest 1TB NVMe drives should come in at least 2x as fast in sustained throughput vs. RAID 0 SATA SSDs. Looks like the Plextor 1TB NVMe is back up to $436, so that's roughly a 25% premium over 2x500GB 850 EVOs at this time, at least at the egg. That was only an 11% premium last week as the Plextor drive was on sale for $386. Naturally, if you're not doing a bunch of sustained reads and writes, then neither RAID 0 nor NVMe drives will justify their premiums.
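For what it's worth, the premium arithmetic above works out if a pair of 500GB 850 EVOs runs about $350 — that pair price is my assumption, back-derived from the quoted percentages rather than stated in the post:

```python
# Premium arithmetic from the post above. The ~$350 price for a pair of
# 500GB 850 EVOs is an assumption implied by the quoted percentages,
# not a figure given in the thread.
pair_of_850_evos = 350.00
for plextor_price in (436.00, 386.00):
    premium = plextor_price / pair_of_850_evos - 1
    print(f"${plextor_price:.0f} Plextor 1TB NVMe: {premium:.0%} premium")
# -> roughly 25% and 10%, in line with the "25%" and "11%" figures quoted.
```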
 
The thing I don't like about RAID 0 (apart from the data loss risk, that is) is that it can take longer to boot up because the RAID interface has to load during POST. Kind of defeats the purpose to some extent.
 

Per the thread topic, though, RAID 0 only makes things worse. RAID 0 might give you better STR (if and only if the bus your drives are on can sustain both at once), at the penalty of worse seek times (not that they're anything but negligible on an SSD anyway, but still), and striping by its very nature will force more random reads for any file that crosses stripe boundaries, which, as already pointed out, are slower.
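A small sketch of what striping does to a single read (the 128 KB stripe size and file layout are arbitrary illustrative values, not any particular controller's behaviour):

```python
# RAID 0 striping sketch: a contiguous logical read gets chopped into
# stripe-sized chunks that alternate between member drives.
STRIPE_KB = 128
DRIVES = 2

def split_read(offset_kb, length_kb):
    """Return (drive, drive_offset_kb, chunk_kb) pieces for one logical read."""
    pieces, pos, remaining = [], offset_kb, length_kb
    while remaining > 0:
        stripe_index = pos // STRIPE_KB
        within = pos % STRIPE_KB
        chunk = min(STRIPE_KB - within, remaining)
        drive = stripe_index % DRIVES
        drive_offset = (stripe_index // DRIVES) * STRIPE_KB + within
        pieces.append((drive, drive_offset, chunk))
        pos += chunk
        remaining -= chunk
    return pieces

# A 300 KB file starting 100 KB into the array crosses three stripe
# boundaries, so one logical read becomes four per-drive requests.
print(split_read(100, 300))
# [(0, 100, 28), (1, 0, 128), (0, 128, 128), (1, 128, 16)]
```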
 
Then there's the name itself. RAID 0 isn't RAID at all as there's no redundancy.
 
So then the big question is: why do SSDs have access times at all? HDDs have them because the head needs to physically move around, so the computer spends all day waiting for the head to move into position. But SSDs have no moving parts, so why can't an SSD transfer 5,000 files of 5 MB each at the same rate as one 25,000 MB file? Does RAM scale the same way?
 