
These fast, fast m.2 drives... not so much.


Archer0915

"The Expert"
How fast is fast enough? I very much want to throw the fastest drives in, but honestly, after a point I (not anyone else, I am saying me) see no noticeable difference. Yes, I have 3 in a RAID and get between 10 and 16 GB per second on average, but I just don't see much difference between that and one drive at 7.
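As a rough sanity check of those numbers (all figures below are illustrative assumptions, not measurements), here is the napkin math for how long a big sequential read actually takes at 7 GB/s vs 16 GB/s:

```python
# Napkin math: idealized sequential transfer time, ignoring latency, CPU and
# filesystem overhead. The throughput figures are just assumptions from the post.

def load_time_s(size_gb: float, throughput_gb_s: float) -> float:
    return size_gb / throughput_gb_s

for size_gb in (2, 20, 200):
    single = load_time_s(size_gb, 7)    # one fast NVMe drive
    raid0 = load_time_s(size_gb, 16)    # three-drive RAID 0 average
    print(f"{size_gb:>4} GB: single {single:6.2f} s  vs  RAID 0 {raid0:6.2f} s")

# Anything under a few tens of GB finishes in a blink either way, which is why
# the difference is hard to feel outside of huge drive-to-drive copies.
```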

Is it just me? I mean benchmarks are clear but I still find myself waiting on stuff.

Okay, let the criticism begin
 
What happens to that data after it is transferred? Chances are the limit is elsewhere. I think I can feel the difference between SATA and NVMe, and in some situations the difference between a drive with DRAM and one without. The differences aren't that big, though.
 
Personally I don't see any difference between a decent SATA SSD and an M.2 drive in regular Windows use/boot; everything loads in seconds, and even multi-gigabyte game installations/patches are extremely quick. Between SSD/M.2 and a HDD, on the other hand, it's night and day (even a 7k/10k RPM drive w/256MB cache).
 
I'm a RAID nut, so I have 6 spinner drives in 3 different RAID configs, and I have a pair of M.2 drives also in RAID, just 'cause :LOL: I've been running RAID 0 since forever. I'm hoping the prices on large (4TB+) drives drop so I can migrate and dump the spinners. For now the spinner drives are super cheap and are working for my needs just fine.

M.2 drives are plenty fast solo, but raid-0 is soo much fun ;)


 
I mean, you're at the point of diminishing returns. The benefits in striped RAID configs are generally only seen in large transfers. How often are you moving large files around? Otherwise, 4k performance hasn't changed much and that's what gives you that snappy feeling.

How fast is fast enough for me? A single high-performance PCIe 4.0 NVMe drive keeps me happy. I have a combination of M.2 devices (3x), SATA SSD (x1), and a spinner for cold storage.
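To put some hedged numbers behind that (the latency and bandwidth figures below are assumptions for illustration, not benchmarks), a toy model of total time = per-I/O latency × count + bytes / bandwidth shows where striping pays off and where it doesn't:

```python
# Toy model: large sequential copies are bandwidth-bound, small random reads
# are latency-bound, so doubling bandwidth with RAID 0 barely moves the latter.

def total_time_s(num_ios: int, io_size_kib: float,
                 latency_us: float, bandwidth_gb_s: float) -> float:
    transfer_s = num_ios * io_size_kib * 1024 / (bandwidth_gb_s * 1e9)
    return num_ios * latency_us / 1e6 + transfer_s

# One ~50 GB file copy (sequential):
print("50 GB copy, single drive:", round(total_time_s(1, 50e6, 80, 7), 1), "s")
print("50 GB copy, RAID 0      :", round(total_time_s(1, 50e6, 80, 14), 1), "s")

# 100,000 random 4 KiB reads (the OS/app "snappy feel" workload):
print("100k x 4K, single drive :", round(total_time_s(100_000, 4, 80, 7), 1), "s")
print("100k x 4K, RAID 0       :", round(total_time_s(100_000, 4, 80, 14), 1), "s")
```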
 

The only time it is ever noticeable is drive-to-drive transfers. I have 3 other M.2 drives. Movie transfers. All I was getting at is that these drives are getting stoopid fast, and you are correct. I posted this thread as a thinking thing. I hate seeing people spend on a faster drive when memory, CPU, or video may bring more bang.
 
I do see big differences between the spinners as well, but on normal or regularly used files, not much. I am a big cache lover. I have all my caching on the RAID 0. Why? It started years ago with virtual memory. I moved it from the boot spinner to my storage spinner. It made a difference. Now I use caching for everything.
 
I moved it from the boot spinner to my storage spinner. It made a difference.
That's an old-school tweak! A lot of us did that back in the day. It made a difference then for two reasons. First, in theory, there is less I/O on a HDD used for storage than on one that has an OS on it. Second, to get the most out of moving the cache, you'd put it at the fastest part of the HDD and make the size static (the beginning, so the heads don't have to travel as far looking for information... which ties into the first reason). These days, I don't imagine there is any benefit to moving the virtual memory off the OS drive if it's a PCIe M.2 or SSD... I/Os are so fast that any going on with the OS wouldn't even be noticeable, since the drive isn't trying to move heads and seek for data on a platter, which was the whole point of moving the cache to a non-OS drive.

EDIT: Thinking about it... putting it in R0 could actually slow things down, because virtual memory, like most OS things, is small-file I/O. I thought R0 increased latency on those small files for some reason. If that's true (someone......?....... anyone??? lol), it may be counterproductive to move the cache off a single solid-state drive onto the array.
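If anyone wants to test that hunch rather than guess, a minimal sketch along these lines would do it (paths and sizes are placeholders; use a test file a few times larger than your RAM, or run right after a reboot, so the OS file cache doesn't answer from memory):

```python
# Time 4 KiB reads at random offsets in a large file, queue depth 1, and report
# the median latency. Run it against a file on the single SSD and one on the
# RAID 0 volume to compare small-I/O latency directly.
import os
import random
import time

def qd1_random_read_us(path: str, iterations: int = 2000, block: int = 4096) -> float:
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb", buffering=0) as f:          # unbuffered binary reads
        for _ in range(iterations):
            offset = random.randrange(0, size - block) // block * block
            start = time.perf_counter()
            f.seek(offset)
            f.read(block)
            latencies.append(time.perf_counter() - start)
    latencies.sort()
    return latencies[len(latencies) // 2] * 1e6       # median, in microseconds

# Hypothetical test files - substitute your own drive letters/paths.
print("single NVMe:", qd1_random_read_us(r"D:\testfile.bin"), "us")
print("RAID 0 vol :", qd1_random_read_us(r"E:\testfile.bin"), "us")
```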
 
I cache the big spinner and use the RAID for all caching. My justification is not response time; it is keeping the boot/program drive's lanes free of anything that isn't directly program-related.

Is it better? It should be, and if it is not, as long as there's no significant negative impact, let me live in my blissful ignorance.

My philosophy is to use everything to balance the load. Total system throughput.

The drive controller has more bandwidth than the drives.
 

If I have five internal drives but keep most data moving on only part of the available bandwidth, then I am not optimized.

Say a single fast drive vs. a RAID. Same base principle. That RAID uses more ports, and we will say that every transfer saturates the available bandwidth.

Wait, we will say every drive always saturates all that is available to it.

If I spread things out and use more of the total bandwidth available to the controller, I have created the ability to multitask with fewer bottlenecks from the drives. Instead of pulling everything off of one drive, items are pulled off of several. Same principle as RAID, but instead of a speed increase for one item, it allows more of the needed data to flow for multiple items at once.
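A crude sketch of that load-spreading idea (the drive count, job sizes, and 7 GB/s figure are all assumptions for illustration):

```python
# Crude model: concurrent jobs share a drive's bandwidth; spreading jobs across
# drives lets each one run at full drive speed. Ignores controller limits,
# latency, and everything else - it only illustrates the "spread it out" idea.

def finish_time_s(job_sizes_gb, per_drive_gb_s, num_drives):
    per_drive_jobs = [job_sizes_gb[i::num_drives] for i in range(num_drives)]
    return max(sum(jobs) / per_drive_gb_s for jobs in per_drive_jobs if jobs)

jobs = [20, 20]   # e.g. a game load and a big file copy, 20 GB each
print("both jobs on one drive:", round(finish_time_s(jobs, 7, 1), 2), "s")
print("one job per drive     :", round(finish_time_s(jobs, 7, 2), 2), "s")
```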
 
I didn't get enough sleep. What cache? Swap file is not cache?

During the HDD era, I tried to have more RAM than average, since Windows cached disk accesses. After the first launch, loads came out of RAM until they got evicted by something else. I think Windows still does that, but I can't say I feel the difference between that and loading from a good SSD. Maybe in part because modern software is so bloated that the CPU time dominates anyway.
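That RAM caching is easy to see for yourself; a minimal sketch (the file path below is a placeholder) is just to read the same big file twice and compare:

```python
# First read after a reboot comes off the drive; the second usually comes
# straight out of the OS file cache in RAM.
import time

def timed_read_s(path: str) -> float:
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(8 * 1024 * 1024):   # stream in 8 MiB chunks
            pass
    return time.perf_counter() - start

path = r"C:\Games\some_big_asset.pak"    # hypothetical file, use any large file
print("1st read (cold, from disk):", timed_read_s(path), "s")
print("2nd read (warm, from RAM) :", timed_read_s(path), "s")
```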
 

Swap file, yes, and I did (do) caching too. A few years back I actually had half my memory set up as a RAM drive and had... it is complicated.

I have used this https://www.romexsoftware.com/en-us/primo-cache/index.html but not anymore.
 
I haven't read everything but just to add something and maybe answer some questions...

The real gain in SSD performance is in access times and random operations. You also see the sequential bandwidth, but almost only when you move a lot of data at once. In day-to-day work you see mostly random reads. Random writes are cached, so any delays are not that noticeable.

RAID makes access times worse, and low-queue random operations are the same or worse than with a single SSD. You can see it in the Q1T1 results in CrystalDiskMark, if you wish to compare.
RAID 0/10 is faster if you work on something much more complicated with multithreading support. Usually you see that in an enterprise environment, with large databases and other things like that.

One more thing: most applications use data that is already loaded into RAM and don't load much more later. This is why starting an application takes the longest. So what's important is fast read speed and/or access time on many small files - faster SSD controllers = more IOPS. Once everything is in RAM, the application operates on that data. Most data on the drive is not stored linearly but is spread across the whole drive, which is why random read performance and access time matter most.

I can agree that most of the noise about the fastest SSDs is mainly marketing, as we can't really see the difference between the faster PCIe 3.0, 4.0, and 5.0 SSDs. However, if we look at results for much more demanding software that loads a lot of data, then the fastest SSDs actually are much faster. It just usually doesn't matter on our home PCs.
In the same way, marketing translates "top speed" into sequential bandwidth. PCIe 5.0 SSDs are advertised as the fastest because they can reach 12 GB/s+. Who cares, when performance in random operations is about the same as a mid-shelf PCIe 4.0 SSD.

A RAM disk, even though it in theory has much higher bandwidth than an SSD, performs about the same in a home/office environment.
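To put rough numbers on the Q1T1 point (the latencies below are assumed ballpark figures, not specs for any particular drive): at queue depth 1, IOPS is roughly 1 / latency, so 4K throughput lands in the tens of MB/s no matter what the sequential headline says.

```python
# QD1 random-read throughput from an assumed access latency:
# IOPS ~= 1 / latency, throughput = IOPS * block size.

def qd1_throughput_mb_s(latency_us: float, block_kib: float = 4) -> float:
    iops = 1e6 / latency_us
    return iops * block_kib * 1024 / 1e6

for name, latency_us in [("fast PCIe 5.0 NVMe", 60),
                         ("mid PCIe 4.0 NVMe ", 70),
                         ("SATA SSD          ", 120)]:
    print(f"{name}: ~{qd1_throughput_mb_s(latency_us):4.0f} MB/s at 4K QD1")

# Tens of MB/s either way - nowhere near the 7-12 GB/s sequential headline,
# which is why the drives feel so similar in normal desktop use.
```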
 
I'm in the process of reviewing my first PCIe Gen5 x4 NVMe drive. During large file transfers, it is noticeably faster than even PCIe 4.0. For all other benchmarks, it's naturally faster, but it's not as tangible as the large file transfers.
 
Thanks for the reply. I always make these posts to convince myself I do not need to buy stuff. I sort of look for consensus by not asking a question directly; I simply get a discussion going and see where it goes.
 
This (a much more detailed explanation of what I said earlier - like!!!) is why I moved past constantly tweaking things in my OS... because some of the things we held true from back in the day (like putting the Windows swap file on a different HDD) don't matter today when you're on an SSD. The tangible benefits get exaggerated by benchmarking numbers (12K is faster than 8K!!!), and if you don't know how to interpret the data, you can get bamboozled easily - like when you don't work with large files, so R0 offers almost no benefit and can actually slow down OS things. So to me, things like that feel like a waste of time to set up and configure unless it's a notable (read: can feel it on the butt dyno) benefit for my use case. Adding items also tends to add complication and problems... it's like an onion. I follow the K.I.S.S. method for 24/7 operations (tweaking and testing is a whole other story!) and it has worked out well for me over the last several years, lol.

I guess to summarize an answer for the OP............. a single NVMe drive (NO RAID), no RAM disks, no nothing, is plenty for 99% of home users. A single NVMe disk is fast enough (for me). Users don't have to go through mental gymnastics to optimize their systems these days. Modern systems aren't leaving a 10-20% difference on the table like they did, say, 10 years ago... more like 1-2%. For me, the most thought that goes into my builds for storage is putting the drives in the right slots (fastest for the drive, native controllers for SATA if possible) and partitioning to keep files separate from the OS, allowing for an easy OS restore via image without having to download games again. But moving the Windows swap file to another SSD/NVMe or creating a RAM disk (again, for MY use case) isn't 'the way'. As Woomack described above, so few people can utilize the actual benefits (including the OP) that it ends up being just another layer of complication and something that can fail - and in the case of moving your Windows swap file to RAID 0, it can actually be slower due to the small I/O the swap file sees.


EDIT: My current storage config...

1x 2TB NVMe (fastest I have). Partitioned for OS/Apps (500GB) and Games (1.5TB)
1x 1TB NVMe for other Games
1x 500GB NVMe ('slowest' PCIe 3.0) for the latest FS game since it's a pig for space...but can easily go on another NVMe
1x 2TB HDD for backups and cold storage (another is offsite at my mom's place that I bring back occasionally)

EDIT2: Also, using RAID increases costs significantly (at least 2x)... so for my cheap self, I better make damn sure I'm getting a tangible benefit if I have to pay 2x+ for my storage (read: any part).
 
EDIT2: Also, using RAID increases costs significantly (at least 2x)... so for my cheap self, I better make damn sure I'm getting a tangible benefit if I have to pay 2x+ for my storage (read: any part).
I'm picking an edge case here, but say you wanted 8TB of SSD: it's probably cheaper to get 2x4TB or 4x2TB than 1x8TB, with the same model series for each. Otherwise cost/capacity doesn't vary much, as long as you have enough slots available.
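For example (the prices below are made up just to show the arithmetic; check current listings), the cost-per-TB comparison is easy to run:

```python
# Toy cost-per-TB comparison with hypothetical prices.
options = {
    "1x 8TB NVMe": (1, 8, 700.0),   # (drive count, TB per drive, price per drive)
    "2x 4TB NVMe": (2, 4, 250.0),
    "4x 2TB NVMe": (4, 2, 110.0),
}
for name, (count, tb_each, price_each) in options.items():
    total_tb = count * tb_each
    total_cost = count * price_each
    print(f"{name}: ${total_cost:.0f} total, ${total_cost / total_tb:.0f}/TB")
```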


I recently rearranged my SSDs, trying to eliminate the SATA ones as I set up for the future. I still have one left in my system, as it is a pain to move large game installs around. The other data has low enough performance requirements that I'm not exactly rushing to remove that SSD.
C: Samsung 980 Pro 2TB - OS + Games (bigger, high performance priority)
D: Crucial MX500 1TB - Games (bigger, low performance priority) + bulk data (video storage, AI experiments)
E: Samsung PM951 256GB - Older games - This is a very old NVMe so sequential speeds are very weak by current standards, only around 3x SATA reads, lower than max SATA writes.
F: Samsung SM951 512GB - Older games - This is a rare M.2 AHCI SSD: the SATA-era AHCI protocol at PCIe speeds.
H: Intel Optane 900p 280GB - Games (smaller, high performance priority)

To me I'd lump SSDs into 2x2 groups: DRAM or not - affects responsiveness, and SATA or NVMe - affects sequential speeds. I don't think I've experienced an HMB (Host Memory Buffer) SSD yet, so I don't know how to categorise it in the DRAM-or-not question. DRAM does make a big difference when doing some intensive operations like Windows Update or game patches. I'm keeping an eye open for the next sale on a 2TB NVMe, as that could replace the two older Samsung SSDs and the SATA drive.
 
I recently rearranged my SSDs trying to eliminate SATA ones as I set up for the future.
My SSDs eliminated themselves recently. 2 died in the last two weeks ...

The new generation of DRAM-less SSDs is great. Some are not far from the best PCIe 4.0 SSDs. I have a Predator GM7 2TB SSD in my gaming PC right now, and most benchmarks are at about Samsung 980 Pro level. At the same time, the SSD stays at ~45°C.
 