I got the 900p 280GB M.2 version. It is actually a 2.5" drive with a U.2 connector, and the box includes a cable that adapts U.2 to M.2. Note the 2.5" drive is thicker than common modern drives. I also bought an M.2-to-PCIe adapter card, as I thought I might need it to copy between M.2 drives in systems without multiple connectors.
I wanted to test two things:
1, does where you connect it affect performance? M.2 slot, chipset PCIe, or CPU PCIe?
2, does the CPU speed affect results? This follows on from an Intel-sponsored white paper by Shrout Research showing that CPU power saving can impact performance.
Key parts of test system:
Asrock Z370 Pro4 bios 1.3
2x8GB Samsung B-die ram at 3600C16
i3-8350k at 4.0 or 5.0 GHz.
R7 260X
Win10-64 FCU
Tests run were CrystalDiskMark 6.0.0 x64 and AS SSD 2.0.6485.19676. Each was run 3 times, and the best of the 3 was used for comparison. Default settings were used.
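For anyone wanting to replicate the comparisons, the arithmetic is simple: best-of-3 selection, then percent change against a baseline. Here's a quick Python sketch of just that arithmetic (this is not my test harness, and the run numbers below are made up purely for illustration):

```python
def best_of(runs):
    """Best-of-N selection: each test was run 3 times and the highest result kept."""
    return max(runs)

def pct_delta(baseline, new):
    """Percent change of `new` relative to `baseline` (positive = faster)."""
    return (new - baseline) / baseline * 100.0

# Hypothetical MB/s figures for two configs -- NOT my measured results.
runs_config_a = [270.1, 281.0, 278.4]
runs_config_b = [295.2, 298.7, 297.0]
a, b = best_of(runs_config_a), best_of(runs_config_b)
print(f"best of 3: {a} vs {b}, delta {pct_delta(a, b):+.1f}%")
```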
Here's an example CrystalDiskMark result. Of interest to me is the 4k QD1 read rate, shown here at around 281MB/s! This compares to my best flash SSD, a 960 Evo 500GB, which gets around 55MB/s in that test.
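To put those QD1 numbers in perspective, you can convert throughput to IOPS and an average per-read latency. A quick sketch of the conversion (assuming CDM reports decimal MB, i.e. 10^6 bytes, and 4 KiB transfers):

```python
def qd1_stats(mb_per_s, block_bytes=4096):
    """Convert a QD1 throughput figure to (IOPS, average latency in microseconds).

    Assumes decimal MB (10^6 bytes) and 4 KiB transfers; at queue depth 1,
    average latency is simply the reciprocal of IOPS.
    """
    iops = mb_per_s * 1_000_000 / block_bytes
    latency_us = 1_000_000 / iops
    return iops, latency_us

for name, mbps in [("900p", 281), ("960 Evo", 55)]:
    iops, lat = qd1_stats(mbps)
    print(f"{name}: {iops:,.0f} IOPS, {lat:.1f} us/read")
```

Roughly 14.6 us per read for the 900p versus about 74.5 us for the 960 Evo, which is why the QD1 number is the one that stands out.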
I have a table of results, and while I think of how to present them, I'll just write some observations on where the differences are.
Test configurations:
1, 900p connected to mobo M.2 connector at bottom of board
2, 900p connected via PCIe adapter to chipset PCIe slot
3, 900p connected via PCIe adapter to CPU PCIe slot (GPU moved temporarily to chipset slot)
4, same as 3, but with CPU OC 25% to 5.0 GHz (also used for above CDM result)
Comparing 1 and 2, there is no significant difference (<5%), which could be expected as they're both going through the chipset.
Comparing 2 and 3 is more interesting. Sequential reads consistently dropped by over 5% in both benchmarks. There is a similar effect for high-QD transfers, but to a lesser extent. Randoms did increase in performance: low single-digit % for reads, up to double digits for writes.
Comparing 3 and 4 didn't show a change in sequential speeds, but it did bump up random reads and writes. CDM Q32 increased 14%/17% for reads/writes respectively; Q1 increased around 5% for both, and AS SSD 7% for both. Optane is so fast your CPU might be a bottleneck!
Back to my original questions: for the first, the answer is yes, though I'm not clear why. Randoms are more latency sensitive, and skipping the chipset hop likely gives them a slight edge; I'm not sure why sequentials would actually drop. The answer to the second question is also yes: a faster CPU means faster transfer rates. Combining the two, and comparing the onboard M.2 connector to a CPU-connected PCIe slot, the biggest gains are T1 randoms in CDM, with a 12-20% increase; AS SSD shows a similar 9-13%. So... not insignificant!