
Quick schooling on NVMe drives

Did you install the RAID driver with your Win7 install?
I have read that they perform better with the RAID driver.

Mine is the 500GB 960 EVO.

Nope, no RAID driver. I installed the Samsung 2.1 (I believe; 2.x for sure) driver so Win 7 could install itself on the drive.

This is also the 500GB 960 EVO.
 
Here are mine with stock clocks on a 6700K, but with C-states killed and SpeedStep killed as well.
This is also the 960 Pro, as a secondary drive: [attachment: 960-pro-512gb.jpg]
 
I like the 4K read performance on these Samsung drives. Still not as high as you might expect given the sequential bandwidth, but about 20-30% higher than most of the competition.

Well, looks pretty decent to me:

[attachment 187323: benchmark screenshot]

I meant that results in RAID are not much higher than on a single drive; single-drive performance is as good as it should be. It's the same with all NVMe drives in RAID: with two drives you'd expect to pass 5GB/s when a single drive does 3GB/s, but no... the max is near 4GB/s, and only in isolated tests, while in most tests you can't see much above 3.5GB/s.
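A quick back-of-the-envelope sketch of that scaling gap. The drive and link figures below are illustrative, taken from the numbers quoted in this thread, and the simple min() model assumes the shared CPU-chipset link is the only bottleneck:

```python
# Sketch: why two NVMe drives in RAID0 behind the chipset don't double throughput.
# Numbers are illustrative, based on figures quoted in this thread.

def raid0_effective_gbps(per_drive_gbps: float, n_drives: int, link_cap_gbps: float) -> float:
    """Ideal RAID0 scales linearly, but a shared upstream link caps the total."""
    ideal = per_drive_gbps * n_drives
    return min(ideal, link_cap_gbps)

single = 3.0     # GB/s, one NVMe drive, sequential read
dmi3_cap = 3.93  # GB/s, DMI 3.0 (PCIe 3.0 x4 equivalent) ceiling

print(raid0_effective_gbps(single, 1, dmi3_cap))  # 3.0  - single drive unaffected
print(raid0_effective_gbps(single, 2, dmi3_cap))  # 3.93 - capped well below the "expected" 6.0
```

This matches the thread's observation: a single drive runs at full speed, but two drives in RAID0 stall just under 4GB/s instead of reaching the ~6GB/s linear scaling would suggest.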
 
I agree, Bart; once you put more than one NVMe drive in RAID, you don't gain like you'd think you would. I actually got better PCMark7 scores with 4 SATA SSDs in RAID than with a RAID of NVMe Sammy Pro drives.
I expect that in time the drivers or chipset capabilities for RAID will get better with these little monsters.
 
Are you guys certain that both M.2 slots are offering full bandwidth? Not all Z170 boards, and even fewer older boards, have two full-speed PCIe x4 M.2 slots. One also has to consider bandwidth sharing with other devices...
 
I'm testing my drives on an MSI Z270 Carbon, which has two PCIe 3.0 x4 M.2 slots. Single drives in both slots give the same results. There shouldn't be any sharing when each M.2 slot has a PCIe x4 connection, and even if there were, I'm not using any additional devices. About the same results appear in reviews around the web, like here:
http://www.vortez.net/articles_pages/samsung_960_pro_raid_review,7.html ~3.5GB/s max in ATTO
http://www.tweaktown.com/articles/7553/samsung-950-pro-pcie-gen-3x4-nvme-ssd-raid-report/index4.html ~3.4GB/s max in ATTO
http://ocaholic.ch/modules/smartsection/item.php?itemid=3955&page=3 up to 4.1GB/s

I have Patriot drives which are a bit slower than Samsung 960 but I get about the same max bandwidth in RAID0.
 
Their bandwidth is limited by DMI 3.0.
I think if you can get onto the main PCIe slots (wired directly to the CPU) then you can get away from the DMI limit.
 
I loaded up one of my 950 Pro 256GB drives with XP and ran the 960 Pro from there; here are some results. This is with C-states and SpeedStep both off: [attachment: sammy960pro-xp.jpg]
 
Why and how do C-states and SpeedStep affect drive performance? Doesn't the CPU just ramp up and stay ramped?
And please explain DMI 3.0 a little; it has also come up in another thread I have going about PCIe.
 
DMI is a bus connecting the CPU and the chipset; all storage data passes through it on its way to the CPU. Here is how it looks on Z170:
[attachment: Intel-Z170-chipset-block-diagram_w_600.jpg]
As you can see, DMI sits between the CPU and the chipset.

Considering motherboard design, it's not possible to connect an M.2 SSD to anything faster without a PCIe adapter card in the first PCIe slot, as only that slot connects directly to the CPU via PCIe 3.0 x16. Everything connected to the chipset tops out at PCIe 3.0 x4 speed, so ~4GB/s.
I don't like Wikipedia, but I can't find anything better right now, so here is the link:
https://en.wikipedia.org/wiki/Direct_Media_Interface
"DMI 3.0, released in August 2015, allows the 8 GT/s transfer rate per lane, for a total of four lanes and 3.93 GB/s for the CPU–PCH link."
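That 3.93 GB/s figure follows directly from the link parameters: DMI 3.0 uses PCIe 3.0 signaling, i.e. 8 GT/s per lane with 128b/130b encoding, across four lanes. A minimal sketch of the arithmetic:

```python
# DMI 3.0 bandwidth from first principles:
# PCIe 3.0 signaling = 8 GT/s per lane with 128b/130b line encoding, 4 lanes total.
gt_per_s = 8e9        # raw transfers (bits on the wire) per second per lane
encoding = 128 / 130  # 128b/130b encoding efficiency (~98.5% of raw bits are payload)
lanes = 4

bytes_per_lane = gt_per_s * encoding / 8  # bits -> bytes
total_gb_s = bytes_per_lane * lanes / 1e9

print(round(bytes_per_lane / 1e9, 3))  # ~0.985 GB/s per lane
print(round(total_gb_s, 2))           # ~3.94 GB/s for the 4-lane CPU-PCH link
```

So the quoted 3.93 GB/s is simply the theoretical payload ceiling of the link, before any protocol overhead, which is why RAID results in this thread plateau around 3.5-4 GB/s.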

If I'm right, you can't overclock DMI. At least on my motherboard, the only option is to set it to 1.0, 2.0, or 3.0 mode and nothing else. PCIe can't be overclocked on the new chipsets either, while on older series it helped with max bandwidth in RAID.

CPU speed affects caching performance. It's the same in all Intel chipsets that use the Intel storage controller, and actually in AMD too. When you overclock the CPU and memory, storage bandwidth is sometimes higher (not always). Usually you see better random bandwidth, like 4K read/write in CrystalDiskMark.
 
My NUCs with PCIe M.2 slots offer basically the same speed, even in RAID0 on the Skull Canyon. But the NUCs don't let me cull C-states, so they won't hit max numbers as well. The bandwidth, though, is the same, even at lower clocks.
 