
PCIe 2.0 SSD or RAID0


DhRomeo

New Member
Joined: Sep 16, 2014
It's weird making a first post. You see noobs being flamed all the time for not looking up their question better, and in all honesty, the answer is most certainly locked in the pages of one of these internet forums, but after running in circles for the last 6 months, I am offering up a sacrifice to the computer gods and laying myself at the mercy of the internets.

Angels and ministers of grace, defend us.

My question really is: how the christ does my motherboard work, what's the difference in throughput between the Intel SATA 3 controller and the two Marvell 6Gbps SATA controllers, and why is there so much BS on the internet about PCI Express solid state drives sometimes performing slower than a SATA 3... something?

http://www.overclockers.com/forums/showthread.php/750073-Benefits-of-PCI-E-SSD-over-SATA-III

Posts like that make me rethink building my own computer. I wanted something fast, something that could open a program in a couple of heartbeats... instead of the grinding wait I'm used to. I have ADHD, and every time I have to sit and wait on an operation on this computer, like opening a new map or something, the waiting gives my wretched little brain the chance to get bored and wander off. I don't want something that scorches every benchmark, just something that can open a spreadsheet in one click. Apparently it takes becoming an enthusiast to get a chip and system to perform at acceptable speeds. Fine.

So I read everything, and wanted to be informed before I bought. I eventually settled on the Asus P6X58D-E motherboard. I knew I wanted the ability to change the RAM timings, and maybe overclock, so I got the 950 Bloomfield to stick in it. An unlocked processor, and a board that allowed you to take the RAM to 2000 if you wanted. Cool beans.
I had a 500GB Seagate drive, and when I originally built the machine in 2011, I bought the Intel X25-M 120GB SSD. I don't know exactly what all the cache sizes mean, but from reading on the internet, it just sounded like the Intel controllers were more compatible. I have no idea though.

It was lightning fast, and I used an ASUS ENGTX560 DCII OC/2DI/1GD5 GeForce GTX 560 (Fermi) 1GB 256-Bit GDDR5 PCI Express 2.0 x16 graphics card to finish it off.

I installed the OS on the SSD, set up in AHCI mode and plugged into one of the SATA 3 ports on the mobo, and the Seagate was plugged into one of the two Marvell 6Gbps ports. I had all of my documents, pictures, and game files on the Seagate. I have had nothing but bad luck with HDDs. The only HDD I've bought in the last 5 years that's still working is a 500GB Seagate external hard drive that's just used as a backup. I've had 1.5TB, 1TB, and 2TB Seagates and a 750GB WD hard drive all crash and get RMA'd... and their replacements gave SMART warnings not a month in. More RMAs and "refurbished" drives to replace my "brand new" drives from the store, and I just decided never to deal with HDDs again. You'd think one out of the nine drives they shipped me would be stable. I just don't have the money to keep buying drives that don't work, that their companies won't stand behind. I'm ahead by just flushing the bills down the toilet and saving the frustration of dealing with another RMA.

My personal theory is that the UPS guys are having too much fun with HDD's. Or it could honestly just be the bumpy truck ride across the country that leaves the drives close to failure by the time all the shaking and bumping ends at your doorstep.

At any rate, I'm in the market for a new hard drive, and while the Intel SSD is great, I ran AS SSD and I'm getting read/write speeds around 250/100. I know that drives slow down as they fill up, and I think SATA 3 has a limit of 300 Mbps (37.5MB/s)? And that a PCIe 2.0 SSD would operate closer to 2.0's 500 Mbps? Or are these numbers per lane, and you take the specs of the motherboard to figure out what the data rate would be?
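Half my confusion is probably bits vs bytes, so here's the back-of-the-envelope math as I best understand it right now (big assumption on my part: SATA and PCIe 2.0 both lose ~20% of the raw link rate to 8b/10b encoding) -- please tell me where I've gone wrong:

Code:
# Back-of-the-envelope conversions, as I understand them -- correct me if I've
# mangled this. Assumption: SATA and PCIe 2.0 both use 8b/10b encoding, so 10 bits
# go over the wire for every byte of actual data.

def line_rate_to_mb_per_s(gbit_per_s):
    """Raw link rate in Gbit/s -> rough usable MB/s, assuming 8b/10b encoding."""
    return gbit_per_s * 1000 / 10   # 10 wire bits per data byte

print("SATA 3Gb/s   ~", line_rate_to_mb_per_s(3), "MB/s")       # ~300 MB/s
print("SATA 6Gb/s   ~", line_rate_to_mb_per_s(6), "MB/s")       # ~600 MB/s
print("PCIe 2.0 x1  ~", line_rate_to_mb_per_s(5), "MB/s")       # ~500 MB/s per lane
print("PCIe 2.0 x4  ~", 4 * line_rate_to_mb_per_s(5), "MB/s")   # lanes simply add up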


I've read about some people having issues with SLI or Crossfire when using a PCIe 2.0 SSD alongside their graphics cards. Apparently some motherboards only support X number of total lanes in their PCIe controller? (I may be completely getting the terminology wrong.)

The P6X58D-E lays out like this:

Intel Socket 1366
Intel X58 Chipset
6 x DIMM (Max. 24GB) DDR3 2000(O.C.)/1600/1333/1066 MHz Non-ECC, Un-buffered Memory
Triple Channel Memory Architecture
Corsair XMS3 DDR3 6x2GB @ 1600

3 x PCIe 2.0 x16 (x16, x8, x8 or x16, x16, x1)
1 x PCIe x1
2 x PCI

What does x16, x8, x8 OR x16, x16, x1 mean?

If you have 3 devices plugged into the PCIe slots, what do the numbers mean? The number of lanes per PCIe slot? Do these numbers only pertain to GPUs when trying to figure out SLI configurations? Or would there be any kind of bandwidth limitations if I had my GPU plugged into one PCIe 2.0 slot, and the Plextor 512 GB PCIe 2.0 SSD that I'm thinking of getting, plugged into the second PCIe 2.0 slot, with the third 2.0 slot left open?

Are there any bandwidth limitations on the motherboard if you add a PCI device to one of the other PCI slots?

I go on Newegg looking for a new hard drive, and it becomes the frickin' Manhattan Project...

I look at OCZ's RevoDrive 3, and they tout speeds of 1000 MB/s (which is 8000 Mbps?). Does it mean anything that Plextor won Best of Show at the Flash Memory Summit 2014? That's why I was looking at their drives, that and the reviews I found. They seem like they do it right...

I found *tons* of comments just like these on all of the other aftermarket PCIe SSD's:

Pros: Installs in systems with no fast SATA ports

Cons: Real-world performance is about 1/5th the advertised speed (300-350 MB/s).
Overrides any other RAID controllers in the BIOS, rendering them unbootable.
Incompatible with most motherboards.

Other Thoughts: Get some reliable Intel or Samsung SSDs instead.

Am I wrong in thinking that quality matters? And installation expertise? In my mind, there's no way of knowing whether or not the reviewer knew what they were doing. Could they have been installing the SSD into a mobo that does the PCIe like x16, x8, x8, and had their GPU set up in the #1 slot, thereby clothes-lining the bandwidth on the controller for the SSD? Or does it not even work like that?

Or would a RAID0 with 4x 120GB SSD's plugged into the Intel SATA 3 controllers give better random read/write times? I can't imagine that SATA in a RAID would be faster than a PCIe 2.0 connection straight to the motherboard. But I don't know.

If someone could explain this all in plain English, I'd be greatly appreciative, because I don't know what to listen to.
 
First of all, excellent first post. Kudos to you!

I *think* a motherboard has a maximum number of lanes that the PCIe slots can use. So you could use a single 16x card, or two 8x cards, or four 4x cards. Or, rather, you could use two 16x cards, but they'd only run at 8x speed each (not saying the max is 16x, that's just to illustrate the point). Granted, I could be entirely wrong about that. I'm sure other, more knowledgeable folks will chime in shortly. You don't see a lot of non-commercial folks using PCIe SSDs. The fact that they take up real estate that could otherwise be used for other expansion cards may be the reason, or maybe they're cost-prohibitive compared to their SATA counterparts... I'm not sure.

For your everyday use (and outside of benchmarks), you probably won't see any sort of appreciable "seat of the pants" difference between a budget SSD like the 840 EVO and one of the higher-end ones.
 
From http://www.thessdreview.com/daily-news/latest-buzz/understanding-m-2-ngff-ssd-standardization/

"Well, the PCIe Ver. 2.0 is capable of a maximum transmission rate of approximately 500MB/s per lane. Four lanes (X4) enables speeds up to 2GB/s (4x500MB/s) as we might see in the Samsung XP941 and LSI SandForce SSDs. PCIe 2.0 X2 will allow speeds up to 1GB/s (2x500MB/s) and we will soon see this first hand with the Plextor M.2 PCIe X2 SSD, named the M6e."

The M6e in the 512 GB is what I'm looking at getting, wondering if the 1GB/s is really possible to hit, or come close to. I really do notice a difference in load times for games and programs when I put the OS on the SSD. Now that the physical drive died, I only have Civ 5 loaded on my computer (only 110 GB after format...), on my SSD, and it's NOTICEABLY faster when loading. I remember reading that all physical drives top out at speeds around 180-200 MB/s, regardless of the SATA controller's theoretical limit, and that the first SSD's were able to move data faster than the SATA 3Gb/s controller.

Before SSD's, the only way to get a fast OS, that loaded the system files from the hard drive into the memory with lightning speed, was to set up a RAID0 and hope to christ you backed up recently. At that point, the bandwidth limits of the SATA controller started to become a limiting factor, leading to the adoption of just wiring the damn solid state drives right into the mobo via the PCIe slots.
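For what it's worth, here's a toy sketch of how I understand the striping part, purely to check my own understanding; a real controller obviously does this in hardware, and the stripe size is a guess on my part:

Code:
# Toy sketch of how I understand RAID0 striping -- not how a real controller works.
STRIPE_SIZE = 64 * 1024   # 64 KiB stripe size (a common default, I think)
NUM_DRIVES = 2

def locate(logical_byte):
    """Map a logical offset to (drive index, offset on that drive)."""
    stripe = logical_byte // STRIPE_SIZE
    drive = stripe % NUM_DRIVES                  # stripes alternate across the drives
    offset = (stripe // NUM_DRIVES) * STRIPE_SIZE + (logical_byte % STRIPE_SIZE)
    return drive, offset

# A long sequential read hits both drives at once, so throughput roughly doubles...
# ...but lose either drive and the whole array is gone, hence "hope you backed up".
for b in range(0, 4 * STRIPE_SIZE, STRIPE_SIZE):
    drive, offset = locate(b)
    print(f"logical byte {b:>7} -> drive {drive}, offset {offset}")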
 
The M6e in the 512 GB is what I'm looking at getting, wondering if the 1GB/s is really possible to hit, or come close to.
It most certainly is. :)

I really do notice a difference in load times for games and programs when I put the OS on the SSD.
I would imagine so, considering it's ~4x faster than a HDD throughput-wise and orders of magnitude faster access-time-wise.

I remember reading that all physical drives top out at speeds around 180-200 MB/s, regardless of the SATA controller's theoretical limit, and that the first SSD's were able to move data faster than the SATA 3Gb/s controller.
Correct. Though maybe not the first SSD's... but subsequent generations most certainly did... and now they can saturate SATA 6Gb/s throughput.

At that point, the bandwidth limits of the SATA controller started to become a limiting factor,
Not sure that was the case, considering you get the full throughput on each SATA port.


For your everyday use (and outside of benchmarks), you probably won't see any sort of appreciable "seat of the pants" difference between a budget SSD like the 840 EVO and one of the higher-end ones.
+1
 
Cool beans. I'm still not sure how different motherboards work, what the x16, x8, x8 or x16, x16, x1 setup means, and how bandwidth gets shared between the PCIe slots and controllers. I want to make sure that I don't limit the bandwidth of the GPU by adding in the SSD, or vice versa. Should I try asking in the GPU part of the forums, where the SLI and Crossfire guys lurk?
 
I had to learn about chips.

My question above is actually answered with another question: "What chipset do you have, and how many PCIe lanes are on the die?"

Building a computer system is not for amateurs. I didn't even know all the capabilities of the chip I bought, or of the motherboard I purchased for it... I saw the exclamation points on Newegg and went, "oooooh, that must be gooooood."

I chose the Intel 950 because, when I bought it in 2010, it was the fastest unlocked chip I could buy for under $500. The LGA 1366 socket and the X58 chipset replaced the front-side bus with a point-to-point interconnect (Intel's QuickPath Interconnect), allowing a data throughput of 25.6 GB/s. At the time, it was Intel's high-end chip.

The X58 IOH provides 36 PCIe 2.0 lanes, arranged as two ×16 links plus a spare ×4 link, and it talks to the ICH10 I/O Controller Hub over a separate ×4 DMI connection.

X58 PCIe ports support full PCIe 2.0 bandwidth (up to 8GB/s per ×16 link), and each ×16 link may be split into any combination of ×8, ×4, ×2 or ×1 ports totalling 16 lanes. The lane assignment is flexible, which means that in the (×16 + ×1/×8) slot combinations often used on motherboards, not only ×1 or ×8 cards may be installed into the ×1/×8 slot, but ×4 cards should work as well (if not disallowed by the motherboard BIOS).

The X58 chipset itself supports up to 36 PCI-Express 2.0 lanes, so it is possible to have two PCIe ×16 slots and one PCIe ×4 slot on the same motherboard.
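To convince myself that my board's slot layouts actually fit in that budget, I scribbled out a quick check. Assumption on my part (not verified): the x1 slot and the onboard Marvell and other controllers are fed by the ICH10R's own lanes, so only the three x16-size slots draw from the X58.

Code:
# Sanity check: do the P6X58D-E slot configurations fit in the X58's 36 PCIe 2.0 lanes?
# Assumption: only the three x16-size slots count against the X58 lane budget; the x1
# slot and onboard controllers are assumed to hang off the ICH10R instead.
X58_LANES = 36

configs = {
    "x16 / x8 / x8":                     [16, 8, 8],
    "x16 / x16 / x1":                    [16, 16, 1],
    "GPU at x16 + M6e using x2 + empty": [16, 2, 0],
}

for name, widths in configs.items():
    used = sum(widths)
    print(f"{name:<35} uses {used:>2} lanes -> fits: {used <= X58_LANES}")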

The Plextor M6e SSD that I'm looking at getting sits in a PCIe x4 slot but only actually uses two lanes. The core of the M6e is an M.2 SSD designed for PCI Express 2.0 x2, which is the speed that the M.2 slots on today's mainboards with Intel's 9-series chipsets provide. The desktop M6e combines that SSD with a PCIe x4 adapter card, and since the core only needs two PCIe 2.0 lanes, the card runs at full speed even in a slot wired for x2.

As for SSD makers, they are theoretically ready to launch products that could make full use of four PCI Express lanes, but Intel deliberately limited the M.2 and SATA Express ports in its Z97 and H97 chipsets to PCI Express 2.0 x2, whose bandwidth is no higher than 1 GB/s.

The Intel X99 platform, for LGA 2011-v3 chips, supports PCIe 3.0, which achieves roughly twice the throughput of PCIe 2.0 through a faster transfer rate and more efficient encoding. Now just wait for SSD's to come out primed for PCIe 3.0.

PCIe 2.0: 5 GT/s per lane, ~500 MB/s per lane, x16 bandwidth ~8 GB/s per direction (16 GB/s total)
PCIe 3.0: 8 GT/s per lane, ~985 MB/s per lane, x16 bandwidth ~16 GB/s per direction (32 GB/s total)
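If I've read the encoding details right (8b/10b for 2.0 versus 128b/130b for 3.0, which is an assumption on my part), the per-lane math works out like this:

Code:
# Why PCIe 3.0 works out to roughly double per lane even though 8 GT/s is only
# 1.6x 5 GT/s: the encoding overhead drops too (8b/10b -> 128b/130b). Per direction.

def lane_mb_per_s(gt_per_s, payload_bits, coded_bits):
    # GT/s = billions of line transfers per second per lane, per direction
    return gt_per_s * 1e9 * (payload_bits / coded_bits) / 8 / 1e6

pcie2 = lane_mb_per_s(5, 8, 10)      # ~500 MB/s per lane
pcie3 = lane_mb_per_s(8, 128, 130)   # ~985 MB/s per lane

print(f"PCIe 2.0: ~{pcie2:.0f} MB/s per lane, x16 ~{16 * pcie2 / 1000:.1f} GB/s each way")
print(f"PCIe 3.0: ~{pcie3:.0f} MB/s per lane, x16 ~{16 * pcie3 / 1000:.1f} GB/s each way")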


So, comparing a true 1 GB/s data transfer rate from the PCIe SSD in an older motherboard like mine (which relies on the Plextor drive's own AHCI controller to boot) against saturating 2 or 4 of the SATA ports I have with SSD's in a RAID0 should, according to what I found, look like this:

[Attached benchmark screenshots: SSD WR Raid.png, 2xSSD raid.png]
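And as a rough sanity check on those screenshots, here's my back-of-the-envelope comparison of theoretical ceilings. The per-port, per-drive, and DMI numbers are all assumptions on my part (I'm assuming the Intel ports on the P6X58D-E are ICH10R 3Gb/s and that the whole southbridge shares a ~1 GB/s DMI link), so take it with salt:

Code:
# Theoretical ceilings only -- real-world numbers land well below these. Assumptions
# (please correct me): Intel ports are ICH10R SATA 3Gb/s (~300 MB/s usable each), a
# decent SATA SSD can push ~500 MB/s sequential, and everything on the ICH10R shares
# a ~1 GB/s DMI link back to the X58.
PORT_3G    = 300    # MB/s usable per Intel SATA port (assumed)
DMI_CAP    = 1000   # MB/s shared by the whole southbridge (assumed)
PCIE2_LANE = 500    # MB/s per PCIe 2.0 lane
DRIVE_SEQ  = 500    # MB/s one SATA SSD might manage on a fast enough port (assumed)

def raid0_ceiling(n_drives, port_limit):
    per_drive = min(port_limit, DRIVE_SEQ)       # each drive is capped by its port
    return min(n_drives * per_drive, DMI_CAP)    # ...and the array by the DMI link

print("2x SSD RAID0 on the Intel ports:", raid0_ceiling(2, PORT_3G), "MB/s ceiling")
print("4x SSD RAID0 on the Intel ports:", raid0_ceiling(4, PORT_3G), "MB/s ceiling")
print("Plextor M6e on PCIe 2.0 x2     :", 2 * PCIE2_LANE, "MB/s ceiling")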


I guess the answer to my question is another question: does your chipset/motherboard have enough PCIe lanes (and PCIe 2.0 slots) to support your GPU(s) plus the 2 lanes the SSD will actually use? Most of them do. I think a lot of the problems people have installing a PCIe SSD in an older motherboard come from the newest M.2 SSD's not using AHCI, which was designed around hard disk drives. The AHCI protocol doesn't really support parallel processing of data access requests, whereas the new NVMe protocol is built from the ground up for PCI Express and non-volatile memory.
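The queue-depth difference is what made that click for me; the figures I keep seeing quoted (treat them as approximate) look like this:

Code:
# The queue numbers people quote for AHCI vs NVMe (approximate, as I've seen them cited):
protocols = {
    "AHCI": {"queues": 1,     "commands_per_queue": 32},     # built around spinning disks
    "NVMe": {"queues": 65535, "commands_per_queue": 65536},  # built for flash over PCIe
}
for name, spec in protocols.items():
    total = spec["queues"] * spec["commands_per_queue"]
    print(f"{name}: {spec['queues']} queue(s) x {spec['commands_per_queue']} commands = {total:,} in flight")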

Older motherboards with older BIOS's will only know how to boot from IDE or AHCI devices, not NVMe.

Also, you'd have to install the PCIe SSD into a PCIe 2.0 slot. If your motherboard has 3 PCIe slots, but only one is 2.0 and the rest are plain old PCIe 1.x, either your GPU or your SSD will be clothes-lined.
 
Great summary. You are probably the most impressive new poster I have seen cross these forums lately... :)
 
This is one of the reasons I opt for a dedicated controller. Taking X99 for example, specifically the ASUS RVE and my configuration: I have 2 GPUs, a sound card, and a RAID controller. The controller would normally sit in the 4th slot, which gives me 8 lanes. If that slot is in use, the M.2 slot is disabled, so if I use the M.2 slot, I can't use that slot for anything else.

I'd rather have a dedicated controller, which is going to give me more BW anyway, and use it for added capacity. PCI-E SSD vs RAID0? I say RAID of PCI-E SSDs. :) Now I just need a PCI-E SSD RAID controller. Awaiting the inrush of SF3700 components and a controller that will handle it all.
 