
raptor 36 in raid 0 don't use more than 2..


gaza

Member
Joined
Jul 21, 2002
Location
MI
i have been running some tests with raptor drives in raid 0... what i found out was that using 2 or 3 drives will give you a little better read time, but the write will not go up much.. the reason for this is not the raid card i am using, for i tried 2 of them.. it is the 32 bit bus... when i dumped 4 raptors onto a 64 bit bus, my god, i got something like 180mb/sec.. and more sometimes with a 16k stripe.. i know that most of you will not be dropping the cash on more than 2 raptors, just something you should keep in mind.. and if you do have a need for such speed you probably know this already..
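rough numbers on why the 32 bit bus is the wall, just a ballpark sketch.. i'm assuming each raptor does around 55mb/sec sustained reads, which is only an estimate, your drives may bench differently:

Code:
# ballpark sketch: ideal RAID 0 read throughput vs. the bus the controller sits on
# PER_DRIVE_MBPS is an assumption (~55 MB/s sustained per Raptor 36), not a measured spec
PER_DRIVE_MBPS = 55

def raid0_read_mbps(drives, bus_limit_mbps):
    """Best-case striped read rate, capped by the bus ceiling."""
    return min(drives * PER_DRIVE_MBPS, bus_limit_mbps)

for n in (2, 3, 4):
    pci32 = raid0_read_mbps(n, 133)   # 32bit/33MHz PCI
    pci64 = raid0_read_mbps(n, 533)   # 64bit/66MHz PCI
    print(f"{n} raptors: ~{pci32} MB/s on 32bit PCI, ~{pci64} MB/s on 64bit PCI")

on the 32 bit bus anything past 2 drives just slams into the 133mb/s ceiling, which matches what i saw above..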
 
Yup, it's insane to use more than 2 Raptors on a 32BIT | 33MHz 133MB/s PCI BUS since it can't handle more than that.
 
Exactly. 133MB/s is all you can do on 32 bit PCI. Correct me if I'm wrong, but PCIx should take care of that problem soon though.
 
K1ll1nT1m3 said:
Exactly. 133MB/s is all you can do on 32 bit PCI. Correct me if I'm wrong, but PCIx should take care of that problem soon though.

Well the most recent releases from VIA & NVidia on the K7 platform support SATA RAID 0 or 1 natively on their respective SouthBridges;

VIA KT600 = 8X VLink 533MB/s
NVidia NF2 = Hypertransport 800MB/s

PCI Limitations are not an issue with them BUT you are limited by the number of SATA Ports.
 
ok i'm kinda confused on this.
64 bit bus? and 32 bit bus?
can you explain what the bits and buses are and how they're used?

i mean are we talking a 32bit OS on a pci bus compared to a 64bit OS on a pci bus?

i know my mobo is slowing my raptors down in raid0. soon it will go bye bye, and looking into it, 4x raptors would be insanely fast, but what would have enough bandwidth to run them on?
 
The 32bit or 64bit bus has nothing to do with the operating system, it's purely a motherboard thing. Your typical ATX motherboard has 32bit/33MHz PCI slots. Many server & dually motherboards have 64bit/66MHz PCI slots. You get more bandwidth from a 64bit PCI slot than you do from a 32bit one.

If you shop for RAID controllers you will see that a great many of them support 64bit slots, but are backwards compatible with 32bit.
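Rough math on where those numbers come from, if anyone wants it. These are theoretical peaks only, real-world throughput is always lower because of bus overhead, and the clocks are really 33.33/66.66MHz:

Code:
# Theoretical peak PCI bandwidth = (bus width in bytes) x (clock rate)
def pci_peak_mbps(width_bits, clock_mhz):
    """Bytes moved per cycle times cycles per second (MHz in, MB/s out)."""
    return width_bits / 8 * clock_mhz

print(f"32bit/33MHz PCI:    ~{pci_peak_mbps(32, 33.33):.0f} MB/s")
print(f"64bit/66MHz PCI:    ~{pci_peak_mbps(64, 66.66):.0f} MB/s")
print(f"64bit/133MHz PCI-X: ~{pci_peak_mbps(64, 133.33):.0f} MB/s")

That's why a 64bit/66MHz slot gives you roughly four times the headroom of a standard 32bit slot.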
 
great explanation!
so for the atx guy wanting more, is anything out yet that has 64bit busses? or do we just need to wait on pci-x?

i don't run servers or smp boards, which is why i ask.

sonny: i thought the nf2 mobos used a sata controller built into the 133mb/s pci bus? this is why the intel mobos with the integrated sata on the SB have much more bandwidth.
that has been a big selling point for me to try an intel rig for once anyhow.
 
I'm also very sceptical about 2+ raptors actually being faster. In theory adding drives will increase your bandwidth, but in my experience adding drives is not a good idea.
I've tested 2, 3, and 4 U320 10k SCSI drives in a compaq DL380 server. The bandwidth gain from 2 to 3 was about 10%, and from 3 to 4 it was negligible (in some tests the 3 was faster and in others the 4).
Ok, so that was all the good that came from more discs... oh, I also did this with a 4 channel IDE controller and the results were almost identical (I used 80gb 8mb ATA133 drives with an ATA133 raid controller as well)... the mb/s rate was barely over 100, so the PCI bus wasn't an issue for the IDEs... the SCSI was onboard, so it was incredibly fast.

Here are the cons:
seek times are terrible. You're at the mercy of the slowest disc. Your chance of array failure is much, much higher, you're spending a hell of a lot more money, and you're producing a ton more heat.

So you gain 10% in bandwidth... and lose everything I just mentioned... I would stay away from more than 2 discs in a RAID 0.
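Quick sketch of the failure-risk math, since RAID 0 has no redundancy and the whole array is gone if any one disc dies. The 5% per-drive annual failure rate below is just a made-up example number, not a spec for any particular drive:

Code:
# RAID 0 has no redundancy: the array is lost if ANY member drive fails.
# p is a hypothetical per-drive annual failure probability, used only as an example.
def raid0_failure_prob(drives, p=0.05):
    return 1 - (1 - p) ** drives

for n in (1, 2, 3, 4):
    print(f"{n} drive(s): {raid0_failure_prob(n):.1%} chance of losing the array per year")

So even before you look at heat and cost, every disc you add increases your chance of losing everything.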
 
There are some Xeon single processor boards with the 64bit bus, but usually they're intended for server use and do not have AGP slots.

PCI-X is the current revision of the 64 bit slots. What you are thinking of is PCI-Express. That will include video cards and other interface cards.

The controllers are housed within the southbridge, but have multiple peer bus connections, thus the higher available bandwidth.


ajrettke: was that on a multiple channel controller or a single channel one? If it was single channel, then you were probably saturating the SCSI bus and hence no further improvement was possible.
 
To use four Raptors (any drives really) in raid 0, you really need at least a 64 bit PCI bus, unless it's a new onboard raid controller. PCI Express will remove the PCI bottleneck. You can also find 64 bit slots on a lot of new server boards. I have only seen a few PCI Express cards though. I also think PCI Express will become mainstream next year, meaning most motherboards will have it on them.

As a side note, the AGP slot will be replaced with PCI Express as well. (i have seen info saying the next intel chipset will do this, right???) Link

The 32bit and 64bit busses are how wide the pipe is. Basically, it's like a 2 lane road versus a 4 lane road: a 32bit bus can only send 32 bits of data per cycle, while a 64bit bus sends 64 bits per cycle.

Here is a good link for anyone interested in the PCI standards.

Sorry, I said PCIx but I meant PCI Express (I know there is a difference).

Edit: I have wanted to talk about this for a while now. I am guessing this is the right post for it. What are the differences between PCI-X and PCI Express? What is PCI-X going to be used for?
 
deathstar13 said:
sonny: i thought the nf2 mobos used a sata controller built into the 133mb/s pci bus? this is why the intel mobos with the integrated sata on the SB have much more bandwidth.
that has been a big selling point for me to try an intel rig for once anyhow.

My apologies for not being clear on the NForce 2 chipset, not that I care about it :D. What I failed to make clear is that the MCP-T2, the latest version of the NF2 chipset, already has a built-in SATA controller on the new SouthBridge, the SouthBridge that Abit decided against using in their new AN7. The current NF7-S that you are familiar with still uses the SI SATA chip, so it is still a bottleneck as you mentioned :beer:
 
most informative thread i've read in a while!
i'm highly thinking of a switch to intel mobos just for the way the sata is handled.

but then again in the next few months many new standards will be popping up.
and thanks for explaining the diff between pci-X and pci express.

all this would be so much simpler if we could afford solid state drives connected directly to the cpu and memory bus :D
 
Here are pics of 32BIT & 64BIT PCI Slots on a K8 platform;

(image: i_s2885.gif, board slot layout)

AGP 8X 110W Pro
64BIT 133 | 100 MHz PCI - X
64BIT 133 | 100 MHz PCI - X
64BIT 100 | 66 MHz PCI - X
64BIT 100 | 66 MHz PCI - X
32BIT 33MHz PCI
 
That's a nice pic, glad you added the text below it. (I removed my original post)

I'm still not clear on the differences between PCI Express, PCI-X and 3GIO???
 
K1ll1nT1m3 said:
That's a nice pic, glad you added the text below it. (I removed my original post)

I'm still not clear on the differences between PCI Express, PCI-X and 3GIO???
that's why you would need a pci-x card to run a 64bit bus.

anyhow the pic helps me a lot. get this: every time i saw those dual opti mobos i thought "why in the hell so many ISA slots?" duh!

that's what they reminded me of anyhow.
 