
[O/C] Another Look at the Gigabyte X58A-UD7 Rev. 1.0


Another Look at the Gigabyte X58A-UD7 Rev. 1.0
by Ross

[Image: Gigabyte X58A-UD7 motherboard]

The Gigabyte X58A-UD7 motherboards have been out for a while now. Chances are, if you've been interested in this board at all, you've had no problem finding information on it. So why sit down and write yet another article about it? For starters, it's a terrific board, worthy of some more space on the Web, and this particular article includes some benchmark results against a rival board that some might find interesting.

... Return to the article to continue reading.

Discuss this article below. If you are interested in contributing to the front page (www.overclockers.com), please feel free to contact splat, mdcomp, or hokiealumnus. For the latest updates, follow Overclockers.com on Twitter (@Overclockerscom).
 
I really like Gigabyte boards, but with X58 I gave eVGA and Asus a shot. eVGA failed because you can't get into the BIOS with RAID enabled; the screen flashes by too fast to hit "Del" even if you're rapidly pressing it during boot. The R3E is okay, but I miss using the EP45-UD3P and P55A-UD7...

Great job on the article, I enjoyed the read :thup:
 
I have a soft spot for giga boards, and thought the write-up was very nice.

And let others rag on Slow Mode; if it gets you higher clocks with HT on for 2D tests, I am all for it. That's how I got the 920 to 5270 MHz for wPrime.
 
Thanks guys. I thought it was a good comparison to do. I actually wanted to do it months ago, but real life has a habit of getting in the way.

I hear that, dejo. Slow Mode is your best friend for certain benches on locked CPUs. That was a perfect CPU/board combo you had. Almost 5.3 GHz on a 920 is no easy task, let alone benchable :thup:

MattNo5ss, I loved the EP45-UD3P for memory clocking. I have a couple of screenshots of it somewhere, but I was able to do well over 1600 at 5-5-5 on Ballistix 8500s with it :eek:
 
Once QPI link speed is over 9 GHz, all bets are off for stability, or even benching, for many chips. Give me a board with Slow Mode capability every time for 2D benches.
 
Thanks. Yep, it's tough competition and the UD7 does really well :thup:

Once QPI link speed is over 9 GHz, all bets are off for stability, or even benching, for many chips. Give me a board with Slow Mode capability every time for 2D benches.
Even over 8K is tough on some 1366 procs (like my particular 950), but look for over 11K QPI in my next article (S1156) :eek:
 
Very nicely written.

However, with the mean difference between the two often under 1%, it would be interesting to see the standard deviations.

e.g.,:

Board 1: mean1 +/- standard deviation1
Board 2: mean2 +/- standard deviation2.

It would be interesting to check if the following intervals intersect.

[mean1 - sd1 , mean1 + sd1 ]
[mean2 - sd2 , mean2 + sd2 ].

If these do intersect (i.e., if mean1 < mean2, and yet mean2 - sd2 < mean1 + sd1), then it's not as likely that the two differ (statistically) significantly, and you'd have a good chance of beating a board1 time with board2 by running board2 many times. (Benchers, please don't take offence at the word "significant"--I'm using it in the technical, statistical sense!)
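For anyone who wants to try that check themselves, here's a rough Python sketch of the idea. The run times below are made-up placeholders (not numbers from the article), just to show the arithmetic:

import statistics

# Made-up benchmark times (seconds) for illustration only -- not the article's data
board1_times = [100.21, 100.25, 100.23]
board2_times = [100.95, 101.10, 101.32]

mean1, sd1 = statistics.mean(board1_times), statistics.stdev(board1_times)  # stdev() divides by n-1
mean2, sd2 = statistics.mean(board2_times), statistics.stdev(board2_times)

# mean +/- one-standard-deviation intervals for each board
interval1 = (mean1 - sd1, mean1 + sd1)
interval2 = (mean2 - sd2, mean2 + sd2)

# Two intervals intersect if neither one ends before the other begins
intersect = interval1[0] <= interval2[1] and interval2[0] <= interval1[1]

print(f"Board 1: {mean1:.2f} +/- {sd1:.2f} s")
print(f"Board 2: {mean2:.2f} +/- {sd2:.2f} s")
print("Intervals intersect" if intersect else "Intervals do not intersect")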

Thanks for the interesting write-up! -- Paul
 
I agree, Paul. It would be very cool to have more data points and a standard deviation done on all of them, but it's just too time-consuming a project to run 10-20 or more of each bench on each board. Granted, the single best score is what counts for benchers at the end of the day, but waiting for something to pop up that's a couple of deviations out (even if the deviations are small) isn't what I was after. I was just comparing the general performance between the boards, since not everyone uses them only for benching. If there is any truth to random sampling though, these are a close representation and a good indicator for the purpose here :)
 
Ross, thanks for the response.

I completely agree there--I personally wouldn't have the patience! (Although even with n=3, you can get an estimate of the standard deviation. It's just really rough.)

And agreed--this gives a good idea of the overall performance of each board. So it's interesting, too. It looks like, from a non-benching perspective, these should perform nearly identically well.

And once again, very nice job in doing these thorough tests, with a great presentation of the results. :thup: -- Paul

PS: Using the following estimator of the "true" standard deviation: If µ is the sample mean, and xi are the n samples, then

σ ≈ ( Σ( xi - µ )² / (n-1) )^½

(I would kill to have raw HTML to let me make that prettier.)
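In Python that estimator is only a few lines. A rough sketch with made-up sample values; the built-in statistics.stdev() already uses the n-1 denominator, so it's shown here only as a cross-check:

import math
import statistics

def sample_std(samples):
    # Estimate of the "true" standard deviation using the n-1 denominator
    n = len(samples)
    mu = sum(samples) / n                                    # sample mean
    variance = sum((x - mu) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance)

data = [10.1, 10.4, 10.2]            # made-up samples, n = 3
print(sample_std(data))
print(statistics.stdev(data))        # should print the same value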

Test1: (using 5 significant figures, since your data have 6)
Gigabyte: 505.07 ± 0.018193 (seconds)
Asus: 508.78 ± 0.094453 (seconds)

intervals:
Gigabyte: [505.05 505.09] seconds
Asus: [508.68 508.87] seconds

Those don't intersect, so it's very convincing. You can see it in the raw data--very little variability.

The 3DMark results are interesting to me in one respect--at a quick glance, the first run is always the lowest score. I'm guessing that's because more of the test data are in cached memory for the subsequent runs. I used to see the same thing in MATLAB's built-in "bench" benchmark.

I don't have time to do the stats for the 3DMark results, but these appear to have bigger variances. It might be worth it to run those 4 times and toss the first result on each to get rid of caching effects. You'd probably see much smaller variability then.
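Something like this shows the idea (made-up scores, not the article's 3DMark numbers):

import statistics

# Hypothetical scores for four runs on one board; the first (cold) run is lower
# because caches aren't warmed up yet. These are made-up numbers.
runs = [21350, 21510, 21498, 21522]
warm_runs = runs[1:]                 # toss the first result

print(f"all four runs:  mean {statistics.mean(runs):.0f}, sd {statistics.stdev(runs):.1f}")
print(f"warm runs only: mean {statistics.mean(warm_runs):.0f}, sd {statistics.stdev(warm_runs):.1f}")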
 
Nice Paul! If I ever do another comparison like this, it will be 4-5 runs each and I'll know who to contact for crunching the numbers :D

BTW, those aren't in the order the tests were run; they're simply the order of the screen captures in the folders (sorted by name) ;) After I started putting them into tables though, it seemed to make sense to keep them sorted in ascending order, so lowest -> highest on each could be seen at a glance.

I appreciate you sharing. It's definitely something to keep in mind for the next time :thup:
 
And if you don't want to call up macklin every time... :beer:

In Excel, STDEV is the function you want (note: not STDEVP).

On statistical significance, you can look at TTEST (which I'll leave unexplained... for now), probably with the third and fourth arguments being 2 and 2. We say results are "statistically significant" if this value is below a somewhat arbitrary threshold.

What this value actually means, given some set of assumptions, is the probability of seeing results at least as extreme as these IF the two populations are the same. It is important to note that this value is NOT the probability that the two populations are the same. Somewhat more symbolically:
Pr(observations | assuming no difference between the groups) != Pr(no difference between the groups)
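For anyone working outside Excel, SciPy has the same test. A rough Python equivalent of TTEST with tails=2 and type=2 would look something like this (the score lists are placeholders, not real data):

from scipy import stats

# Placeholder benchmark results for two boards -- not real data
board1 = [100.21, 100.25, 100.23]
board2 = [100.95, 101.10, 101.32]

# Two-tailed, two-sample t-test assuming equal variances,
# i.e. the same thing as Excel's TTEST(range1, range2, 2, 2)
t_stat, p_value = stats.ttest_ind(board1, board2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# A small p-value (commonly below 0.05) is read as "statistically significant",
# but remember: p is NOT the probability that the two boards are the same.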

PS - I remember a very similar discussion on significance (both statistical and practical) on another front-page article about a year ago, but it did not get this in depth, IIRC.
 

Nicely done! :)
 
Dang, I need to dust off my statistics books (and apparently any Excel books I might have, too). Thanks for the heads-up, Omsion; I'll definitely take a look at it for the next head-to-head article :thup:
 
Nice write-up, Ross. Wondering if write-to-read (DD & DR) being one click off on the R3E would maybe explain some of its latency lag; everything else is tighter, so it's not likely, but it might be worth testing. I need to dig this board out for some stuff; I'll check it out mid-Jan. The way Sandy Bridge is looking, we'll be playing X58 for some time to come.
 