
[O/C]Windows Showdown: 8 Operating Systems in 6 Benchmarks

Wow. Great read. I have a copy of XP x64 that I never got around to installing and always wondered how it fared. Not well, it seems.
 
All too often people sacrifice the stability of a 32-bit OS when 3.5 GB of RAM is more than enough to cover their needs.

I considered 64-bit only as part of a multi-boot, so that I would have the option of booting into 64-bit when I really need it... but so far I have not found justification for more than 3.5 GB of RAM for my personal use, which includes older programs incompatible with a 64-bit OS.


On my triple Windows 7 / XP / Vista boot [all 32-bit], I have certainly found Vista to be slower in a way I can feel compared to Windows 7 and Windows XP.


I suppose benchmarks measure things once they get going, while real life also includes getting them going; that's what I mean by feeling faster vs. slower.

Ever tried to run Windows x64 with a minimum of 8 GB RAM and no page file? I promise you that it flies versus an x86 OS once the x86 system starts seriously swapping :)
I use Photoshop a lot, and especially with some heavy filters I always end up with a system that swaps madly to the disks (RAID 0 on an Areca 1680ix w. 4 GB).

The performance difference is then suddenly very noticeable on an x64 system with 8 GB (or more) versus x86 with its memory limitations.

If I were an avid gamer or bencher I would have gone much further than Gautam did in his stripping - I would start off with MicroXP and strip it to the bone. And of course: no AV running on the system - in fact, not a single startup program at all.

If it all is about performance - I think MicroXP is a good place to start ;)

From my experience with both Vista and 7 - I see (saw with Vista - didn't try it after SP1) that XP always loaded programs faster, search was faster (without indexing on), rendering was faster, and so on.

I guess (hope?) that these are things Microsoft will sort out in 7 - remember that XP had a lot of problems at the start too; XP only got good after SP2.

EDIT: I guess a lot of the old-timers who aren't impressed by the eye candy that 7 offers will keep on running XP till the bitter end ;)
 
xtreeme said:
EDIT: I guess a lot of the old-timers who aren't impressed by the eye candy that 7 offers will keep on running XP till the bitter end

Color me one of those old-timers. I have 64-bit Vista Ultimate sitting in a box after a year of using it. Just installed XP-64 on my new 1.5 TB drives. The performance difference is tangible, again in day-to-day use, searching, task-switching, etc. And not in any way optimized; I'll have to check out these suggestions.

The ID3 tag bug in 64-bit XP may be what pushes me over to linux, though. :)
 
Very interesting article Gautam! Thanks for all your hard work! :)

I understand your motivation in magnifying differences both graphically and in word choice, and I have to say that it hits home for an audience of competitive benchmarkers. Such an audience IMHO relies on statistical deviations as much as hardware in their quest to beat the next guy (since, after all, glory goes to the man with the best single data point, not the best statistical average :D).

However, as somebody who doesn't competitively benchmark I agree with macklin -- the magnification of tiny differences just misleads me. I'm not a statistician so I can't comment on just how statistically significant things are (or aren't), but the OSes aren't as differentiated as the language and graphs suggest.

If I may, I have a few suggestions:
  • Try to make your intended audience clearer. Your second paragraph could be read as targeting competitive benchmarkers, but that's not how I read it.
  • If the site supports it, rollover graphs would be wonderful. You could keep the magnified views by default, but show the zero-based ones if the reader rolls their mouse over.
  • Tone down the language a little and/or couple it with language that emphasizes these differences are very small but possibly quite significant to a competitive benchmarker.
  • Make your data available. I've seen very few sites do this (there could be a reason why, but I don't know), and I think it would be interesting to provide readers with the raw data should they wish to do a more in-depth analysis.

JigPu
 
Those are great points, JigPu.

Indeed, you could use the data as a follow-up article for non-benchmarkers, because your results are also significant to us, but with different conclusions (as I mentioned above). It's interesting that the same data tell different stories depending upon your target. I'd be happy to help write a very short follow-up note / article-ette.

It's actually funny, because we end up using the same (software) tools for very different purposes.

Since we have both target audiences here, it might be a nice way to get further mileage from your great, hard work. Also, it might be nice to have our "cultures" intermingle.
 
Are you a benchmarker? If not, XP64 does fine. The difference is minuscule. ;)

Well... not really. But my main reason for installing it would have been to upgrade from XP 32-bit, probably expecting a big performance increase. I suppose what I meant to say is that it doesn't look like it would have been as much of a performance upgrade as I thought.

I guess it helps to clarify. :D
 
I didn't read through all the posts, so maybe it's been covered, but I think what really comes out of this article is that the OS does not have a huge impact on performance. For somebody trying to be at the top of the benchmarking stats it's fine, but even that varies depending on exactly what you are doing. Most of these graphs are actually quite poorly displayed (in a statistical sense). The SuperPi graph is a superb example of this: Windows 7 64 looks 70% slower than Windows XP 64 in this graph, when in reality it is 0.56% slower.
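To make that concrete, here's a small Python sketch of how a truncated y-axis inflates a gap. The times are made up (not taken from the article), chosen only so that the slower OS trails by 0.56%:

```python
def actual_pct_diff(fast, slow):
    """True percentage by which `slow` trails `fast` (times in seconds, lower is better)."""
    return (slow - fast) / fast * 100

def apparent_pct_diff(fast, slow, axis_start):
    """Percentage gap as it *looks* on a bar chart whose y-axis starts at
    `axis_start` instead of zero: only the bar above the cutoff is visible."""
    return (slow - fast) / (fast - axis_start) * 100

# Hypothetical SuperPi times: the second OS is 0.56% slower than the first.
xp64 = 10.000
win7 = 10.056

print(round(actual_pct_diff(xp64, win7), 2))          # 0.56
print(round(apparent_pct_diff(xp64, win7, 9.92), 2))  # 70.0 -- the truncated axis does the exaggerating
```

With a zero-based axis (`axis_start = 0`) the apparent and actual gaps are identical, which is exactly why the zero-based rollover graphs suggested earlier in the thread would help.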

That being said, it is certainly interesting to see the effect that the operating system has on a given system for various applications. I wish more of us had the resources and time to do similar benchmarks in order to be able to compare with different configurations.
 
Very interesting article Gautam! Thanks for all your hard work! :)

I understand your motivation in magnifying differences both graphically and in word choice, and I have to say that it hits home for an audience of competitive benchmarkers. Such an audience IMHO relies on statistical deviations as much as hardware in their quest to beat the next guy (since, after all, glory goes to the man with the best single data point, not the best statistical average :D).

Yes, and perhaps unsurprisingly, no one on the hwbot forum raised the issue of the graphs not starting at zero. Their issues were mainly that they wanted more hardware configurations tested. :p I suppose each audience behaved somewhat as expected: the benchmark junkies were determined to know nothing other than what they should be using to get every last point, without much further thought, while plenty of you guys considered things a bit more deeply.

About the tone and all of that... I guess what I probably should have stated up front is that this began for the benching team. In fact, it sat in the private team lounge in a much less refined state for months, but I was asked to make it public. So the nature of the testing and the conclusions was, from the get-go, intended for them. (It's also why it remained private... using Vista over XP was somewhat of a "trade secret" that's been used successfully to grab some records.)

One other example that might hit home for a lot of people here: if you were to take 3% off of 4000 MHz, it'd put you at 3880 MHz. However, I can assure you that many members of this forum have gone to great lengths to get that extra 3%. ;)
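As a quick sanity check on that arithmetic (a trivial sketch, not anything from the article):

```python
clock = 4000         # MHz
loss = 0.03 * clock  # a 3% haircut off the core clock
print(clock - loss)  # 3880.0 MHz
```

The same 3% that looks negligible on a zero-based graph is the margin an overclocker might spend weeks of tuning (and a lot of voltage) to claw back.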
 
I noticed the same thing reading the hwbot thread - they had the data points and that's all they were concerned with. The difference between audiences and frame of reference is certainly interesting.

Out of all the sites that picked up your article (about half a dozen highly relevant community sites), www.hwbot.org and www.madshrimps.be had the most on-point evaluation and commentary. Props to them.
 
One other example that might hit home for a lot of people here: if you were to take 3% off of 4000 MHz, it'd put you at 3880 MHz. However, I can assure you that many members of this forum have gone to great lengths to get that extra 3%. ;)
That's certainly true - the question for some members of your audience, then, is how consistently that 3%, if it's not random noise, carries over to real-world usage. ;)

:thup:
 
About the tone and all of that... I guess what I probably should have stated up front is that this began for the benching team. In fact, it sat in the private team lounge in a much less refined state for months, but I was asked to make it public. So the nature of the testing and the conclusions was, from the get-go, intended for them. (It's also why it remained private... using Vista over XP was somewhat of a "trade secret" that's been used successfully to grab some records.)

One other example that might hit home for a lot of people here: if you were to take 3% off of 4000 MHz, it'd put you at 3880 MHz. However, I can assure you that many members of this forum have gone to great lengths to get that extra 3%. ;)

This is exactly what I'm talking about. G posted that in the lounge right before the last Forum Warz, and I can tell you that it was our bible for setting up our rigs. Need I remind you how we did in the last Warz? :salute:
 
That's certainly true - the question for some members of your audience, then, is how consistently that 3%, if it's not random noise, carries over to real-world usage. ;)

:thup:

It's not "random noise" and it is consistent. Even from a statistical viewpoint, if a data point is five standard deviations from the mean, then the difference is certainly statistically significant. In fact, I'm not clear what reasoning you guys are using to dismiss a certain percentage as "insignificant."
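For anyone curious how that "deviations from the mean" argument works in practice, here's a minimal Python sketch using the standard library's statistics module. All of the run times are made up for illustration; none are from the article:

```python
import statistics

def z_score(sample, point):
    """How many sample standard deviations `point` lies from the sample mean."""
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (point - mean) / sd

# Hypothetical repeated benchmark runs on OS A (seconds; lower is better):
os_a_runs = [10.51, 10.49, 10.50, 10.52, 10.48]

# A single hypothetical run on OS B:
os_b_time = 10.56

z = z_score(os_a_runs, os_b_time)
print(round(z, 2))  # well past the usual 2-3 sigma significance thresholds
```

This mirrors the point above: a gap that is tiny in absolute terms can still be many standard deviations out when run-to-run noise is tight, which is exactly the competitive bencher's situation.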
 
Thanks for the hard work, Gautam... I know it must have taken hours and hours to accomplish, and it is very much appreciated! Not many people would have bothered with such an exhaustive effort. Kudos.

....sorry to see some people giving you headaches.

..... lol Bob, you might catch grief for saying that, but +1 brother I'm with you.
 
You see, G, that thing should have stayed in the lounge...

I disagree. This kind of discussion is healthy and enlightening for all of us. We all learn something and are forced to reassess and strengthen our arguments. Sometimes we find we were wrong (and can be thankful for new knowledge, saving money, or whatever), and sometimes we find we were right but now have a deeper understanding of why (and have more effective arguments for the next time).

There's a great risk when a group keeps itself isolated because it doesn't want to hear contrary opinions or analyses. The group loses out because it develops a monoculture that's susceptible to unchallenged dogma. The broader community loses out because they don't get the group's in-depth expertise. When both work together, both are enriched. They just have to learn one another's vocabularies and motivations.
 
Thanks for the hard work, Gautam... I know it must have taken hours and hours to accomplish, and it is very much appreciated! Not many people would have bothered with such an exhaustive effort. Kudos.

....sorry to see some people giving you headaches.

..... lol Bob, you might catch grief for saying that, but +1 brother I'm with you.

Again, a well-done work comes out stronger after tackling constructive criticism. It's part of how we learn and evolve.

I believe that Guatam's work is in this category: well-done work that will emerge all the stronger.

Firewalling ourselves from differing points of view isn't healthy or conducive to understanding. If our analyses can only convince people who agree with us, then they probably aren't very good analyses. Fortunately, that's not the case here. :)

I think there's a good opportunity here to intermingle and strengthen the bonds within our diverse community. Again, I'd like to extend my offer to G to do something together as a follow-up. I'm learning a lot as I read through here.
 
Ok, let me put it in a different perspective: if you want a new car and want advice on what is more cost/performance effective, you will probably look in Car and Driver, but if you already have the car and want to get the most out of it, you will probably look in Muscle Car. You see my point: this comparo was done for a Muscle Car audience, not for the Car and Driver reader. You don't seem to understand how much work it involves to get, let's say, one second less in wPrime, and G's reference guide helped us accomplish that. I can assure you that switching from Vista 32 to Win 7 won't let you get your email faster.
 
Thanks. I appreciate the difference.

You don't seem to understand how much work it involves to get, let's say, one second less

I wouldn't say that. In fact, I greatly appreciate and admire how difficult it is. I myself would never have the time, patience, or budget to do that. But I admire seeing what's possible, and I appreciate that pushing the envelope of the hardware helps advance the state of hardware for the rest of us. At the very least, what you do (1) helps us figure out what hardware has enough quality to survive 24/7 heavy-duty use in less extreme settings (e.g., a 5% overclock applied to a cancer simulation), and (2) pushes the hardware manufacturers to improve their top-end products, which in turn improves the mid- and lower-end products as well. It's a win for everyone. I don't think anybody denies that. And nobody denies that there are benefits to the broader community far beyond this.

What we have here is an interesting discussion. You're presenting work that started in a niche but is interesting to everyone. You're finding different points of view on the same data. That's enlightening for all of us. It's not that somebody or other "doesn't get it." It's that they have a different frame of reference.

The data may or may not be statistically significant. Some plots are; some may not be. I believe most individually are. Nonetheless, a near-null result is extremely interesting for the general readership, and the individual results are interesting to the benchers. We all win here. And I think taking care to remember that we are a broader audience is valuable. We gain data that we didn't have before, even if for different conclusions. It's a beautiful case of getting twice as much out of the same data as previously thought. That's a benefit of opening up to a broader group--you find things you would not have otherwise expected.

That's been the case for me. I've been exposed to the thoughts and methods of a completely new group. Aside from reading a few "world record LN2 overclock" articles here and there, this is new to me. And I gained for it. So thanks for opening up. Don't let constructive critiques scare anyone away--it means that we're genuinely interested and want to learn more. You might just get some new recruits for it.

Opening yourself up and presenting your work to a broader, often skeptical audience is challenging and scary. I know exactly how this feels, because I do it every day as a mathematician working on cancer and molecular/cellular biology. The discussions can be heated and draining, but you learn so much and advance your knowledge and your presentation skills so much, that you always come out the stronger for it.

I've also found that the more I learn, the more education I acquire, the more I find myself able and willing to say "I was wrong. I hadn't thought of it that way. That's interesting. That has so much more meaning than I had appreciated. That's deep, and I think I can use it."
 