
Do we lose Windows optimization with mobiles in desktops?

I wonder if the difference in boot speed is evidence of this. I'm not certain, but when I install Windows with the CPU overclocked just by multiplier and FSB, I think I get slower boot speeds. Since the most recent install, which I did with the BIOS set to XP 3200+, my computer has booted super fast and continues to do so (even though the CPU is now set at 200x12). When I install things after a format I keep everything equal, yet sometimes my boot speed is way slower than other times. I've heard many people mention slower boot times with overclocked chips (especially 2500+s), so maybe Windows, when installed with the chip overclocked, doesn't recognize the extra cache or other optimizations of the Barton.

I don't have the time to test this theory, but someone here may.
 
Wow, we really opened a can of worms here.

So, I think the consensus here is that if the BIOS says Unknown CPU Type, so will Windows. With that, we might be giving up features of the XP in some applications at best, and in all applications at worst.

We don't care about the power features, so if we could get the motherboard manufacturers to add the mobiles to the BIOS and simply recognize them as their desktop counterparts, that would be great. I think Abit would be the first target, as most people here use them. I don't have one, so could one of you make the request and see what they say?

From my past experience with BIOS companies, this might not be Abit's responsibility but rather Award's. They make the base BIOS and custom-tune it for the special features on a particular board.

Any thoughts? Worst case, maybe they would answer some of our questions.
 
I'd like to see us figure out just what it is we're losing out on if Windows doesn't recognize the system CPU as AuthenticAMD.

I know it's not boot speed or general speed; if it's anything, it's specific applications. But which applications are losing what?
 
So why hasn't someone done a real-world bench of a mobile Barton against a standard Barton at identical clock, multiplier, and memory settings?
65stang said he has both, and I'm sure many others here do too. I believe PCMark would be the ideal testbed, seeing as Windows Media encoding is very reliant on CPU optimization. I'd be very interested in these results.
 
The differences will not show up when measuring regular performance benchmarks, boot-up time, and general speed.


The differences may only show up when using specific applications that may be designed to take advantage of AMD-specific features.


But which applications, if any?
 
Need to know.

Almost all apps use the extra features:

any compression app, movie encoders/decoders (WinRAR, WinZip), games, 3D modeling apps... almost everything.

It would be good to know, as I just ordered a mobile proc myself.

Can someone do some tests to verify this?

Thanks
 
The differences will not show up when measuring regular performance benchmarks, boot-up time, and general speed. The differences may only show up when using specific applications that may be designed to take advantage of AMD-specific features. But which applications, if any?

Microsoft optimized Windows XP's DirectX interface for AMD processors and also optimized applications like Windows Media Encoder for 3DNow!™ technology. (www.amd.com)

If a generic media encoder, one of the most CPU-intensive applications a modern computer is expected to run, won't show whether these optimizations are in effect, then what would you expect to use them? These CPU optimizations are built into nearly any real-world application, or at least into the base these programs are written on top of.

Stop scaremongering, and someone run a bench before this whole thing gets out of hand. Honestly, I don't believe your OS relies on the BIOS for checking the availability of these optimizations. When you run CPUID or a similar tool, it says your CPU supports these enhancements, does it not? Anyway, I'm near positive there is nothing to be worrying about, although I would like to see some numbers to set things straight.
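
For illustration, here's a minimal sketch (my own, assuming GCC on x86, not anything from the thread) of what such a CPUID query looks like. The vendor string and the feature bits come straight from the chip, independent of whatever name the BIOS prints at POST:

#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* GCC helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* Leaf 0: the vendor string, e.g. "AuthenticAMD" or "GenuineIntel". */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    printf("vendor : %s\n", vendor);

    /* Leaf 1, EDX: MMX is bit 23, SSE bit 25, SSE2 bit 26. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    printf("MMX    : %s\n", (edx & (1u << 23)) ? "yes" : "no");
    printf("SSE    : %s\n", (edx & (1u << 25)) ? "yes" : "no");
    printf("SSE2   : %s\n", (edx & (1u << 26)) ? "yes" : "no");

    /* Extended leaf 0x80000001, EDX: 3DNow! is bit 31, 3DNow!+ is bit 30. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("3DNow! : %s\n", (edx & (1u << 31)) ? "yes" : "no");
        printf("3DNow!+: %s\n", (edx & (1u << 30)) ? "yes" : "no");
    }
    return 0;
}

On a Barton you'd expect the MMX, SSE, and both 3DNow! bits to come back yes and SSE2 to come back no, regardless of what the BIOS calls the chip.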

Go encode a file in DivX (highly optimized; just choose your flavor of instruction set).

-JRR
 
Someone use a few apps to verify, please.

Some apps have an area that tells you what is available as far as special instruction sets.

In VirtualDub: Options > Preferences > CPU

You can select what you want turned on.

So if you know your mobile supports certain settings, check them on and test how many seconds it takes to encode a file, then turn all the settings off and test again (a toy version of that comparison is sketched below).

It will at least tell you whether they work or not.
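
To make that concrete, here's a toy comparison (my own sketch, not VirtualDub code): it times the same float addition once with a plain C loop and once with an SSE path. Build on an SSE-capable x86 box with something like gcc -O2 -msse timing.c -o timing:

#include <stdio.h>
#include <time.h>
#include <xmmintrin.h>   /* SSE intrinsics */

#define N (1 << 20)
static float a[N], b[N], out[N];

/* Plain C version of the work. */
static void add_scalar(void)
{
    for (int i = 0; i < N; i++)
        out[i] = a[i] + b[i];
}

/* Same work, four floats at a time with SSE. */
static void add_sse(void)
{
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
    }
}

/* Run a routine a fixed number of times and report the CPU time used. */
static double time_it(void (*fn)(void))
{
    clock_t start = clock();
    for (int rep = 0; rep < 200; rep++)
        fn();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }
    printf("scalar path: %.2f s\n", time_it(add_scalar));
    printf("SSE path   : %.2f s\n", time_it(add_sse));
    printf("check      : %f\n", out[0] + out[N - 1]);   /* keep the results live */
    return 0;
}

The absolute numbers mean nothing by themselves; the point is simply that you can run the same work with a feature on and off and compare the clock, which is all the VirtualDub test does.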
 
I just tried encoding an MPEG with a whole slew of different optimizations: SSE, MMX, 3DNow, 3DNow2, FPU, and SSE2. With each of them I got different results, from 1.5 minutes to encode this file down to 65 seconds using SSE2. So I'd say we're fine. Now stop your worrying :)
 
Prevail, the extra overclocking headroom mobiles offer should be enough to compensate for any differences that may exist between applications detecting AuthenticAMD vs. Unknown CPU, so there's another reason to stop worrying.

This whole issue is primarily for educational purposes and not to scare people away from the best overclocking CPUs on the market now.


Do you not agree that benchmarks are designed to show comparisons between different CPUs, so by definition they should not be optimized?

Running something other than standard benchmark apps is needed, but it needs to be the exact same system with the exact same FSB x multiplier speeds.

Also, let's not forget:

Gnufsh said:
I think that an important question is: do the apps themselves check at runtime to see if optimized instructions are supported, or do they just ask the OS?
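
One way a program can do the first of those (check for itself at runtime) is to pick its code path from the CPU's own feature bits rather than from whatever name the BIOS reports. A hypothetical sketch, assuming GCC on x86 and its __builtin_cpu_supports helper:

#include <stdio.h>

/* Two versions of the same routine: a plain one and an SSE one.
 * (Stubs here; a real encoder would have real code in each.) */
static void filter_plain(void) { puts("running the plain x87/MMX path"); }
static void filter_sse(void)   { puts("running the SSE path"); }

int main(void)
{
    /* Function pointer chosen once at startup, then used everywhere. */
    void (*filter)(void) = filter_plain;

    __builtin_cpu_init();               /* GCC: populate its CPU feature info */
    if (__builtin_cpu_supports("sse"))  /* reads the CPU's own feature bit */
        filter = filter_sse;

    filter();
    return 0;
}

If an app dispatches like this, an "Unknown CPU Type" string in the BIOS shouldn't cost it anything; if it instead asks the OS what the processor is called, it might.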
 
No one has a regular Barton to test? Not sure what you'd test, but it seems worthwhile since these chips are the hottest thing going right now and everyone's doing it. I wish I hadn't spilled water all over my 2.4C/IC-7 about now :(
 
For what I'm going to be doing, I need everything to work right on that chip.

I will not be able to OC my FSB like most people.

I might be able to go to 140 MHz and that's it (if I'm lucky).

I play the LOMAC flight sim, so I don't want to go in thinking that some instructions don't have to work because the OC potential will make up for it.

No, I'm counting on it all working, with some OCing potential.

I hope everything will work just like a non-mobile Barton.

That's what we really need to know.

One more thing:

So all mobile Bartons will say "Unrecognized proc" in the BIOS, and also when you right-click the "My Computer" icon and select Properties?
 
Prevail said:
I just tried encoding an MPEG with a whole slew of different optimizations: SSE, MMX, 3DNow, 3DNow2, FPU, and SSE2. With each of them I got different results, from 1.5 minutes to encode this file down to 65 seconds using SSE2. So I'd say we're fine. Now stop your worrying :)


I didn't realize Bartons supported SSE2; in fact, I'm pretty sure they don't. Why would it be faster using an instruction set that is not supported?
 
cujo said:
I wonder if the difference in boot speed is evidence of this. I'm not certain, but when I install Windows with the CPU overclocked just by multiplier and FSB, I think I get slower boot speeds. Since the most recent install, which I did with the BIOS set to XP 3200+, my computer has booted super fast and continues to do so (even though the CPU is now set at 200x12). When I install things after a format I keep everything equal, yet sometimes my boot speed is way slower than other times. I've heard many people mention slower boot times with overclocked chips (especially 2500+s), so maybe Windows, when installed with the chip overclocked, doesn't recognize the extra cache or other optimizations of the Barton.

I don't have the time to test this theory, but someone here may.
It's doubtful. If it doesn't recognise it as the correct processor, you'll lose a few specific optimizations, probably mostly to do with what instructions programs use, for example SSE. Having booted Linux both with and without SSE enabled on my CPU, I can tell you it makes no noticeable difference in boot times. Of course, the kernel is entirely integer (except for the part that calculates BogoMIPS, but that's not really all that useful), and isn't usually compiled with SSE support anyway (not that the kernel doesn't enable SSE on the computer; in fact it turns on SSE support, it just doesn't itself use SSE). It may be that something strange is going on with your IDE/PCI bus. If they're locked, perhaps there is more overhead when they are not at an easy ratio of the FSB?

Prevail said:
I just tried encoding an MPEG with a whole slew of different optimizations: SSE, MMX, 3DNow, 3DNow2, FPU, and SSE2. With each of them I got different results, from 1.5 minutes to encode this file down to 65 seconds using SSE2. So I'd say we're fine. Now stop your worrying :)
Well, that doesn't tell us a lot, because you forced the optimization support on. And, like I said, usually the BIOS enables SSE support on Athlon XPs, not the OS. The problem we are talking about is programs that have support for various optimizations (MMX, 3DNow, SSE). If they determine which code to run based on information they get from the OS, which gets it from the BIOS, which doesn't recognise the processor correctly, then they won't use the proper optimizations and performance is lost.

I don't know how programs/OSes actually do this, but I would think that to determine which optimizations to use, they would have to look at the feature flags of the processor. These indicate support for things like 3DNow, SSE, SSE2, and MMX. It would be very bad to just choose them by CPU type, because, like I said, SSE is not necessarily enabled on Athlon XPs. If the program just detected it as an AXP and ran some SSE code on an AXP that did not have SSE enabled, bad things could happen. If, however, it checked to see whether the SSE flag was present, it would only run SSE code on CPUs with SSE support actually enabled.

Just for fun, here's the CPU in my laptop:
lappy src # cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 6
model : 8
model name : mobile AMD Athlon(tm) XP 1800+
stepping : 0
cpu MHz : 1500.698
cache size : 256 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse syscall mp mmxext 3dnowext 3dnow
bogomips : 2957.31

If I compile a kernel that doesn't enable SSE on my CPU at boot-up, "sse" does not appear under flags.
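
And the other route (asking the OS) can be as simple as reading that flags line. A rough sketch of what that looks like on Linux, purely as an illustration of my own and not taken from any particular program:

#include <stdio.h>
#include <string.h>

/* Return 1 if the named feature appears in the flags line of
 * /proc/cpuinfo, 0 otherwise. */
static int cpuinfo_has_flag(const char *flag)
{
    char line[1024];
    int found = 0;
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f)
        return 0;

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "flags", 5) != 0)
            continue;
        /* Walk the space-separated list and compare whole tokens. */
        for (char *tok = strtok(line, " \t\n:"); tok; tok = strtok(NULL, " \t\n:"))
            if (strcmp(tok, flag) == 0)
                found = 1;
        break;   /* the first CPU's flags line is enough for this sketch */
    }
    fclose(f);
    return found;
}

int main(void)
{
    printf("sse      : %s\n", cpuinfo_has_flag("sse")      ? "yes" : "no");
    printf("sse2     : %s\n", cpuinfo_has_flag("sse2")     ? "yes" : "no");
    printf("3dnow    : %s\n", cpuinfo_has_flag("3dnow")    ? "yes" : "no");
    printf("3dnowext : %s\n", cpuinfo_has_flag("3dnowext") ? "yes" : "no");
    return 0;
}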
 
Hopefully, however the BIOS reads what proc is plugged into the mobo has nothing to do with how other apps query the BIOS/proc.

About that website article I posted above: I talked to the person who did the article on an IRC channel (I think I found an overclocking channel), and he said it is true that the BIOS does not recognize the chip right, but he told me that the mobile Barton is the exact same thing as the non-mobile (and I'm thinking, yeah, I know the core is the same). Then he told me he determined this by doing that suite of tests.

If you look at the benchmarks, it looks like it's where it should be, but it's still not a sure thing unless someone takes two Bartons (same frequency), one mobile and one not, and tries to run the same tests.

So what we really need is someone who can get their grubby hands on both chips to verify that they perform exactly the same in all the tests we can think of to throw at them.
 
Maybe some DirectX tests to determine whether DirectX is using the correct optimizations? Something that chooses which instructions to use at runtime based on the OS (not one where you can force which instructions to use).
 
I got my mobile 2400+ in today. The BIOS does indeed say Unknown CPU Type, but once I got into Windows, Device Manager showed Mobile Athlon under Processors. Maybe we should all head over to microsoft.com and download the AMD processor driver; I have the latest one.
 