
Getting an nvidia x86_64 package into an ia32 userland


BusError (New Member, joined Oct 31, 2008)
I'd like to get the nvidia-xxx 64-bit modules, but in a 32-bit userland. Does anyone have a trick for it? Possibly getting the 'source' package (yes, I know they are binary, but they must have a .deb 'source' package anyway).

The thing is that I compiled a nice 64-bit kernel that runs perfectly with my 32-bit install, but the binary driver fails to build (link), so I'm stuck in VESA :/

Oh, I'm on Ubuntu 8.10
 
I'm not fully understanding. Do you have a base 64-bit system and then you simply installed the 32-bit libraries? I don't see why the 64-bit drivers aren't working for you. Did you download the ones from the nvidia website? Or are you trying to get them via apt?
 
No, I'm doing the contrary: a 64-bit kernel and a 32-bit userland. It gives me the advantages of both.
I also assumed it would work, but I think they separated the different binary objects into the different arch packages, so the one installed for ia32 just won't link with my shiny new kernel...
 
I think I see what you are saying. A 64-bit kernel with 32-bit programs running on it? To me this seems pointless; you do not get much out of the 64-bit kernel other than maybe the use of >4 GB of RAM.

But.... if you are running a 64-bit kernel then you must install a 64-bit nvidia driver; the 32-bit one will not work. The programs running on top of the kernel are not affected by this (although I'm not sure about Xorg, this could be your problem if it is not 64-bit). There is also no source available from Nvidia for a GFX driver. You will need the kernel source and headers available for the driver when you run the installer. I think it links the libraries based on what it sees in the source.
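
As an aside, a rough sketch of making the kernel sources visible to the installer -- the --kernel-source-path flag is from memory, so check the installer's --help before relying on it, and the .run filename is just a placeholder:

Code:
# for a stock distro kernel, the matching headers are normally enough
sudo apt-get install build-essential linux-headers-$(uname -r)
# for a self-built kernel, point the installer at your own build tree
# (flag name from memory -- verify with the installer's --help)
sudo sh NVIDIA-Linux-<arch>-<version>.run --kernel-source-path=/usr/src/linux-<your version>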

:soda:
 
I'd like to get the nvidia-xxx 64-bit modules, but in a 32-bit userland. Does anyone have a trick for it? Possibly getting the 'source' package (yes, I know they are binary, but they must have a .deb 'source' package anyway).

The thing is that I compiled a nice 64-bit kernel that runs perfectly with my 32-bit install, but the binary driver fails to build (link), so I'm stuck in VESA :/

Oh, I'm on Ubuntu 8.10

No.
 
Just like 64-bit Windows has WoW and a system32 folder, a 64-bit Linux distribution has a lib32 folder. Running a 64-bit environment does not prevent you from running 32-bit apps.

MRD or one of the other Linuxy people can correct me if I'm wrong, but AFAIK, there is no such thing as a "64-bit kernel in a 32-bit environment." If your build target is x86_64, you've got to have the 64-bit libraries to build it. Enabling 36-bit PAE support for 64GB memory doesn't make it 64-bit. Selecting the "Core2 or newer" option in the kernel config simply optimizes for the architecture, it doesn't determine whether the kernel is running 32-bit or 64-bit. If your build target is still i686, you're compiling 32-bit.

What output do you get from
Code:
uname -a
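
(The field to look at is the machine name near the end of that line: a 64-bit kernel reports x86_64, a 32-bit one reports i686 or similar. Hostname and version below are made up, just to show the shape of the output:)

Code:
$ uname -a
Linux examplebox 2.6.27-7-generic #1 SMP ... i686 GNU/Linux     <-- 32-bit kernel
$ uname -a
Linux examplebox 2.6.27-7-generic #1 SMP ... x86_64 GNU/Linux   <-- 64-bit kernel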
 
Just like 64-bit Windows has WoW and a system32 folder, a 64-bit Linux distribution has a lib32 folder. Running a 64-bit environment does not prevent you from running 32-bit apps.

MRD or one of the other Linuxy people can correct me if I'm wrong, but AFAIK, there is no such thing as a "64-bit kernel in a 32-bit environment." If your build target is x86_64, you've got to have the 64-bit libraries to build it. Enabling 36-bit PAE support for 64GB memory doesn't make it 64-bit. Selecting the "Core2 or newer" option in the kernel config simply optimizes for the architecture, it doesn't determine whether the kernel is running 32-bit or 64-bit. If your build target is still i686, you're compiling 32-bit.

What he has described to us is really a 64-bit kernel with *both* a 64-bit and a 32-bit userland ('multilib').

You cannot load a 32-bit module into a 64-bit kernel.
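
A quick way to see that kind of split on a live system, using nothing but standard tools -- compare what the kernel reports with what the userland binaries actually are:

Code:
uname -m        # kernel arch: x86_64 for a 64-bit kernel, i686 for 32-bit
file /bin/ls    # userland arch: "ELF 32-bit ... Intel 80386" vs "ELF 64-bit ... x86-64"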
 
I didn't ask for a 32-bit module. The nvidia modules are built on the fly for whichever kernel is running, so they will match; however, the nvidia module links with /binaries/ that are architecture specific, and the "i386" .deb doesn't contain all the binaries.

What I'm trying to do is /possible/, because the nvidia binary driver works on an x86_64 (full) system. I was just asking a simple question about packaging here, before having noses turned up at me by people who think they know better.

+ A 64-bit kernel is more efficient than PAE.
+ A 64-bit kernel with an ia32 userland allows all the ia32 binary apps to run without compatibility problems (Xilinx etc.)
+ A 64-bit kernel allows more address space even in 32-bit processes.

And it's not just me saying it's a good idea; Linus does too.
 
Unless you can get nVidia to give you the actual source code to their driver, you're not going to succeed without some major hacking. Good luck with that.

Off topic:

+ A 64-bit kernel is more efficient than PAE.
+ A 64-bit kernel allows more address space even in 32-bit processes.

Your standard 32-bit process can't see 64 bits of address space regardless, so the second isn't true. As the second doesn't apply, the first hardly matters. With PAE, each app has its own 4GB "page" in memory. The 64-bit kernel provides the same increase in total available memory you would get with PAE, yet with 32-bit apps, each app is still limited to 4GB.
 
I think you are being Gyro Gearloose here. :) You took a 64-bit source and compiled it with a 32-bit compiler? Then it's not a 64-bit kernel. If there were some miracle cure to circumvent PAE in 32-bit, someone would have done it long ago.

Example: if you take the source code for an Itanium kernel and compile it on an x86 system, it doesn't make it an Itanium kernel. (It wouldn't even compile without making changes of course, but let's say it did.)

Probably that 64-bit source code has some 64-bit-specific stuff in it; I doubt it compiled without a bunch of warnings. The problem with the nvidia kernel modules could be the result of one of those.
 
I think you are being Gyro Gearloose here. :) You took a 64-bit source and compiled it with a 32-bit compiler? Then it's not a 64-bit kernel. If there were some miracle cure to circumvent PAE in 32-bit, someone would have done it long ago.

Example: if you take the source code for an Itanium kernel and compile it on an x86 system, it doesn't make it an Itanium kernel. (It wouldn't even compile without making changes of course, but let's say it did.)
Actually, I think I can build an Itanic kernel on my x86 box (cross-compilation is routine for me, but I lead a strange and unfulfilling life). I just can't RUN it. gcc will quite happily build for whatever arch you tell it to.

Probably that 64-bit source code has some 64-bit-specific stuff in it; I doubt it compiled without a bunch of warnings. The problem with the nvidia kernel modules could be the result of one of those.
Maybe. Compilation behavior is one thing, runtime behavior (i.e., what matters) may be another story entirely. What is true in 32-bit protected mode is not always true in long mode.

What I'm trying to do is /possible/, because the nvidia binary driver works on an x86_64 (full) system. I was just asking a simple question about packaging here, before having noses turned up at me by people who think they know better.
Having recently read TFM (nVidia's support site), I'm not entirely sure of what you're trying to do.

Please provide more information -- are you using drivers from the restricted branch (through apt or Synaptic), or are you pulling them specifically from nVidia? And what binary is it that you want?
 
I don't know about you guys, but you seem to be missing a whole page of information in your knowledge book. What's that about 'libraries', 'compilers' and stuff? gcc is perfectly capable of targeting all of x86, regardless of architecture size. Unless you're not running the same Linux I do.

Try that on your 32-bit Linux:

Code:
cd /usr/src/linux-<your version>
make ARCH=x86_64 CC=gcc-4.3 oldconfig
# here make sure you say 'yes' to the 32-bit (IA32) emulation option
make ARCH=x86_64 CC=gcc-4.3
su
# as root from here on:
make ARCH=x86_64 CC=gcc-4.3 install modules_install
mkinitramfs -o /boot/initrd.img-<your version> <your version>
update-grub

Reboot. That works just fine; in fact, on some CPUs like the Atom it works a hell of a lot better -- probably a better code execution path, better use of the cache, etc.

Code:
michel@yuck:~$ uname -a
Linux yuck 2.6.27-atom #3 SMP Sun Nov 16 15:02:39 GMT 2008 x86_64 GNU/Linux

That's on a plain base 32-bit Debian. Nothing to install, not even a single 64-bit library. And yes, absolutely everything works.

Now, on Ubuntu, you have /usr/src/nvidia-177.80. When you upgrade the kernel, the deb automatically runs a 'make' in there to build a wrapper that is versioned correctly for your kernel, but still links with the nvidia binary bit. There is only one 'binary' in there, but it's 32-bit. They /must/ have the 64-bit ones available somewhere, since it's reported the binary driver works in 'full fat' 64-bit Ubuntu...
Code:
[michel@yap /usr/src/nvidia-177.80]% make clean && file *.o
nv-kernel.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
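
If the 64-bit blob really does ship in an Ubuntu package, one way to get at it without installing anything would be to pull the amd64 .deb from the archive and just unpack it -- dpkg doesn't care about the architecture when extracting. The package and file names below are placeholders, not something I've verified:

Code:
# <nvidia-package>_amd64.deb stands in for whatever the real amd64 package is called
dpkg -x <nvidia-package>_amd64.deb extracted/
# then look for the precompiled 64-bit object
find extracted/ -name 'nv-kernel.o' -exec file {} \;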
 
Did you go to nVidia's site and download the driver?

http://www.nvidia.com/object/linux_display_ia32_177.82.html - 32-bit
http://www.nvidia.com/object/linux_display_amd64_177.82.html - 64-bit

That would be the simplest way to get the driver, and from what you've posted so far, you haven't bothered to try that.

Or something like:
Code:
sudo apt-get install nvidia-glx
...or something along those lines. It's probably best to leave well enough alone.

I don't know about you guys, but you seem to be missing a whole page of information in your knowledge book. What's that about 'libraries', 'compilers' and stuff? gcc is perfectly capable of targeting all of x86, regardless of architecture size. Unless you're not running the same Linux I do.
Oh, believe me, I know how to use make, gcc, as, ld and friends. And that information takes up much more than a page. :mad:

The upshot of what you've done is that you've built an x86-64 kernel in a 32-bit userland (why, we'll never know, especially considering that IA32 emulation works just fine for IA32 binaries while still peacefully coexisting with x86-64 binaries, letting you have the best of both worlds). Yippee, go you. The 'why' is still not clear.

Because I don't fear having a *real* 64-bit system, I don't see the point of this exercise. I would assume that trying to install the userland component of the (64-bit) nVidia GLX drivers will fail interestingly, since it is probably built against 64-bit userland libraries.

And yes, absolutely everything works.
Of course. Not as well as it would if you had 'native' 64-bit libraries (with 32-bit compatibility and emulation for stuff that hasn't been ported to x86-64) and userland, etc. Once again, there's very little real point (from a day-to-day use perspective).
 
Actually, I had tried that. Otherwise I wouldn't have asked in the first place:
Code:
[michel@yap /usr/src]% sh NVIDIA-Linux-x86_64-177.82-pkg2.run

ERROR: this .run file is intended for the
Linux-x86_64 platform, but you appear to be
running on Linux-x86.  Aborting installation.

That's why I wanted to get my hands on the .deb, because I know how to pry these open more easily.
 
Actually, I had tried that. Otherwise I wouldn't have asked in the first place:
Code:
[michel@yap /usr/src]% sh NVIDIA-Linux-x86_64-177.82-pkg2.run

ERROR: this .run file is intended for the
Linux-x86_64 platform, but you appear to be
running on Linux-x86.  Aborting installation.

That's why I wanted to get my hands on the .deb, because I know how to pry these open more easily.
...that's because you have a 32-bit userland...my guess is that (shock!) a 64-bit driver installer might depend on having at least one 64-bit userland component present. :)
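
For what it's worth, the .run wrapper has an extract-only mode that might sidestep that check and at least let you get at the files inside. The flag is from memory (check `sh <file>.run --help` for the real list), and the extracted layout is a guess:

Code:
# unpack without running the installer (flag from memory: --extract-only, or -x)
sh NVIDIA-Linux-x86_64-177.82-pkg2.run --extract-only
# the precompiled kernel object should then be somewhere under the extracted tree
find NVIDIA-Linux-x86_64-177.82-pkg2/ -name 'nv-kernel.o' -exec file {} \;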
 
The nvidia drivers involve two separate components, one source, and one binary. The binary contains the bulk of the driver and is precompiled. The source part is the wrapper that makes the modules work with your kernel. It's basically a way of releasing closed source software that works with different kernel configurations.

The binary modules are different for 32 bit and 64 bit systems. They are not compatible. You cannot interface a 32 bit binary module with a 64 bit kernel, and vice versa. On the nVidia site, when you download the driver installers, you get either the 32 bit or the 64 bit version. They are quite different.

Userspace programs work somewhat differently from kernel modules, which must very closely match the kernel.

X.org or other userspace components shouldn't matter to kernel modules. The kernel is self-sufficient and does not dynamically link to any libraries. I have no idea why your system thinks it's 32 bit and won't build the 64 bit nvidia modules. I've built and successfully inserted nvidia modules on systems that don't even have X.org. I suspect you did not correctly set up multilib.
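
One sanity check along those lines, with nothing nvidia-specific about it: once the wrapper build finishes, the resulting module should identify itself as 64-bit and carry a vermagic string matching the running kernel.

Code:
file ./nvidia.ko                    # expect "ELF 64-bit LSB relocatable, x86-64, ..."
modinfo ./nvidia.ko | grep vermagic
uname -r                            # should match the kernel version in vermagic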
 
Code:
cd /usr/src/linux-<your version>
make ARCH=x86_64 CC=gcc-4.3 oldconfig
# here make sure you say 'yes' to the 32-bit (IA32) emulation option
make ARCH=x86_64 CC=gcc-4.3
su
# as root from here on:
make ARCH=x86_64 CC=gcc-4.3 install modules_install
mkinitramfs -o /boot/initrd.img-<your version> <your version>
update-grub

Are you sure that even works? I was not aware you could place environment variables after the command. I would think that would come up with an invalid make target error, as ARCH= and CC= should be environment variables.
Code:
ARCH=x86_64 CC=gcc-4.3 make whatevertargetyou'remaking
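
For what it's worth, a quick look at the GNU make manual suggests both orderings are fine: VAR=value arguments on the make command line are treated as variable definitions rather than targets (and they even override assignments made inside the Makefile, which plain environment variables do not). So either of these should behave the same:

Code:
make ARCH=x86_64 CC=gcc-4.3 bzImage
ARCH=x86_64 CC=gcc-4.3 make bzImage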
 