
Compiling programs from source [kernels, firefox etc..]


Super Nade

† SU(3) Moderator  †
Joined
Aug 30, 2004
Location
Santa Barbara, CA
Introduction:

Why do we compile programs from source code?
There are many reasons to compile software from source code: the ability to customize and personalize a build, and increased performance, are the most common. In my case it is all about performance. You may not gain much on a modern system, but if you have a slow, sluggish system, you will definitely see a difference!
What is the objective of this writeup?
The objective of this writeup is twofold: to improve performance and to learn something about how Linux works (a daunting prospect!).
Target audience:
I am writing this for Linux newbs (just like me). If you can copy/paste and read a bit, you should have no problems following this guide.
Spirit of the document:
I am no Linux expert. EVERYTHING in here has been pulled from other sources and adapted to my system. References are duly cited.
 
Compiling the Linux Kernel

Testbed:

  1. Dell Inspiron 600m (SLOW laptop!)
  2. Ubuntu 10.10
Pre-requisites:

  1. Get familiar with the terminal (command line).
  2. Unzipping archives (easy via the GUI).
  3. Navigating to folders and viewing files.

Procedure:

Of late I've been dabbling in Linux, more specifically Ubuntu. I've installed Ubuntu 10.10 on my Dell Inspiron 600m craptop. This ancient clunker had problems galore with XP on it. I wanted to see how *nix would perform.

The first thing I did after installing Ubuntu was to compile the kernel from source. These are the steps I followed:

  1. Download the latest source from kernel.org (either an rc or a stable source).
  2. Extract the archive to a convenient location. In my case it is: septimus@laptop:~/Downloads/kernels/linux-2.6.36$
  3. All the necessary tools should be installed by default; if not, install them as follows: septimus@laptop:~/Downloads/kernels/linux-2.6.36$ sudo apt-get install fakeroot kernel-wedge build-essential makedumpfile kernel-package libncurses5 libncurses5-dev
  4. Before compiling the kernel, you have to choose what options to enable/disable. In other words we have to configure the kernel. The stock config comes with everything and the kitchen sink thrown in. A safe bet would be to copy the existing stock configuration file before trimming the fat. It is done as follows: septimus@laptop:~/Downloads/kernels/linux-2.6.36$ cp -vi /boot/config-`uname -r` .config
  5. The next step would be to edit the Makefile. This file contains many options that are used to build the kernel, but we are interested only in certain hardware optimization options that will be used by the C Compiler. Navigate into the folder which you just unzipped and open the file named Makefile in gedit or any other editor.
  6. Hit Ctrl-F and type in CFLAGS. Change as you please. My additions are -pipe and -march=pentium-m (pick the -march value for your own CPU); the relevant lines look like this:
Code:
# SHELL used by kbuild
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
      else if [ -x /bin/bash ]; then echo /bin/bash; \
      else echo sh; fi ; fi)

HOSTCC       = gcc
HOSTCXX      = g++
HOSTCFLAGS   = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -pipe -march=pentium-m
HOSTCXXFLAGS = -O2
  • Now we can configure the kernel. Type in "make menuconfig" at the prompt. Make sure you are still in the directory your unzipped source files are in. You should see a menu driven selection. Customize, save and exit.
  • This is not Gentoo, so there is no MAKEOPTS variable to set number of processing threads to use for compiling. Instead, at the prompt type in: export CONCURRENCY_LEVEL=n, where n is the number of CPU cores+1.
  • Then type in: make-kpkg clean followed by
    fakeroot make-kpkg --initrd --append-to-version=-some-name kernel-image kernel-headers (Replace -some-name with anything you want. I just go by Alpha, Bravo, Charlie, etc.)
  • After some time (many hours on a slow laptop, down to 15 minutes on a fast desktop), the compiled packages are created in the parent directory, i.e. the directory your source folder is in.
  • Navigate to the parent folder and type sudo dpkg -i linux-image, then hit the Tab key and the name will be completed for you. Likewise run sudo dpkg -i linux-headers (hit Tab to complete the name).
  • Almost done! Now type: sudo update-initramfs -c -k 2.6. followed by Tab for completion. A stable version will read 2.6.xx and a release candidate 2.6.xx-rcx, where the x's are numbers.
  • Last step: sudo update-grub
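The whole sequence above can be sketched as a single script. This is a dry-run sketch only: the run() wrapper just prints each command rather than executing it, so nothing is installed until you remove the wrapper. The path and version names (linux-2.6.36, -bravo) come from this post and will differ on your machine.

```shell
# Dry-run sketch of the kernel-build steps above. run() only prints
# each command; replace it with real execution once the sequence looks
# right for your machine.
run() { printf '+ %s\n' "$*"; }

SRC="$HOME/Downloads/kernels/linux-2.6.36"   # where the source was unpacked
CORES=2                                       # set to your CPU core count

run cd "$SRC"
run sudo apt-get install fakeroot kernel-wedge build-essential \
    makedumpfile kernel-package libncurses5 libncurses5-dev
run cp -vi "/boot/config-$(uname -r)" .config   # start from the stock config
run make menuconfig                             # trim the fat, save, exit
run export CONCURRENCY_LEVEL=$((CORES + 1))     # cores + 1 build threads
run make-kpkg clean
run fakeroot make-kpkg --initrd --append-to-version=-bravo \
    kernel-image kernel-headers
```

Once the printed commands match what you intend, change run() to execute its arguments (or delete the run prefixes) and go make coffee.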

To remove the kernel:
Code:
sudo dpkg -r linux-headers-2.6.xx
sudo dpkg -r linux-image-2.6.xx
sudo rm /boot/initrd.img-2.6.xx
I got started based on this guy's guide, but adapted it to kernel.org source compilation.
http://linuxtweaking.blogspot.com/2010/05/how-to-compile-kernel-on-ubuntu-1004.html

Further Reading:
 

Attachments

  • config-2.6.36-bravo.tar.gz (28.1 KB)
Compiling Firefox

Compiling Firefox did not really give me a noticeable performance boost. Claims of huge performance increases sound a bit dubious to me.

Procedure:

  1. Download source archive from Mozilla's FTP Site--->ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/
  2. Extract the archive to your home folder. In my case it is at:
    Code:
    septimus@laptop:~/Downloads/mozilla-central$
  3. The next step is to create a configuration file to store our build options. Open the folder the archive was extracted to and create an empty file called .mozconfig (don't forget the dot!)
  4. This file drives Mozilla's build system (it is read by client.mk). Mine looks like this:
    Code:
    . $topsrcdir/browser/config/mozconfig
    export MOZILLA_OFFICIAL=1
    export BUILD_OFFICIAL=1
    mk_add_options MOZILLA_OFFICIAL=1
    mk_add_options BUILD_OFFICIAL=1
    mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-@CONFIG_GUESS@
    ac_add_options --enable-official-branding
    export CHOST="x86-pc-linux-gnu"
    export CFLAGS="-march=pentium-m -O2 -pipe -fomit-frame-pointer"
    export CXXFLAGS="${CFLAGS}"
    export CPPFLAGS="${CFLAGS}"
    ac_add_options --enable-application=browser
    mk_add_options MOZ_CO_PROJECT=browser
    ac_add_options --enable-optimize
    ac_add_options --enable-default-toolkit=cairo-gtk2
    ac_add_options --enable-system-cairo
    ac_add_options --enable-xft
    Set CHOST and CFLAGS (the two export lines above) to match your own processor.
  5. Now it is time to build:
    Code:
    make -f client.mk build
    On my slow laptop it took about 3 hours.
  6. To install/integrate it with your Ubuntu install do this:
    Code:
    cd obj-i686-pc-linux-gnu/
    and
    Code:
    make
    followed by
    Code:
    sudo checkinstall
    This builds a .deb package for you. Answer "n" when asked about the documentation directory and "y" when asked for permission to exclude those directories. All done!
  7. Firefox will be installed to /usr/local/lib/firefox-version, where version is the version you just compiled. Clicking the Firefox icon will automatically launch this version.
  8. To remove it, use Applications>Ubuntu Software Center.
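Condensed, the build-and-install sequence above looks like this. It is a dry-run sketch: run() only prints each command, and the obj- directory name comes from CONFIG_GUESS, so it was obj-i686-pc-linux-gnu on this post's 32-bit laptop but will differ elsewhere.

```shell
# Dry-run sketch of the Firefox build above; run() only prints.
run() { printf '+ %s\n' "$*"; }

run cd "$HOME/Downloads/mozilla-central"
run make -f client.mk build        # reads .mozconfig; ~3 hours on slow hardware
run cd obj-i686-pc-linux-gnu       # name varies with CONFIG_GUESS
run make
run sudo checkinstall              # wraps the install step into a .deb
```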
 
quick question and then you can delete my post, why not use gentoo for this? I mean yes this is a good learning experience but Ubuntu isn't designed for compiling stuff yourself, once you install it you won't have an easy way to uninstall it (besides make uninstall). However I can see how gentoo does sort of hold your hand a little bit compared to compiling 100% of source yourself, if that is what you are trying to show.
 
quick question and then you can delete my post, why not use gentoo for this? I mean yes this is a good learning experience but Ubuntu isn't designed for compiling stuff yourself, once you install it you won't have an easy way to uninstall it (besides make uninstall).

Notice the instructions about kpkg. Once a source is configured and compiled, it can be packaged into a deb or rpm, and then you install that so the package manager deals with what files go where. How do you think pre-compiled (I dislike the term "binary"; everything in a computer is binary) packages and distros are made to start with? I agree that Ubuntu seems to be a less-than-optimal choice if you don't want to use pre-compiled packages, though.
 
quick question and then you can delete my post, why not use gentoo for this? I mean yes this is a good learning experience but Ubuntu isn't designed for compiling stuff yourself, once you install it you won't have an easy way to uninstall it (besides make uninstall). However I can see how gentoo does sort of hold your hand a little bit compared to compiling 100% of source yourself, if that is what you are trying to show.

Getting a desktop installed in Gentoo is an ordeal I just do not want to go through again. I spent about a week trying to set up a functional rig. No go. Ubuntu is the most popular distro; it just works out of the box and is built for noobs. I just wish to show that there is a bit in it for those who wish to optimize further. You can always make a .deb and use the software manager to install/uninstall your packages.

Unstuck this thread.
 
What if the deb you build is built against a library that ubuntu updates? How does debian/ubuntu handle that? The binary won't work anymore unless rebuilt, but the causes of this are mysterious without tools to track it down.
 
What if the deb you build is built against a library that ubuntu updates? How does debian/ubuntu handle that? The binary won't work anymore unless rebuilt, but the causes of this are mysterious without tools to track it down.

The same way other packages know. The RPM or DEB contains dependency information, and that information includes version information on the dependencies. If package B-1.0 depends on A-1.0, and you want to update to A-1.1, but the ABI doesn't change, the package manager knows it can install A-1.1 without needing to update B. If the ABI changes in 1.2, then the package manager will say "B-1.0 is blocking install of A-1.2", and it won't update it.
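To make the dependency metadata concrete, here is an illustrative Depends line in the style of a .deb control file (the package names and versions are made up), split into the individual versioned constraints the package manager evaluates before allowing an upgrade:

```shell
# Hypothetical Depends line, in the style of a .deb control file.
# Each entry carries the version constraint the package manager checks
# before an upgrade of that dependency is allowed.
depends='liba (>= 1.0), libb (>= 2.3-1), libc6 (>= 2.11)'

# One constraint per line, roughly how dpkg evaluates them:
printf '%s\n' "$depends" | tr ',' '\n' | sed 's/^ *//'
```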

I think pre-compiled distros like Ubuntu use static libraries instead of dynamic (though I haven't used anything but Gentoo in years so I'm not sure), so that when you install an update, it doesn't remove the old libraries, it just adds new ones alongside, and applications depending on the old versions still have them.
 
Not sure what you mean in the second paragraph. Static libraries are not separate binaries at all, they are built into the executable (i.e. linked in at compile time) as opposed to shared libraries (.so's) that are linked in at run time. I know Gentoo sometimes slots the libraries so that multiple versions of shared libraries are simultaneously available. Gentoo is working on how to handle shared library versioning right now (which will be much better managed through portage 2.2). Currently stuff can break a lot and you have to fix it with revdep-rebuild.

All modern Linux distros use shared libraries. The amount of memory that would be wasted by static linking would be enormous - each running executable, no matter how small, could have glibc built into it.
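An easy way to see this for yourself, assuming a Linux system with ldd installed: ldd lists the shared objects an executable will load at run time, while a statically linked binary reports "not a dynamic executable" instead.

```shell
# List the shared libraries /bin/sh is dynamically linked against.
# On a typical Linux install this prints several .so entries (libc
# among them); a statically linked binary would print
# "not a dynamic executable" instead.
ldd /bin/sh
```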
 
Misuse of terms, then. Call it slots. I recall my SuSE system years ago having far more variations/versions of single libraries installed in /usr/lib at the same time than my Gentoo build-from-source system does. Also, an update in version doesn't necessarily change the ABI. If the ABI remains the same, then dependent packages need not be recompiled.

Or, maybe pre-compiled distros are always so constipated-edge because they can't update something until all dependents can be updated at the same time.

Regardless, dependencies are easily versioned in either RPM or DEB packages.
 
I am very excited about this thread.
My record for compiling things is 2 for 5, with one totally destroyed install that required a format.
Getting better at compiling things would be glorious.
Ubuntuish is perfect for me, as that is what i run :D
 
I'm going to tackle VLC this week and then try KDE.

Question: Does having multiple kernels installed slow down your system or does it slow down boot time? I have about 6/7 custom compiled kernels.
 
I'm going to tackle VLC this week and then try KDE.

Question: Does having multiple kernels installed slow down your system or does it slow down boot time? I have about 6/7 custom compiled kernels.

Why would it? It's only loading one, regardless of how many are on the drive.
 
I think your answer is what he is asking.

The number of kernels you have installed affects the number of options your bootloader has for starting up the system, but I wouldn't think the number of kernels available has any direct impact.

Does the kernel you are currently running normally affect how applications are configured/compiled from source? If your running kernel supports everything and the kitchen sink, are applications going to configure and compile with all the bells and whistles that kernel supports? So if you boot into a trimmed-down kernel later, many of the applications you have compiled and installed may be a bit bloated with options your trimmed-down kernel does not support? That in a way could affect boot times.

Either way, the number of kernels you have compiled/installed doesn't slow down boot times directly. But I'm unfamiliar with compiling and installing apps while running one kernel then running the same apps under a considerably different kernel... I know there are compile time options, but when you run configure/make, I'm not sure what all the system is doing - it looks like it might be detecting what options your system currently supports and customizing the build appropriately.
 
The only things I can think of that might be affected by that are filesystem drivers (FUSE/CUSE, CIFS, NFS), video drivers if you turn KMS on or off, and possibly anything that depends on kernel crypto routines. Most everything else is userspace, and not dependent on small kernel variations.
 
Cool, thanks. You nailed it for me too. I've run into some issues I had to work out with kernel crypto routines when compiling TrueCrypt and not having some options available I was looking for... So that's sort of what I had in the back of my mind with the last post, in addition to not knowing what else may be similar in that way.
 
I had an Ubuntu box that survived six kernel updates; the grub screen was rather full, but it didn't slow down.
 
Binary distros often cannot update packages because the new version of program A requires a new version of program B, but the ABI is incompatible with programs C-Z, so you would need to update everything. One reason I hate binary distros.

Adding lots of kernels has no effect on speed. Go nuts.

As for compiling versus a kernel tree, it usually does not matter. It can in a few cases. The first is that any program requires a minimum kernel version with certain features built in. Other than those features and the minimum version, you are OK. Secondly, anything that directly interacts with the kernel (e.g. modules, like video drivers, or other hardware and software drivers) will need to be recompiled. Even small changes in the kernel can require recompiling all the kernel modules. Very different kernel versions might cause problems just because the makefile may fork into two paths for, say, 2.4 and 2.6.
 
As a follow up to IMOG's questions:

I was under the impression that compiling other stuff did not depend on the current kernel you were running, but only on the options you pass to the compiler and anything else in the makefile. Is this true?

I don't see any difference in how my apps run under different kernels. It seems that the kernel alone does not make a huge difference on a slow/old system. Other inherent bottlenecks seem to dominate. If I ever chance upon something that makes a difference, I'll post it up.

I have removed a lot of "fat" from the stock config. What options can I modularize or turn off to get the leanest possible kernel that gives me a functional system with a gui?

I went from 2.6.32 to 2.6.37-rc1 without problems. I don't want to mess with the video drivers just yet (don't want a brick from 1960...) but will give it a go when I'm feeling froggy. It seems that video drivers have been the bane of all my failed installs (Gentoo, Sabayon, PCLinux, Yoper, etc.).

Currently running Phoronix tests; I will compile the latest RC when I head out (takes about 5 to 6 hours). :)
 
Re: video driver issues... do you have ati?

Some programs require kernel headers to compile, some do not, it just depends on the way the makefile is written and what kind of program it is. Many programs require a kernel tree even when there is no logical reason why they should need it (to me at least).
 