Linux Kernel Compilation speed up command - linux-kernel

I am using Linux 3.18.25 on an i5 (second gen) machine (dual-booting Windows and Linux). I am making some changes in kernel modules to get an idea of the kernel code. The problem is, every time I compile my code using the make command it takes approximately 1 hour and 30 minutes, and even if I use make -j 4 it takes almost the same time. What should I do to compile the kernel code more quickly? Is there any other way to compile the kernel other than using make or make -j 4?

Well, I am not an expert, but from my experience:
Set the -j parameter to the number of processors you have; if you have 8, then use -j 8. You can check the count with 'cat /proc/cpuinfo' (see the sketch after this answer).
If it's a virtual machine, make sure hyper-threading is enabled and the virtual machine is using more than one physical CPU core.
Don't use a cross-toolchain; try to compile on a machine of the same architecture as the target (i.e. if the target is amd64, then compile on an amd64 machine).
EDIT:
(Update from Andy's comment) Check out ccache and how it is used in kernel compilation: http://linuxdeveloper.blogspot.de/2012/05/using-ccache-to-speed-up-kernel.html
Additional note: also make sure you squeeze your CPU enough, e.g. by disabling frequency scaling: https://askubuntu.com/questions/523640/how-i-can-disable-cpu-frequency-scaling-and-set-the-system-to-performance
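As a minimal sketch of those two points combined (assuming ccache is installed and you are working in an already configured kernel tree):
# Number of CPU threads available (same information as /proc/cpuinfo)
$ nproc
# One make job per thread, with compilations routed through ccache
$ make -j"$(nproc)" CC="ccache gcc"
# Afterwards, check how much the cache helped
$ ccache -s
The first build only fills the cache, so the big win from ccache shows up on subsequent rebuilds.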

It all depends on the machine you are using; in order for -j 4 to help you need at least 4 cores, otherwise the jobs will just wait for each other (which looks exactly like what you describe). Try to compile on a multicore machine instead (I know this is not very helpful, but from my experience compiling kernels there is not much else you can do).
EDIT:
As it turns out, I have lived a very protected life so far. Kernel compilation usually takes between 1 and 2 hours - exactly what you see.
BUT:
there are still things you can do, and they are all listed here.
Good luck.

Related

Speed up embedded linux compilation process

I have an embedded Linux (OpenWrt) project for custom hardware. Any change in the kernel or an application requires recompiling the full image or the application, and recompiling is painfully slow.
To reduce this pain I bought an AMD Threadripper 3970X based workstation with 128 GB RAM and a 1 TB SSD. Benchmarks for this CPU show a Linux kernel compilation time of about 120 seconds.
But I am getting longer compilation times.
Full image compilation first time reduced from:
to:
Repeated image compilation reduced from:
to:
Package recompilation ($ time make package/tensorflow/compile) reduced from:
to:
I.e. compile time was reduced roughly 2-7x.
During the first image compilation all the necessary source code has to be downloaded from the network. I have a fast Ethernet (100 Mb/s) connection, so no time is wasted on that.
I use RAMDISK:
$ sudo mkdir /mnt/ramdisk
$ sudo mount -t tmpfs -o rw,size=64G tmpfs /mnt/ramdisk
to store all sources, object files and temporary files, so I believe there are no I/O losses.
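As a quick sanity check that the build really is running from RAM (findmnt and df are standard util-linux/coreutils tools, nothing OpenWrt-specific):
# Confirm the mount is tmpfs and has the expected size and usage
$ findmnt /mnt/ramdisk
$ df -h /mnt/ramdisk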
make -j64 is used to compile it. During compilation I only rarely see all 64 cores fully loaded; most of the time only a fraction of them are busy, so I find it hard to believe that faster compilation cannot be achieved. Could someone give me hints/advice on how to speed up the GCC C/C++ cross-compilation process? Some searching points me to distcc and Parallel GCC, but I don't have experience with them, so I'm not sure this is what I need, and the OpenWrt manuals say almost nothing about how to speed up the build process.
In Linux there is the concept of an incremental build: the first build takes time, but after that you only need to rebuild the parts that were changed or added. There is no need to rebuild everything, so subsequent builds are faster (see the sketch below).
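For instance, a minimal sketch of an incremental rebuild in a plain kernel tree (drivers/mydriver is a hypothetical path standing in for whatever you actually changed):
# First build: slow, everything gets compiled
$ make -j"$(nproc)"
# After editing only files under drivers/mydriver, rebuild just that directory
$ make -j"$(nproc)" drivers/mydriver/
Relinking the final image still needs a normal top-level make afterwards, but that run reuses all the unchanged objects.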
All the cores of the CPU will not be loaded all the time; it depends on how many tasks are currently runnable. Suppose your system has 8 cores but only 6 tasks are running: in that case not all the cores will be fully loaded.
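If you want to watch how many cores are actually busy while the build runs, one simple option is mpstat from the sysstat package (assuming it is installed):
# Per-core utilization, refreshed every 5 seconds
$ mpstat -P ALL 5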

GDB Debugging a Raspberry Pi via QEMU

I have multiple questions regarding debugging a Raspberry Pi 3 from a Linux x64 host using gdb-multiarch, as well as writing bare-metal programs in general. We are currently facing a problem where our code appears not to be loaded into memory. When we begin debugging in GDB we start at address 0; three instructions down we jump to 0x10000. If I modify my linker script to place the code at either address I get the same result: we jump to 0x10000 and our code isn't loaded there. Instead we get this:
We also noticed that GDB is using 32-bit register names here, even though we're supposed to be debugging 64-bit code.
Again a recap of what we're using:
QEMU with versatile-pb machine.
An aarch64 GCC cross-compiler.
GDB-multiarch.
We've tried on two different hosts: an Ubuntu 16.04 x64 host running in VirtualBox, and Mint x64 running natively.
We also tried the arm-none-eabi toolchain, but ran into problems because our code was not being compiled as 64-bit.
Help is much appreciated! Thanks!
You don't give your command line, but "versatile-pb" is a 32-bit only board type, so trying to run 64-bit code on it is going to misbehave in confusing ways. You need to tell QEMU to emulate a 64-bit capable board that matches what your bare-metal code is expecting to run on.
In QEMU 2.12 there will be a "raspi3" QEMU board which may be helpful for you; you'd need to try building the latest 2.12 release candidate tarball at the moment if you wanted to experiment with that (2.12 release isn't due for another couple of weeks). Otherwise you could use the "virt" board if you made sure your bare metal code was built to be able to run on that board.
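For reference, a rough sketch of what that could look like once a raspi3-capable QEMU is available (kernel8.img and kernel.elf are placeholder names for the bare-metal binary and its ELF with debug symbols):
# Start a 64-bit Raspberry Pi 3 machine, halted at reset, with a GDB stub on port 1234
$ qemu-system-aarch64 -M raspi3 -kernel kernel8.img -serial stdio -S -s
# In another terminal, attach gdb-multiarch to the stub
$ gdb-multiarch kernel.elf -ex 'set architecture aarch64' -ex 'target remote localhost:1234'
With a 64-bit machine model selected, GDB should also show the AArch64 register names instead of the 32-bit ones.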

Wine requires a 3G/1G user/kernel memory split on Raspberry Pi 2B

I'm playing with Wine on Raspbian Jessie. I just installed it from source, but executing wine returns:
Warning: memory above 0x80000000 doesn't seem to be accessible.
Wine requires a 3G/1G user/kernel memory split to work properly.
wine: failed to map the shared user data: c0000018
From here, it seems recompiling the kernel with a 3G/1G memory split might help.
Since I've never done a kernel compilation, I would like some more motivation before going that way. Can someone confirm that the kernel-compilation approach works well, or perhaps suggest another approach?
ExaGear ships with a special build of Wine that supports a 2G/2G memory split. You should use only this build of Wine, not the one from apt-get.
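If you do decide to try the kernel-recompilation route instead, a minimal sketch of selecting the 3G/1G split (assuming the official raspberrypi/linux source tree; bcm2709_defconfig as the Pi 2 starting point is an assumption about your board):
# Start from the Pi 2 default configuration
$ make bcm2709_defconfig
# Select the 3G/1G user/kernel split instead of 2G/2G
$ scripts/config --enable CONFIG_VMSPLIT_3G --disable CONFIG_VMSPLIT_2G
$ make olddefconfig
# Build kernel, modules and device trees
$ make -j"$(nproc)" zImage modules dtbs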

Is it possible to generate native x86 code for ring0 in gcc?

I wonder, is there any way to generate with GCC some native x86 code (which can be booted without any OS)?
Yes, the Linux kernel is compiled with GCC and runs in ring 0 on x86.
The question isn't well-formed. Certainly not all of the instructions needed to initialize a modern CPU from scratch can be emitted by gcc alone; you'll need to use some assembly for that. But that's sort of academic, because modern CPUs don't actually document all this stuff and instead expect your hardware manufacturer to ship firmware to do it. After firmware initialization, a modern PC leaves you either in an old-style 16-bit 8086 environment ("legacy" BIOS) or a fairly clean 32- or 64-bit (depending on your specific hardware platform) environment called "EFI Boot Services".
Operations in EFI mode are all done using C function pointers, and you can indeed build for this environment using gcc. See the gummiboot boot loader for an excellent example of working with EFI.
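As a hedged illustration of the non-EFI, freestanding side of this (kernel.c and linker.ld are hypothetical names; a real boot path also needs an assembly entry stub and a boot loader):
# Compile for a freestanding environment: no hosted libc, no standard main()
$ gcc -m32 -ffreestanding -fno-pie -c kernel.c -o kernel.o
# Link without libc or the usual startup files, using a custom linker script
# that places the code where the boot loader expects it
$ gcc -m32 -nostdlib -static -T linker.ld kernel.o -o kernel.elf
Everything here is ordinary gcc; only the environment assumptions change.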

Delphi extra slow compilation time

Well, a strange problem has occurred in my work project. It is written in Delphi. When I try to compile it, it takes about 8 hours to compile roughly 770,000 lines (and that is not the end of it), while my colleague needs only 15-20 seconds.
I've tried everything suggested in Why does Delphi's compilation speed degrade the longer it's open, and what can I do about it?
Shorten the path to project
Defragment disc with MyDefrag
Use Clear Unit Cache (not sure if it worked at all)
I also turned off optimization and I use debug mode. My PC is pretty fast (i5-2310 3.1 GHz, 16 GB RAM, usual SATA HDDs); the bottleneck could be the HDD, but my colleague has an ordinary one too. So it is very mysterious what the reason for such slow compilation is.
Edit: I apologize for lack of information. Here is additional info:
I use debug mode; release mode behaves the same.
We use the Delphi XE version.
I initially copied my colleague's project folder.
I do not use a network drive, and I tried moving the project to another HDD.
Additional info about the system: I use Windows 7 Enterprise N 64-bit, while my colleague uses Windows 7 32-bit. Also, Delphi XE is 32-bit (I don't know if it can be 64-bit). Maybe that is the reason in some way?
Edit 2: I found the solution! The problem was that I had installed Delphi on my 64-bit Windows system. Installing it in a virtual Windows 7 x86 machine made it work: it compiles in seconds. I don't know why there is such a big performance gap.
Are you sure this is not some hardware problem, e.g. your hard disk having a bad sector? Try to put the source code on a different disk and see if the problem goes away. Or maybe the search path points to a network drive that is very slow or not even available?
