Linux kernel memory management? - memory-management

Will the Linux kernel free memory that a kernel module kmalloc'ed but never kfree'd once the module is unloaded, the same way it does for user-space applications when they exit?

The kernel will not do any garbage collection for a module. If the module kmallocs a chunk of memory and doesn't kfree it before the module is unloaded, that chunk will stay allocated and inaccessible until the next reboot.
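As a minimal sketch (module and variable names made up), the pattern a module is expected to follow is: whatever the init path kmallocs, the exit path must kfree, because nothing will reclaim it after rmmod:

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static void *buf;

static int __init leakdemo_init(void)
{
	/* allocated when the module is loaded */
	buf = kmalloc(1024, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	return 0;
}

static void __exit leakdemo_exit(void)
{
	/* without this kfree(), the 1 KiB stays allocated until reboot */
	kfree(buf);
}

module_init(leakdemo_init);
module_exit(leakdemo_exit);

MODULE_LICENSE("GPL");
```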

As others have said, the kernel will not do any garbage collection for a module, but device drivers can use the devm_* resource allocations (the managed resource allocation functions), and the kernel will then perform all the required cleanup once there is no longer any reference to the device.
See the commented source code for devm_kmalloc in the kernel source.
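For illustration, here is a minimal sketch of a platform driver probe using devm_kmalloc(); the driver name mydrv and struct mydrv_data are made-up names. The allocation is tied to the device's lifetime, so it is released automatically when the driver is unbound or the device goes away, with no explicit kfree():

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct mydrv_data {
	int value;
};

static int mydrv_probe(struct platform_device *pdev)
{
	struct mydrv_data *data;

	/* Managed allocation: freed automatically when the device is
	 * unbound from the driver, no kfree() in a remove path needed. */
	data = devm_kmalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	platform_set_drvdata(pdev, data);
	return 0;
}

static struct platform_driver mydrv_driver = {
	.probe = mydrv_probe,
	.driver = {
		.name = "mydrv",
	},
};
module_platform_driver(mydrv_driver);

MODULE_LICENSE("GPL");
```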

Related

Memory Management in OS

I am studying operating systems in my engineering class and have a doubt about the memory management section.
Is the entire process loaded into RAM, or only the part of the program that is needed for execution?

How does U-Boot communicate with the Linux kernel?

I'm reading a book, and it says:
After U-Boot loads Linux kernel, the kernel will claim all the resources of U-Boot
What does this mean? Does it mean that all the data structures allocated in U-Boot will be discarded?
For example, during U-Boot, PCIe and the network device are initialized.
After booting the Linux kernel, will the PCIe and network device data structures be discarded? Will the Linux kernel initialize PCIe and the network device again, or will U-Boot transfer some data to the kernel?
It depends on your CPU architecture how the communication happens, but it is usually via a special place in RAM, flash or the filesystem. No data structures are transferred; they would be meaningless to the kernel, and the memory space will be different between the two. U-Boot generally passes boot parameters such as what type of hardware is present, what memory to use for something, or which mode to use for a specific driver. So yes, the kernel will re-initialize the hardware. The exception may be some of the low-level CPU specifics which the kernel may expect U-Boot or a BIOS to have set up already.
Depending on your architecture, there may be different mechanisms for U-Boot to communicate with the Linux kernel.
Actually, there are some structures defined by U-Boot which are transferred to and used by the kernel via ATAGs. On ARM, the address at which these structures are passed is stored in the r2 register. They convey information such as the available RAM size and location, the kernel command line, ...
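For a rough idea of what those ATAG structures look like, here is a simplified sketch based on arch/arm/include/uapi/asm/setup.h; the names and the list-walking helper are illustrative, not the exact kernel definitions:

```c
#include <linux/types.h>

#define ATAG_CORE    0x54410001   /* first tag in the list */
#define ATAG_MEM     0x54410002   /* describes one bank of RAM */
#define ATAG_CMDLINE 0x54410009   /* kernel command line string */
#define ATAG_NONE    0x00000000   /* terminates the list */

struct tag_header {
	u32 size;   /* size of this tag in 32-bit words, header included */
	u32 tag;    /* one of the ATAG_* values above */
};

struct tag_mem32 {
	u32 size;   /* size of the memory bank in bytes */
	u32 start;  /* physical start address of the bank */
};

/* The boot loader leaves the physical address of the first tag in r2;
 * the kernel walks the list tag by tag until it reaches ATAG_NONE. */
static inline struct tag_header *next_tag(struct tag_header *t)
{
	return (struct tag_header *)((u32 *)t + t->size);
}
```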
Note that on some architectures (like ARM again) there is support for the device tree, which describes the hardware on which the kernel is going to run as well as the kernel command line, memory and other things. This description is usually created at kernel compile time, loaded into memory by U-Boot and, in the case of the ARM architecture, its address is passed through the r2 register.
The interesting thing about this (regarding your question) is that U-Boot can change this device-tree structure before passing it to the kernel, through the device-tree overlay mechanism. So this is a (relatively) new way of U-Boot/kernel communication. Note that the device tree is not supported on some architectures.
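On the kernel side, drivers and platform code then read the (possibly overlay-patched) device tree through the of_* API. A minimal sketch, assuming a hypothetical node called myboard-config with a clock-frequency property:

```c
#include <linux/of.h>
#include <linux/printk.h>

static void read_dt_example(void)
{
	struct device_node *np;
	u32 freq;

	/* look up the (hypothetical) node by name anywhere in the tree */
	np = of_find_node_by_name(NULL, "myboard-config");
	if (!np)
		return;

	/* read a u32 property that U-Boot may have set or patched */
	if (!of_property_read_u32(np, "clock-frequency", &freq))
		pr_info("clock-frequency from device tree: %u\n", freq);

	of_node_put(np);   /* drop the reference taken by the lookup */
}
```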
And in the end, yes, the hardware is re-initialized by the kernel even if it has already been initialized by U-Boot, except for the memory controller and some other very low-level initialization, AFAIK.

In which part of memory do Loadable Kernel Modules reside?

Do LKMs in the Linux OS reside in a dynamic part of memory (heap or .bss), or do they stay as static code in the kernel?
The memory for modules is allocated via vmalloc.
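For context, a rough sketch of why that matters (a hypothetical demo module, not the actual module loader): a vmalloc-style allocation is contiguous only in kernel virtual address space, so loading a large module does not require a large physically contiguous block, unlike kmalloc():

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *kbuf;   /* physically contiguous */
static void *vbuf;   /* only virtually contiguous */

static int __init alloc_demo_init(void)
{
	kbuf = kmalloc(16 * 1024, GFP_KERNEL);   /* needs contiguous physical pages */
	vbuf = vmalloc(256 * 1024);              /* physical pages may be scattered */
	if (!kbuf || !vbuf) {
		kfree(kbuf);
		vfree(vbuf);
		return -ENOMEM;
	}
	return 0;
}

static void __exit alloc_demo_exit(void)
{
	kfree(kbuf);
	vfree(vbuf);
}

module_init(alloc_demo_init);
module_exit(alloc_demo_exit);

MODULE_LICENSE("GPL");
```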

Linux kernel modules

I'm not clear on the difference between drivers that can be "embedded" inside a monolithic kernel and drivers available only as external modules.
What kind of effort is required to "port" a driver (provided as an "external module" only) to a monolithic kernel?
I would like to be able to run VMware Tools while disabling loadable module support and getting rid of the initrd bazaar.
Though the driver more or less remains the same in both cases, there are definitely benefits to using drivers embedded in a monolithic kernel.
I'll try to explain the "effort in porting" part that you asked about.
Depending on the kind of driver you have, you essentially have to figure out how it fits into the current kernel source tree, how it gets compiled (i.e. built into the uImage instead of shipped as a .ko), and how it gets loaded while the kernel boots. Let's illustrate each step a bit:
a.) Locate the folder (in the kernel source tree) where you think it is best suited to keep your driver code.
b.) Make sure your driver code gets compiled, i.e. that it ultimately becomes part of the monolithic kernel image (uImage or whatever you call it). In this context you have to work on the Makefile for your driver, and you might have to introduce some CONFIG flags to compile your driver code. There are tons of Makefiles and driver code lying around in the source tree; roam around and you will get a good reference for how it is done.
c.) Make sure that your driver code is independent of any other loadable kernel module (i.e. modules which are not part of the "monolithic" kernel image), because if your driver code (which is now monolithic and in memory) depends on loadable module code, it may cause a kernel panic or segmentation-fault-style error.
d.) Make sure that your driver is registered with a higher-level subsystem that initializes all of its registered drivers at boot time (for example, an I2C driver, once registered with the I2C driver framework, will be loaded automatically when the I2C subsystem is initialized during system startup). This step might not really be required if you can figure out another way of invoking your driver's __init and __exit functions.
e.) Now your driver's __init (and __exit) sections "should" be called if it is loaded by a device driver framework or directly (i.e. while the kernel is booting up); see the sketch after this list.
f.) In the case of h/w drivers, we have the .probe implementation in the driver, which will be invoked once the kernel finds a corresponding device. In the case of s/w drivers, I guess __init and __exit are all you have.
g.) Once it is loaded, you can use it just as you were using it earlier as a loadable kernel module.
h.) I recommend reading the source code of similar device drivers in the Linux kernel tree and seeing how they operate.
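To make step (e) a bit more concrete, here is a minimal sketch with made-up names of how a driver declares its init/exit entry points. Built as a loadable module, module_init()/module_exit() mark the functions run at insmod/rmmod time; built into the monolithic kernel (CONFIG_...=y), module_init() turns the init function into an initcall that runs during boot, and the exit function is simply discarded because built-in code is never unloaded:

```c
#include <linux/init.h>
#include <linux/module.h>

static int __init mydrv_init(void)
{
	pr_info("mydrv: initializing\n");
	/* typically: register with a subsystem (platform, i2c, ...) here */
	return 0;
}

static void __exit mydrv_exit(void)
{
	pr_info("mydrv: exiting\n");
}

/* Loadable module: called on insmod/rmmod.
 * Built-in: mydrv_init() becomes an initcall run during boot, and
 * mydrv_exit() is dropped, since built-in code is never unloaded. */
module_init(mydrv_init);
module_exit(mydrv_exit);

MODULE_LICENSE("GPL");
```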
Hope this helps.

Does the number of compiled modules affect the size of the Linux kernel in RAM?

When I compile a Linux kernel, the number of drivers and modules I compile definitely affects the size of the produced binary. But does it also affect the size of the kernel when it is loaded into memory?
I mean, when I compile drivers that I don't need for my hardware, will the kernel just ignore them, or are they also loaded into RAM?
TL;DR:
I compile kernel A containing ONLY drivers that I need;
Kernel B containing drivers I need + extra drivers I don't.
Will kernel B eat more memory than kernel A?
Any driver that is built as part of the Linux kernel image is loaded into main memory during boot and will continue to consume main memory irrespective of whether it is used.
Drivers built as separate modules, i.e. .ko files, can be loaded separately as required. They do NOT consume any main memory unless they are loaded.
Loading a kernel module is done using the modprobe and insmod commands, after the Linux kernel is loaded and running.
When loading a Linux kernel module using modprobe, any other modules it depends upon are automatically loaded first.
When kernel modules are loaded, they need to be mapped into a contiguous block of kernel virtual memory. This is achieved with a vmalloc-style mapping, so a module only needs contiguous virtual addresses, not contiguous physical pages.
