Do LKMs in the Linux OS reside in a dynamic part of memory (heap or .bss), or do they stay as static code in the kernel?
The memory for modules is allocated via vmalloc, so loaded module code and data live in the kernel's vmalloc address range rather than in the kernel image's static sections.
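As a concrete illustration, here is a minimal sketch of an out-of-tree module that allocates from the same vmalloc area at load time; the buffer size and names are illustrative, not from the original answer:

    #include <linux/module.h>
    #include <linux/vmalloc.h>

    static void *buf;

    static int __init demo_init(void)
    {
        /* The module's own code/data were placed in the vmalloc
         * address range by the module loader; this grabs an extra
         * 1 MiB of virtually contiguous memory from the same area. */
        buf = vmalloc(1 << 20);
        if (!buf)
            return -ENOMEM;
        pr_info("demo: buffer at %p\n", buf);
        return 0;
    }

    static void __exit demo_exit(void)
    {
        vfree(buf);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");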
I have a memory dump of a Linux system.
I am trying to find all the executable pages in memory belonging to the kernel. How can I do that using Volatility/Rekall?
I'm reading a book and it says:
After U-Boot loads the Linux kernel, the kernel will claim all the resources of U-Boot
What does this mean? Does it mean that all data structures allocated in U-Boot will be discarded?
For example, during U-Boot, PCIe and network devices are initialized.
After the Linux kernel boots, will the PCIe and network device data structures be discarded? Will the Linux kernel initialize PCIe and the network again, or will U-Boot transfer some data to the kernel?
How the communication happens depends on your CPU architecture, but it is usually via a special place in RAM, flash, or the filesystem. No data structures are transferred; they would be meaningless to the kernel, and the memory space will be different between the two. U-Boot generally passes boot parameters such as what type of hardware is present, what memory to use for something, or which mode to use for a specific driver. So yes, the kernel will re-initialize the hardware. The exception may be some of the low-level CPU specifics, which the kernel may expect U-Boot or a BIOS to have set up already.
Depending on your architecture, there may be different mechanisms for U-Boot to communicate with the Linux kernel.
In fact there may be some structures defined by U-Boot that are transferred to and used by the kernel via ATAGs. On ARM, the address at which these structures are passed is stored in the r2 register. They convey information such as available RAM size and location, the kernel command line, and so on.
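For reference, an ATAG is just a tagged record in a chain; a sketch of the layout, following the definitions in arch/arm/include/uapi/asm/setup.h:

    #include <linux/types.h>

    /* Every ATAG starts with a header: its size in 32-bit words
     * and a tag identifier (ATAG_CORE, ATAG_MEM, ATAG_CMDLINE, ...). */
    struct tag_header {
        __u32 size;
        __u32 tag;
    };

    /* ATAG_MEM describes one bank of physical RAM. */
    struct tag_mem32 {
        __u32 size;
        __u32 start;    /* physical start address */
    };

The bootloader builds a list of these in RAM, beginning with ATAG_CORE and terminated by ATAG_NONE, and hands its physical address to the kernel in r2.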
Note that on some architectures (like ARM again) there is support for a device tree, which is intended to describe the hardware the kernel is going to run on, as well as the kernel command line, memory, and other things. This description is usually created at kernel compile time and loaded into memory by U-Boot; on ARM, its address is again passed through the r2 register.
The interesting thing about this (regarding your question) is that U-Boot can change this device-tree structure before passing it to the kernel, through the device-tree overlay mechanism, so this is a (relatively) new channel of U-Boot/kernel communication. Note that the device tree is not supported on some architectures.
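To illustrate how the kernel side consumes that description, a driver can read properties from its device-tree node with the of_* API; a minimal sketch, where the property name is just an example of something U-Boot could have set or overridden:

    #include <linux/of.h>
    #include <linux/printk.h>

    static int example_read_dt(struct device_node *np)
    {
        u32 clock_hz;

        /* Returns non-zero if the property is absent or malformed. */
        if (of_property_read_u32(np, "clock-frequency", &clock_hz))
            return -ENODEV;

        pr_info("clock-frequency from DT: %u Hz\n", clock_hz);
        return 0;
    }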
And in the end, yes, the hardware is reinitialized by the kernel even if it has already been initialized by U-Boot, except for the memory controller and some other very low-level initialization, AFAIK.
I am having issues loading my root fs, and after inspecting the kernel log I see something like:
"INITRD: 0x1f8ca000+0x0028ac63 is not a memory region - disabling initrd"
What does this mean?
Background
I am running Linux on one core of an ARM Cortex-A9 and trying to run a bare-metal application on the other core. I have changed the device tree to reflect this, and I am reserving part of the SDRAM for Linux and part for the bare-metal application. I am using U-Boot. Is this something to do with U-Boot?
As you are NOT dedicating the entire RAM to the Linux kernel on the main core, you will need to ensure that the initrd load address specified in the bootargs is accessible from the main core; the error means the kernel does not recognize that address range as part of the memory it owns, so it disables the initrd.
Next, this info is usually passed to the Linux kernel in the bootargs from U-Boot as
initrd=<initrd-start-addr>,<initrd-size>
Modify it according to your custom memory map.
Finally, in U-Boot, load the initrd at the new address you just specified and boot the Linux kernel.
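For example, if (hypothetically) Linux owns the low 256 MiB of SDRAM and the initrd is loaded at 0x0f000000, the relevant bootargs fragment might look like:

    initrd=0x0f000000,0x0028ac63 mem=256M

Here mem= caps the RAM the kernel will claim, keeping it out of the region reserved for the bare-metal core; the addresses are placeholders for your own memory map.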
When I compile a Linux kernel, the number of drivers and modules I compile definitely affects the size of the produced binary. But does it also affect the size of the kernel once it is loaded into memory?
I mean, when I compile in drivers that I don't need for my hardware, will the kernel just ignore them, or will they also be loaded into RAM?
TL;DR:
I compile kernel A containing ONLY the drivers I need,
and kernel B containing the drivers I need + extra drivers I don't.
Will kernel B eat more memory than kernel A?
Any driver that is built as part of the Linux kernel image is loaded into main memory during boot and will continue to consume main memory irrespective of whether it is used.
Drivers built as separate modules, i.e. .ko files, can be loaded individually as required. They do NOT consume any main memory unless they are loaded.
Loading a kernel module is done with the modprobe and insmod commands, after the Linux kernel is loaded and running.
When loading a Linux kernel module using modprobe, any other modules it depends upon are automatically loaded first.
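That dependency comes from symbol use: if module B calls a function that module A exports, depmod records the link and modprobe loads A before B. A hedged sketch with two hypothetical modules:

    /* mod_a.c -- exports a helper symbol */
    #include <linux/module.h>

    int mod_a_helper(void)
    {
        return 42;
    }
    EXPORT_SYMBOL(mod_a_helper);
    MODULE_LICENSE("GPL");

    /* mod_b.c -- uses the helper, so "modprobe mod_b" pulls in mod_a */
    #include <linux/module.h>

    extern int mod_a_helper(void);

    static int __init mod_b_init(void)
    {
        pr_info("mod_b: helper returned %d\n", mod_a_helper());
        return 0;
    }
    module_init(mod_b_init);
    MODULE_LICENSE("GPL");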
When a kernel module is loaded, it needs to be mapped into a contiguous block of virtual memory. This is achieved with vmalloc, which maps the module's (possibly physically scattered) pages into a contiguous virtual range.
Will the Linux kernel free memory that was kmalloc'ed and never kfree'd in a kernel module after the module is released, just like it does for user-space apps?
The kernel will not do any garbage collection for a module. If the module kmallocs a chunk of memory and doesn't kfree it before the module is unloaded, that chunk will stay allocated and inaccessible until the next reboot.
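A sketch of that situation, with the kfree deliberately missing (module and variable names are illustrative):

    #include <linux/module.h>
    #include <linux/slab.h>

    static char *leak;

    static int __init leak_init(void)
    {
        leak = kmalloc(4096, GFP_KERNEL);
        return leak ? 0 : -ENOMEM;
    }

    static void __exit leak_exit(void)
    {
        /* BUG: no kfree(leak) here. After rmmod, this 4 KiB chunk
         * stays allocated and unreachable until the next reboot. */
    }

    module_init(leak_init);
    module_exit(leak_exit);
    MODULE_LICENSE("GPL");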
As others said, the kernel will not do any garbage collection for a module, but device drivers can use the devm_* resource allocations (the managed resource allocation functions), and the kernel will do all the required cleanup once there is no more reference to the device.
See the commented implementation of devm_kmalloc in the kernel source.
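A minimal sketch of the managed variant in a platform driver's probe; the allocation is tied to the device's lifetime, so there is no explicit kfree (driver name and sizes are illustrative):

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/slab.h>

    static int demo_probe(struct platform_device *pdev)
    {
        /* Freed automatically when the device is unbound or the
         * driver is removed; no kfree needed in a remove() callback. */
        char *state = devm_kmalloc(&pdev->dev, 256, GFP_KERNEL);

        if (!state)
            return -ENOMEM;
        platform_set_drvdata(pdev, state);
        return 0;
    }

    static struct platform_driver demo_driver = {
        .probe = demo_probe,
        .driver = { .name = "demo" },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");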