How to get kernel executable pages in Volatility (linux memory)? - memory-management

I have a memory dump of a Linux system.
I am trying to find all the executable pages in memory belonging to the kernel. How can I do that using Volatility or Rekall?
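As far as I know, neither Volatility 2 nor Rekall ships a plugin that lists kernel executable pages directly, but the underlying task is a page-table walk: start from the kernel's page-table base (DTB/CR3), visit the upper (kernel) half of the x86-64 address space, and keep every present mapping whose NX bit is clear and whose user/supervisor bit marks it supervisor-only. The sketch below is a minimal standalone version of that walk; it assumes a raw x86-64 dump where physical addresses equal file offsets (not true for LiME or hibernation files), and the file name and DTB value are placeholders you would replace with values from your own profile/scan. The same logic could also be scripted inside linux_volshell.

    import struct

    PAGE_PRESENT = 1 << 0
    PAGE_USER    = 1 << 2    # clear => supervisor (kernel-only) mapping
    PAGE_PSE     = 1 << 7    # large page: 1 GiB at PDPT level, 2 MiB at PD level
    PAGE_NX      = 1 << 63   # no-execute; clear => executable
    PHYS_MASK    = 0x000FFFFFFFFFF000   # bits 12-51: physical frame address

    def read_table(dump, phys):
        """Read one 4 KiB paging structure (512 x 64-bit entries).
        Assumes physical address == file offset (raw dump only)."""
        dump.seek(phys)
        data = dump.read(4096)
        if len(data) < 4096:             # address beyond the end of the dump
            return (0,) * 512
        return struct.unpack("<512Q", data)

    def is_kernel_exec(entry):
        """Supervisor-only (U/S clear) and executable (NX clear)."""
        return not (entry & PAGE_NX) and not (entry & PAGE_USER)

    def sign_extend(va):
        """Kernel-half virtual addresses have bits 48-63 set (canonical form)."""
        return va | 0xFFFF000000000000 if va & (1 << 47) else va

    def kernel_exec_pages(dump, dtb):
        """Yield (virtual_address, size) for executable supervisor pages,
        walking only the kernel half of the address space (PML4 slots 256-511)."""
        pml4 = read_table(dump, dtb & PHYS_MASK)
        for i4 in range(256, 512):
            e4 = pml4[i4]
            if not e4 & PAGE_PRESENT:
                continue
            for i3, e3 in enumerate(read_table(dump, e4 & PHYS_MASK)):
                if not e3 & PAGE_PRESENT:
                    continue
                va3 = sign_extend((i4 << 39) | (i3 << 30))
                if e3 & PAGE_PSE:                       # 1 GiB page
                    if is_kernel_exec(e3):
                        yield va3, 1 << 30
                    continue
                for i2, e2 in enumerate(read_table(dump, e3 & PHYS_MASK)):
                    if not e2 & PAGE_PRESENT:
                        continue
                    va2 = va3 | (i2 << 21)
                    if e2 & PAGE_PSE:                   # 2 MiB page
                        if is_kernel_exec(e2):
                            yield va2, 1 << 21
                        continue
                    for i1, e1 in enumerate(read_table(dump, e2 & PHYS_MASK)):
                        if e1 & PAGE_PRESENT and is_kernel_exec(e1):
                            yield va2 | (i1 << 12), 1 << 12

    # Hypothetical usage: file name and DTB value are placeholders.
    with open("memdump.raw", "rb") as f:
        for va, size in kernel_exec_pages(f, dtb=0x1aa000):
            print(f"{va:#018x}  {size:#x}")

The 1 GiB and 2 MiB large-page cases are handled because the kernel's text and direct mapping are usually mapped with large pages; a system using 5-level paging would need one extra level in the walk.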

Related

Which part of memory are Loadable Kernel Modules residing in?

Are LKMs in a Linux OS residing in a dynamic part of memory (heap or .bss), or do they stay as static code in the kernel?
The memory for modules is allocated via vmalloc, so they live in the kernel's vmalloc area rather than inside the static kernel image.
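On a live system (as opposed to a dump) you can see this directly: /proc/modules reports each module's load address, and on x86-64 these typically fall in a dedicated module mapping area carved out of the vmalloc machinery, well away from the static kernel image. A small sketch (run it as root, otherwise the kernel reports the addresses as 0x0):

    # Print each loaded module's size and base address from /proc/modules.
    # Field layout: name size refcount deps state address [taint flags]
    with open("/proc/modules") as f:
        for line in f:
            fields = line.split()
            name, size, addr = fields[0], int(fields[1]), fields[5]
            print(f"{name:<24} {size:>8} bytes at {addr}")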

How can a 4GB process run on only 2 GB RAM?

Given a 32-bit or 64-bit processor, can a 4 GB process run on 2 GB of RAM? Will it use virtual memory, or will it not run at all?
This is HIGHLY platform dependent. On many 32-bit OSes, no single process can ever use more than 2 GB of memory, regardless of the physical memory installed or virtual memory allocated.
For example, my work computers use 32-bit Linux with PAE (Physical Address Extension) to allow the machine to have 16 GB of RAM installed. The per-process address-space limit still applies, however (roughly 3 GB on 32-bit Linux with the default split). Having the extra RAM simply allows me to have more individual processes running. 32-bit Windows behaves the same way, with a default limit of 2 GB per process.
64-bit OSes are more of a mixed bag. 64-bit Linux will let an individual process map memory well in excess of 32 GB (the exact ceiling varies from kernel to kernel); in practice you are limited by the amount of RAM plus swap you have. 64-bit Windows is a complete crap shoot: certain versions only allow 2 GB per process, but most allow far more than 32 GB, limited only by the amount of page file the user has allocated.
Microsoft provides a useful table breaking down the memory limits across Windows versions and editions. Unfortunately I could not find an equivalent table for Linux with a cursory search, since the ecosystem is so fragmented.
Short answer: Depends on the system.
Most 32-bit systems have a limit of 2–3 GB per process. If your system allows more than 2 GB per process, then we can move on to the next part of your question.
Most modern systems use virtual memory. Yet there are some constrained (and various old) systems that would just run out of space and make you cry. I believe uClinux supports both MMU and MMU-less architectures. Most 32-bit processors have an MMU (a few don't, see ARM Cortex-M0), and a handful of 16-bit or 8-bit parts have one as well (see the Atmel ATtiny13A-MMU and the Atari MMU).
Any process that needs more memory than is physically available requires some form of swap space (e.g., a swap partition or swap file).
Virtual memory is divided into pages. At any given moment, a page resides either in RAM or in swap. Any attempt to access a page that is not currently in RAM triggers an exception called a page fault, which the kernel handles by bringing the page into RAM (and possibly evicting another).
A 64-bit process needing 4GB on a 64-bit OS can generally run in 2GB of physical RAM, by using virtual memory, assuming disk swap space is available, but performance will be severely impacted if all of that memory is frequently accessed.
A 32-bit process cannot address a full 4 GB of memory in practice (the operating system reserves part of the address space), so a process that needs all 4 GB won't run. Depending on the OS, a process that needs more than 2 GB but less than 3–4 GB probably can.
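To see demand paging in action on a 64-bit Linux machine, the sketch below reserves a large anonymous mapping and reads VmSize/VmRSS from /proc/self/status: the virtual size jumps immediately, while the resident size grows only for the pages that are actually touched. The sizes chosen are arbitrary.

    import mmap

    def mem_status():
        """Return (VmSize, VmRSS) in kB from /proc/self/status (Linux only)."""
        stats = {}
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith(("VmSize", "VmRSS")):
                    key, value = line.split(":", 1)
                    stats[key] = int(value.split()[0])
        return stats["VmSize"], stats["VmRSS"]

    buf = mmap.mmap(-1, 4 * 1024**3)        # reserve 4 GiB of address space
    print("after mmap:  VmSize=%d kB  VmRSS=%d kB" % mem_status())

    # Touch one byte per 4 KiB page in the first 64 MiB: only these pages
    # get backed by physical RAM (or pushed to swap under memory pressure).
    for off in range(0, 64 * 1024**2, 4096):
        buf[off] = 1
    print("after touch: VmSize=%d kB  VmRSS=%d kB" % mem_status())
    buf.close()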

Why can a Windows driver be loaded in virtual memory more than once?

I want to know more about why Windows kernel drivers may be loaded into virtual memory more than once.
I ask because I can see the same driver at multiple offsets of a memory dump using the Volatility tool.

Not able to mount rootfs on zedboard

I am having issues loading my root fs, and after inspecting the kernel log it says something like
"INITRD: 0x1f8ca000+0x0028ac63 is not a memory region - disabling initrd"
What does this mean?
Background
I am running Linux on one core of an ARM Cortex-A9 and trying to run a bare-metal application on the other core. I have changed the device tree to reflect this, and I am reserving part of the SDRAM for Linux and part for the bare-metal application. I am using U-Boot. Is this something to do with U-Boot?
Cheers,
S
As you are NOT dedicating the entire RAM to the Linux kernel on the main core, you need to ensure that the initrd load address specified in the bootargs lies inside the memory region handed to the Linux kernel.
This information is usually passed to the Linux kernel from U-Boot in the bootargs as
initrd=<initrd-start-addr>,<initrd-size>
Modify it according to your custom memory map.
Finally, in U-Boot, load the initrd at the new address you just specified and boot the Linux kernel.
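A quick sanity check for that message: the initrd range the kernel reports (start 0x1f8ca000, size 0x0028ac63) must fall entirely inside the RAM region you hand to Linux. A throwaway sketch of that arithmetic; the Linux RAM values below are placeholders for your actual memory map:

    # Values taken from the kernel log message
    initrd_start = 0x1f8ca000
    initrd_size  = 0x0028ac63
    initrd_end   = initrd_start + initrd_size

    # Placeholder: the RAM region given to Linux in the device tree /
    # bootargs. Replace with your own values.
    linux_ram_start = 0x00000000
    linux_ram_end   = linux_ram_start + 0x1f000000   # e.g. 496 MiB for Linux

    inside = linux_ram_start <= initrd_start and initrd_end <= linux_ram_end
    print(f"initrd [{initrd_start:#x}, {initrd_end:#x}) "
          f"{'fits inside' if inside else 'falls OUTSIDE'} "
          f"Linux RAM [{linux_ram_start:#x}, {linux_ram_end:#x})")

With the placeholder split above, the initrd ends past the region reserved for Linux, which is exactly the situation the kernel is complaining about; either move the initrd load address down or grow the region given to Linux.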

Is number of compiled modules affecting size of linux kernel in RAM?

When I compile a Linux kernel, the number of drivers and modules I compile definitely affects the size of the produced binary. But does it also affect the size of the kernel once it is loaded into memory?
I mean, when I compile drivers that I don't need for my hardware, will the kernel just ignore them, or are they loaded into RAM as well?
TL;DR :
I compile kernel A containing ONLY drivers that I need;
Kernel B containing drivers I need + extra drivers I don't.
Will kernel B eat more memory than kernel A?
Any driver that is built as part of the Linux kernel image is loaded into main memory during boot and will continue to consume main memory irrespective of whether it is used.
Drivers built as separate modules i.e. .ko files can be separately loaded as required. They do NOT consume any main memory unless they are loaded.
Loading a kernel module is done using modprobe and insmod commands, after the Linux kernel is loaded and running.
When loading a Linux kernel module using modprobe, any other modules it depends upon are automatically loaded first.
When a kernel module is loaded, its code and data must be mapped into a contiguous block of kernel virtual memory. This is done with vmalloc, which provides virtually contiguous memory even when the underlying physical pages are scattered.
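On a running system you can check which category a given driver falls into: built into the image (always resident) or a loadable module (resident only while loaded). A rough sketch comparing modules.builtin with /proc/modules; the paths are the usual ones but may differ per distribution, and the driver names at the bottom are just examples:

    import os

    def driver_status(name):
        """Return 'built-in', 'loaded module', or 'not loaded / not built'."""
        release = os.uname().release
        with open(f"/lib/modules/{release}/modules.builtin") as f:
            builtin = {os.path.basename(line).strip().removesuffix(".ko")
                       for line in f}
        with open("/proc/modules") as f:
            loaded = {line.split()[0] for line in f}

        key = name.replace("-", "_")     # /proc/modules uses '_' in names
        if key in {b.replace("-", "_") for b in builtin}:
            return "built-in"
        if key in loaded:
            return "loaded module"
        return "not loaded / not built"

    for drv in ("ext4", "loop", "some-other-driver"):
        print(drv, "->", driver_status(drv))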

Resources