Getting kernel version from linux kernel module at runtime - linux-kernel

How can I obtain, at runtime, information about which version of the kernel is running from inside Linux kernel module code (kernel mode)?

By convention, the Linux kernel module loading mechanism doesn't allow loading modules that were not compiled against the running kernel, so the "running kernel" you are referring to is most likely already known at module compilation time.
For retrieving the version string constant, older kernels require you to include <linux/version.h>, others <linux/utsrelease.h>, and newer ones <generated/utsrelease.h>. If you really want more information at run time, then the utsname() function from <linux/utsname.h> is the most standard run-time interface.
The implementation of the virtual /proc/version procfs node uses utsname()->release.
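For illustration, here is a minimal sketch of a module that logs the running kernel's release string when it is loaded (the module and function names are just placeholders):
#include <linux/module.h>
#include <linux/init.h>
#include <linux/utsname.h>

static int __init verinfo_init(void)
{
    /* Same release string that /proc/version reports, e.g. "5.15.0" */
    pr_info("running on kernel %s\n", utsname()->release);
    return 0;
}

static void __exit verinfo_exit(void)
{
}

module_init(verinfo_init);
module_exit(verinfo_exit);
MODULE_LICENSE("GPL");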
If you want to make the code conditional on the kernel version at compile time, you can use a preprocessor block such as:
#include <linux/version.h>

#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,16)
...
#else
...
#endif
It allows you to compare against major/minor versions.

You can only safely build a module for one kernel version at a time, which means that asking for the version from inside a module at runtime is largely redundant.
You can find this out at build time by looking at the value of UTS_RELEASE, which in recent kernels is defined in <generated/utsrelease.h>, amongst other ways of doing this.
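For example, a small sketch that bakes the build-time release string into the module (assuming a recent kernel where the header is <generated/utsrelease.h>):
#include <linux/module.h>
#include <linux/init.h>
#include <generated/utsrelease.h>   /* <linux/utsrelease.h> or <linux/version.h> on older trees */

static int __init buildinfo_init(void)
{
    /* UTS_RELEASE is a string constant fixed when the kernel tree was configured */
    pr_info("built against kernel %s\n", UTS_RELEASE);
    return 0;
}

static void __exit buildinfo_exit(void)
{
}

module_init(buildinfo_init);
module_exit(buildinfo_exit);
MODULE_LICENSE("GPL");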

Why can't I build a kernel module for any version?
Because the kernel module API is unstable by design as explained in the kernel tree at: Documentation/stable_api_nonsense.txt. The summary reads:
Executive Summary
-----------------
You think you want a stable kernel interface, but you really do not, and
you don't even know it. What you want is a stable running driver, and
you get that only if your driver is in the main kernel tree. You also
get lots of other good benefits if your driver is in the main kernel
tree, all of which has made Linux into such a strong, stable, and mature
operating system which is the reason you are using it in the first
place.
See also: How to build a Linux kernel module so that it is compatible with all kernel releases?
How to do it at compile time was asked at: Is there a macro definition to check the Linux kernel version?

Related

setting up a development environment for Linux device driver

I am trying to read the LDD book by Jonathan Corbet, Greg Kroah-Hartman, and Alessandro Rubini and implement the sample modules. So to begin with, I tried setting up a development system. Installed Ubuntu 16.04 Xenial. Now, I just created a directory and wrote the hello_world module with a Makefile. Got it built, ran it, and verified the dmesg logs.
Is that all the development setup? I searched online and found articles where they are asking to download and compile the kernel, use a VM to boot the kernel. What is the reason? Or what am I missing?
Is there any better article which clarifies this?
Thanks
hago
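(For context, the hello_world module described above is only a few lines of C plus a two-line kbuild Makefile with an obj-m += hello.o entry; a minimal sketch of the C side, along the lines of the LDD example:)
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    /* Runs at insmod time; message appears in dmesg */
    pr_info("Hello, world\n");
    return 0;
}

static void __exit hello_exit(void)
{
    /* Runs at rmmod time */
    pr_info("Goodbye, world\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("Dual BSD/GPL");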
You can try one more way:
If you are running Windows natively, install virtual machine software such as VirtualBox. Get your favourite Linux distribution (no bias, just an example: Ubuntu) and install it through VirtualBox.
Get the latest kernel (or one of your choice) from kernel.org.
Choose the platform you want to build this kernel for, e.g. arm64 or x86.
In case you do not have real boards (e.g. a Raspberry Pi for the ARM variant), you can use QEMU for arm64 or x86 to run your compiled kernel.
Another benefit of QEMU for newcomer kernel developers is that if they write a module that crashes, only the QEMU instance crashes, so no harm is done.
I think QEMU is a good option for people who are starting to learn kernel programming, want to try writing their own modules, and do not intend to purchase hardware at this point.
It depends on your target. In your case, you have built a kernel driver for your own computer (which runs the Linux kernel).
But if you want to develop a kernel driver for another target, like a Raspberry Pi, an ARM board, or an x86-64 board, you must learn to compile the kernel, edit the kernel config, boot the kernel image, and so on, because each target has a different kernel version.
You can refer to this training for more detail: https://bootlin.com/training/embedded-linux/

Linux kernel modules

I'm not clear what the difference is between drivers that can be "embedded" inside a monolithic kernel and drivers available only as external modules.
What kind of effort is required to "port" some driver (provided as an "external module" only) to a monolithic kernel?
I would like to be able to run VMware Tools while disabling loadable module support and getting rid of the initrd bazaar.
Though the driver more or less remains the same in both cases, there are definite benefits to using drivers embedded in a monolithic kernel.
I'll try to explain the "effort in porting" part of your question.
Depending on the kind of driver you have, you essentially have to figure out how it will fit into the current kernel source tree, how it gets compiled (so that it becomes part of the kernel image rather than a separate .ko), and how it gets loaded while the kernel boots. Let's illustrate each step a bit:
a.) Locate the folder in the kernel source tree where you think your driver code is best kept.
b.) Make sure your driver code gets compiled, i.e. that it ultimately becomes part of the monolithic kernel image (uImage or whatever you call it). In this context, you have to work on the Makefile for your driver, and you might have to introduce some CONFIG flags to compile your driver code. There are tons of Makefiles and driver code lying around in the source tree; roam around and you will get a good reference for how it is done.
c.) Make sure that your driver code is independent of any other loadable kernel module (i.e. modules that are not part of the "monolithic" kernel image). If your driver code (which is now built in and resident in memory) depends on loadable module code, invoking it may cause a kernel panic or a similar fatal error.
d.) Make sure that your driver is registered with a higher-level subsystem that initializes all registered drivers at boot time (for example, an i2c driver, once registered with the i2c driver framework, is loaded automatically when the i2c subsystem is initialized during system startup). This step might not really be required if you can figure out another way of invoking your driver's __init and __exit functions.
e.) Now your driver's __init and __exit functions "should" be called, whether it is loaded by a device driver framework or directly (i.e. while the kernel is booting up).
f.) For hardware drivers, the driver's .probe implementation is invoked once the kernel finds a corresponding device; for software-only drivers, __init and __exit are all you have (see the sketch after this list).
g.) Once it is loaded, you can use it just as you were using it earlier as a loadable kernel module.
h.) I recommend reading the source code of similar device drivers in the Linux kernel tree to see how they operate.
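A minimal sketch of the pattern from steps (d)-(f), using a hypothetical platform driver named "mydrv" (when the driver is built in via an obj-$(CONFIG_...) entry in the Makefile, the same __init function runs automatically during boot instead of at insmod time):
#include <linux/module.h>
#include <linux/init.h>
#include <linux/platform_device.h>

static int mydrv_probe(struct platform_device *pdev)
{
    /* Called by the platform bus when a matching "mydrv" device is found */
    dev_info(&pdev->dev, "probed\n");
    return 0;
}

static int mydrv_remove(struct platform_device *pdev)
{
    return 0;
}

static struct platform_driver mydrv_driver = {
    .probe  = mydrv_probe,
    .remove = mydrv_remove,
    .driver = {
        .name = "mydrv",
    },
};

static int __init mydrv_init(void)
{
    /* Register with the subsystem; it invokes .probe when a device matches */
    return platform_driver_register(&mydrv_driver);
}

static void __exit mydrv_exit(void)
{
    platform_driver_unregister(&mydrv_driver);
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");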
Hope this helps.

Linux l2TPv3 support

I use CentOS, and it does not have support for L2TPv3, which was introduced in 2.6.35.
CentOS is at 2.6.32. How do I selectively patch just the L2TPv3 changes to my kernel?
Also, these are kernel modules. Would I need to run the modified kernel to be able to insmod these .ko files?
Backporting features is a very non-trivial task, not something that can be done casually. Thus, your best option is to look around for whether somebody has already created the necessary patches for your kernel version.
Also, the Linux kernel has no strict interface definitions where modules are concerned, so it is very desirable that the kernel and its modules be compiled from the same source. Sometimes it is possible to successfully use "mismatched" modules with a given kernel, but rather frequently such an attempt results in various problems.
But if you are feeling adventurous, try modprobe -f. This disables the module version checking, and modprobe will try to squeeze the module in (even at the cost of crashing the system on the spot).

Linux kernel code segment memory page modifications

I'm trying to implement a "semantic-based memory sharing model" for Xen. As part of my project, I'm trying to share kernel code pages across VMs. I've assumed that the code segments of Linux kernels of a similar version are 100% identical. But when I carry out some experiments using virtual machines running Debian Squeeze, I find that 3 memory pages in the kernel code segment differ.
So my question is: does the Linux kernel modify its code pages at runtime?
Yes, it can. For example, spinlocks can be dynamically patched out of the code if the kernel sees at runtime that it is running on a uniprocessor system. I do not know of an exhaustive list of such cases; you will need to inspect the code.
See the LWN article on SMP Alternatives for more information on one system that does runtime patching within the kernel.

Can anyone please tell me how kernel programming is done in Linux, as with the Windows DDK in Windows

I am familiar with the Windows kernel but new to the Linux kernel. I just need to know how it's done in Linux, i.e. the program development.
You can check free-electrons.com; it's a good source of information for kernel development (specialized in embedded Linux, but most of the docs apply to standard development as well).
There is also the classic Linux Device Drivers, which is very complete and detailed.
And last but not least, the Linux kernel documentation.
Linux does not have a stable kernel API. This is by design, so you should generally avoid writing kernel code if you can; it is unlikely to remain source-compatible indefinitely, and will definitely NOT be binary-compatible, even between minor releases.
This is less true for vendor kernels; Red Hat etc. DO maintain source & binary kernel compatibility between major revisions.
More work is gradually being done in the kernel to reduce the amount of kernel code required to carry out various tasks, such as driver development (for example, USB drivers can typically be done in userspace with libusb), filesystem development (FUSE), and network filtering (NFQUEUE). However, there are still some cases where you need to write kernel code; in particular, block devices still need to be in the kernel to be usable as boot devices and for swap.

Resources