I need to make some RPC calls from a module that resides in the kernel. I was wondering if glib could be used for this purpose. Has anyone tried using the glib library inside the kernel? Is that even possible?
No, it's not possible to use userspace libraries in the kernel. Have a look at the net/sunrpc/ directory for the kernel implementation of RPC. It's used by the NFS kernel code.
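For a rough feel for that API, below is a hypothetical, incomplete sketch of creating an in-kernel RPC client the way the NFS client code under fs/nfs/ does. The program number, version, server address and all names are invented for the example, the XDR procedure table needed to actually issue calls is omitted, and structure layouts vary between kernel versions, so treat it only as a pointer to rpc_create(), rpc_call_sync() and rpc_shutdown_client(), not as working code.

    /* Hypothetical sketch only: invented program number, names and address;
     * structure layouts and const-ness differ between kernel versions. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/err.h>
    #include <linux/in.h>
    #include <linux/inet.h>
    #include <net/net_namespace.h>
    #include <linux/sunrpc/clnt.h>

    #define EXAMPLE_PROG 400500 /* made-up RPC program number */
    #define EXAMPLE_VERS 1

    /* A real client also needs struct rpc_procinfo entries with XDR
     * encode/decode helpers (see fs/nfs/nfs2xdr.c); omitted here. */
    static const struct rpc_version example_version1 = {
            .number  = EXAMPLE_VERS,
            .nrprocs = 0,
            .procs   = NULL,
    };

    static const struct rpc_version *example_versions[] = {
            [EXAMPLE_VERS] = &example_version1,
    };

    static struct rpc_stat example_stats;

    static const struct rpc_program example_program = {
            .name    = "example",
            .number  = EXAMPLE_PROG,
            .nrvers  = ARRAY_SIZE(example_versions),
            .version = example_versions,
            .stats   = &example_stats,
    };

    static struct rpc_clnt *example_clnt;

    static int __init example_init(void)
    {
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port   = htons(12345), /* made-up port */
                    .sin_addr   = { .s_addr = in_aton("192.0.2.1") },
            };
            struct rpc_create_args args = {
                    .net        = &init_net,
                    .protocol   = XPRT_TRANSPORT_TCP,
                    .address    = (struct sockaddr *)&addr,
                    .addrsize   = sizeof(addr),
                    .servername = "example-server",
                    .program    = &example_program,
                    .version    = EXAMPLE_VERS,
                    .authflavor = RPC_AUTH_UNIX,
            };

            example_clnt = rpc_create(&args);
            if (IS_ERR(example_clnt))
                    return PTR_ERR(example_clnt);

            /* Calls are then issued with rpc_call_sync(example_clnt, &msg, 0),
             * where msg.rpc_proc points at one of the procinfo entries. */
            return 0;
    }

    static void __exit example_exit(void)
    {
            rpc_shutdown_client(example_clnt);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");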
I want to compile a Linux kernel such that the vmlinux has all the network drivers statically linked. I tried to disable CONFIG_MODULES, but that didn't do the trick.
To compile all your drivers into the kernel statically, you have to change the appropriate configuration options from m (module) to y (built-in), e.g. in make menuconfig; for instance CONFIG_E1000=y instead of CONFIG_E1000=m. Disabling CONFIG_MODULES by itself does not promote options that are already set to m.
Yes, all of them.
I have an (old) embedded system for which I want to compile programs. I don't have the toolchain, so I want to create one.
The embedded system has an "ARM926EJ-S rev 5 (v5l)" CPU and "cat /proc/version" says that it runs "Linux version 2.6.20.7" with GCC 4.0.2.
I have heard that I have to include the kernel headers in the build process. I downloaded the Linux kernel version 2.6.20 from kernel.org, extracted all the files and ran "make headers_install ARCH=arm INSTALL_HDR_PATH=~/headers". Is this the correct way, or do I need the header files of the specific kernel?
These are the steps to get the headers from the kernel source:
Untar the kernel.
make mrproper
make ARCH=${ARCH} headers_check (e.g. make ARCH=arm headers_check)
make ARCH=${ARCH} INSTALL_HDR_PATH=dest headers_install
The purpose of the kernel headers is to let the C library and compiled programs interact with the kernel, i.e. they provide the available system calls and their numbers, constant definitions, data structures, etc.
Therefore, compiling the C library requires kernel headers, and many applications also require them.
do I need the header files of the specific kernel?
The kernel-to-userspace ABI is backward compatible:
1) Binaries generated with a toolchain using kernel headers older than the running kernel will work without problems, but won't be able to use the new system calls, data structures, etc.
2) Binaries generated with a toolchain using kernel headers newer than the running kernel might work if they don't use the recent features; otherwise they will break.
3) Using the latest kernel headers is not necessary unless access to the new kernel features is needed.
So in your case, where the kernel is "Linux version 2.6.20.7", you can use the kernel headers of Linux 2.6.20 or 2.6.21 from kernel.org; that does not create any problem in this case.
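To make this concrete, here is a small, hypothetical userspace example (not from the question): the system call numbers and the LINUX_VERSION_CODE it prints come straight from the kernel headers the toolchain was built against, and guarding a call with #ifdef __NR_... is the usual way a binary built against newer headers keeps working on an older kernel.

    /* Illustration only: a userspace program picking up constants from the
     * kernel headers that the toolchain was built against. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>   /* __NR_* system call numbers */
    #include <linux/version.h> /* LINUX_VERSION_CODE of the installed headers */

    int main(void)
    {
        printf("built against kernel headers %d.%d.%d\n",
               (LINUX_VERSION_CODE >> 16) & 0xff,
               (LINUX_VERSION_CODE >> 8) & 0xff,
               LINUX_VERSION_CODE & 0xff);

    #ifdef __NR_gettid
        /* The system call number is a constant taken from the headers. */
        printf("gettid() returned %ld\n", (long)syscall(__NR_gettid));
    #endif
        return 0;
    }

You would cross-compile this with the toolchain built on top of those headers (for example arm-linux-gcc test.c; the exact compiler name depends on how your toolchain was built).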
That should be fine if you're using the headers to build a libc.
You should probably run make ARCH=arm headers_check beforehand too.
It's not clear to me what the difference is between drivers that can be "embedded" inside a monolithic kernel and drivers available only as external modules.
What kind of effort is required to "port" a driver (provided as an "external module" only) to a monolithic kernel?
I would like to be able to run VMware Tools while disabling loadable module support and getting rid of the initrd bazaar.
Though the driver more or less remains the same in both cases, there are definite benefits to using drivers embedded in a monolithic kernel.
I'll try to explain the "effort in porting" part that you asked about.
Depending on the kind of driver you have, you essentially have to figure out how it will fit into the current kernel source tree, how it gets compiled (so that your .ko ends up inside the uImage), and how it gets loaded while the kernel is booting. Let's illustrate each step a bit:
a.) Locate the folder in the kernel source tree where you think it is best suited to keep your driver code.
b.) Work on making sure your driver code gets compiled, i.e. that it ultimately becomes part of the monolithic kernel image (uImage or whatever you call it). In this context, you have to work on the Makefile for your driver, and you might have to introduce some CONFIG flags to compile your driver code. There are tons of Makefiles and drivers lying in the source tree; roam around and you will get a good picture of how it is done (see the minimal skeleton after this list).
c.) Make sure that your driver code is independent of any other loadable kernel module (i.e. modules which are not part of the "monolithic" kernel image). If your driver code (which is now built in and in memory) depends on loadable module code, it may cause a kernel panic or a segmentation-fault-like error.
d.) Make sure that your driver is registered with a higher-level subsystem which initializes all the registered drivers during boot (for example, an i2c driver, once registered with the i2c driver framework, is loaded automatically when the i2c subsystem is initialized during system startup). This step might not really be required if you can figure out another way of invoking your driver's __init and __exit functions.
e.) Now your driver's __init (and __exit) functions should be called when it is loaded, either by a device driver framework or directly while the kernel is booting up.
f.) In the case of hardware drivers, the driver has a .probe implementation which is invoked once the kernel finds a corresponding device. In the case of software drivers, I guess __init and __exit are all you have.
g.) Once it is loaded, you can use it just like you were using it earlier as a loadable kernel module.
h.) I recommend reading the source code of similar device drivers in the Linux kernel tree and seeing how they operate.
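To give a rough idea of steps a) to e), here is a hypothetical minimal skeleton of a driver built into the kernel (the file name, CONFIG symbol and function names are all invented). The C code is exactly what you would write for a loadable module; the difference is only the Kconfig/Makefile wiring, which turns obj-m into obj-y so that the init function ends up in the boot-time initcall sequence instead of being run by insmod.

    /* drivers/misc/exampledrv.c -- hypothetical location and names.
     *
     * Wired into the build with something like:
     *   drivers/misc/Kconfig  :  config EXAMPLEDRV
     *                                tristate "Example driver"
     *   drivers/misc/Makefile :  obj-$(CONFIG_EXAMPLEDRV) += exampledrv.o
     *
     * With CONFIG_EXAMPLEDRV=y the object is linked into the kernel image
     * (vmlinux/uImage) and exampledrv_init() runs from the initcall sequence
     * during boot; with =m the same file would be built as exampledrv.ko.
     */
    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/kernel.h>

    static int __init exampledrv_init(void)
    {
            /* A real hardware driver would register with its subsystem here
             * (platform_driver_register(), i2c_add_driver(), ...) and do the
             * actual device setup in the corresponding .probe() callback. */
            printk(KERN_INFO "exampledrv: initialized (built into the kernel)\n");
            return 0;
    }

    static void __exit exampledrv_exit(void)
    {
            /* Never reached when built in; only loadable modules are unloaded. */
            printk(KERN_INFO "exampledrv: exiting\n");
    }

    module_init(exampledrv_init);
    module_exit(exampledrv_exit);
    MODULE_LICENSE("GPL");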
Hope this helps.
I use CentOS, and it does not have support for L2TPv3, which was introduced in 2.6.35.
CentOS is at 2.6.32. How do I selectively patch just the L2TPv3 changes into my kernel?
Also, these are kernel modules. Would I need to run the modified kernel to be able to insmod these .ko files?
Backporting features is a very non-trivial task, not something that can be done casually. Your best option is therefore to look around for whether somebody has already created the necessary patches for your kernel version.
Also, the Linux kernel has no strict interface definitions where modules are concerned, so it is very desirable that the kernel and its modules be compiled from the same source. Sometimes it is possible to successfully use "mismatched" modules with a given kernel, but rather frequently an attempt to do so results in various problems.
But if you are adventurous, try using modprobe -f. This disables the module version checking and modprobe will try to squeeze the module in (even at the cost of crashing the system on the spot).
I am writing a kext driver for OS X and would like to use functions from the library libpcap.dylib. Libpcap.dylib lives in /usr/lib on OS X. Can it be used from kernel space? How can I use libpcap.dylib from a kext using Xcode?
I managed to compile it (-lpcap appears as a link option), but:
I got an "unexpected dylib" warning from the linker, so it is clear that something is misplaced.
kextload can't resolve the libpcap dependencies.
kextlibs shows only the libs that I include through OSBundleLibraries, suggesting that my dylib is ignored.
I am aware of the similar question Linking Dylibs in Kexts? but want to know if someone has actually used libpcap in a kext.
As is noted in Linking Dylibs in Kexts?, it's not possible to load a dylib into the kernel via a kernel extension.
You don't mention what it is you're trying to achieve, so it's difficult to know which alternatives would be relevant to you. I'd suggest reading up on Network Kernel Extensions to see if one of the techniques they cover could be used instead of pcap. Alternatively, you could use pcap from a userspace program and communicate with it from your kernel extension.
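If you go the userspace route, the capture side is plain libpcap; a minimal hypothetical sketch is below (the interface name en0 and the filter string are just examples). How you then hand the captured data to your kext, for instance over a kernel control socket the kext registers, is up to you and not shown here.

    /* Minimal userspace capture loop with libpcap (illustration only).
     * Build with: cc capture.c -lpcap */
    #include <stdio.h>
    #include <stdlib.h>
    #include <pcap/pcap.h>

    static void handle_packet(u_char *user, const struct pcap_pkthdr *hdr,
                              const u_char *bytes)
    {
        (void)user;
        (void)bytes;
        /* Here you would forward hdr/bytes to the kext, e.g. over a kernel
         * control socket set up by the kext. */
        printf("captured %u bytes (on the wire: %u)\n", hdr->caplen, hdr->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        struct bpf_program filter;
        pcap_t *handle;

        handle = pcap_open_live("en0", 65535, 1 /* promiscuous */, 1000, errbuf);
        if (handle == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return EXIT_FAILURE;
        }

        /* Optional capture filter, compiled and applied through libpcap. */
        if (pcap_compile(handle, &filter, "tcp port 80", 1,
                         PCAP_NETMASK_UNKNOWN) == 0)
            pcap_setfilter(handle, &filter);

        pcap_loop(handle, -1, handle_packet, NULL); /* run until interrupted */
        pcap_close(handle);
        return EXIT_SUCCESS;
    }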
WinPcap has both user-land and kernel-mode components, because the Windows kernels don't provide the necessary kernel-mode components.
On UN*X systems - for example, on OS X - the kernel-mode components are part of the OS, and libpcap only includes user-mode code.
The equivalent, in *BSD and OS X, of WinPcap's kernel-mode code is BPF, which you won't be able to use from a kext. In addition, BPF has no equivalent of the send-queue stuff to do synchronized transmission of packets - you can send packets, but that just immediately injects the packet into the network stack - so neither using libpcap from your kext, nor using raw BPF from your kext, would help you with your timing needs.