I want to distribute a proprietary Linux kernel module for various distros without pre-building the module for each of them. For example, I have the following files:
wrapp.c
mod.c
fops.c
wrapp.c is a wrapper for all the kernel functions I'm using:
unsigned int wrap_ioread8(void __iomem *addr)
{
	return ioread8(addr);
}
What I want to do is give the customer mod.o and fops.o plus the wrapp.c source.
So I built mod.o and fops.o against kernel 3.2, then tried to use them on kernel 2.6.32. The module builds without a problem, but when I call ioctl() on the module I get "invalid parameter". The ioctl interface has not changed between 3.2 and 2.6.32, so I'm stuck trying to understand what is wrong. If I build the whole module from source, it works without a problem.
I was reading about binary blobs, o_shipped, etc. but so far I can't make it work. What am I missing?
I tried renaming mod.o and fops.o to mod.o_shipped and fops.o_shipped, but as far as I understand, that suffix only affects "make clean"...
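For reference, the Kbuild file I'm shipping looks roughly like this (the module name mymod is just a placeholder); my understanding is that kbuild's "shipped" rule copies each foo.o_shipped to foo.o before linking, so only wrapp.c gets compiled on the customer's kernel:

# Kbuild (sketch; "mymod" is a placeholder name)
obj-m := mymod.o
# wrapp.o is built from wrapp.c on the customer's machine;
# mod.o and fops.o should be restored from the shipped binaries
# (mod.o_shipped, fops.o_shipped) by kbuild's shipped rule.
mymod-objs := wrapp.o mod.o fops.o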
I have a C API DLL we created for a USB product we make, and I thought it would be nice to be able to import it from Python directly, without any wrapper code using ctypes. Our DLL is already statically linked with Boost and the C runtime for other functionality without any problems. But once I add any exported Python functions, such as this simple test:
#include <boost/python.hpp>
char const* greet()
{
return "hello, world";
}
BOOST_PYTHON_MODULE(mymodule)
{
using namespace boost::python;
def("greet", greet);
}
The DLL that gets built now depends on python34.dll at load time. Normally this would be fine, but the reality is that most users of our DLL currently aren't going to use the python functionality. What I would like to happen is to only have this dependency if the module gets loaded by python, or when a python function gets called for the first time. That way only python users would need to have the python DLL, and our DLL would not get an error loading on systems that lack the python DLL when not being loaded by python code.
One solution I can think of would be to create a separate DLL just for Python users that either calls our DLL or includes all of its functionality. I would prefer to have just one binary if possible, though. Once there are two or more files involved, I question the value over just distributing a wrapper written in Python using ctypes.
In my own Windows C/C++ code, I always load whatever DLLs I can on demand, so that I can report errors to the user better and maybe let the program run with reduced functionality if a DLL is missing. I was a little surprised that boost.python pulls in python34.dll right at load time.
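For what it's worth, the on-demand pattern I use elsewhere looks roughly like the sketch below; Py_Initialize is a real CPython export used here only as an example, and the error handling is illustrative. (With MSVC there is also the /DELAYLOAD linker option plus delayimp.lib, which defers binding of an entire DLL until its first call.)

/* Sketch: resolve python34.dll lazily instead of via the import table. */
#include <windows.h>
#include <stdio.h>

static HMODULE hpy; /* handle to python34.dll, loaded on first use */

static int ensure_python(void)
{
    if (!hpy)
        hpy = LoadLibraryA("python34.dll"); /* first use only */
    return hpy != NULL;
}

void start_python(void)
{
    typedef void (*py_init_t)(void);
    py_init_t init;

    if (!ensure_python()) {
        fprintf(stderr, "python34.dll not found; Python features disabled\n");
        return;
    }
    /* Look up the entry point by name instead of linking against it. */
    init = (py_init_t)GetProcAddress(hpy, "Py_Initialize");
    if (init)
        init();
}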
I've been writing a Python extension using the Python/C API to read data out of a .ROOT file and store it in a list of custom objects. The extension itself works just fine; however, when I tried to use it on a different machine, I ran into some problems.
The code depends upon several libraries written for the ROOT data manipulation program. The compiler is linking these libraries dynamically, which means I cannot use my extension on a machine that does not have ROOT installed.
Is there a set of flags I can add to my compilation commands to make these libraries statically linked? Obviously this would make the file much larger, but that isn't much of an issue provided the code runs at the same speed.
I did think about collating all of the ROOT libraries that I need into an 'archive' file. I'm not too familiar with this so I don't know if that's a good idea or not.
Any advice would be great, I've never really dealt with the static/dynamic library issue before.
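For concreteness, I imagine the link line would end up looking something like this (the paths and library names libCore.a and libRIO.a are just placeholders for whatever ROOT archives my extension actually needs, and the archives would have to be built with -fPIC to be linkable into a shared object):

# Sketch: name the static archives explicitly instead of using -l flags,
# which forces those libraries to be linked statically.
g++ -shared -fPIC myext.cpp -o myext.so \
    $(python-config --includes) \
    /opt/root/lib/libCore.a /opt/root/lib/libRIO.a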
Thanks, Sean.
Can anyone please help me find the proper header files needed for the copy_from_user function?
I found a few of the include headers I need, but my compiler keeps saying that they are not found. I am running CentOS on my machine. I have tried yum installing various kernel-headers and devel packages but still no luck.
Is there a special flag I need to add to my gcc command? Everything I find on the Internet only tells me how to use the function, not how to get access to it in the first place.
I assume you're developing a kernel module, because trying to use copy_from_user outside of one wouldn't make sense. Either way, in the kernel, use:
#include <linux/uaccess.h>
Edit: if building a kernel module is what you want, you may want to look at this Hello World Linux Kernel Module. Specifically, the Makefile portion may be of interest to you (search for obj-m).
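For a quick illustration, here is a minimal sketch of a character-device write handler (my_write is a hypothetical name); note that copy_from_user returns the number of bytes it could NOT copy, so a non-zero result means the user pointer was bad:

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static ssize_t my_write(struct file *file, const char __user *ubuf,
                        size_t count, loff_t *ppos)
{
	char kbuf[64];

	if (count > sizeof(kbuf))
		count = sizeof(kbuf);
	/* returns the number of bytes that could NOT be copied */
	if (copy_from_user(kbuf, ubuf, count))
		return -EFAULT;
	/* ... use kbuf ... */
	return count;
}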
I'm new to writing Linux device drivers, and I'm wondering how the kernel Makefile magically knows what to compile. To illustrate what I don't understand, consider the following case:
I did a #include <linux/irq.h> in my driver code, and I'm able to find the header file irq.h in the kernel directory KDIR/include/linux. However, this is only the header file, so I thought the irq.c source code must be out there somewhere. Hence, I looked into KDIR/arch/arm searching for irq.c (since I'm using the ARM architecture). My confusion begins here: I found many files named irq.c inside KDIR/arch/arm. To list just a few, I got:
KDIR/arch/arm/mach-at91/irq.c
KDIR/arch/arm/mach-davinci/irq.c
KDIR/arch/arm/mach-omap1/irq.c
KDIR/arch/arm/mach-orion5x/irq.c
many more...
In my Makefile, I have a line like this:
$(MAKE) -C $(KDIR) M=$(PWD) CROSS_COMPILE=arm-none-linux-gnueabi- ARCH=arm modules
So I understand that the kernel Makefile knows I'm using the ARM architecture, but under KDIR/arch/arm/ there are many irq.c files with the same name. I'm guessing that mach-davinci/irq.c is the one compiled, since DaVinci is the CPU I'm using. But then, how does the kernel Makefile know this is the one to compile? And if I would like to have a look at the irq.c that I'm actually using, which one should I refer to?
I believe there must be a way to know this besides reading the long kernel Makefile. Thanks for any help!
Beyond the ARCH variable, you can also choose the system type (mach) from the configuration menu (there is actually a sub-menu called "System type" when you type make menuconfig, for instance). This selection will include and compile all files under KDIR/arch/$ARCH/mach-$MACH, and in your case this is how only one irq.c gets compiled.
That aside, it is interesting to understand how the kernel chooses which files to compile. The mechanism behind this is called Kconfig, and it is what allows you to precisely configure your kernel using make menuconfig and friends, to compile a module from the outside like you are doing, and to select the right files to compile from simple Makefiles. While it is simple to use, its internals are complex, but this article, although old, explains it well:
http://www.linuxjournal.com/article/6568
To make a very long story short, there's a make config target, which you can trace. It generates .config, which is the main control for resolving dependencies and for deciding what gets compiled, what doesn't, what is built as a module, and what is built into the kernel.
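As a rough illustration of both levels (excerpts paraphrased from memory, so treat the exact lines as approximate): the top-level ARM Makefile maps the chosen CONFIG_ARCH_* option to a mach- directory, and that directory's own Makefile lists the objects to build there:

# arch/arm/Makefile (approximate): pick the machine directory
machine-$(CONFIG_ARCH_DAVINCI) := davinci

# arch/arm/mach-davinci/Makefile (approximate): build its irq.c
obj-y += irq.o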
This guide should give you a basic understanding of building a kernel module (and I assume that's where you want to start with your driver).
I am working on a project which consists of multiple kernel modules. There is some shared functionality between the different modules, but I don't want to include the same code in each module. Does the Linux kernel have an equivalent of a "shared object library", or does the common code go into a separate module?
Typically, you would put the functionality common to the modules in a separate module itself. A good example of this is the drivers/scsi/libsas module used by other SAS (Serial Attached SCSI) device drivers. If you go this route, see the kernel documentation in section 6.3 of Documentation/kbuild/modules.txt for suggestions on referencing symbols from other external modules.
If you're looking for a way to share functions between modules, you should take a look at the EXPORT_SYMBOL macro. A simple example:
file super.c

#include <linux/kernel.h>
#include <linux/module.h>

void call_me(void)
{
	printk(KERN_INFO "Hello from super.\n");
}
EXPORT_SYMBOL(call_me);

file super.h

extern void call_me(void);

file base.c

#include "super.h"

void call_super(void)
{
	call_me();
}
Here super.c and base.c belong to two different modules.
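To build both as separate modules from one directory, a minimal Kbuild Makefile could look like this (a sketch):

# Kbuild Makefile (sketch): builds super.ko and base.ko
obj-m := super.o base.o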
If this is what you're looking for let me know. I can send you a more complex example with makefiles and stuff. Hope it helps.
Note: I've used this in many distros... however, each time I did it, I needed to copy the file Module.symvers into each of the other module directories.
Suppose you have a module A and a module B which uses functions from A. When A is compiled, a file named Module.symvers is created. I needed to copy that file into B's folder before compiling B. Just don't run make clean in B's folder after copying Module.symvers, or it will be deleted.
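As an alternative to copying the file around, section 6.3 of Documentation/kbuild/modules.txt (mentioned above) describes pointing the build at the other module's symbol table via KBUILD_EXTRA_SYMBOLS; in B's Makefile that would look something like this (the path is a placeholder):

# In module B's Makefile (sketch): let modpost find A's exported symbols
KBUILD_EXTRA_SYMBOLS := /path/to/moduleA/Module.symvers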