We've been compiling kernel modules for an embedded PowerPC system for a few years now, and generally things are OK, with some rare unexplained stability problems. Recently a colleague pointed out that kernel modules should be compiled with the same compiler as the kernel. After doing a bit of digging I find that the kernel (MontaVista Linux 2.4.20) was compiled with gcc 3.4.1 and we've been using (Denx ELDK) gcc 4.0.0. I've recently built gcc 4.7.1 for our userspace code, but loading kernel modules built with this version causes the system to crash. I then built gcc 3.4.1 from source, and some builds work and some don't; I think I may have an issue with the make scripts, but that's another story.
So my question: is my colleague correct? And if so, can anyone explain what is different in the resulting .o file that causes the incompatibility?
Wow, that kernel has been around a long time, since the early days of my former MontaVista employment! I'm not sure there is a hard-and-fast answer here, but I know if it were me, I would be concerned about compiler differences. The Linux kernel has always been sensitive to compiler versions, in part because of its sheer size and complexity. The kernel uses lots of GNU extensions, and actually makes a pretty good stress test for a new compiler build.
You can discover which compiler was used to build the kernel simply by looking at the output of /proc/version: $ cat /proc/version. (I think that exists as far back as the 2.4.20 kernel days, but I could be wrong on that.) It certainly works for modern kernels and has been in the kernel for a long time.
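For comparison, you can also peek at which compiler a module object was built with; the .comment section of an ELF object normally records the GCC version. A quick sketch (the file name hello.o and the version strings shown are just placeholders):

cat /proc/version
# Linux version 2.4.20 (...) (gcc version 3.4.1 ...) ...

# inspect the compiler version recorded in a module object
strings hello.o | grep 'GCC:'
# GCC: (GNU) 4.0.0

If the two versions disagree, you at least know you are in the territory your colleague is warning about.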
My first suggestion would be to upgrade the kernel to something more modern, but I suspect that's not really an option, or you wouldn't be asking the question! ;)
I suspect that even a compiler expert (not me) would have a hard time answering the question "what's different". But do this simple test. Compile your module with 3.4.1, and then 4.7. The resulting objects (.ko files) will certainly be different.
The reality is, bugs exist in all software, and may lurk for ages until something comes along to stimulate the bug. See my blog post here: http://blogs.mentor.com/chrishallinan/blog/2012/05/18/fun-with-toolchain-versions for a perfect example of this.
Now I'm not saying that's your issue, but I think I'd feel a lot better if my modules and kernel were both compiled with the same compiler version.
Good luck.
It matters.
Most often you would encounter an error like the following:
rmmod: ERROR: could not remove 'hello': Device or resource busy
I'm trying to cross compile a project from x86_64 Linux to Win64 in Lazarus. On build, I get: Fatal: Cannot find system used by fcllaz of package FCL.
I've seen this question asked in several places, and I guess I don't understand the answers. I do have fcllaz.pas. I've seen "Check your -Fu" answers, but there isn't enough detail for me to determine what I'm looking for or what I need to do. I've seen those statements in fpc.cfg; I'm just not sure what to do with them.
I'm quite new to Lazarus. In the form of a question: how do I point Lazarus/fpc to fcllaz and get this thing compiled?
The error is that it can't find the unit System; fcllaz is just what happens to be compiling when System is first missed.
Not finding System means the compiler can't find the RTL (and the rest of the precompiled units) for the selected target (win64). These probably don't come with your installation, so you have to build and install them yourself.
The -Fu entries are lines in fpc.cfg that should point to the relevant units.
Though a bit outdated, the buildfaq has a lot of background info on how the system builds and finds its units.
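As a rough sketch of what that usually involves (the exact make targets and install prefix are from memory of the buildfaq, so treat them as a starting point rather than gospel): from a checkout of the FPC sources matching your installed compiler version, cross-build and install the win64 RTL and packages, for example:

# in the root of the FPC source tree matching your compiler version
make clean all OS_TARGET=win64 CPU_TARGET=x86_64
sudo make crossinstall OS_TARGET=win64 CPU_TARGET=x86_64 INSTALL_PREFIX=/usr

After that, the stock -Fu lines in fpc.cfg (the ones using $fpctarget, e.g. -Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/*) should resolve to the newly installed x86_64-win64 units, and the System unit should be found.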
Still chasing that white whale of running Minecraft on a BBB, I eventually came to the conclusion that the major issue was twofold. First, Minecraft has a dependency on the Lightweight Java Game Library, or LWJGL, and it does not have an ARM version to reference when it's downloading the run environment. Second, Minecraft's launcher doesn't allow you to reference specific jars in the boot-up process, meaning that any version of LWJGL and its accessories that could be ported to ARM would also have to pass the SHA checks. Granted, those could be fudged, but I'm at a bit of a loss on how to proceed since I seem to be in uncharted territory. Anyone have any pieces of advice or suggestions on where to go from here?
I have found some information here for LWJGL for ARM:
http://www.raspberrypi.org/forums/viewtopic.php?f=34&t=19532
This brought me to 2 places:
More info on running ARM LWJGL: http://www.trimslice.com/forum/viewtopic.php?f=48&t=393
And, what appears to be precompiled LWJGL for ARM: http://openjdk.gudinna.com/lwjgl-es/
I have yet to test anything, but it's a step in the right direction. I am new to a lot of this myself, so we'll see where it goes.
I am also in the same boat, trying to do the same thing.
All I know for sure so far:
LWJGL is built for x86, so it can't run on an ARM processor; we would need to recompile it for ARM.
There is a checksum check when you replace the LWJGL library, which triggers Minecraft to replace the file with the x86 version.
I had an idea that, in order to get around the checksum issue, someone with the know-how could make a mod that also included the LWJGL build for ARM. In this manner, as I understand it, getting MC to run on the BBB would be as simple as copying over a mod.
Sorry I couldn't help further. I'll keep an eye on this post and I'll let you know what I find out.
I'm new to programming Linux kernel modules, and many getting-started guides on the topic include little information about how to build a kernel module that will run on many versions and CPU platforms of Linux. Most of the guides I've seen simply state things like, "Linux doesn't ensure any ABI/API compatibility between versions." However, other OSes do provide these guarantees for major versions, and the guides mostly target 2.6 (which is a bit old now).
I was wondering if there is any kind of ABI/API compatibility now, or if there are any standard ways to deal with versioning other than isolating the kernel-dependent bits of my code into files with a ton of preprocessor directives. (Also, are there any standard preprocessor symbols I should be using in the second case?)
There isn't a stable ABI for the kernel, and most likely there never will be, because it'd make Linux suck. The reasons for not having one are pretty much all documented in the kernel's stable_api_nonsense.txt documentation.
The best way to deal with this is to get your driver merged upstream where it'll be maintained by other kernel developers.
As to being cross-platform, that pretty much comes free with the Linux kernel as long as you only use the standard, platform-independent functions provided in the API.
Linux, the yin and the yang. tangrs' answer is good; it answers your question. However, there is the Linux compat project. See the backports wiki. Basically, there are libraries that provide shim functionality for newer Linux ABIs, which you can link your code against. The KERNEL_VERSION macro that Eugene notes is inspected in a compat.h, and the appropriate compat-2.6.38.h, etc. is included, where each version provides macros and/or library functions to supply a forward API.
This lets the Linux Wifi group write code for the bleeding edge kernel, while still making it possible to compile on older kernel versions.
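For the preprocessor side of the question, the standard symbols are LINUX_VERSION_CODE and the KERNEL_VERSION() macro from <linux/version.h>. A minimal sketch of the usual pattern (the version number and the name MY_DRIVER_HAS_NEW_API are made up purely for illustration):

/* Pick an implementation at compile time based on the kernel being built against. */
#include <linux/version.h>    /* LINUX_VERSION_CODE, KERNEL_VERSION() */

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 38)
#define MY_DRIVER_HAS_NEW_API 1    /* hypothetical feature flag */
#else
#define MY_DRIVER_HAS_NEW_API 0
#endif

The compat headers mentioned above are essentially a large, maintained collection of exactly this kind of check, plus the backported helpers behind them.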
I guess this answers the question of whether there are any standard ways to deal with versioning.
The compat library is not a panacea, but at least it is there and under development.
Open source: there are many mutations, and they all have a different plan.
I am a complete newbie to the ARM world. I need to be able to write C code, compile it, download it into an ARM emulator, and execute it. I need to use the GCC 4.1.2 compiler for the C code compilation.
Can anybody point me in the correct directions for the following issues?
What tool chain to use?
What emulator to use?
Are there tutorials or guides on setting up the tool chain?
Building a gcc cross compiler yourself is pretty easy; the gcc library and the C library and other things, not so much, and an embedded library and such is a little harder. It depends on how embedded you want to get. I have little use for gcclib or a C library, so rolling my own works great for me.
After many years of doing this (perhaps it is an age thing), I now just go get the CodeSourcery tools; the Lite version works great. YAGARTO, devkitARM, WinARM, or something like that (the site with a zillion examples) all work fine. Emdebian also has a good pre-built toolchain. A number of these places, if not all, have info on how they built their toolchains from GNU sources.
You asked about gcc, but bear in mind that LLVM is a strong competitor, and as far as cross compiling goes, since it always cross compiles, it is a far easier cross compiler to download, build, and get working than gcc. The recent version is now producing code (for ARM) that competes with gcc for performance. gcc is in no way a leader in performance; other compilers I have used run circles around it, but it has been improving with each release (well, the 3.x versions sometimes produce better code than the 4.x versions, but you need 4.x for the newer cores and Thumb-2). Even if you go with gcc, try the stable release of LLVM from time to time.
QEMU is a good emulator; depending on what you are doing, the GBA emulator Virtual GameBoy Advance is good. There are a couple of NDS emulators too. GDB and other places have what appears to be ARM's own ARMulator. I found it hard to extract and use, so I wrote my own, but being lazy I only implemented the Thumb instruction set; I called mine the Thumbulator. It is easy to use, and far easier than QEMU and ARMulator to add peripherals to and to watch and debug your code. YMMV.
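If you just want to see code running quickly, QEMU's user-mode emulation is the lowest-friction route on a Linux host. A rough sketch, assuming Debian/Ubuntu-style package names and a trivial hello.c (adjust the names for your distro or for a bare-metal toolchain):

# install a Linux-targeted ARM cross compiler and QEMU user-mode emulation
sudo apt-get install gcc-arm-linux-gnueabi qemu-user

# build a static binary so no ARM shared libraries are needed, then run it
arm-linux-gnueabi-gcc -static -o hello hello.c
qemu-arm ./hello

For bare-metal work (no OS), you would instead use an arm-none-eabi toolchain and qemu-system-arm with a suitable machine model, which takes a bit more setup.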
Hmmm, I posted a similar answer for someone recently. Google "arm verilog" and at umich you will find a file isc.tgz containing an ARM10 behavioural model (as in you cannot make a chip from it, which is why you can find the Verilog on the net). For someone wanting to learn an instruction set, watching your code execute at the gate level is about as good as it gets. Be careful: like a drug, you can get addicted, and then have a hard time when you go back to silicon where you have relatively zero visibility into your code while it is executing. Somewhere on Stack Overflow I posted the steps involved to get that ARM10 model and another file or two to turn it into an ARM emulator using Icarus Verilog. GTKWave is a good and free tool for examining the wave (VCD) files.
Above all else you will need the ARM ARM (the ARM Architecture Reference Manual). Just google it and find it on ARM's web site. There is pseudo-code for each instruction teaching you what it does. Use the Thumbulator or ARMulator or others if you need to understand more (MAME has an ARM core in it too). I make no guarantees that the Thumbulator is 100% debugged or accurate; I took some common programs and compared their output to silicon, both ARM and non-ARM, to debug the core.
For a toolchain, you can use YAGARTO: http://www.yagarto.de/
For an emulator, you can use Proteus ISIS: http://www.labcenter.com/index.cfm
(There is a demo version.)
As for tutorials, well, google them. =)
Good luck!
I'm lately feeling the need to learn a build tool. I'm looking through Stack Overflow for recommendations, and GNU Make gets barely mentioned. Instead I see Ant, Maven, CMake, SCons, and many others. However, when I look at the little "rogue sources" (as in, not in the repo) that I sometimes have to compile, they all require the make && make install steps.
Is learning Make a worse investment of my time than learning another tool?
If so why is Make still so popular?
Make is the standard build tool for everything C/C++. Many others have stepped up to the plate, but even when they were useful and successful, they never achieved the ubiquity of make.
Make is installed on virtually every Unix-like machine out there. No matter if you're working with AIX, Solaris, Irix, BSD, or Linux, if there's a compiler installed, there's also make.
Some of the "replacements" (like Automake, CMake) even create Makefiles, which are in turn executed by make.
I would definitely recommend becoming familiar with make. If handled by someone who took the time to learn about make, it is a powerful tool, which can be used in a number of ways not even necessarily related to software development.
Even if you end up using a different build tool in the end, you will be able to "recycle" the lessons learned with make, as the underlying concepts are quite similar. And the sheer number of make-built projects means that there will always be the chance that you have to figure out an existing Makefile.
One thing, though. Get it right from the beginning.
I think the reason you don't see (GNU) make mentioned is that it's often the default; if you have a GNU toolchain, you will have make already. Thus, most people that start talking about build tools, talk about something else.
In my experience, make is fine, but it can be kind of tricky to get it to do exactly what you want to. It's maybe slightly arcane, but it's proven and works.
Make is popular because it's used (mainly) for C/C++ sources in Linux/*nix projects, and is far older than any of the other tools you've mentioned, thus it has stood the test of time and is mature. Kinda like tar.
To be honest with you, I only know make. Those other tools above may be better, but so many projects just use a basic Makefile that you're best off knowing at least a little bit of it. Not only for your own projects at work but most of the open-source ones you find on the net.
It really depends how much you will use it.
If you work a lot with C/C++ make projects, then yes, I would recommend learning more about it, as a large make file has a steeper learning curve than the other build tools you mention.
If you don't work with make, or work in other languages such as C#, Java or PHP then you'd be better off learning build tools relevant to those languages.
Like all tools, if you use it at all, you should put some time into becoming reasonably adept at it. Also, some tools (like CMake, for example) generate makefiles, and you may one day need to mess with those generated files.
GNU make has an excellent manual; it's certainly worth spending an hour or two reading it.
Make is the de-facto standard on Linux systems for example. It is a very complex tool, and also a very powerful tool.
It is well worth learning if you are developing in C or C++, particularly if targeting Linux/*nix.
One of the features of make is that you can set up dependencies for when to rebuild a file. E.g. each C or C++ file is built into an .obj file, and in the end, all .obj files are linked into an executable. Or maybe the output is a statically linked library, which is in turn linked into another executable along with other .obj files.

Make can make sure that your compilation time is as short as possible, because you can define that a C file should only be compiled if it, or any header files it depends on, are newer than the .obj file. So any compilation or linking step is only executed if the current source files for the step are newer than the target file.
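To make that concrete, here is a minimal sketch of such a Makefile (the names main.c, util.c, util.h, and app are made up for illustration; note that the command lines under each rule must start with a tab character):

# The link step depends on the object files; each object file depends on
# its source file and the headers it includes, so make only rebuilds
# what is out of date.
CC     = gcc
CFLAGS = -Wall -O2
OBJS   = main.o util.o

app: $(OBJS)
	$(CC) -o $@ $(OBJS)

main.o: main.c util.h
	$(CC) $(CFLAGS) -c main.c

util.o: util.c util.h
	$(CC) $(CFLAGS) -c util.c

clean:
	rm -f app $(OBJS)

Touch util.h and both objects plus the link step are redone; touch only util.c and just util.o and the link step are redone.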
If you are developing in, for example, C#, you don't need this kind of dependency checking, because all .cs files are compiled at once into a single executable.
So the conclusion is that you should use a build tool that is well suited for your choice of programming language.
Even if you end up preferring another build tool (personally I'm fond of VS... I know...), knowing make will probably prove more useful in the long run.
Make has many applications and whilst it is not always ideal for a single task, when dealing with new technologies it is stalwart and flexible.
I guess where you work is probably different, but I know that everywhere I've worked, I would have been a far less valuable employee if I hadn't at least learned how to read Makefiles. Even in all-Windows, Visual Studio environments, it comes up every now and then.
For instance, we just got a job that involves porting a bunch of old CX/UX code to Windows. The old code was built with makefiles. There's no way to understand their old system without knowing how to read those old makefiles.