Cross compiling in Lazarus: cannot find fcllaz

I'm trying to cross compile a project from x86_64 Linux to Win64 in Lazarus. On build, I get: Fatal: Cannot find system used by fcllaz of package FCL.
I've seen this question asked in several places, and I guess I don't understand the answers. I do have fcllaz.pas. I've seen "check your -Fu" answers, but there isn't enough detail for me to work out what I'm looking for or what I need to do. I've seen those -Fu statements in fpc.cfg; I'm just not sure what to do with them.
I'm quite new to Lazarus. In the form of a question: how do I point Lazarus/fpc to fcllaz and get this thing compiled?

The error is that it can't find unit system; fcllaz is just what happens to be compiling when system is first found to be missing.
Not finding system means the compiler can't find the RTL (and the rest of the precompiled units) for the selected target (win64). These probably don't come with your installation, so you have to build and install them yourself.
The -Fu lines in fpc.cfg should point to the relevant units.
Though a bit outdated, the buildfaq has a lot of background info on how the system builds and finds its units.
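For reference, here is a minimal sketch of what building and installing the win64 units can look like, assuming the FPC sources are unpacked in ~/fpc and FPC is installed under /usr (the paths and the sudo step are illustrative, not the only way to do it):

    # Build the RTL and packages for the cross target (run from the FPC source tree).
    cd ~/fpc
    make clean all OS_TARGET=win64 CPU_TARGET=x86_64
    # Install the cross-compiled units next to the native ones.
    sudo make crossinstall OS_TARGET=win64 CPU_TARGET=x86_64 INSTALL_PREFIX=/usr

After that, the stock -Fu line in fpc.cfg, usually something like -Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/*, expands to a directory such as .../units/x86_64-win64/* that now actually exists; once system.ppu can be found there, this particular error should go away.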

Related

How to check if target triplets are compatible?

I have a legacy project on my hands, which runs on old and fairly limited hardware. I thought of rewriting the rather small project in Rust, since the old source is pretty hard to maintain.
My question is: how can I compare two target triplets to judge the compatibility of Rust with the old hardware?
From the Rust docs I know that the compiler supports the target i586-unknown-linux-gnu.
And we currently compile the old C-based source with i586-suse-linux as the target, using gcc.
I suspect that unknown would include any vendor. But about the latter part I am not really sure, even after googling for quite a bit.
Is there any way to know for sure?
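If it helps, rustc itself can be asked what it knows about a triple; a sketch (the grep pattern is just for narrowing the list):

    # List every triple this rustc build knows about.
    rustc --print target-list | grep i586
    # Show the cfg values (target_arch, target_os, target_env, ...) that a triple implies.
    rustc --print cfg --target i586-unknown-linux-gnu

As a general rule the vendor field of a triple (unknown vs. suse) does not change the generated code; the parts that matter for compatibility are the architecture (i586), the OS (linux) and the ABI/libc part (gnu), but check those against your actual hardware and libc version.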

Why must GNU binutils be configured for a specific target? What's going on underneath?

I am messing around with creating my own custom gcc toolchain for an ARM Cortex-A5 CPU, and I am trying to dive as deeply as possible into each step. I have deliberately avoided using crosstool-ng or other tools to assist, in order to get a better understanding of what is going on in the process of creating a toolchain.
One thing stumps me though. During configuration and building of binutils, I need to specify a target (--target). This target can be either the classic host tuple (ex: arm-none-linux-gnueabi) or a specific type, something like i686-elf for example.
Why is this target needed? And what does it specifically do to the generated "as" and "ld" programs built by binutils?
For example, if I build it with arm-none-linux-gnueabi, it looks like the resulting "as" program supports every ARM instruction set under the sun (armv4t, armv5, etc.).
Is it simply for saving space in the resulting executable? Or is something more going on?
I would get it if I configured binutils for a specific instruction set, for example: build me an assembler that understands armv4t instructions.
Looking through the source of binutils, and gas specifically, it looks like the host tuple is selecting some header files located in gas/config/tc*, gas/config/te*. Again, this seems arbitrary, as these are broad categories of systems.
Sorry for the rambling :) I guess my question can be stated as: why is binutils not an all-in-one package?
Why is this target needed?
Because there are (many) different target architectures. ARM assembler / code is different from PowerPC, which is different from x86, etc. In principle it would have been possible to design one tool for all targets, but that was not the approach taken at the time.
The focus was mainly on speed / performance. The executables are small by today's standards, but combining all >40 architectures and all tools like as, ld, nm etc. would be / would have been quite clunky.
Moreover, not only are modern host machines much more powerful, the same applies to the compiled / assembled programs, sometimes zillions of lines of (preprocessed) C++ to compile. This means the overall build time has shifted much more in the direction of compilation than in the old days.
Usually only different core families within one architecture are switchable / selectable via options.
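To make that concrete, here is a sketch of configuring binutils for a single target and then asking the resulting tools what they support (the version number and install prefix are placeholders):

    tar xf binutils-2.40.tar.xz
    mkdir build-binutils && cd build-binutils
    ../binutils-2.40/configure --target=arm-none-linux-gnueabi --prefix=$HOME/x-tools
    make -j$(nproc) && make install
    # The installed tools only know about that one target family:
    $HOME/x-tools/bin/arm-none-linux-gnueabi-objdump -i    # supported object formats and architectures
    $HOME/x-tools/bin/arm-none-linux-gnueabi-as --help     # target-specific options such as -march= / -mcpu=

Within that family you then narrow things down per invocation, e.g. as -march=armv4t, which is the per-options selection mentioned above.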

Does gcc version matter for kernel modules?

We've been compiling kernel modules for an embedded PowerPC system for a few years now, and generally things are OK, with some rare unexplained stability problems. Recently a colleague pointed out that kernel modules should be compiled with the same compiler as the kernel. After doing a bit of digging I find that the kernel (MontaVista Linux 2.4.20) was compiled with gcc 3.4.1 and we've been using (Denx ELDK) gcc 4.0.0. I've recently built gcc 4.7.1 for our userspace code, but loading kernel modules built with this version causes the system to crash. I then built gcc 3.4.1 from source; some builds work and some don't. I think I may have an issue with the make scripts, but that's another story.
So my question: is my colleague correct? And if so, can anyone explain what is different in the resulting .o file that causes the incompatibility?
Wow, that kernel has been around a long time, since the early days of my former MontaVista employment! I'm not sure there is a hard and fast answer here, but I know if it were me, I would be concerned about compiler differences. The Linux kernel has always been sensitive to compiler versions, in part because of its sheer size and complexity. The kernel uses lots of GNU extensions, and actually makes a pretty good stress test for a new compiler build.
You can discover what compiler was used to build the kernel simply by looking at the output of /proc/version (I think that exists as far back as the 2.4.20 kernel days, but I could be wrong on that): $ cat /proc/version. It certainly works for modern kernels and has been in the kernel for a long time.
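A sketch of that check plus the obvious companions (the module name is hypothetical, and vermagic is a 2.6-era mechanism, so on a 2.4.20 module the embedded metadata is more limited):

    cat /proc/version                     # the gcc used to build the running kernel is embedded in this string
    gcc --version                         # the compiler your module build actually picks up
    modinfo mymodule.ko | grep vermagic   # on 2.6+ modules: the kernel version/options the module was built against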
My first suggestion would be to upgrade the kernel to something more modern, but I suspect that's not really an option, or you wouldn't be asking the question! ;)
I suspect that even a compiler expert (not me) would have a hard time answering the question "what's different". But do this simple test. Compile your module with 3.4.1, and then 4.7. The resulting objects (.ko files) will certainly be different.
The reality is, bugs exist in all software, and may lurk for ages until something comes along to stimulate the bug. See my blog post here: http://blogs.mentor.com/chrishallinan/blog/2012/05/18/fun-with-toolchain-versions for a perfect example of this.
Now I'm not saying that's your issue, but I think I'd feel a lot better if my modules and kernel were both compiled with the same compiler version.
Good luck.
It matters.
Mostly you would encounter errors like the following:
rmmod: ERROR: could not remove 'hello': Device or resource busy

Windows-based development for ARM processors

I am a complete newbie to the ARM world. I need to be able to write C code, compile it, download it into an ARM emulator, and execute it. I need to use the GCC 4.1.2 compiler for the C code compilation.
Can anybody point me in the correct directions for the following issues?
What tool chain to use?
What emulator to use?
Are there tutorials or guides on setting up the tool chain?
Building a gcc cross compiler yourself is pretty easy. The gcc library and the C library and other things, not so much; an embedded library and such, a little harder. It depends on how embedded you want to get. I have little use for gcclib or a C library, so roll-your-own works great for me.
After many years of doing this (perhaps it is an age thing), I now just go get the CodeSourcery tools. The Lite version works great. YAGARTO, devkitARM, WinARM or something like that (the site with a zillion examples) all work fine. Emdebian also has a good pre-built toolchain. A number of these places, if not all, have info on how they built their toolchains from GNU sources.
You asked about gcc, but bear in mind that LLVM is a strong competitor, and as far as cross compiling goes, since it always cross compiles, it is a far easier cross compiler to download, build and get working than gcc. The recent version is now producing code (for ARM) that competes with gcc for performance. gcc is in no way a leader in performance; other compilers I have used run circles around it, but it has been improving with each release (well, the 3.x versions sometimes produce better code than the 4.x versions, but you need 4.x for the newer cores and Thumb-2). Even if you go with gcc, try the stable release of LLVM from time to time.
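As a small taste of the "it always cross compiles" point: compiling a freestanding C file to an ARM object needs nothing beyond a stock clang install (the triple here is just an example, and linking a complete program still needs a target C library / sysroot):

    # foo.c contains only freestanding code, so no target headers are required.
    clang --target=armv7a-none-eabi -O2 -c foo.c -o foo.o
    llvm-objdump -d foo.o    # inspect the generated ARM code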
QEMU is a good emulator; depending on what you are doing, the GBA emulator VisualBoyAdvance is good. There are a couple of NDS emulators too. GDB and other places have what appears to be ARM's own armulator. I found it hard to extract and use, so I wrote my own, but being lazy only implemented the Thumb instruction set; I called mine the thumbulator. It is easy to use, and far easier than QEMU and the armulator to add peripherals to and to watch and debug your code. YMMV.
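For a quick taste of QEMU's user-mode emulation on a Linux host, something like the following works; the exact cross toolchain name (arm-linux-gnueabi-gcc) and the qemu-user package name depend on your distro:

    # Cross compile a static Linux/ARM binary, then run it directly under user-mode QEMU.
    arm-linux-gnueabi-gcc -static -o hello hello.c
    qemu-arm ./hello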
Hmmm, I posted a similar answer for someone recently. Google "arm verilog" and at umich you will find a file isc.tgz, in which is an ARM10 behavioural model (behavioural as in you cannot make a chip from it, which is why the Verilog can be found on the net). For someone wanting to learn an instruction set, watching your code execute at the gate level is about as good as it gets. Be careful: like a drug, you can get addicted, then have a hard time when you go back to silicon, where you have relatively zero visibility into your code while it is executing. Somewhere on Stack Overflow I posted the steps involved to get that ARM10 model and another file or two and turn it into an ARM emulator using Icarus Verilog. GTKWave is a good and free tool for examining the wave (VCD) files.
Above all else you will need the ARM ARM (the ARM Architecture Reference Manual). Just google it and find it on ARM's web site. There is pseudocode for each instruction, teaching you what it does. Use the thumbulator or armulator or others if you need to understand more (MAME has an ARM core in it too). I make no guarantees that the thumbulator is 100% debugged or accurate; I took some common programs and compared their output to silicon, both ARM and non-ARM, to debug the core.
For a toolchain you can use YAGARTO: http://www.yagarto.de/
For an emulator you can use Proteus ISIS: http://www.labcenter.com/index.cfm
(There is a demo version.)
And tutorials, well, google them =)
Good luck!

How can I compile object code for the wrong system? (and a cross compiling question)

Reference this question about compiling. I don't understand how my program for Mac can use the right -arch flags, compile with those -arch flags (which are for the system I am on, a ppc64 G5), and still produce the wrong object code.
Also, if I were on Linux and used a cross compiler to produce 10.5 code for the Mac, how would that be any different from what I described above?
Background is that I have tried to compile various Apache modules. They compile with -arch ppc, ppc64, etc. I get no errors and I get my mod_whatever.so. But Apache will always complain that some symbol isn't found. Apparently it has to do with what the compiler produces, even though the file type says it is for ppc, ppc64, i386, x86_64 (universal binary) and seems to match all the other .so mods I have.
I guess I don't understand how it could compile for my system with no problem and then say my system can't use it. Maybe I do not understand what a compiler is actually giving me.
EDIT: All error messages and the complete process can be seen here.
Thank you.
Looking at the other thread and elsewhere, and without a G5 or OS X Server installation, I can only make a few comments and suggestions, but perhaps they will help.
It's generally not a good idea to be modifying the o/s vendor's installed software. Installing a new Apache module is less problematic than, say, overwriting an existing library, but you're still at the mercy of the vendor in that a Software Update could delete your modifications, and beyond that you have to figure out how the vendor's version was built in the first place. A common practice in the OS X world is to avoid this by making a completely separate installation of an open source product, like Apache, using, for instance, MacPorts. That has its cons, too: to achieve a high level of independence, MacPorts will often download and build a lot of dependent packages for things which are already in OS X, but there's no harm in that other than some extra build cycles and disk space.
That said, it should be possible to build and install apache modules to supplement those supplied by Apple. Apple does publish the changes it makes to open source products here; you can drill down in the various versions there to find the apache directory which contains the source, Makefile and applied patches. That might be of help.
Make sure that the mod_*.so files you build are truly 64-bit and don't depend on any non-64-bit libraries. Use otool -L mod_*.so to see the dynamic libraries that each one references, and then use file on those libraries to ensure they all have ppc64 variants.
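A sketch of those checks; the module and library names are only illustrative:

    file mod_whatever.so                    # should list a ppc64 slice alongside ppc/i386/x86_64
    otool -L mod_whatever.so                # the dynamic libraries the module links against
    lipo -info /usr/lib/libexample.dylib    # repeat for each dependency to confirm a ppc64 slice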
Make sure you are using up-to-date developer tools (Xcode 3.1.3 is current).
While the developer tool chain uses many open source components, Apple has enhanced many of them and there are big differences in OS X's ABIs, universal binary support, dynamic libraries, etc. The bottom line is that cross-compilation of OS X-targeted object code on Linux (or any other non-OS X platform) is neither supported nor practical.
