How to check if target triplets are compatible? - gcc

I have a legacy project on my hands, which runs on old and fairly limited hardware. I'm thinking of rewriting the rather small project in Rust, since the old source is pretty hard to maintain.
My question is: how can I compare two target triplets to judge whether Rust is compatible with the old hardware?
From the Rust docs I know that the compiler supports the target i586-unknown-linux-gnu.
And we currently compile the old C-based source with i586-suse-linux as the target, using gcc.
I suspect that unknown covers any vendor, but I'm not really sure about the latter part, even after googling for quite a bit.
Is there any way to know for sure?
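For reference, both toolchains can report what they target, which at least lets you compare the two triplets side by side. A quick sketch, assuming gcc and rustc are on your PATH:

```
gcc -dumpmachine                                   # prints the triplet gcc was configured for, e.g. i586-suse-linux
rustc --print target-list | grep '^i586'           # lists the i586 targets this rustc knows
rustc --print cfg --target=i586-unknown-linux-gnu  # shows the arch/vendor/os/env the target implies
```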

Related

Cross compiling in Lazarus: cannot find fcllaz

I'm trying to cross-compile a project from x86_64 Linux to Win64 in Lazarus. On build, I get: Fatal: Cannot find system used by fcllaz of package FCL.
I've seen this question asked in several places, and I guess I don't understand the answers. I do have fcllaz.pas. I've seen "check your -Fu" answers, but there isn't enough detail for me to determine what I'm looking for or what I need to do. I've seen those statements in fpc.cfg; I'm just not sure what to do with them.
I'm quite new to Lazarus. In the form of a question: how do I point Lazarus/fpc to fcllaz and get this thing compiled?
The error is that it can't find the unit system; fcllaz is just what happens to be compiling when system first goes missing.
Not finding system means the compiler can't find the RTL (and the rest of the precompiled units) for the selected target (win64). These probably don't come with your installation, so you have to build and install them yourself.
The -Fu entries are the lines in fpc.cfg that should point to the relevant units.
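For illustration, a stock fpc.cfg usually contains unit-path lines along these lines; the exact prefix depends on your installation, and $fpctarget expands to the selected target (e.g. win64):

```
-Fu/usr/lib/fpc/$fpcversion/units/$fpctarget
-Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/*
-Fu/usr/lib/fpc/$fpcversion/units/$fpctarget/rtl
```

The cross units themselves can typically be built from the FPC source tree with something like make clean all OS_TARGET=win64 CPU_TARGET=x86_64, followed by make crossinstall with the same variables, but check the buildfaq for the details.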
Though a bit outdated, the buildfaq has a lot of background info on how the system builds and finds its units.

Can I mix arm-eabi with arm-elf?

I have a product whose bootloader and application are compiled using a compiler (gnuarm GCC 4.1.1) that generates "arm-elf" output.
The bootloader and application are segregated in different FLASH memory areas in the linker script.
The application has a feature that enables it to call the bootloader (as a simple C function with two parameters).
I need to be able to upgrade existing products around the world, and I can safely do this as long as I always use the same compiler.
Now I'd like to be able to compile this product application using a new GCC version that outputs arm-eabi.
Everything will be fine for new products, where both application and bootloader are compiled using the same toolchain, but what happens with existing products?
If I flash a new application, compiled with GCC 4.6.x and arm-none-eabi, will my application still be able to call the bootloader function from the old arm-elf bootloader?
Furthermore, not directly related to the above question, can I mix object files compiled with arm-elf into a binary compiled with arm-eabi?
EDIT:
I think it's good to make clear that I am building for a bare-metal ARM7, if that makes any difference...
No. An ABI is the magic that makes binaries compatible. The Application Binary Interface determines various conventions for how to communicate with other libraries/applications. For example, an ABI will define the calling convention, which makes implicit assumptions about things like which registers are used for passing arguments to C functions, and how to deal with excess arguments.
I don't know the exact differences between the EABI and the old ABI, but you can find some of them by reading up on EABI. Debian's page mentions that the syscall convention is different, along with some alignment changes.
Given the above, of course, you cannot mix arm-elf and arm-eabi objects.
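If you want to check which ABI an existing object or library was actually built for, GNU binutils can tell you (the file name here is a placeholder):

```
readelf -h bootloader.o | grep -i flags   # EABI objects report e.g. "Version5 EABI"
readelf -A bootloader.o                   # dumps the ARM attributes section, if present
```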
The above answer is given on the assumption that you talk to the bootloader code from your main application. Given that the interface may be very simple (just a function call with two parameters), it's possible that it might work. It'd be an interesting experiment to try. However, it is not guaranteed to work.
Please keep in mind that you do not have to use the EABI. You can generate an arm-elf toolchain with GCC 4.6 just as well as with older versions. Since you're using a binary toolchain on Windows, you may have more of a challenge. I'd suggest investigating crosstool-ng, which works quite well on Linux, and may work okay on Cygwin, to build the appropriate toolchain.
There is always the option of making the call to the bootloader in inline assembly, in which case you can adhere to any calling standard you need :) (a sketch follows the list below).
However, besides the portability issue it introduces, this approach will also make two assumptions about your bootloader and application:
you are able to detect in your app that a particular device has a bootloader built with your non-EABI toolchain, as you can only call the older-style bootloader using the assembly code.
the two parameters you mentioned are used as primitive data by your bootloader. Should the bootloader use them, for example, as pointers to structs, then you could be facing issues with incorrect alignment, padding and so forth.
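For what it's worth, here is a rough sketch of such an assembly shim, assuming GCC inline assembly on a bare-metal ARM7 (ARMv4T, hence the mov lr, pc / bx pair instead of blx); the entry address and names are placeholders to be taken from your linker script:

```
#include <stdint.h>

/* Placeholder: substitute the real entry address from the bootloader's linker script. */
#define BOOTLOADER_ENTRY 0x00000000u

static inline uint32_t call_legacy_bootloader(uint32_t a, uint32_t b)
{
    register uint32_t r0 asm("r0") = a;   /* first argument in r0  */
    register uint32_t r1 asm("r1") = b;   /* second argument in r1 */
    register uint32_t fn asm("r2") = BOOTLOADER_ENTRY;

    asm volatile(
        "mov lr, pc\n\t"   /* manual link: ARMv4T has no blx <reg> */
        "bx  r2\n\t"
        : "+r"(r0), "+r"(r1), "+r"(fn)   /* all may be clobbered by the callee */
        :
        : "r3", "r12", "lr", "cc", "memory");
    return r0;             /* both the old APCS and the AAPCS return the result in r0 */
}
```

Whether this is actually safe still hinges on the two assumptions above.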
I think that this will be OK. I did a migration something like this myself; from what I remember, the only problem I ran into was to do with handling division.
This is the best info I could find about the differences; it suggests that if you don't have struct alignment issues, you may be OK.

What are some compiled programming languages that compile fast?

I think I finally know what I want in a compiled programming language: a fast compiler. I get the feeling that this is a really superficial thing to care about, but some time after switching from Java to Scala for a while, I realized that being able to make a small change in code and immediately run the program is actually quite important to me. Besides Java and Go, I don't know of any languages that really value compile speed.
Delphi/Object Pascal. Make a change, press F9 and it runs; you don't even notice the compile time. A full rebuild of a fairly substantial project that we run takes on the order of 10-20 seconds, even on a fairly wimpy machine.
There's an open-source variant available at www.freepascal.org. I've not messed with it, but it is reportedly just as fast; it's the design of the Pascal language that allows this.
Java isn't fast at compiling. The feature you are looking for is probably hot replacement/redeployment while coding. Eclipse recompiles just the files you changed.
You could try some interpreted languages. They usually don't require compiling at all.
I wouldn't choose a language based on compilation speed...
Java's compiler is not the fastest one out there.
Pascal (and its close relatives) is designed to be fast - it can be compiled in a single pass. Objective Caml is known for its compilation speed (and there is a REPL too).
On the other hand, what you really need is a REPL, not a fast recompilation and re-linking of everything. So you may want to try a language which supports incremental compilation. Clojure fits well (and it is built on top of the same JVM you're used to). Common Lisp is another option.
I'd like to add that there are official compilers for languages and unofficial ones made by different people. Obviously, because of this, performance varies per compiler.
If you were to talk just about official compilers, I'd say it's probably Fortran. It's very old, but it's still used in most science and engineering projects because it is one of the fastest languages. C and C++ probably come tied for second, because they're also used in science and engineering.

Is it worth learning GNU Make?

I'm lately feeling the need to learn a build tool. I'm looking through StackOverflow for recommendations, and GNU Make gets barely mentioned. Instead I see Ant, Maven, CMake, SCons and many others. However, when I look at the little "rogue sources" (as in, not-in-the-repo) that I sometimes have to compile, they all require the make && make install steps.
Is learning Make a worse investment of my time than learning another tool?
If so why is Make still so popular?
Make is the standard build tool for everything C/C++. Many others have stepped up to the plate, but even when they were useful and successful, they never achieved the ubiquity of make.
Make is installed on virtually every Unix-like machine out there. No matter if you're working with AIX, Solaris, IRIX, BSD, or Linux, if there's a compiler installed, there's also make.
Some of the "replacements" (like Automake, CMake) even create Makefiles, which are in turn executed by make.
I would definitely recommend becoming familiar with make. If handled by someone who took the time to learn about make, it is a powerful tool, which can be used in a number of ways not even necessarily related to software development.
Even if you end up using a different build tool in the end, you will be able to "recycle" the lessons learned with make, as the underlying concepts are quite similar. And the sheer number of make-built projects means that there will always be the chance that you have to figure out an existing Makefile.
One thing, though. Get it right from the beginning.
I think the reason you don't see (GNU) make mentioned is that it's often the default; if you have a GNU toolchain, you will have make already. Thus, most people that start talking about build tools, talk about something else.
In my experience, make is fine, but it can be kind of tricky to get it to do exactly what you want to. It's maybe slightly arcane, but it's proven and works.
Make is popular because it's used (mainly) for C/C++ sources in Linux/*nix projects, and is far older than any of the other tools you've mentioned, thus it has stood the test of time and is mature. Kinda like tar.
To be honest with you, I only know make. Those other tools above may be better, but so many projects just use a basic Makefile that you're best off knowing at least a little bit of it. Not only for your own projects at work but most of the open-source ones you find on the net.
It really depends on how much you will use it.
If you work a lot with C/C++ make projects, then yes, I would recommend learning more about it, as a large makefile has a steeper learning curve than the other build tools you mention.
If you don't work with make, or you work in other languages such as C#, Java or PHP, then you'd be better off learning build tools relevant to those languages.
Like all tools, if you use it at all, you should put some time into becoming reasonably adept at it. Also, some tools (like CMake, for example) generate makefiles, and you may one day need to mess with those generated files.
GNU make has an excellent manual; it's certainly worth spending an hour or two reading it.
Make is the de facto standard on Linux systems, for example. It is a very complex tool, and also a very powerful one.
It is well worth learning if you are developing in C or C++, particularly if targeting Linux/*nix.
One of the features of make is that you can set up dependencies for when to rebuild a file. E.g. each C or C++ file is built into an object file, and in the end all the object files are linked into an executable. But maybe the output is a statically linked library that is in turn linked into another executable together with other object files.
Make can make sure that your compilation time is as short as possible, because you can define that a C file should only be compiled if it, or any header files it depends on, are newer than the object file. So any compilation or linking step is only executed if the current source files for the step are newer than the target file.
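A minimal sketch of that dependency logic (file names are hypothetical; note that recipe lines in a makefile must start with a tab):

```
CC     = gcc
CFLAGS = -Wall -O2

# relink the executable only when one of its object files changed
main: main.o util.o
	$(CC) $(CFLAGS) -o $@ $^

# recompile an object only when its .c file, or the shared header, is newer
%.o: %.c defs.h
	$(CC) $(CFLAGS) -c -o $@ $<

.PHONY: clean
clean:
	rm -f main main.o util.o
```

Running make twice in a row shows the effect: the second run does nothing, because every target is already newer than its prerequisites.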
If you are developing in, for example, C#, you don't need this kind of dependency checking, because all the .cs files are compiled at once into a single executable.
So the conclusion is that you should use a build tool that is well suited for your choice of programming language.
Even if you end up preferring another build tool (personally I'm fond of VS... I know...), knowing make will probably prove more useful.
Make has many applications and whilst it is not always ideal for a single task, when dealing with new technologies it is stalwart and flexible.
I guess where you work is probably different, but I know that everywhere I've worked I would have been a far less valuable employee if I hadn't at least learned how to read Makefiles. Even in all Windows-VisualStudio environments, it comes up every now and then.
For instance, we just got a job that involves porting a bunch of old CX/UX code to Windows. The old code was built with makefiles. There's no way to understand their old system without knowing how to read those old makefiles.

Nintendo DS homebrew with Ada?

Note: I know very little about the GCC toolchain, so this question may not make much sense.
Since GCC includes an Ada front end, and it can emit ARM code, and devkitPro is based on GCC, is it possible to use Ada instead of C/C++ for writing code on the DS?
Edit: It seems that the target devkitARM uses is arm-eabi.
devkitPro is not a toolchain, compiler or indeed any software package. The toolchain used to target the DS is devkitARM, one of the toolchains provided by devkitPro.
It may be possible to build the Ada compiler, but I doubt very much that you'll ever manage to get anything useful running on the DS itself. devkitPro will certainly never provide an Ada compiler as part of the packages we produce.
Yes, it is possible; see my project https://github.com/Lucretia/tamp and build the cross compiler as per my script. You would then be able to target the NDS using Ada. I have built a basic RTS as well, which will provide you with local exception handling.
And @Martin Beckett, why do you think Ada is aimed squarely at DoD stuff? They dropped the mandate years ago, and Ada is easily usable for any project. You do realise that Ada is a general-purpose programming language, don't you?
(Disclaimer: I don't know Ada)
Possibly.
You might be able to build devkitPro with Ada support; the pre-provided binaries (at least for OS X) do not have it compiled in.
However, you will probably find yourself writing tons of C "glue" code to interface with the various hardware registers and the like.
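To give a flavour of what that glue looks like, here is a sketch of memory-mapped register access on the DS; the register name and bit value are illustrative, and libnds defines the real ones:

```
#include <stdint.h>

/* 0x04000000 is the start of the DS I/O register area; REG_DISPCNT is
   the display control register. The mode bits below are illustrative. */
#define REG_DISPCNT (*(volatile uint32_t *)0x04000000)

void display_init(void)
{
    REG_DISPCNT = 0x00010000;  /* hypothetical: select a 2D display mode */
}
```

An Ada port would either import such C functions or bind the register addresses directly.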
One thing to consider when porting a language to the Nintendo DS is the relatively small stack it has (16 KB). There are possible workarounds, such as swapping the SRAM stack contents into DRAM (4 MB) when the stack gets full, or just having the whole stack in DRAM (presumably awfully slow).
And I second Dre on the fact that you'll have to provide your own glue between the Ada library functions you'd like to use and the existing libraries on the DS (which hopefully cover most of the hardware stuff).
On a practical plane, it is not possible.
On a theoretical plane, you could use a custom Ada parser (I found one on the ANTLR site, but it is quite old) to translate Ada to C/C++, and then feed that to devkitPro.
However, the effort of building such translator is probably going to be equal (if not higher) to creating the game itself.
