Can I mix arm-eabi with arm-elf? - gcc

I have a product whose bootloader and application are compiled using a compiler (gnuarm GCC 4.1.1) that generates "arm-elf" output.
The bootloader and application are segregated in different FLASH memory areas in the linker script.
The application has a feature that enables it to call the bootloader (as a simple C function with 2 parameters).
I need to be able to upgrade existing products around the world, and I can do this safely as long as I always use the same compiler.
Now I'd like to be able to compile this product's application using a new GCC version that outputs arm-eabi.
Everything will be fine for new products, where both application and bootloader are compiled using the same toolchain, but what happens with existing products?
If I flash a new application, compiled with GCC 4.6.x and arm-none-eabi, will my application still be able to call the bootloader function from the old arm-elf bootloader?
Furthermore, not directly related to the above question, can I mix object files compiled with arm-elf into a binary compiled with arm-eabi?
EDIT:
I think it's good to make clear that I am building for a bare-metal ARM7, if it makes any difference...

No. An ABI is the magic that makes binaries compatible. The Application Binary Interface determines various conventions on how to communicate with other libraries/applications. For example, an ABI will define calling convention, which makes implicit assumptions about things like which registers are used for passing arguments to C functions, and how to deal with excess arguments.
I don't know the exact differences between the EABI and the old ABI, but you can find some of them by reading up on EABI. Debian's page mentions that the syscall convention is different, along with some alignment changes.
Given the above, of course, you cannot mix arm-elf and arm-eabi objects.
The above answer is given on the assumption that you talk to the bootloader code from your main application. Given that the interface may be very simple (just a function call with two parameters), it's possible that it might work. It'd be an interesting experiment to try. However, it is not guaranteed to work.
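To make the concern concrete, here is a minimal sketch of the kind of cross-image call being discussed; the entry address, names and parameter types are assumptions, not taken from the question. Whether it keeps working after rebuilding the application with an EABI toolchain depends entirely on both images agreeing on where the two arguments go:
typedef void (*bootloader_entry_t)(unsigned int cmd, unsigned int arg);

#define BOOTLOADER_ENTRY_ADDR 0x00000100u        /* assumed flash address */

static void call_bootloader(unsigned int cmd, unsigned int arg)
{
    bootloader_entry_t entry = (bootloader_entry_t)BOOTLOADER_ENTRY_ADDR;
    entry(cmd, arg);   /* the compiler emits this call per its own ABI */
}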
Please keep in mind you do not have to use EABI. You can generate an arm-elf toolchain with GCC 4.6 just as well as with older versions. Since you're using a binary toolchain on Windows, you may have more of a challenge. I'd suggest investigating crosstool-ng, which works quite well on Linux, and may work okay on Cygwin to build the appropriate toolchain.

There is always the option of making the call to the bootloader in inline assembly, in which case you can adhere to any calling standard you need :).
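As a rough sketch only (the entry address and argument meanings are assumptions, and it presumes the caller is built as ARM code rather than Thumb, since the ARM7TDMI has no BLX <register> instruction), pinning the two parameters to r0 and r1 regardless of the compiler's own calling convention could look like this:
#define LEGACY_BOOTLOADER_ENTRY 0x00000100u      /* assumed flash address */

static void call_legacy_bootloader(unsigned int arg0, unsigned int arg1)
{
    register unsigned int r0 asm("r0") = arg0;   /* first parameter  */
    register unsigned int r1 asm("r1") = arg1;   /* second parameter */

    asm volatile(
        "mov lr, pc   \n\t"      /* manual link; ARMv4T has no BLX <reg> */
        "bx  %[entry] \n\t"      /* jump to the old bootloader           */
        : "+r"(r0), "+r"(r1)
        : [entry] "r"(LEGACY_BOOTLOADER_ENTRY)
        : "r2", "r3", "r12", "lr", "cc", "memory");
}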
However, besides the portability issue it introduces, this approach will also make two assumptions about your bootloader and application:
you are able to detect in your app that a particular device has a bootloader built with your non-EABI toolchain, as you can only call the older type bootloader using the assembly code.
the two parameters you mentioned are used as primitive data by your bootloader. Should the bootloader use them, for example, as pointers to structs, then you could be facing issues with incorrect alignment, padding and so forth.

I think this will be OK. I did a migration something like this myself; from what I remember, the only problem I ran into was to do with handling division.
This is the best info I can find about the differences; it suggests that if you don't have struct alignment issues, you may be OK.

Related

Do DLLs built with Rust require libgcc.dll at run time?

If I build a DLL with the Rust language, does it require libgcc*.dll to be present at run time?
On one hand:
I've seen a post somewhere on the Internet, claiming that yes it does;
rustc.exe has libgcc_s_dw2-1.dll in its directory, and cargo.exe, when downloaded from the http://crates.io website, won't run without the dll;
On the other hand:
I've seen articles about building toy OS kernels in Rust, so they most certainly don't require the libgcc dynamic library to be present.
So, I'm confused. What's the definite answer?
Rust provides two main toolchains for Windows: x86_64-pc-windows-gnu and x86_64-pc-windows-msvc.
The -gnu toolchain includes an MSYS environment and uses GCC's ld.exe to link object files. This toolchain requires libgcc*.dll to be present at runtime. The main advantage of this toolchain is that it allows you to link against other MSYS-provided libraries, which can make it easier to link with certain C/C++ libraries that are difficult to build under the normal Windows environment.
The -msvc toolchain uses the standard, native Windows development tools (either a Windows SDK install or a Visual Studio install). This toolchain does not use libgcc*.dll at either compile or run time. Since this toolchain uses the normal Windows linker, you are free to link against any normal Windows native libraries.
If you need to target 32-bit Windows, i686- variants of both of these toolchains are available.
NOTE: the answer below summarizes the situation as of Sep 2014; I'm not aware whether it's still current, or if things have changed for better or worse since then. But I strongly suspect things have changed, given that 2 years have already passed. It would be cool if somebody asked steveklabnik about it again, then updated the info below, or wrote a new, fresher answer!
Quick & raw transcript of a Rust IRC chat with steveklabnik, who gave me a kind of answer:
Hi; I have a question: if I build a DLL with Rust, does it require libgcc*.dll to be present on run time? (on Windows)
I believe that if you use the standard library, then it does require it;
IIRC we depend on one symbol from it;
but I am unsure.
How can I avoid using the standard library, or those parts of it that do? (and/or do you know which symbol exactly?)
It involves #[no_std] at your crate root; I think the unsafe guide has more.
Running nm -D | grep gcc shows me __gcc_personality_v0, and then there is this: What is __gxx_personality_v0 for?,
so it looks like our stack unwinding implementation depends on that.
I seem to recall I've seen some RFCs to the effect of splitting standard library, too; are there parts I can use without pulling libgcc in?
Yes, libcore doesn't require any of that.
You give up libstd.
Also, quoting parts of the unsafe guide:
The core library (libcore) has very few dependencies and is much more portable than the standard library (libstd) itself. Additionally, the core library has most of the necessary functionality for writing idiomatic and effective Rust code. (...)
Further libraries, such as liballoc, add functionality to libcore which make other platform-specific assumptions, but continue to be more portable than the standard library itself.
And a fragment of the current docs for the unwind module:
Currently Rust uses unwind runtime provided by libgcc.
(The transcript was edited slightly for readability. Still, I'll happily delete this answer if anyone provides something better formatted and more thorough!)

How to defeat framework injections?

Is anyone hardening their code in an attempt to detect injections? For example, if someone is trying to intercept a username/password via NSUrlConnection, they could use LD_PRELOAD/DYLD_LIBRARY_PATH, provide exports for my calls into NSUrlConnection, and then forward the calls to the real NSUrlConnection.
Ali gave excellent information below, but I'm trying to determine what measures should be taken for a hostile environment, where a phone might be jailbroken. Most applications don't have to care, but one class of apps does: high-integrity software.
If you are hardening, what method(s) are you using? Is there a standard way to detect injections on Macs and iPhones? How are you defeating framework injections?
For iOS / Cocoa Touch, loading dynamic libraries is not allowed* (except for the system frameworks). To build and distribute an application through the App Store, you can only link with static libraries and system frameworks, no dynamic libraries.
So on iOS you can't use that for code injection, nor can you use LD_PRELOAD of course (as you don't have access to such environment variables on iOS).
Except probably on jailbroken iPhones, but people who jailbreak their iPhone should accept that jailbreaking by definition lifts the protections iOS provides against things such as injection (you can't remove the lock from your door to avoid having to use your key… and still expect to be protected against thieves robbing your house ;-)).
That's the advantage of the sandboxing + code signing + no-dylib constraints on iOS: no code injection possible.
(On OS X it is still possible anyway, in particular using LD_PRELOAD.)
[EDIT] Since iOS 8, iOS also allows dynamic frameworks. But as that's still sandboxed (you can only load code-signed frameworks that are inside your application bundle, and can't load frameworks that come from outside your app bundle), injection is still not possible*
*except if the user jailbreaks their phone, but that means they chose to get rid of all protections on purpose and thus put their phone at risk; we can't crack our phone's security and still expect it to provide all the protections those safeguards provided.
This is an answer specific to UNIX-like operating systems; I apologize if it doesn't make sense for your question, but I don't know your platform well. Simply don't create a dynamically linked executable.
There are two ways I can think of to do this. Method #2 is probably best for you. They're both similar.
Important for both: the executable must be statically compiled using -static at build time.
Method 1 - static exe, manually load shared libraries by their trusted full paths
Manually dlopen each library you need via a full path, then get the function addresses via dlsym at runtime and assign them to function pointers in order to use them. You'll need to do this for every external function you want to use. I believe reentrant-unsafe functions won't like this, so for those that use static variables you'll need to use the reentrant-safe versions; these end with "_r", e.g. use strtok_r instead of strtok.
This will be difficult or simple depending on what your app does and how many functions you're using.
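A minimal sketch of Method 1 under those assumptions; the library path and the symbol (libm's cos) are purely illustrative and vary per system:
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Load the library only by its trusted absolute path. */
    void *handle = dlopen("/usr/lib/libm.so.6", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* Resolve one symbol and assign it to a function pointer. */
    double (*my_cos)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!my_cos) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return EXIT_FAILURE;
    }

    printf("cos(0) = %f\n", my_cos(0.0));
    dlclose(handle);
    return 0;
}
Link with -ldl on glibc-based systems; also note that combining -static with dlopen() comes with its own glibc caveats.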
Method 2 - Statically link the executable, period
You can solve your subversion problem by just linking a static executable to avoid using dynamic libraries at all. This will generate a much larger exe than the dlopen()/dlsym() method. Build using the -static compile flag, and instead of using, for example, gcc bah.c -o bah -lssl, use gcc -static bah.c -o bah /usr/lib/libssl.a to use the statically compiled version of the libraries you need instead of the dynamic shared libraries. In other words, use -static and don't use -l while building.
For either method:
Once built, use file bah to confirm the executable is statically linked, or confirm by running ldd on it.
Note you'll need statically compiled versions of all the libraries you're linking against present on your system. These files end with .a instead of .so.
Also note that upgrading system libraries will not update your executable. If there's a new security bug in OpenSSL, you'll need to get the latest libssl.a and recompile. If you use the dlopen()/dlsym() method you won't have this problem, but you will have portability issues if symbols change across versions.
Each method has its pros and cons based on your needs.
Taking the Method 1 dlopen()/dlsym() approach makes your code more "obfuscated" and smaller, but sacrifices portability in most cases, so it probably isn't what you want. The upside is that it can benefit when security bugs are fixed system-wide.

Configuring GCC with FreeRTOS and OpenOCD

I'm pretty sure this is possible but I'm not sure how to go about it. I'm very new to building with GCC in general and I have never used FreeRTOS, but I'd like to try getting the OS up and running on a TI ARM Cortex MCU, with a slight twist: I'd like to get it up and running with Pascal. I'm curious:
Is this even possible to get working? If not, the next issues are kind of moot points.
From my Delphi days, I vaguely recall the ability to access functions in C libraries. I'm wondering if I would have access to the C routines in FreeRTOS.
If I use the GCC version (preferable) would I be able to debug using OpenOCD on the target? I'm not quite sure how debug symbols work and if it's more or less language agnostic (hopefully, in this case).
As kind of a bonus question a bit outside the scope of the original query, can I simulate FreeRTOS on an x86 processor (e.g. my development PC) for easier debugging during development? (With a Pascal program, of course..)
I haven't found any documentation on achieving this, so hopefully someone here can shed some light! Any resources would be most helpful. Like I said, I'm very new to this kind of development. I'm also open to suggestions if you think there is a better alternative.
FYI, my preferred host configuration would be something similar to:
Linux (Ubuntu/Debian)
Eclipse IDE for development, unit testing, and hopefully simulation / debugging
OpenOCD for target debugging
GNU Pascal + FreeRTOS on target
FreeRTOS is C source code, so, like you say, you would have to have some mechanism for linking C with your Pascal programs. Also, FreeRTOS relies on certain registers being used for things like passing a parameter into a task (as a hypothetical example, the task might always expect the parameter to be in register R0), so you would have to ensure the ABI for the C compiler and the Pascal compiler was the same - or have your task entry in C and then have it call a Pascal function (very nasty). Then there is the issue of interrupts, calling inline macros, etc. I would say this would be extremely difficult to achieve.
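If you did go down that road, the "task entry in C" idea might look roughly like the sketch below: the C trampoline owns the ABI-sensitive FreeRTOS task signature and merely forwards the parameter, while pascal_task_body stands in for a hypothetical routine implemented outside C:
#include "FreeRTOS.h"
#include "task.h"

extern void pascal_task_body(void *arg);   /* hypothetical non-C routine */

/* C owns the task entry point, so FreeRTOS's expectations about the
   parameter register are met by the C compiler's own ABI. */
static void task_trampoline(void *pvParameters)
{
    pascal_task_body(pvParameters);        /* forward the task parameter */
    vTaskDelete(NULL);                     /* a FreeRTOS task must never return */
}

void start_worker_task(void)
{
    xTaskCreate(task_trampoline, "worker",
                configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 1, NULL);
}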
Both GNU Pascal and Free Pascal support linking to C (gcc) and ARM, as well as calling Pascal code from C, etc. Writing a header and declaring the prototypes with cdecl is all there is to it.
Macros are a bit of a bigger problem. Usually I just rewrite them as inline functions (which is what they should have been anyway). Except for the macro/header issue, the problems are more about compiler-specific functionality (which you would also have a problem with when porting from one C compiler to the next).
If you prefer TP/Delphi dialect, Free Pascal is the better choice.
I run my old Delphi code fine on my sheevaplug.
There is already an example for FreeRTOS/GCC/OpenOCD on a TI Cortex-M3 (was Luminary Micro Cortex-M3). Be aware though that this is a really old example and both the Eclipse and OpenOCD versions used are out of date.
Although there is an Eclipse project provided, the project is configured as a standard make (as opposed to a managed make) project, so there is a standard makefile that can be just as easily executed from the command line as from within Eclipse.
http://www.freertos.org/portLM3Sxxxx_Eclipse.html

Lua compiled scripts on Mac OS X - Intel vs PPC

Been using Lua 5.0 in a Mac OS X universal binary app for some years. Lua scripts are compiled using luac and the compiled scripts are bundled with the app. They have worked properly in Tiger and Leopard, Intel or PPC.
To avoid library problems at the time, I simply added the Lua src tree to my Xcode project and compiled as is, with no problems.
It was time to update to a more modern version of Lua so I replaced my source tree with that of 5.1.4. I rebuilt luac using make macosx (machine is running Leopard on Intel).
Uncompiled scripts work properly in Tiger and Leopard, Intel and PPC, as always.
However, now compiled scripts fail to load on PPC machines.
So I rebuilt luac with the 'ansi' flag, and recompiled my scripts. Same error. Similarly, a build flag of 'generic' produced no joy.
Can anyone please advise on what I can do next?
Lua's compiled scripts are pretty much the raw bytecode dumped out after a short header. The header documents some of the properties of the platform used to compile the bytecode, but the loader only verifies that the current platform has the same properties.
Unfortunately, this creates problems when loading bytecode compiled on another platform, even if compiled by the very same version of Lua. Of course, scripts compiled by different versions of Lua cannot be expected to work, and since the version number of Lua is included in the bytecode header, the attempt to load them is caught by the core.
The simple answer is to just not compile scripts. If Lua compiles the script itself, you only have to worry about possible version mismatches between Lua cores in your various builds of your application, and that isn't hard to deal with.
Actually supporting full cross-compatibility for compiled bytecode is not easy. In a message on the Lua mailing list, Mike Pall identified the following issues:
Endianness: swap on output as needed.
sizeof(size_t), affects huge string constants: check for overflow when downgrading.
sizeof(int), affects MAXARG_Bx and MAXARG_sBx: check for overflow when downgrading.
typeof(lua_Number): easy in C, but only when the host and the target follow the same FP standard; precision loss when upgrading (rare case); warn about non-integer numbers when downgrading to int32.
From all the discussions that I've seen about this issue on the mailing list, I see two likely viable approaches, assuming that you are unwilling to consider just shipping the uncompiled Lua scripts.
The first would be to fix the byte order as the compiled scripts are loaded. That turns out to be easier to do than you'd expect, as it can be done by replacing the low-level function that reads the script file without recompiling the core itself. In fact, it can even be done in pure Lua, by supplying your own chunk reader function to lua_load(). This should work as long as the only compatibility issue over your platforms is byte order.
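For illustration, a sketch of the C-API side of that approach, using the Lua 5.1 signatures; the byte-swapping routine itself (fix_byte_order below) is hypothetical and omitted:
#include <lua.h>

struct chunk { const char *data; size_t size; int done; };

/* lua_Reader callback: hand the whole buffer to the core once, then signal EOF. */
static const char *reader(lua_State *L, void *ud, size_t *size)
{
    struct chunk *c = (struct chunk *)ud;
    (void)L;
    if (c->done) { *size = 0; return NULL; }
    c->done = 1;
    *size = c->size;
    return c->data;
}

int load_swapped_chunk(lua_State *L, char *buf, size_t len)
{
    struct chunk c = { buf, len, 0 };
    /* fix_byte_order(buf, len);  -- hypothetical: swap ints, sizes and numbers
       in the dumped chunk to this platform's byte order before loading */
    return lua_load(L, reader, &c, "=precompiled chunk");
}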
The second is to patch the core itself to use a common representation for compiled scripts on all platforms. This has been described as possible by Luiz Henrique de Figueiredo:
....
I'm convinced that the best route to byte order or cross-compiling is third-party dump/undump pairs. The files ldump.c and lundump.c are completely replaceable; they export a single, well-defined, entry point. The format of precompiled chunks is not sacred at all; you can use any format, as long as ldump.c and lundump.c agree about it. (For instance, Rici Lake is considering writing a text format for precompiled chunks.)
....
Personally, I'd recommend giving serious consideration to not pre-compiling the scripts and thus avoid the platform portability issues entirely.
Edit: I've updated my description of the bytecode header thanks to lhf's comment. I hadn't read this part of the Lua source yet, and I probably should have checked it before being quite so assertive about what information is or is not present in the header.
Here is the fragment from lundump.c that forms a copy of the header matching the running platform for comparison to the bytecode being loaded. It is simply compared with memcmp() for an exact match to the header from the file, so any mismatch will cause the stock loader (luaU_undump()) to reject the file.
/*
* make header
*/
void luaU_header (char* h)
{
 int x=1;
 memcpy(h,LUA_SIGNATURE,sizeof(LUA_SIGNATURE)-1);
 h+=sizeof(LUA_SIGNATURE)-1;
 *h++=(char)LUAC_VERSION;
 *h++=(char)LUAC_FORMAT;
 *h++=(char)*(char*)&x;              /* endianness */
 *h++=(char)sizeof(int);
 *h++=(char)sizeof(size_t);
 *h++=(char)sizeof(Instruction);
 *h++=(char)sizeof(lua_Number);
 *h++=(char)(((lua_Number)0.5)==0);  /* is lua_Number integral? */
}
As can be seen, the header is 12 bytes long and contains a signature (4 bytes, "<esc>Lua"), version and format codes, a flag byte for endianness, sizes of the types int, size_t, Instruction, and lua_Number, and a flag indicating whether lua_Number is an integral type.
This allows most platform distinctions to be caught, but doesn't attempt to catch every way in which platforms can differ.
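As a concrete illustration, here is a small helper that decodes those 12 bytes; the offsets follow luaU_header() above, and only the printed labels are mine. Dumping the header of one of the failing chunks on a PPC machine should show exactly which byte differs (most likely the endianness flag):
#include <stdio.h>

int report_chunk_header(const unsigned char *h)
{
    if (h[0] != 0x1B || h[1] != 'L' || h[2] != 'u' || h[3] != 'a') {
        fprintf(stderr, "not a precompiled Lua chunk\n");
        return -1;
    }
    printf("version             : 0x%02x\n", h[4]);   /* 0x51 for Lua 5.1 */
    printf("format              : %d\n", h[5]);       /* 0 = stock format */
    printf("little-endian       : %d\n", h[6]);       /* 1 on Intel, 0 on PPC */
    printf("sizeof(int)         : %d\n", h[7]);
    printf("sizeof(size_t)      : %d\n", h[8]);
    printf("sizeof(Instruction) : %d\n", h[9]);
    printf("sizeof(lua_Number)  : %d\n", h[10]);
    printf("integral lua_Number : %d\n", h[11]);
    return 0;
}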
I still stand by the recommendations made above: first, ship compilable sources; or second, customize ldump.c and lundump.c to store and load a common format, with the additional note that any custom format should redefine the LUAC_FORMAT byte of the header so as to not be confused with the stock bytecode format.
You may want to use a patched bytecode loader that supports different endianness.
See this.
I would have commented on RBerteig's post, but I apparently don't have enough reputation yet to be able to do so. In working on bringing LuaRPC up to speed with Lua 5.1.x AND making it work with embedded targets, I've been modifying the ldump.c and lundump.c sources to make them both a bit more flexible. The embedded Lua project (eLua) already had some of the patches you can find on the Lua list, but I've added a bit more to make lundump a little more friendly to scripts compiled on different architectures. There's also cross-compilation support provided so that you can build for targets differing from the host system (see luac.c in the same directory as the links below).
If you're interested in checking out the modifications, you can find them in the eLua source repository:
http://svn.berlios.de/wsvn/elua/trunk/src/lua/lundump.c
http://svn.berlios.de/wsvn/elua/trunk/src/lua/lundump.h
http://svn.berlios.de/wsvn/elua/trunk/src/lua/ldump.c
Standard Disclaimer:
I make no claim that the modifications are perfect or work in every situation. If you use it and find anything broken, I'd be glad to hear about it so that it can be fixed.
Lua bytecode is not portable. You should ship source scripts with your application.
If download size is a concern, they are generally shorter than the bytecode form.
If intellectual property is a concern, you can use a code obfuscator, and keep in mind that disassembling Lua bytecode is anything but difficult.
If loading time is a concern, you can precompile the sources locally in your installation script.
I conjecture that you compiled the scripts on an Intel box.
Compiled scripts are wildly unportable. If you really want to precompile scripts, you'll need to include two versions of each compiled script: one for Intel and one for PPC. Your app will have to interrogate which processor it's running on and use the correct compiled script.
I don't have enough reputation to comment, so I have to provide this as an answer instead even though it's not an appropriate answer to the question asked. Sorry.
There is a Lua obfuscator available here:
http://www.capprime.com/CapprimeLuaObfuscator/CapprimeLuaObfuscator.aspx
Full disclosure: I am the author of the obfuscator and I am aware it is not perfect. Feedback is welcome and encouraged (there is a feedback page available from the above page).

Nintendo DS homebrew with Ada?

Note: I know very little about the GCC toolchain, so this question may not make much sense.
Since GCC includes an Ada front end, and it can emit ARM code, and devkitPro is based on GCC, is it possible to use Ada instead of C/C++ for writing code on the DS?
Edit: It seems that the target that devKitARM uses is arm-eabi.
devkitPro is not a toolchain, compiler or indeed any software package. The toolchain used to target the DS is devkitARM, one of the toolchains provided by devkitPro.
It may be possible to build the Ada compiler, but I doubt very much if you'll ever manage to get anything useful running on the DS itself. devkitPro will certainly never provide an Ada compiler as part of the packages we produce.
Yes, it is possible; see my project https://github.com/Lucretia/tamp and build the cross compiler as per my script. You would then be able to target the NDS using Ada. I have built a basic RTS as well, which will provide you with local exception handling.
And @Martin Beckett, why do you think Ada is aimed squarely at DoD stuff? They dropped the mandate years ago, and Ada is easily usable for any project; you do realise that Ada is a general-purpose programming language, don't you?
(Disclaimer: I don't know Ada)
Possibly.
You might be able to build devkitPro to use Ada; however, the pre-provided binaries (at least for OS X) do not have Ada support compiled in.
However, you will probably find yourself writing tons of C "glue" code to interface with the various hardware registers and the like.
One thing to consider when porting a language to the Nintendo DS is the relatively small stack it has (16 KB). There are possible workarounds, such as swapping the SRAM stack contents into DRAM (4 MB) when the stack gets full, or just having the whole stack in DRAM (assumed to be awfully slow).
And I second Dre on the fact that you'll have to provide your own glue between the Ada library functions you'd like to use and the existing libraries on the DS (which hopefully cover most of the hardware stuff).
On a practical plane, it is not possible.
On a theoretical plane, you could use a custom Ada parser (I found this one on the ANTLR site, but it is quite old) to translate Ada to C/C++, and then feed that to devkitPro.
However, the effort of building such a translator is probably going to be equal to (if not higher than) that of creating the game itself.
