Cross-compiling bare-metal Rust for Raspberry Pi 3 B from Windows

I'm trying to follow this blog but on Windows and with the latest Rust. It seems to me that the correct way of doing things like this is changing very frequently with Rust, so I'm hoping for an up-to-date Windows adaptation.
What I've tried so far:
I installed gcc-arm-embedded.
I had unverified partial success manually cross-compiling libcore, but then I switched to the recommended xargo, whose functionality (I read) is on its way to being folded into Cargo eventually. While I don't understand any of it very well, I'm hoping to get to the part where I can write and run the code, and then maybe work backwards into understanding the compilation better.
With japaric's awesome help, I was able to get the "aarch64" targeted build working to generate the .o file (as of this particular commit).
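For reference, the build step is roughly the following (a sketch; the target name matches the custom target JSON in the repo, aarch64-raspi3-none-elf.json, and a nightly toolchain with the rust-src component is assumed):
$ rustup default nightly
$ rustup component add rust-src
$ xargo build --release --target aarch64-raspi3-none-elf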
And this part seems to verify:
$ file target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o
target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), not stripped
When I try to use the GNU Arm Embedded Toolchain linker, I get:
$ arm-none-eabi-gcc -O0 -mfpu=vfp -mfloat-abi=hard -march=armv6zk -mtune=arm1176jzf-s -nostartfiles target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o -o kernel.elf
target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o: file not recognized: File format not recognized
collect2.exe: error: ld returned 1 exit status
And helpful folks in the #rust IRC channel told me that the rpi3 is aarch64, not 32-bit ARM, so I need to find an aarch64 linker ...

I think it's working! Things I learned:
xargo is good
the rpi3 is aarch64 (64-bit), different enough from the rpi2 to cause my problems in tool selection
xargo doesn't care which toolchain rustup defaults to, because I'm not asking it to link for me and it does its own toolchain selection
I needed to target aarch64, not arm. For this I used the Linaro aarch64 mingw32 download, unpacked it, and added its bin folder to my PATH. Then the aarch64 tool invocations from the blog were easy to adapt, roughly as sketched below.
For people who want to do this themselves, see https://github.com/JasonKleban/rust-rasp . Not so complicated!
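For the record, the adapted link and image steps look roughly like this (a sketch, assuming the Linaro bare-metal aarch64-elf- tool prefix, a hypothetical linker script link.ld, and the object file name from above; see the repo for the exact invocation):
$ aarch64-elf-ld -T link.ld -o kernel.elf target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o
$ aarch64-elf-objcopy kernel.elf -O binary kernel8.img
The rpi3 firmware loads the resulting kernel8.img from the SD card's boot partition.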
I aim to blink the onboard activity LED as confirmation that we really do have control, but it looks like that will be somewhat complicated on the rpi3 (see my readme, if still applicable).

Related

MacOS assembly linker throws error while linking

I'm trying to compile and link an assembly file into an executable with NASM and the standard ld linker on my MacBook Air M1. I have no problem getting the .o file, but when I try to link it with ld, it throws this error:
ld: file not found: elf_i386
Command:
ld -m elf_i386 -s -o hello hello.o
What do I have to change?
Those are options for GNU ld on x86 Linux. (Note the ELF part of the target object-file format, and the i386). MacOS uses the MachO object-file format, not ELF, and apparently their ld takes different options.
Also, MacOS hasn't supported 32-bit x86 for a few versions now, so an M1 mac with an AArch64 CPU definitely can't run 32-bit x86 executables natively.
So get an emulator for a 32-bit Linux environment if you want to follow a tutorial for that environment, or find a tutorial for AArch64 MacOS. Or possibly x86-64 MacOS which should still work transparently thanks to Rosetta, but make sure single-step debugging actually works. That's an essential part of a development environment for learning asm.
Assembly language is not portable at all; you need a tutorial for the OS, CPU architecture, and mode (32-bit vs. 64-bit) that you're going to build for. Don't waste your time trying to port a tutorial at the same time you're learning the basics it's trying to teach. You'd have to already know both systems to know which parts of the code and build commands need to change.
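If you do want to follow the original 32-bit Linux tutorial literally, the commands would look roughly like this inside a 32-bit-capable Linux environment (a sketch; hello.asm stands in for the tutorial's source file):
$ nasm -f elf32 hello.asm -o hello.o
$ ld -m elf_i386 -s -o hello hello.o
$ ./hello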

GCC error with -mcpu32 flag, CPU32 compiler needed

I am patching code into my car's ECU. This has a Motorola MC68376 processor, so I'm using the appropriate CPU32 instruction set.
I want to continue to write in assembly code so that I can explicitly manage control registers, RAM access and allocation, as well as copying code structures which are already in use.
My first patch was successfully compiled in EASy68k, but that program does not support the full instruction set for the CPU32. For example, the DIVS.L command is not supported, so I cannot take a quotient of a 32-bit value.
Thus, before writing my own compiler out of sheer incompetence with available tools, I'm looking for an easier path. I read that gcc has the capability to compile code for the CPU32, but I have failed to get it to work.
I'm using MinGW's gcc (6.3.0) and Eclipse (2020-03). I added the '-mcpu32' or '-march=cpu32' flags to the compiler call, according to:
https://gcc.gnu.org/onlinedocs/gcc/M680x0-Options.html
Unfortunately this returns an error:
gcc: error: unrecognized command line option '-mcpu32'; did you mean '-mcpu='?
or
error: bad value (cpu32) for -march= switch
May I please have some advice for making this work? Does anyone know of a better CPU32 compiler that works with Eclipse?
I had not understood that gcc is conventionally distributed as binaries that are each built for a particular target, to suit the needs of a given user.
There seem to be two paths forward:
1) compile my own cross-compiler version of GCC
2) download a pre-compiled cross-compiler version of GCC
I chose to follow route 2).
I began the process of installing the 'Windows Subsystem for Linux' and Ubuntu 20.04 Focal Fossa, because I found a pre-made compiler that should be capable of performing cross compilation for the m68k processor: "gobjc-10-m68k-linux-gnu"
https://ubuntu.pkgs.org/20.04/ubuntu-universe-i386/gobjc-10-m68k-linux-gnu_10-20200411-0ubuntu1cross1_i386.deb.html
While I was installing that, I also found an m68k-elf gcc toolchain that is pre-compiled for windows 10:
https://gnutoolchains.com/m68k-elf/
I played with the latter for much of today. Although I was unable to get the toolchain integrated well with Eclipse, it works from the command line to compile a *.s assembly code file. This includes compatibility with the '-mcpu32' flag that I wanted at the outset.
There is still a lot for me to figure out, even after floundering through learning gcc's assembler directives (https://www.eecs.umich.edu/courses/eecs373/readings/Assembler.pdf) and the differences in gcc's assembly syntax compared to the MC68k reference manual (https://www.nxp.com/files-static/archives/doc/ref_manual/M68000PRM.pdf).
I can even convert the code section of the output file to be a proper s-record by using objcopy with the '-O srec' and '--only-section=.text' flags. This helps me patch the code into my ECU.
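For anyone following the same path, the command-line flow is roughly this (a sketch; patch.s and the output names are placeholders for my actual files):
$ m68k-elf-gcc -mcpu32 -c patch.s -o patch.o
$ m68k-elf-objcopy -O srec --only-section=.text patch.o patch.srec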
Thus I've answered my original question.

Building binutils-2.31.1: No linker produced

As part of trying to build a gcc 8.2 cross-compiler (targeting ia64-hp-hpux11.31), I'm running into problems building binutils 2.31.1. The build actually seems to complete just fine. I end up with a bunch of binaries (ar, objdump, strings, etc.), but some important ones like as and ld are missing. I think I configured binutils properly, explicitly enabling ld and disabling gold: ../binutils-2.31.1/configure --target=ia64-hp-hpux11.31 --enable-ld=yes --enable-gold=no.
I scanned through the stdout + stderr output of the entire build process, but didn't find any hints. The only suspicious thing is that configure outputs: checking whether we are cross compiling... no. Shouldn't that say yes, since I'm building for cross compilation? If my understanding of how --build, --host and --target work is correct, shouldn't that imply cross compilation?
I should note this is my first time trying to build a cross-compiler. I should also note that my Linux "machine" is Ubuntu 16.04.2 LTS under the Windows Subsystem for Linux, perhaps this has something to do with it.
My config.log
See the configure script at line 3744:
ia64*-**-hpux*)
# No ld support yet.
noconfigdirs="$noconfigdirs gdb libgui itcl ld"
;;
That causes the ld directory to be skipped during the build.
You should have an assembler though, built as gas/as-new (after make install that will get installed as ia64-hp-hpux11.31-as).
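A quick way to confirm what did and did not get built, run from the binutils build directory (a sketch; the installed name assumes your configured --target prefix):
$ ./gas/as-new --version
$ ls ld 2>/dev/null || echo "ld directory was skipped"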

gcc dll - compiled under Linux

I have a project written with gcc, bison, and flex in a Linux environment. The whole project is built into a *.so file and is called from a python-tkinter graphical interface.
There is a need to run it on Windows. However, I'd like to avoid installing the Windows equivalents of the gcc, bison, and flex programs.
Is it possible to force gcc, in the Linux environment, to compile a Windows DLL instead of a *.so? It would make life easier to use the same technique as I do now: just make calls from the python-tkinter graphical interface.
You can, of course, cross-compile it.
You'll need some packages installed, though.
Your project should build normally if you use the MinGW equivalent of GCC for the target architecture.
Also, take a look at this:
Manual for cross-compiling a C++ application from Linux to Windows?
The linking can be kind of troublesome, though, since there may come a time when soft-linking fails because of library versions. In that case you'll need to create symbolic links to the correct versions.
The output of the compilation should be named with -o DYNAMIC-LIBRARY-NAME.dll, and of course use the -shared flag; for example:
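A minimal sketch of what that might look like with the mingw-w64 cross compiler on Linux (the source file names here are placeholders for your bison/flex output; on Debian/Ubuntu the compiler comes from the mingw-w64 packages):
$ x86_64-w64-mingw32-gcc -shared -o myproject.dll parser.tab.c lex.yy.c
The resulting myproject.dll can then be loaded from Python on Windows the same way you load the .so now.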
Hope it gives you some pointers..
Regards.

seg fault when running arm-elf-gcc compiled code

Using MacPorts, I have just installed arm-elf-gcc onto my MacBook Pro. This worked flawlessly and all seems to run fine.
However, after compiling a simple hello world test program in C and C++ and trying to run either on the target board (an ARM9-based board running Debian Linux), they immediately seg fault.
I'm a bit stuck as to how to go about debugging this, as the target board has limited tools available and no gdb. I have successfully built and run other code using a Linux-hosted cross compiler, so it should work.
Any ideas?
Following the suggestion, I have built and run gdbserver, and I get the following in gdb on the host:
Program received signal SIGSEGV, Segmentation fault.
0x00000000 in ?? ()
I thought it might be a problem with the standard C libs, so I removed any calls and have just an empty main that returns 0; it is compiled with -Wall -g hello-arm.cpp -static. As a test I compiled the same source with a Linux-hosted cross compiler and it runs and exits fine. The only difference I can see is that the Linux-compiled version is over twice the size, plus the difference in output from the file command:
arm-elf-gcc: ELF 32-bit LSB executable, ARM, version 1, statically linked, not stripped
arm-*-linux: ELF 32-bit LSB executable, ARM, version 1, statically linked, for GNU/Linux 2.4.18, not stripped
The usual method of debugging in this situation is to run gdbserver on the target board, and connect to it (via ethernet) with gdb running on a host computer.
Alternately, you could try comparing the assembly in a Mac-compiled "Hello World" program and a (working) Linux-compiled one to see what's different.
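A rough sketch of that setup (the port, IP address, and the exact name of the cross gdb are placeholders; use whichever gdb matches your toolchain):
$ gdbserver :2345 ./hello-arm              # on the target board
$ arm-elf-gdb hello-arm                    # on the host
(gdb) target remote 192.168.0.10:2345
(gdb) continue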
After digging around for a couple of days I am starting to understand a bit more about embedded compilers. I wasn't really sure of the difference between arm-elf-gcc installed via MacPorts and the arm-unknown-linux toolchain I had installed on my Linux box. I just came across a pdf titled "An introduction to the GNU compiler" which contains the following paragraph:
Important: Using the GNU Compiler to create your executable is not quite the same as using the GNU Linker, arm-elf-ld, yourself. The reason is that the GNU Compiler automatically links a number of standard system libraries into your executable. These libraries allow your program to interact with an operating system, to use the standard C library functions, to use certain language features and operations (such as division), and so on. If you wish to see exactly which libraries are being linked into the executable, you should pass the verbose flag -v to the compiler.
This has important implications for embedded systems! Such systems do not usually have an operating system. This means that linking in the system libraries is almost always meaningless: if there is no operating system, for example, then calling the standard printf function does not make much sense.
So when I get back to my dev machine later I will determine the libraries linked in with the Linux build and add them to the arm-elf-gcc build.
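For example, the -v trick from the quote, applied to both toolchains (a sketch; the Linux cross compiler's actual prefix will differ on your machine):
$ arm-elf-gcc -v -Wall -g hello-arm.cpp -static -o hello-arm 2>&1 | tail
$ arm-unknown-linux-gnu-gcc -v -Wall -g hello-arm.cpp -static -o hello-arm 2>&1 | tail
The final collect2/ld line of each shows exactly which startup files and libraries are being pulled in.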
I'll update this when I have more information but I just want to document my findings in case any one else has these problems.

Resources