How and when to use -mbig-endian gcc option on AArch64?

I tried to use the -mbig-endian gcc option on AArch64 (a Raspberry Pi 3 Model B configured for little-endian byte order) with the intention of reading from and writing to memory in big-endian byte order. I got the following error and compilation terminated:
/usr/include/gnu/stubs.h:11:11: fatal error: gnu/stubs-lp64_be.h: No such file or directory
 #include <gnu/stubs-lp64_be.h>
I actually went to that folder and couldn't find that file. Am I missing something?
The gcc online documentation says the -mbig-endian option generates big-endian code. What exactly does that mean?

You are not missing anything, but it seems that even the gcc toolchains provided by Arm don't allow using -mbig-endian with aarch64-linux-gnu-gcc, nor -mlittle-endian with aarch64_be-linux-gnu-gcc: in both cases, a header file related to the 'alien' endianness will be missing.
That probably means that you should just use aarch64_be-linux-gnu-gcc for cross-compiling big-endian aarch64 Linux executables. But you will still not be able to run those executables on a little-endian aarch64 Linux system.
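For completeness, here is a hedged sketch of what compiling for the big-endian target can look like. It assumes the Arm-provided aarch64_be-none-linux-gnu toolchain (or an equivalent aarch64_be-linux-gnu-gcc package) is installed and on PATH; hello.c is a placeholder source file and the file output shown is approximate.
$ aarch64_be-none-linux-gnu-gcc -c hello.c -o hello_be.o
$ file hello_be.o
hello_be.o: ELF 64-bit MSB relocatable, ARM aarch64, version 1 (SYSV), not stripped
The MSB in the file output (as opposed to LSB) is what marks the object as big-endian.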

Related

GCC error with -mcpu32 flag, CPU32 compiler needed

I am patching code into my car's ECU. This has a Motorola MC68376 processor, so I'm using the appropriate CPU32 instruction set.
I want to continue to write in assembly code so that I can explicitly manage control registers, RAM access and allocation, as well as copying code structures which are already in use.
My first patch was successfully compiled in EASy68k, but that program does not support the full CPU32 instruction set. For example, the DIVS.L instruction is not supported, so I cannot take the quotient of a 32-bit value.
Thus, before writing my own compiler out of sheer incompetence with available tools, I'm looking for an easier path. I read that gcc has the capability to compile code for the CPU32, but I have failed to get it to work.
I'm using MinGW's gcc (6.3.0) and Eclipse (2020-03). I added the '-mcpu32' or '-march=cpu32' flags to the compiler call, according to:
https://gcc.gnu.org/onlinedocs/gcc/M680x0-Options.html
Unfortunately this returns an error:
gcc: error: unrecognized command line option '-mcpu32'; did you mean '-mcpu='?
or
error: bad value (cpu32) for -march= switch
May I please have some advice for making this work? Does anyone know of a better CPU32 compiler that works with Eclipse?
I did not understand that gcc is conventionally distributed as prebuilt binaries that are compiled with different functionality, for different targets, to suit the needs of a given user.
There seem to be two paths forward:
1) compile my own cross-compiler version of GCC
2) download a pre-compiled cross-compiler version of GCC
I chose to follow route 2).
I began the process of installing the 'Windows Subsystem for Linux' and Ubuntu 20.04 Focal Fossa, because I found a pre-made compiler that should be capable of performing cross compilation for the m68k processor: "gobjc-10-m68k-linux-gnu"
https://ubuntu.pkgs.org/20.04/ubuntu-universe-i386/gobjc-10-m68k-linux-gnu_10-20200411-0ubuntu1cross1_i386.deb.html
While I was installing that, I also found an m68k-elf gcc toolchain that is pre-compiled for windows 10:
https://gnutoolchains.com/m68k-elf/
I played with the latter for much of today. Although I was unable to get the toolchain integrated well with Eclipse, it works from the command line for compiling a *.s assembly file, including support for the '-mcpu32' flag that I wanted at the outset.
There is still a lot for me to figure out, even after floundering through learning gcc's assembler directives (https://www.eecs.umich.edu/courses/eecs373/readings/Assembler.pdf) and the differences in gcc's assembly syntax compared to the MC68k reference manual (https://www.nxp.com/files-static/archives/doc/ref_manual/M68000PRM.pdf).
I can even convert the code section of the output file into a proper S-record by using objcopy with the '-O srec' and '--only-section=.text' flags; the full command sequence is sketched below. This helps me patch the code into my ECU.
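Putting the pieces together, the end-to-end flow looks roughly like the following; the file names are placeholders and the exact tool prefix depends on which m68k-elf build is installed.
$ m68k-elf-gcc -mcpu32 -c patch.s -o patch.o
$ m68k-elf-objcopy -O srec --only-section=.text patch.o patch.srec
The first command assembles the CPU32 source into an ELF object; the second extracts the .text section as an S-record for patching into the ECU.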
Thus I've answered my original question.

How can I make GCC generate ELF object files?

I need to use the TCC compiler to link object files generated by GCC. However, GCC in MinGW outputs object files in COFF format, and TCC only supports the ELF format. How can I make GCC generate ELF object files?
$ cat test.c
int main(void)
{
    return 0;
}
$ gcc -c test.c
$ file test.o
test.o: MS Windows COFF Intel 80386 object file
$ tcc -c test.c
$ file test.o
test.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
However, GCC in MinGW outputs object files in COFF format
GCC can be configured to generate various outputs (including ELF) regardless of which host it runs on.
That is, a GCC running on Linux could be configured to generate COFF, and a GCC running on Windows could be configured to generate ELF32 or ELF64, for various processors (e.g. x86, or SPARC, or MIPS).
A compiler that runs on one kind of host, but generates code for a different kind, is called a cross-compiler.
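(A quick way to see which single target a given gcc build was configured for is the -dumpmachine option; the outputs below are illustrative, assuming a 32-bit MinGW gcc and a hypothetical Linux-targeting cross gcc named i686-linux-gnu-gcc.)
$ gcc -dumpmachine
mingw32
$ i686-linux-gnu-gcc -dumpmachine
i686-linux-gnu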
TCC only supports the ELF format
On its own this does not pin down a target: it could mean you want GCC to generate ELF32 for i686 Linux, ELF64 for SPARC Solaris, or any number of other processor/OS/bitness combinations.
You should figure out what target processor and operating system you want to run your final executable on, and then build (non-trivial) or download an appropriate Windows-hosted cross-compiler for that target.
file test.o
test.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
OK, so you want a Windows to Linux/i386/ELF32 cross-compiler.
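As a hedged sketch, with such a prebuilt cross-toolchain installed on the Windows host (the i686-linux-gnu- prefix is an assumption; actual prebuilt packages vary in naming), the transcript from the question would become:
$ i686-linux-gnu-gcc -c test.c
$ file test.o
test.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped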
strip might help: it accepts various object file formats for input and output (the bfdname); run strip --info to list the supported formats.
strip -o outputname -O elf32-i386 objfile
Doing this to a 64-bit executable and converting it to 32-bit control headers will lead to nothing but a crash, so pick your output format carefully, and make sure you aren't changing the assumed bit widths or endianness along with the headers.
I'm not running MinGW, so this is not tested; it may not work for your needs, or worse, may jump and catch fire.
You want your compiler (MinGW) to generate binaries that are not of the type usable for your host system (Windows). This is called cross-compiling, and it is a somewhat involved subject -- because to create complete executables you will also need the various libraries: standard libraries, target OS libraries, third-party libraries... so it is not merely the subject of "how do I get the compiler to create ELF", but also "how do I get the whole supporting cast of ELF libs so I can link against them?".
OSDev has quite extensive documentation on the subject of setting up a cross-compiler; however, since you did not tell us what exactly your problem is, it is difficult to advise you further.
If what you want is to generate Linux binaries, my advice would be to not bother with cross-compilation (which is a tricky subject -- and much better supported the other way around, i.e. targeting Windows from Linux), but rather to install a Linux distribution in parallel to your Windows and work natively with that.

Cross-compiling baremetal Rust for Raspberry Pi 3 B from Windows

I'm trying to follow this blog but on Windows and with the latest Rust. It seems to me that the correct way of doing things like this is changing very frequently with Rust, so I'm hoping for an up-to-date Windows adaptation.
What I've tried so far:
I installed gcc-arm-embedded.
I had unverified partial success manually cross-compiling libcore, but then I switched to using the recommended xargo, whose functionality (I read) is on its way to being included in Cargo eventually. While I don't understand any of it very well, I'm hoping to get to the part where I can write and run the code, and then maybe work backwards into understanding the compilation better.
With japaric's awesome help, I was able to get the "aarch64" targeted build working to generate the .o file (as of this particular commit).
And this part seems to verify:
$ file target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o
target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), not stripped
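(For context, the build itself was presumably something along the lines of the command below; the exact invocation and the custom target JSON are assumptions based on the target name visible in the output path.)
$ xargo build --release --target aarch64-raspi3-none-elf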
When I try to use the GNU Arm Embedded Toolchain linker, I get:
$ arm-none-eabi-gcc -O0 -mfpu=vfp -mfloat-abi=hard -march=armv6zk -mtune=arm1176jzf-s -nostartfiles target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o -o kernel.elf
target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o: file not recognized: File format not recognized
collect2.exe: error: ld returned 1 exit status
And helpful people in the #rust IRC channel told me that the RPi3 is aarch64, not 32-bit ARM, so I need to find an aarch64 linker ...
I think it's working! Things I learned:
xargo is good
rpi3 is different enough from rpi2 to cause my problems in tool selection
xargo doesn't care what toolchain rustup defaults to because I'm not asking it to link for me and it does its own toolchain selection
I needed to target aarch64, not arm. For this I used the Linaro aarch64 mingw32 download, unpacked it, and added its bin folder to my PATH. The aarch64 tools were then easy to adapt from the blog; a rough sketch of the link step is below.
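A hedged sketch of that link step, assuming the toolchain's tools use the aarch64-elf- prefix and that kernel8.img is the image name the Pi firmware expects (both are assumptions here; a real bare-metal build will usually also need a linker script):
$ aarch64-elf-ld -o kernel.elf target/aarch64-raspi3-none-elf/release/deps/rust_rasp-ed0c2377e0a7df81.o
$ aarch64-elf-objcopy -O binary kernel.elf kernel8.img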
For people who want to do this themselves, see https://github.com/JasonKleban/rust-rasp . Not so complicated!
I aim to blink the onboard activity LED as confirmation that we really do have control, but it looks like that will be kinda complicated on the RPi3 (see my readme, if still applicable).

Compiling assembly code for aarch64

I have generated an assembly file try.s using the AArch64 instruction set. I want to assemble it on an ARMv8 (AArch64) processor running Ubuntu.
My native compiler is gcc (4.8) and I use the following command to compile:
gcc -o try.o try.s
I am getting the following error:
Error : ARM register expected -- mov x10,x0
It seems like the AArch64 registers are not being recognized, although I thought gcc 4.8 supported AArch64. Can someone tell me what I am missing, or whether there is a special option I should include? Or suggest a native compiler (not a cross-compiler) for AArch64. I would also like to use gdb to debug this natively.
Your gcc targets 32-bit ARM. The 'Xn' registers are not defined in the AArch32 instruction set; that's what the compiler is telling you.
The right toolchain is aarch64-elf-gcc (or aarch64-linux-gnu-gcc for Linux targets).
PS: it's a good idea to give asm files the .S extension (capital S), so that gcc runs the C preprocessor on them.
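As a hedged illustration of assembling for AArch64 (the gcc-aarch64-linux-gnu cross package is assumed here; for bare-metal work the aarch64-elf- prefix would be used instead, and try.S is the renamed source file):
$ aarch64-linux-gnu-gcc -c try.S -o try.o
$ file try.o
try.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), not stripped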

Booting custom kernel on xeon-phi

I am trying to boot a custom kernel on the Xeon Phi instead of the default Linux kernel. At this link, I found a way to cross-compile my kernel, and it compiles successfully with the k1om-mpss-linux-gcc cross compiler. Is cross-compiling enough? I get the error:
mykernel.img is not a k1om Linux bzImage
Edit:
So I used the /usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gcc compiler to compile both a simple helloworld.c program and the kernel source. I get two different results from objdump -f on the executables:
for helloworld.c:
hello: file format elf64-k1om
architecture: k1om, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000400400
for mykernel:
mykernel: file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0010000c
I compiled both using the same compiler, yet they show different architectures. What is the reason for this?
The first thing to do is figure out what mykernel.img is. Try running file on it.
$ file /opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4
/opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4: ELF 64-bit LSB executable, version 1 (SYSV), statically linked, BuildID[sha1]=0xa4c16ee85c11aca4e78dc4ae46d3827fb74289c1, not stripped
$ objdump -f /opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4
/opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4: file format elf64-k1om
architecture: k1om, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000001000000
The answer to your original question - no, unfortunately, it is not as simple as just cross-compiling. There were a number of changes made to the kernel that comes with the MPSS. I don't know all the changes but a big one that I do know is that they had to add support for the larger register set on the coprocessor in order to be able to save state on a context switch.
As to why the file format is elf32-i386 instead of elf64-k1om:
The web site you referenced referred to recompiling the kernel that came with the MPSS after possibly making a few changes to the files. You'll notice that they also copied over a configuration file for the installed version of the kernel, so they had all the files needed to remake the kernel exactly as it had originally been built.
I suspect that, in your case, either a) there was a configuration script of some sort in your source directory that picked up the architecture you were running on and caused confusion when the makefile ran, or b) your makefile had no idea what k1om was. In either case, it fell back to what it believed to be the lowest common denominator, i386. As I say, this is just a suspicion on my part, but a careful reading of your makefiles should lead to the answer.
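One hedged way to test that suspicion is to make the architecture and toolchain prefix explicit on the make command line and then re-inspect the result; the ARCH=x86_64 value and the bare CROSS_COMPILE prefix are assumptions here, and the MPSS kernel tree may need its own configuration on top of this.
$ make ARCH=x86_64 CROSS_COMPILE=/usr/linux-k1om-4.7/bin/x86_64-k1om-linux- vmlinux
$ objdump -f vmlinux
(the output should now report elf64-k1om, like the MPSS-supplied vmlinux above)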
