I have seen in Specifying an architecture in gdb-multiarch :
If I compile a C program with any ARM compiler (e.g. arm-none-eabi-gcc) and afterwards call gdb-multiarch with the binary as second parameter, it will correctly determine the machine type and I can debug my remote application.
I am using mingw-w64-x86_64-gdb-multiarch 12.1-1 on MINGW64 (MSYS2) on Windows 10; and unfortunately, it seems it cannot determine the architecture. I have built an executable for Pico/RP2040 with gcc on this same system, and the system sees it as:
$ file myexecutable.elf
myexecutable.elf: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, with debug_info, not stripped
However, if I try to run this in gdb-multiarch, I get:
$ gdb-multiarch myexecutable.elf
GNU gdb (GDB) 12.1
...
warning: A handler for the OS ABI "Windows" is not built into this configuration
of GDB. Attempting to continue with the default armv6s-m settings.
Reading symbols from myexecutable.elf...
(gdb)
Well, as the warning says, it seems that gdb-multiarch sees this .elf as 'OS ABI "Windows"' - similar to what was noted in Specifying an architecture in gdb-multiarch :
If however I call gdb-multiarch on its own, it will assume my machine type (x86_64) and tries to debug the remote target with the wrong architecture.
... except, here I'm calling gdb-multiarch with the binary as second parameter - and I still have that problem!
The linked question already explains that set architecture arch from within GDB should work; however, I'm not exactly sure what to enter: file says this executable is "ARM, EABI5", and I've tried to derive an architecture label from that, which doesn't quite work:
(gdb) set architecture armeabi5
Undefined item: "armeabi5".
... and I'm not knowledgeable enough about ARM to know exactly what to enter here manually.
Therefore, I would prefer if gdb-multiarch found the architecture on its own automatically.
How can I persuade gdb-multiarch to detect the (ARM) architecture of the compiled file correctly?
You may be trying to use a multiarch GDB suitable for use on Intel architectures only.
You can check by starting GDB and entering the command set architecture without an argument. If this is the case, the list of supported architectures will look like this:
(gdb) set architecture
Requires an argument. Valid arguments are i386, i386:x86-64, i386:x64-32, i8086, i386:intel, i386:x86-64:intel, i386:x64-32:intel, auto.
(gdb)
For debugging a Cortex-M0 target, I would suggest using the GDB provided with the Arm arm-none-eabi toolchain, arm-none-eabi-gdb:
(gdb) set architecture
Requires an argument. Valid arguments are arm, armv2, armv2a, armv3, armv3m, armv4, armv4t, armv5, armv5t, armv5te, xscale, ep9312, iwmmxt, iwmmxt2, armv5tej, armv6, armv6kz, armv6t2, armv6k, armv7, armv6-m, armv6s-m, armv7e-m, armv8-a, armv8-r, armv8-m.base, armv8-m.main, armv8.1-m.main, arm_any, auto.
(gdb)
(gdb) set architecture armv6-m
The target architecture is set to "armv6-m".
(gdb)
The architecture for Cortex-M0/Cortex-M0+ is armv6-m, but I have always been able to debug Cortex-M0 programs with GDB without having to use set architecture.
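If typing it each time is a nuisance, the same command can be passed on the gdb-multiarch command line with -ex, or put into a .gdbinit file; a minimal sketch, assuming armv6-m is the right value for the target (it matches the RP2040's Cortex-M0+ cores):
$ gdb-multiarch -ex "set architecture armv6-m" myexecutable.elf
This does not make the detection automatic, but it keeps the workaround out of the interactive session.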
Related
I am trying to do profiling of code written in C++ with the target architecture RISC-V. The code has been cross-compiled using the RISC-V GNU toolchain. My executable is unit_tests: "ELF 64-bit LSB executable, UCB RISC-V, version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-riscv64-lp64d.so.1, for GNU/Linux 4.15.0, with debug_info, not stripped" - this information is retrieved using the file command.
What I am trying to do is profile this using gprof. But for gprof, a gmon.out needs to be generated, and to generate gmon.out the executable has to be run first. I cannot run an ELF binary built for one architecture on a different architecture. Can anyone suggest an emulator or simulator on which I can run it?
I have tried installing QEMU using the following link:
https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html
but was not able to install it successfully.
I have also tried Spike but got a "bad syscall" error. Any leads on how I can resolve this issue?
I solved this issue using QEMU in user mode, following the instructions in the link below:
Manual-qemu-user
With this I could run the ELF binary generated for the RISC-V target on an x86 Linux machine.
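For reference, a minimal sketch of what the user-mode invocation looks like; the sysroot path is an assumption and must contain /lib/ld-linux-riscv64-lp64d.so.1 and the shared libraries the dynamically linked binary needs:
$ qemu-riscv64 -L /path/to/riscv64-sysroot ./unit_tests
$ gprof ./unit_tests gmon.out > profile.txt
After the program exits, gmon.out is written to the current directory and gprof can be run on it as usual.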
I've written a small go program to be run on a MIPS 32-bit router. I'm able to get a basic hello world program running on the router using the go build toolchain.
env GOOS=linux GOARCH=mips GOMIPS=softfloat go build -a
The program I'm trying to compile uses the go-ethereum library and throws the following error when I try to build:
go build github.com/ethereum/go-ethereum/crypto/secp256k1: build constraints exclude all Go files in ~/go/src/github.com/ethereum/go-ethereum/crypto/secp256k1
I found the go cross-compilation tool xgo and have been successful in building a binary with that tool (https://github.com/karalabe/xgo). When I try to run the binary, though, I get the following: 'Program terminated with signal SIGILL, Illegal instruction'. I was able to get a core dump but I don't have much experience with GDB.
Program terminated with signal SIGILL, Illegal instruction.
#0 0x008274a8 in __sigsetjmp_aux ()
Running layout asm I get the following:
0x8274a4 <__sigsetjmp_aux+4> addiu gp,gp,-19312 │
>│0x8274a8 <__sigsetjmp_aux+8> sdc1 $f20,56(a0) │
│0x8274ac <__sigsetjmp_aux+12> sdc1 $f22,64(a0)
I'm unsure how to interpret this; any help would be much appreciated.
Here is the output of cat /proc/cpuinfo:
system type : Qualcomm Atheros QCA9533 ver 2 rev 0
machine : GL.iNet GL-AR750
processor : 0
cpu model : MIPS 24Kc V7.4
BogoMIPS : 432.53
wait instruction : yes
microsecond timers : yes
tlb_entries : 16
extra interrupt vector : yes
hardware watchpoint : yes, count: 4, address/irw mask: [0x0ffc, 0x0ffc, 0x0ffb, 0x0ffb]
isa : mips1 mips2 mips32r1 mips32r2
ASEs implemented : mips16
shadow register sets : 1
kscratch registers : 0
package : 0
core : 0
VCED exceptions : not available
VCEI exceptions : not available
and the output of the file util on the binary:
ELF 32-bit MSB executable, MIPS, MIPS32 rel2 version 1, statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=83c74323a279af9cba50869671ef03d5ad497db8, not stripped
I've spent quite a lot of time trying to get this program to run, even forking the xgo tool so it can accept the softfloat parameter. Any help or direction on this problem would be greatly appreciated, thanks.
I'm unsure how to interpret this
Google for "MIPS sdc1" shows that this is a floating-point "Store Doubleword from Coprocessor-1" instruction.
A guess: your embedded system doesn't have a floating-point co-processor?
You would likely need to add -msoft-float to your xgo command and rebuild.
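For the C parts of the build (cgo), the flag would go into the C compiler flags rather than the Go flags; a minimal sketch assuming a plain go build rather than xgo, with a hypothetical MIPS cross-compiler name, and note that this alone does not fix prebuilt libraries such as glibc (see the update below):
CGO_ENABLED=1 CC=mips-linux-gnu-gcc CGO_CFLAGS="-msoft-float" \
GOOS=linux GOARCH=mips GOMIPS=softfloat go build -a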
Update:
it is crashing on the same sdc1 call, the registers are the same $f20,56(a0).
Yes, but is it in the same function (__sigsetjmp_aux), or in a different one?
Here is the call I'm building with xgo: xgo --go=1.12 --targets=linux/mips --ldflags '-extldflags "-static -msoft-float"' ~/path/to/project
It looks like the routine __sigsetjmp_aux is coming from GLIBC, which is not built by xgo.
And the version of GLIBC you are using was built without -msoft-float, so you are still linking in the code that expects hardware floating point, that your system lacks.
Step 1: verify where __sigsetjmp_aux is coming from. To do so, you need to pass -y __sigsetjmp_aux to the linker. Maybe --ldflags '-extldflags "-static -msoft-float -Wl,-y,__sigsetjmp_aux"' will do that.
You should see something similar to this:
gcc t.o -Wl,-y,setjmp -static
t.o: reference to setjmp
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libc.a(bsd-setjmp.o): definition of setjmp
Assuming your definition of __sigsetjmp_aux does come from libc.a, you'll need to rebuild it with -msoft-float in CFLAGS.
Note: passing -msoft-float to the linker is wrong and will have no effect -- it's a compiler flag.
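A quick way to check what actually ended up in the binary is to look at the MIPS ABI attributes with readelf; a minimal sketch, where mybinary stands for your cross-compiled executable (the exact fields depend on the binutils version, and the binary must carry a .MIPS.abiflags section):
$ readelf -A mybinary | grep -i "FP ABI"
  FP ABI: Soft float
A hard-float value here means some objects built for hardware floating point were still linked in.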
I am using Buildroot 2018.02 for an IMX6 board and the Linaro external toolchain 2017.11 (based on GCC 7.2.1).
Now I am adding some debug tools, such as gdbserver, on the target.
Everything is OK if I use the option "Build cross gdb for the host", selecting the gdb debugger version 7.11.x for the host along with the gdbserver (BR2_PACKAGE_GDB_SERVER) in the "Target packages > Debugging..." menu. There are also other versions of gdb available in Buildroot, such as 7.12.x and 8.0.x.
However, for an external toolchain the recommended way is the one described at
https://github.com/mbats/eclipse-buildroot-bundle/wiki/Tutorial-:-How-to-debug-a-remote-application-%3F
which means activating only the "Copy gdb server to the Target" option in Buildroot (although the post is a bit old).
I have noticed that the BR2_TOOLCHAIN_EXTERNAL_LINARO_ARM description says that the Linaro gdb is based on gdb 8.0, so a newer version than the one I am using (7.11.x).
But when I do that, I get the following message on the target board:
# gdbserver
-sh: gdbserver: not found
despite the following:
# which gdbserver
/usr/bin/gdbserver
The gdbserver binary is on the target.
How can I fix this in Buildroot?
Moreover, I have two additional questions:
1. Does it really matter to have the Linaro toolchain gdb instead of the gdb 7.11.x that works "out of the box" in my case?
2. If I don't use the Linaro gdb, should I then use gdb version 8.0.x (because the Linaro version is based on GDB 8.0)?
Thank you for your help.
The gdbserver binary in the Linaro 2017.11 toolchain is broken: it requests /usr/lib/ld.so.1 as the program interpreter (see below), but that program interpreter doesn't exist.
You can try to create a symlink /usr/lib/ld.so.1 -> /lib/ld-linux-armhf.so.3 (add that to your filesystem overlay if it works). Alternatively, you can specify the program interpreter explicitly by putting it before the executable, i.e. /lib/ld-linux-armhf.so.3 /usr/bin/gdbserver.
The "program interpreter" is a parameter of an ELF file that points to a program that is used to load the ELF file into memory and to start executing it. The main responsibility of the program interpreter is to find and load the dynamic libraries that the program needs. Thus, it is usually called the "dynamic library loader", or ld.so. It is built and installed together with the toolchain - specifically, the standard C library (glibc). When a program is linked, the linker will also set the program interpreter (it is copied from libc.so). Apparently Linaro did something really peculiar to end up with a wrong program interpreter in the gdbserver executable.
# gdbserver
-sh: gdbserver: not found
despite the following:
# which gdbserver
/usr/bin/gdbserver
Most likely:
The gdbserver is a dynamically linked binary, and
The ELF interpreter that this binary is linked to use is not present on the target system.
Use readelf -l /usr/bin/gdbserver | grep -i interpreter to find out what runtime loader this gdbserver requires. Verify that that file is not present on the target. Copy it to the target and enjoy.
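If readelf is not available on the target, the same check can be run on the build host against a copy of the binary; with the broken Linaro 2017.11 gdbserver described in the other answer, the output would look roughly like this:
$ readelf -l /usr/bin/gdbserver | grep -i interpreter
      [Requesting program interpreter: /usr/lib/ld.so.1]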
I am trying to boot a custom kernel on Xeon Phi instead of the default Linux kernel. At this link, I found a way to cross-compile my kernel, which compiles successfully using the k1om-mpss-linux-gcc cross compiler. Is cross-compiling enough? I get the error:
mykernel.img is not a k1om Linux bzImage
Edit:
So, I used the /usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gcc compiler to compile a simple helloworld.c program and the kernel source. I get two different results from objdump -f on the executables.
for helloworld.c:
hello: file format elf64-k1om
architecture: k1om, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000400400
for mykernel:
mykernel: file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0010000c
I compiled using the same compiler, yet they show different architectures. What is the reason for this?
The first thing to do is figure out what mykernel.img is. Try running file on it.
$ file /opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4
/opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4: ELF 64-bit LSB executable, version 1 (SYSV), statically linked, BuildID[sha1]=0xa4c16ee85c11aca4e78dc4ae46d3827fb74289c1, not stripped
$ objdump -f /opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4
/opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4: file format elf64-k1om
architecture: k1om, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000001000000
The answer to your original question - no, unfortunately, it is not as simple as just cross-compiling. There were a number of changes made to the kernel that comes with the MPSS. I don't know all the changes but a big one that I do know is that they had to add support for the larger register set on the coprocessor in order to be able to save state on a context switch.
As to why the file format is elf32-i386 instead of elf32-k1om -
The web site you referenced referred to recompiling the kernel that came with the MPSS after possibly making a few changes in the files. You'll notice that they also copied over a configuration file for the installed version of the kernel. So they had all the files to remake the kernel exactly as it had been made.
I suspect that, in your case, either a) there was a configuration script of some sort in your source directory that picked up the architecture you were running on and caused confusion when the makefile ran, or b) your makefile had no idea what k1om was. In either case, it fell back to what it believed to be the lowest common denominator, i386. As I say, this is just a suspicion on my part, but a careful reading of your makefiles should lead to the answer.
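If it is the second case, the fix is usually to tell the kernel build system explicitly which architecture and cross compiler to use rather than letting it guess; a minimal sketch, where the ARCH value for the MPSS kernel tree is an assumption on my part (the MPSS sources are what add the k1om support, the stock kernel does not know it):
$ make ARCH=k1om CROSS_COMPILE=x86_64-k1om-linux- oldconfig
$ make ARCH=k1om CROSS_COMPILE=x86_64-k1om-linux- bzImage
oldconfig here reuses the configuration file copied from the installed MPSS kernel, as the referenced web site describes.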
Using MacPorts I have just installed arm-elf-gcc onto my MacBook Pro. This worked flawlessly and all seems to run fine.
However, after compiling a simple hello world test program in C and C++ and trying to run either on the target board (an ARM9 based board running Debian Linux) they immediately seg fault.
I'm a bit stuck as to how to go about debugging this, as the target board has limited tools available and no gdb. I have successfully built and run other code using a Linux-hosted cross compiler, so it should work.
Any ideas?
Following the suggestion I have built and run gdbserver, I get the following in gdb on the host:
Program received signal SIGSEGV, Segmentation fault.
0x00000000 in ?? ()
I thought it might be a problem with the standard C libs, so I removed any calls and have just an empty main that returns 0; it is compiled with -Wall -g hello-arm.cpp -static. As a test I compiled the same source with a Linux-hosted cross compiler and it runs and exits fine. The only differences I can see are that the Linux-compiled version is over twice the size, and the output from the file command:
arm-elf-gcc: ELF 32-bit LSB executable, ARM, version 1, statically linked, not stripped
arm-*-linux: ELF 32-bit LSB executable, ARM, version 1, statically linked, for GNU/Linux 2.4.18, not stripped
The usual method of debugging in this situation is to run gdbserver on the target board, and connect to it (via ethernet) with gdb running on a host computer.
Alternatively, you could try comparing the assembly in a Mac-compiled "Hello World" program and a (working) Linux-compiled one to see what's different.
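A minimal sketch of that comparison, with hypothetical file and tool names (the exact objdump prefix depends on how each toolchain was installed):
$ arm-elf-objdump -d hello-macports > macports.asm
$ arm-unknown-linux-gnu-objdump -d hello-linux > linux.asm
$ diff macports.asm linux.asm | less
The startup code around _start is the first place to look, since the crash address 0x00000000 suggests something goes wrong before main is ever reached.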
After digging around for a couple of days I am starting to understand a bit more about embedded compilers. I wasn't really sure of the difference between the arm-elf-gcc installed via MacPorts and the arm-unknown-linux toolchain I had installed on my Linux box. I just came across a PDF titled "An introduction to the GNU compiler" which contains the following paragraph:
Important: Using the GNU Compiler to create your executable is not quite the same as using the GNU Linker, arm-elf-ld, yourself. The reason is that the GNU Compiler automatically links a number of standard system libraries into your executable. These libraries allow your program to interact with an operating system, to use the standard C library functions, to use certain language features and operations (such as division), and so on. If you wish to see exactly which libraries are being linked into the executable, you should pass the verbose flag -v to the compiler.

This has important implications for embedded systems! Such systems do not usually have an operating system. This means that linking in the system libraries is almost always meaningless: if there is no operating system, for example, then calling the standard printf function does not make much sense.
So when I get back to my dev machine later I will determine the libraries linked in with the Linux build and add them to the arm-elf-gcc build.
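Following the quoted advice, a minimal sketch of that check with the verbose flag (the tool and file names follow the ones used above):
$ arm-elf-gcc -v -Wall -g hello-arm.cpp -static -o hello-arm
$ arm-unknown-linux-gnu-gcc -v -Wall -g hello-arm.cpp -static -o hello-arm-linux
The collect2/ld line near the end of each output lists the crt startup objects and -l libraries being linked, which is where the two builds should differ.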
I'll update this when I have more information, but I just want to document my findings in case anyone else has these problems.