Getting a segmentation fault when trying to build my GCC backend

I am currently trying to write a GCC backend for a new architecture, but when I try to compile it I get the following error message:
xgcc: internal compiler error: Segmentation fault signal terminated program cc1
The build is configured with the following command:
../gcc/configure --prefix=$HOME/GCC-10.0.1 --disable-bootstrap --target=arch_name --enable-languages=c
How would I go about fixing this error so that I can build my backend?
As far as I am aware, I have implemented the target macros, functions and insn patterns required to get GCC to build.
Sorry that the question is a bit vague; I am not sure what extra information I can provide. If more specific information is needed, please let me know and I will edit the question.
Thanks in advance.

How would I go about fixing this error so that I can build my backend?
Debug cc1.
xgcc is located in $builddir/gcc. Hence run $builddir/gcc/xgcc -B$builddir/gcc -v -save-temps <options-that-crash-cc1>.
xgcc -v ... will print the sub-commands it is calling; record the options it supplies to the crashing cc1 call.
Run a debugger against that cc1 call, supplying the same options, and put a breakpoint on abort (which will actually be fancy_abort).
Build the compiler without optimization; it's enough to rerun make in $builddir/gcc for that. You can supply additional options if you like, e.g. make -j4 cc1 CXXFLAGS='<flags-to-pass>'.
$builddir/gcc provides a .gdbinit that augments gdb with additional hooks to improve the debugging experience.
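Putting that together, a minimal sketch of the whole loop might look like the following (test.c and the flag values are placeholders, assuming the crash reproduces on a small input file):

cd $builddir/gcc
make -j4 cc1 CXXFLAGS='-O0 -g'        # rebuild cc1 without optimization, keeping debug info
./xgcc -B. -v -save-temps test.c      # -v prints the exact cc1 command line that crashes
gdb --args ./cc1 <options-printed-by-xgcc> test.c
(gdb) break fancy_abort
(gdb) run

Running gdb from inside $builddir/gcc also lets it pick up the .gdbinit mentioned above (you may need to allow it via gdb's auto-load safe-path setting).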

Related

ld: too many sections (90295)

I am trying to build a Haskell project from Ludum Dare, but whenever I attempt the build I get an error message saying the object file has too many sections. Here is the error:
C:\Users\REDACTED\AppData\Local\Programs\stack\x86_64-windows\ghc-8.10.2\lib\../mingw/bin\ld.exe: .stack-work\dist\a3a5fe88\build\HSsingletons-2.7-J1xRPYS9ah3kGEIOoeLuX.o: too many sections (90295)
singletons > C:\Users\REDACTED\AppData\Local\Programs\stack\x86_64-windows\ghc-8.10.2\lib\../mingw/bin\ld.exe: final link failed: file too big
-- While building package singletons-2.7 using:
C:\Users\REDACTED\AppData\Local\Temp\stack-5ba10ebdb151d9fa\singletons-2.7\.stack-work\dist\a3a5fe88\setup\setup --builddir=.stack-work\dist\a3a5fe88 build --ghc-options " -fdiagnostics-color=always"
Process exited with code: ExitFailure 1
I am using stack 2.3.3 and Windows 10. The project uses the vulkan library.
I tried adding -opta-mbig-obj, but gcc then failed with error: unrecognized command line option '-mbig-obj'
It looks like you may need to try explicitly using the “large object” file format, which I believe you can do by adding -opta-mbig-obj or -Wa,-mbig-obj to the GHC flags in the project’s build config (package.yaml or .cabal file) to add -mbig-obj to the assembler options. You may also need to add --oformat pe-bigobj-x86-64 to the linker flags, using (I think) -optl--oformat -optlpe-bigobj-x86-64 or -Wl,--oformat,pe-bigobj-x86-64. Are you using a 32-bit MinGW? I would expect MinGW64 to handle this by default. (And I’m not actually sure whether 32-bit supports these flags, so you may need to upgrade anyway.)
Since about a year ago (https://gitlab.haskell.org/ghc/ghc/-/commit/1ef90f990da90036d481c830d8832e21b8f1571b), GHC already passes -mbig-obj to the assembler and --oformat pe-bigobj-x86-64 to the linker on 64-bit MinGW. Adding these flags manually will not help on recent GHC versions.
I was able to replicate this problem for both the sdl2 and vulkan Haskell packages using Stack; however, neither of them exhibits this issue when compiled with Cabal (and --enable-split-sections) on Windows, so this looks to be a bug in Stack.
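If you want to try the Cabal route, I believe a minimal invocation looks like this (assuming the project already builds with cabal-install; the flag is the same one mentioned above):

cabal build --enable-split-sections

Equivalently, split-sections: True should be settable in a cabal.project file.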

GCC error with -mcpu32 flag, CPU32 compiler needed

I am patching code into my car's ECU. This has a Motorola MC68376 processor, so I'm using the appropriate CPU32 instruction set.
I want to continue to write in assembly code so that I can explicitly manage control registers, RAM access and allocation, as well as copying code structures which are already in use.
My first patch was successfully assembled in EASy68K, but that program does not support the full instruction set of the CPU32. For example, the DIVS.L instruction is not supported, so I cannot take the quotient of a 32-bit value.
Thus, before writing my own compiler out of sheer incompetence with available tools, I'm looking for an easier path. I read that gcc has the capability to compile code for the CPU32, but I have failed to get it to work.
I'm using MinGW's gcc (6.3.0) and Eclipse (2020-03). I added the '-mcpu32' or '-march=cpu32' flags to the compiler call, according to:
https://gcc.gnu.org/onlinedocs/gcc/M680x0-Options.html
Unfortunately this returns an error:
gcc: error: unrecognized command line option '-mcpu32'; did you mean '-mcpu='?
or
error: bad value (cpu32) for -march= switch
May I please have some advice for making this work? Does anyone know of a better CPU32 compiler that works with Eclipse?
I had not understood that gcc is conventionally distributed as prebuilt binaries, each compiled with different functionality (in particular, different target support) to suit the needs of a given user.
There seem to be two paths forward:
1) compile my own cross-compiler version of GCC
2) download a pre-compiled cross-compiler version of GCC
I chose to follow route 2).
I began the process of installing the 'Windows Subsystem for Linux' and Ubuntu 20.04 Focal Fossa, because I found a pre-made compiler that should be capable of performing cross compilation for the m68k processor: "gobjc-10-m68k-linux-gnu"
https://ubuntu.pkgs.org/20.04/ubuntu-universe-i386/gobjc-10-m68k-linux-gnu_10-20200411-0ubuntu1cross1_i386.deb.html
While I was installing that, I also found an m68k-elf gcc toolchain that is pre-compiled for windows 10:
https://gnutoolchains.com/m68k-elf/
I played with the latter for much of today. Although I was unable to get the toolchain integrated well with Eclipse, it works from the command line to compile a *.s assembly code file. This includes compatibility with the '-mcpu32' flag that I wanted at the outset.
There is still a lot for me to figure out, even after floundering through learning gcc's assembler directives (https://www.eecs.umich.edu/courses/eecs373/readings/Assembler.pdf) and the differences in gcc's assembly syntax compared to the MC68k reference manual (https://www.nxp.com/files-static/archives/doc/ref_manual/M68000PRM.pdf).
I can even convert the code section of the output file to a proper S-record by using objcopy with the '-O srec' and '--only-section=.text' flags. This helps me patch the code into my ECU.
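For reference, the sequence ends up looking roughly like this (file names are placeholders, not the exact ones I used):

m68k-elf-gcc -mcpu32 -c patch.s -o patch.o                        # assemble the CPU32 patch
m68k-elf-objcopy -O srec --only-section=.text patch.o patch.srec  # extract .text as an S-record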
Thus I've answered my original question.

Can binutils Be Built Without libiberty? Or Can report_times Be Disabled?

TLDR: Getting fatal error 'failed to get process times' on cross-native build of gcc. Can I remove the report_times code from gcc.c, OR use a gcc command line option to disable report_times, OR build gcc without libiberty (which contains pex_get_times, used by report_times)?
DETAIL
After beating my head against various problems I've (finally) successfully used the Android NDK standalone toolchain to build binutils 2.23 and gcc 4.7.0.
My current problem is getting it to run on my device.
I've written a standard 'hello world' (copied from here) to test gcc on my device. When I run:
arm-linux-eabi-gcc hello.c -o hello
or:
arm-linux-eabi-gcc hello.c
I get the following error:
arm-linux-eabi-gcc: fatal error: failed to get process times: No such file or directory.
Google did not return much except for links to the gcc.c source. Examining the source, I found the error in a function (module? extension?) called report_times. The error is returned by the function pex_get_times... I'm guessing it does so if it can't get the process times.
The pex_get_times function (module? extension? I'm not sure what it is) is defined in libiberty. I can use --disable-build-libiberty, but it doesn't help for the host (my NookHD) gcc build.
My question(s):
Can this portion of gcc.c be safely (and easily) removed...i.e. the report_times function and everything associated with it?
or
Is there a command line option to tell arm-linux-eabi-gcc NOT to use report_times?
or
Is there a way to disable build of libiberty for host/target for both gcc and binutils, and would that fix the error?
As always...I'll keep researching while awaiting an answer.
Found this about an hour after posting this question. Maybe two.
Apparently report_times is part of the debugging symbols (?) for GCC. To exclude report_times (which causes the 'failed to get process times' error from the original question), you have to build the non-debug, or release, version of gcc.
To do this, I used info from this link: http://www-gpsg.mit.edu/~simon/gcc_g77_install/build.html
BUT I omitted -g from LIBCXXFLAGS and LIBCFLAGS, and added LIBCPPFLAGS without -g just in case. I then ran make DESTDIR=/staging/install/path install-host, tarballed the result and transferred it to the device. No more 'failed to get process times' error.
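Roughly, the build step looked like the following (the -O2 values are only an example; the point is that -g is omitted everywhere):

make LIBCFLAGS='-O2' LIBCXXFLAGS='-O2' LIBCPPFLAGS='-O2'
make DESTDIR=/staging/install/path install-host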
I am seeing another error, but it is not related to this question

gdb won't run in tui mode

I'm trying to debug a program (actually I just want to understand it at the assembly level). Using gdb is OK, but in TUI mode it would be just great; unfortunately, when I'm debugging in TUI mode while displaying the assembly and the source code (-g option in gcc), I get an error saying: error while reading shared library symbols
I can run the program if I do not show the assembly code, but that is not what I want; I really want to step through every assembly line to fully understand the program. Also, when I try this with si I sometimes get an error, for example in printf, but that's another story.
So, any tips? Note: this is not a bug in my program; I tried this with other programs.
Your shared libraries were not compiled with symbols enabled. You usually need to look for "debug" versions of the gcc libraries (or of the other libraries you are linking against). If you are building custom libraries yourself, add the -g option to the gcc commands used to compile them.
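For a library you build yourself, that might look like this (file and program names are made up for illustration):

gcc -g -fPIC -c mylib.c -o mylib.o      # compile the library code with debug symbols
gcc -shared -o libmylib.so mylib.o
gcc -g main.c -L. -lmylib -o prog
gdb -tui ./prog                         # run with LD_LIBRARY_PATH=. so the library is found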

CUDA: Debug with -deviceemu and gdb

I wrote a CUDA application that has some hardcoded parameters in it (via #defines). Everything seemed to work right, so I tried some other parameters. Now, the program doesn't work correctly anymore.
So, I want to debug it. I compile the application with -deviceemu -g -O0 options, because I read that I can then use gdb to debug it. In gdb, I set a breakpoint at the kernel start using break kernelstart.
However, gdb jumps to the start of my CUDA kernel, but I cannot step through it, because it doesn't let me inspect things within the kernel. I think it's best if I give the output of gdb:
Breakpoint 1, kernelstart (__cuda_0=0x100000, __cuda_1=0x101000, __cuda_2=0x102000, __cuda_3=0x102100) at cudatest.cu:287
(gdb) s
__device_stub__Z12kernelstartPjS_S_S_ (__par0=0x100000, __par1=0x101000, __par2=0x102000, __par3=0x102100) at /tmp/tmpxft_000003c4_00000000-1_cudatest.cudafe1.stub.c:7
7 /tmp/tmpxft_000003c4_00000000-1_cudatest.cudafe1.stub.c: No such file or directory.
in /tmp/tmpxft_000003c4_00000000-1_cudatest.cudafe1.stub.c
(gdb) s
cudaLaunch<char> (entry=0x804a98d "U\211\345\203\354\030\213E\024\211D$\f\213E\020\211D$\b\213E\f\211D$\004\213E\b\211\004$\350\r\377\377\377\311\303U\211\345\203\354\070\307\004$\340 \005\b\350\345\341\377\377\243P!\005\b\307\004$x\234\004\b\350\b\001") at /usr/local/cuda/bin/../include/cuda_runtime.h:773
(gdb) s
(gdb) s
cudatest (__cuda_0=0x100000, __cuda_1=0x101000, __cuda_2=0x102000, __cuda_3=0x102100) at cudatest.cu:354
(gdb) s
After this, it jumps back to my main procedure.
I know that my specifications are more than vague, but can anybody guess where the problem is? Is it possible to inspect kernels using gdb?
Use cuda-gdb
Compile: nvcc -g -G filename.cu
Invoke cuda-gdb on your a.out
You can set a breakpoint inside your kernel function as usual.
Run the program, and it should stop inside your kernel function.
You can even get details of the thread currently being executed using commands like cuda thread; other commands like cuda block also exist.
To switch between threads, say cuda thread (x,y,z).
For more details refer to the latest version of cuda-gdb's documentation. If you are using the latest version of the CUDA toolkit (i.e., 3.2 as of today), make sure you are looking at the latest version of the documentation (as the options have changed a lot).
And also make sure you are running cuda-gdb from a console (outside X11), since you are stopping your GPU for debugging.
Hope this helps.
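A minimal session, using the kernel name from the question (the thread coordinates below are just an example):

nvcc -g -G cudatest.cu -o cudatest
cuda-gdb ./cudatest
(cuda-gdb) break kernelstart
(cuda-gdb) run
(cuda-gdb) cuda thread              # show the thread currently in focus
(cuda-gdb) cuda thread (1,0,0)      # switch focus to a different thread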
Compiling with:
nvcc -g -G --keep
fixed this problem for me. This ensures all the intermediate files generated during compilation are not erased so that the debugger can find them.
