GCC flag to ignore instruction dependencies

I am trying to decompile a piece of code and apparently gcc prefers less complicated instructions over the more complex ones. After reading this answer I suspect it's because gcc is trying to reduce instruction dependencies. Is there any way to instruct gcc to ignore these dependencies?

If it's about decompiling, then it is not GCC's business but Binutils': the disassembler, not the compiler, decides how instructions are printed. Check your host Binutils' objdump options for a way to disable printing instructions in their simplified (alias) representation.
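For example, objdump's -M (--disassembler-options) flag controls details like this; the accepted values are target-specific. A sketch, assuming a target whose disassembler supports the no-aliases option (RISC-V and MIPS do):

objdump -d -M no-aliases yourbinary.elf   # print raw instructions, not pseudo-instruction aliases
objdump --help                            # the help text lists the -M values your build supports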

Related

Compile single static library for Cortex M3, M4, M23 and M33

I'm currently working on a rather generic communication stack. It gets bytes in on one end, parses the packet and calls a callback.
I want to have this stack in a static library (i.e. libcommstack.a).
The library is aimed towards embedded ARM Cortex-M devices. At the moment we have specified that at least a Cortex-M3 should be used (but it should also work for an M4 or M33).
Right now I'm integrating it into another application to verify that linking it is possible. In the future the idea is that we will ship this .a file to customers so they can build their application around it, without having direct access to our sources (to encapsulate our IP).
We are using GCC ARM v7.2.1 to compile both the library and the application that is linked to it.
The application I'm trying to integrate it with is compiled for a Cortex-M33 with -mfloat-abi=hard -mfpu=fpv5-sp-d16.
The code for the library does not use any floating point and is compiled with -march=armv7-m (both builds use the -mthumb flag).
Linking seemed to all go well, until I actually called a function from the lib. At that point the linker starts to complain:
application.elf uses VFP register arguments, libcommstack.a(somefile.c.obj) does not
failed to merge target specific data of file libcommstack.a(somefile.c.obj)
Since I'm not using floating point in the library and I don't know (up front) whether the target application has an FPU (or even uses floats), I'm not sure how to approach this.
I figured there would be two approaches:
Compile a single version of the lib, using an instruction set that all of the microcontrollers understand. I was hoping that this would be the case with ARMv7 (although I'm not yet 100% confident that the M23/M33 also support this).
Compile a lot of different libs for the different flavors based on the different architectures, FPU, etc.
As you can imagine, I would prefer to keep it simple and go for option 1, but I'm not sure how to "convince" the linker to link these two (or perhaps how to convince the compiler NOT to care about floating points for the lib).
Does anyone know if option 1 is feasible and how it can be achieved?
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
Does anyone know if option 1 is feasible
Well, feasible, probably.
how it can be achieved?
Get all the processors you want to support and determine the instructions sets available on all these processors. Then compile for that instruction set.
But, please don't, that is a workaround.
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
GCC has something like "multilib profiles". See the output of arm-none-eabi-gcc --print-multi-lib. If you have newlib installed, look into /usr/arm-none-eabi/lib/thumb/ and see the directories there: newlib is compiled once per profile, a separate library is installed for each one, and a different library is picked up depending on the compile configuration. Compile your library for each of those profiles and package it by putting each build into the matching /usr/arm-none-eabi/lib/<profile> directory; the compiler will then pick the right one up by itself (see gcc -v output for the library search paths). Newlib's own build does exactly this (I couldn't find the exact spot in its sources to link to). With cmake as the build backend, for example, you could compile and install all profiles as follows:
arm-none-eabi-gcc --print-multi-lib |
while IFS=';' read -r dir opts; do
    # the options field uses '@' where the command line would have ' -'
    flags=$(printf '%s' "$opts" | sed 's/@/ -/g')
    cmake -B "builddir/$dir" -DCMAKE_C_FLAGS="$flags" -DCMAKE_INSTALL_LIBDIR="lib/$dir"
    cmake --build "builddir/$dir"
    cmake --install "builddir/$dir" --prefix /usr/arm-none-eabi
done
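To see what the linker is actually comparing, you can also dump the ARM build attributes of both sides; Tag_ABI_VFP_args is the attribute behind the "uses VFP register arguments" message. A small sketch, using the file names from the question:

arm-none-eabi-readelf -A application.elf | grep Tag_ABI_VFP_args   # "VFP registers" on the hard-float side
arm-none-eabi-readelf -A libcommstack.a | grep Tag_ABI_VFP_args    # absent (base AAPCS) on the soft-float side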

Is there a way to build RISC-V GNU tools for multiple architectures?

I cloned and built the RISC-V GNU toolchain. I built the Newlib version for the RV32I architecture (--with-arch=rv32i). However, I also need an RV32IM build. The problem is that if I build the compiler for RV32IM and compile my RISC-V code for RV32I, the compiler emulates multiply/divide operations for the RV32I architecture but still uses mul instructions. I think this happens because there are mul instructions in the libgcc.a file, since that's the architecture it was built for. This is why I want two separate builds, i.e. RV32I and RV32IM.
Is this possible? If so, how can I achieve this?
I guess a multilib version could be built. riscv-none-embed-gcc-xpack is such a multilib build (without libgloss), though I haven't delved into the details of its implementation. Hope it helps.
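If you build from the riscv-gnu-toolchain repo yourself, a multilib build is a configure-time switch. A sketch, assuming a current checkout of that repo (check ./configure --help for the exact profile set it generates):

./configure --prefix=/opt/riscv --enable-multilib   # builds libgcc/newlib once per arch/ABI combination
make
riscv64-unknown-elf-gcc -march=rv32i -mabi=ilp32 -c main.c   # the matching libgcc is then picked per -march/-mabi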

link SO against libbfd

I need to link my SO against libbfd, for the purpose of having human-readable backtraces.
Static linking against libbfd.a fails because it's not compiled with -fPIC, so, as I understand it, it can only be linked into an executable.
Though linking against libbfd.so also gives some troubles.
I need to compile on both Ubuntu-14.04 and Debian Wheezy 7.8
And they have non-intersecting sets of binutils versions: Ubuntu has 2.24, and Debian has 2.22 and 2.25. The problem is that gcc doesn't record the symlink name libbfd.so as the dependency but uses the SONAME instead, so I end up with either libbfd-2.24-system.so or libbfd-2.25-system.so in my dependencies.
For now I see several approaches:
There's some hidden flag that allows overriding the SONAME during linking. This is the preferred path.
Compiling libbfd by hand, if there is no other way. I would avoid this as much as possible.
Manual dlopen+dlsym for everything I need.
I read the answer gcc link shared library against symbolic link, but it suggests changing the SONAME, which I'm not able to do.
Any suggestions?
Thanks.
EDIT: it seems that virtually all static libs in the Ubuntu repos are not position-independent; I can't guess why. Combined with the inability to override the SONAME, it makes things much more complicated.
Not sure whether I understand your (4-year-old) problem correctly, but having had similar problems with libbfd, I found this solution:
Using the linker flag -lbfd seems to work; it tells g++ to link against libbfd.
My full command was g++ loader.cc -lbfd.
At least for me, link-time errors à la "unknown function" were solved by this.
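If the versioned SONAME recorded in the binary is still a problem, one possible workaround (an assumption on my part, not something -lbfd required for me) is to rewrite the NEEDED entry afterwards with patchelf; the versioned name below is the one from the question:

g++ loader.cc -lbfd -o loader
readelf -d loader | grep NEEDED   # shows e.g. libbfd-2.24-system.so
patchelf --replace-needed libbfd-2.24-system.so libbfd.so loader   # depend on the unversioned symlink instead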

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gccs and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference: enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat ld files equally. Both also use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project under an OCD, then there may be a difference between the OpenOCD debugger implementations, and the commands sent to the debugger to configure it could differ as well.
If you are producing a hex file, then this could differ too, as the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
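A quick way to check is to ask each compiler for its effective defaults and diff the results between the two toolchains. A sketch; use each toolchain's own driver name (WinARM shipped arm-elf-gcc, YAGARTO arm-none-eabi-gcc), and note that -Q --help=target needs roughly GCC 4.3 or newer, so it may not exist on the 4.1.2 side:

arm-none-eabi-gcc -Q --help=target        # effective target options and their defaults
arm-none-eabi-gcc -dumpspecs              # full spec strings, worth diffing between toolchains
arm-none-eabi-gcc -dM -E - </dev/null     # predefined macros, also worth diffing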

Cross compile Boost 1.40 for VxWorks 6.4

I'm trying to migrate a project which uses Boost (particularly boost::thread and boost::asio) to VxWorks.
I can't get boost to compile using the vxworks gnu compiler. I figured that this wasn't going to be an issue as I'd seen patches on the boost trac that purport to make this possible, and since the vxworks compiler is part of the gnu tool chain I should be able to follow the directions in the boost docs for cross compilation.
I'm building on windows for a ppc vxworks.
I changed the user-config.jam file as specified in the boost docs and used the target-os=linux option to bjam, but bjam appears to hang before it can compile. Closer inspection of the commands issued by bjam (by invoking it with the -n option) reveals that it's trying to compile with boost::thread's win32 files. This can't be right, as vxworks uses pthreads.
My bjam command: .\bjam --with-thread toolset=gcc-ppc target-os=linux (gcc-ppc is set in user-config to point to the g++ppc VxWorks cross compiler).
What am I doing wrong? I believe I have followed the docs to the letter.
If it's #including win32 headers instead of the pthread ones, there could be a discrepancy between the set of macros your compiler is defining and the macros the boost headers are checking for. I had a problem like that with the smart pointer headers, which in an older version of boost would check for __ppc but my compiler defined __ppc__ (or vice versa, can't remember).
touch empty.cpp
ccppc -dD -E empty.cpp   # -dD keeps the #define directives in the preprocessed output
That will show you what macros are predefined by your compiler.
I never tried to compile boost for VxWorks, since I only needed a few of the headers.
Try also adding
threadapi=pthread
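That is, the full command from the question would become:

.\bjam --with-thread toolset=gcc-ppc target-os=linux threadapi=pthread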
The documentation you mention is for Boost.Build (which is a standalone build tool), and the above flag is specific to the Boost.Thread library. What do you mean by "hang"? Because the Boost libraries are huge, it can take a long time to scan dependencies prior to building.
If it actually hangs, can you catch bjam in a debugger and produce a backtrace? Also, log of any output will help.
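For instance, a sketch with stock gdb (on the Windows host this assumes a gdb build is available):

gdb --args ./bjam --with-thread toolset=gcc-ppc target-os=linux
(gdb) run         # let it hang, then interrupt with Ctrl-C
(gdb) backtrace   # shows where bjam is stuck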
