gcc option does not work until newer versions - gcc

I have recently been struggling with gcc's optimizations in the hope of improving performance.
I found the option -freorder-blocks-and-partition.
(This option has existed since very old versions of gcc, such as gcc 5.6.)
It splits the cold parts of a function out into a separate function; together with -ffunction-sections and a linker script, hot and cold code can then be placed into separate ELF sections (.text and .text.cold).
But this option only works on recent gcc versions.
I tested it on gcc 10.1, and used the Compiler Explorer links below to confirm that it works starting from version 8.1:
In gcc 8.1 (https://godbolt.org/z/hGcnMM), the assembly contains a function like this:
fa(int, int) [clone .cold.0]:
In gcc 7.1 (https://godbolt.org/z/rhqjE1), the assembly contains no such function.
Why does it not work on older gcc versions?
And is there any way to make an older gcc apply this optimization?
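For illustration, a function shaped something like this gives gcc a cold path to split out (the body, the helper, and the expectation hint here are made up; the godbolt links above show the actual code):

extern void log_error(int a, int b);   /* hypothetical cold-path helper */

int fa(int a, int b)
{
    if (__builtin_expect(b == 0, 0)) { /* hint: this branch is unlikely, i.e. cold */
        log_error(a, b);               /* may be split out as fa(int, int) [clone .cold.0] */
        return -1;
    }
    return a / b;                      /* hot path stays in the main section */
}

/* built with something like:
   gcc -O2 -freorder-blocks-and-partition -ffunction-sections -c fa.c */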


What is lto1.exe?

When you inspect MinGW you will find the C compiler cc1.exe (funny how it grows in size; in 9.2 it is almost 25 MB), cc1plus.exe, the C++ compiler (same size), and collect2.exe, the linker (AFAIK). But what is lto1.exe?
I googled it for more than an hour but found no information. Is there any good reading available on this? Thanks.
PS: I suspect it may be related to link-time optimization, but I found nothing more about it and would like to understand it better.
There is also the question of what gcc.exe, g++.exe, mingw32-gcc.exe, and mingw32-g++.exe are, then.
I need more information; the more the better. Thanks.
This is not MinGW / Windows specific; it's a feature / component of GCC.
what is lto1.exe?
It's the LTO compiler :o). LTO code is basically byte-code that is written when compiling with -flto, where "lto" or "LTO" stands for "link-time optimization".
These optimizations are not performed by the linker but by the compiler at link time to perform global optimizations when (byte-)code from all the modules is available. The flow is as follows:
1. The C/C++ compiler (cc1 for C or cc1plus for C++) compiles C/C++ to byte-code and writes it to the assembly file *.s.
2. The assembly is assembled by the assembler to object code *.o as usual. The lto byte-code is shipped in dedicated sections.
3. At link time, the linker (plugin) calls back into the compiler and supplies all the objects and libraries as requested. The byte-code is extracted, and the lto1 compiler compiles it to final machine code.
4. The final machine code is assembled by the assembler to object code.
5. The linker links / locates these objects into the final executable.
You can see the byte-code in the lto sections by compiling with -save-temps and having a look at the saved *.s files. Recent versions of GCC don't even bother writing assembly code; they just write lto code. To also see the assembly code, specify -ffat-lto-objects. Notice however that this is not the final code.
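As a concrete sketch of that flow (file and program names are illustrative):

gcc -O2 -flto -c a.c                     # cc1 writes byte-code into a.o's lto sections
gcc -O2 -flto -c b.c
gcc -O2 -flto a.o b.o -o app             # the linker plugin hands both objects back to lto1
gcc -O2 -flto -save-temps -c a.c         # keep the intermediate *.s files for inspection
gcc -O2 -flto -ffat-lto-objects -c a.c   # additionally emit regular (non-final) object code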
funny how it grows in size; in 9.2 it is almost 25 MB
The sizes of the GCC executables depend not only on the GCC major version, but also very much on how well the compiler that was used to build GCC optimizes.
[Edit] Some information on LTO can be found on the GCC wiki for LTO. Notice however that this page is no longer active. One go-to place for gory GCC internals is the gcc-help@gcc.gnu.org mailing list, where all the developers are. There is also a section about LTO in the GCC internals documentation.

How to switch android-ndk example to use gcc instead of clang?

How do I switch the following example to use gcc?
https://github.com/googlesamples/android-ndk/tree/master/other-builds/ndkbuild/hello-libs
Is there a way to set this via the Gradle files?
As of the latest NDK (r18), gcc is no longer supported and you are forced to use clang.
I faced some problems switching to clang, and I am considering switching to CMake, as is recommended.
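For reference, on older NDKs (r17 and earlier) the toolchain could be selected in Application.mk; a sketch, no longer accepted from r18 on:

# Application.mk (NDK r17 and earlier only)
NDK_TOOLCHAIN_VERSION := 4.9

With the ndkBuild external native build in Gradle, the same variable can be passed through the arguments list of externalNativeBuild.ndkBuild in build.gradle, e.g. arguments "NDK_TOOLCHAIN_VERSION=4.9".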
About the argument that gcc is better
gcc produces more optimized binaries than clang in android NDK
While there is some truth to this, much of the discussion suggests that it is because gcc implicitly uses -Bsymbolic, which is bad. You can find an in-depth conversation on the ndk GitHub repo here:
https://github.com/android-ndk/ndk/issues/495
It's quite long, but very insightful.

How does one find what C++11 features have been implemented given a GLIBCXX version

Given a GLIBCXX version of the libstdc++ library (for example GLIBCXX_3.4.17), where would one find documentation which specifies what features have been implemented?
Further, is there a way, given the SONAME version, to find this same document?
I am working on an embedded system which has an existing version of libstdc++; unfortunately the supplied cross compiler (g++) is at a newer version than what the libstdc++ library on the target supports. Upgrading the libstdc++ library on the target is not an option. Before I write a lot of code, only to find that it does not run on the target, I would like to know beforehand what is and is not supported.
I have found the GNU documentation to be useful; however, I am hoping there is a document from which one can tell what has been implemented given the symbol version and/or the SONAME, and that I have somehow missed it.
Thanks in advance for any help.
where would one find documentation which specifies what features have been implemented?
You can map a GLIBCXX_A.B.C symbol version to a GCC release by checking
https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
N.B. that won't be precise, because e.g. GCC 5.1 and GCC 5.2 both use GLIBCXX_3.4.21 in the shared library. To tell them apart check the __GLIBCXX__ macro defined by the libstdc++ headers, also documented on that page.
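For example, to list the versions exported by the library on the target and the header macro of your cross-compiler (paths are illustrative):

strings /usr/lib/libstdc++.so.6 | grep GLIBCXX                       # symbol versions the target library provides
echo '#include <string>' | g++ -x c++ -E -dM - | grep __GLIBCXX__    # date macro identifying the header release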
The manuals for libstdc++ releases are at
gcc.gnu.org/onlinedocs/gcc-[X.Y.Z]/libstdc++/manual/
e.g.
https://gcc.gnu.org/onlinedocs/gcc-5.3.0/libstdc++/manual/
Within that manual is a status table showing the implementation status for each standard; the table for C++11 support in GCC 5.3.0 is at
https://gcc.gnu.org/onlinedocs/gcc-5.3.0/libstdc++/manual/manual/status.html#status.iso.2011
Before I write a lot of code, only to find that it does not run on the target, I would like to know beforehand what is and is not supported.
It's not enough to simply avoid using features that aren't supported by the library on the target system. If you link with the cross-compiler, then your program will depend on the libstdc++.so from that cross-compiler, and it will fail to run on the target system if the target only has an older libstdc++.so.
Upgrading the libstdc++ library on the target is not an option.
Then you either need to link statically (creating large executables) or downgrade your cross-compiler to match the target. Or at least force it to use the headers and dynamic library from the same version as found on the target, by overriding the header and library search paths to point to copies of the older files. That might not work, though: the newer g++ might not be able to compile the older headers if they contain some invalid C++ that the older g++ didn't diagnose.
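A sketch of those two options (paths are made up):

# 1. statically link the C++ runtime into the executable
g++ main.cpp -static-libstdc++ -static-libgcc -o app

# 2. compile against copies of the target's older headers and library
g++ main.cpp -nostdinc++ -isystem /opt/target/include/c++/4.8.3 -L/opt/target/lib -o app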

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences, and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gcc binaries and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference - enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
Seen that happen a lot of times.
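A hypothetical illustration of the kind of layout assumption that breaks:

#include <string.h>

struct msg {
    char tag;
    int  value;   /* offset depends on the padding chosen by the compiler */
};

int read_value(const unsigned char *buf)
{
    int v;
    /* WRONG: assumes 'value' starts at byte 1; with typical alignment it
       starts at byte 4, and the amount of padding is not guaranteed.
       Portable code would use offsetof(struct msg, value). */
    memcpy(&v, buf + 1, sizeof v);
    return v;
}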
Try different optimization options when compiling and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat .ld files equally. Also, both use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an on-chip debugger, then there may be a difference between the OpenOCD implementations shipped with the two toolchains. Also, the commands sent to the debugger to configure it could be different.
If you are producing a hex file, then it could differ, as the two toolchains do not use the same version of the newlib library.
To be on the safe side, make sure that in both cases the correct binutils come first in the PATH.
If I were you, I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling with an instruction set extension that isn't supported by your CPU.
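One way to compare the two toolchains' defaults (the -Q form needs gcc 4.3 or later, so the WinARM 4.1.2 tools may not support it; the tool prefix is illustrative):

arm-elf-gcc -Q --help=target    # effective target defaults (CPU, ABI, FPU)
arm-elf-gcc -dumpspecs          # built-in spec strings
arm-elf-gcc -v -c test.c        # verbose: configure options and the exact sub-commands run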

Cross compile Boost 1.40 for VxWorks 6.4

I'm trying to migrate a project which uses Boost (particularly boost::thread and boost::asio) to VxWorks.
I can't get Boost to compile using the VxWorks GNU compiler. I figured this wasn't going to be an issue, as I'd seen patches on the Boost Trac that purport to make it possible, and since the VxWorks compiler is part of the GNU toolchain, I should be able to follow the directions in the Boost docs for cross compilation.
I'm building on Windows for a PPC VxWorks target.
I changed the user-config.jam file as specified in the Boost docs, and used the target-os=linux option to bjam, but bjam appears to hang before it can compile. Closer inspection of the commands issued by bjam (by invoking it with the -n option) reveals that it's trying to compile boost::thread's win32 files. This can't be right, as VxWorks uses pthreads.
My bjam command:
.\bjam --with-thread toolset=gcc-ppc target-os=linux
gcc-ppc is set in user-config.jam to point to the g++ppc VxWorks cross compiler.
What am I doing wrong? I believe I have followed the docs to the letter.
If it's #including win32 headers instead of the pthread ones, there could be a discrepancy between the set of macros your compiler is defining and the macros the boost headers are checking for. I had a problem like that with the smart pointer headers, which in an older version of boost would check for __ppc but my compiler defined __ppc__ (or vice versa, can't remember).
touch empty.cpp
ccppc -dD -E empty.cpp
That will show you what macros are predefined by your compiler.
I never tried to compile boost for VxWorks, since I only needed a few of the headers.
Try also adding
threadapi=pthread
The documentation you mention is for Boost.Build -- which is a standalone build tool -- and the above flag is specific to the Boost.Thread library. What do you mean by "hang"? Because the Boost libraries are huge, it can take a long time to scan dependencies prior to building.
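Combined with the original invocation, that would be something like:

.\bjam --with-thread toolset=gcc-ppc target-os=linux threadapi=pthread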
If it actually hangs, can you catch bjam in a debugger and produce a backtrace? Also, log of any output will help.
