Unrolling gcc compiler optimization

I am interested in seeing where gcc has actually optimized my code. Is there a way I can do this?
I have gone through a few other similar questions and have tried the following:
-Wa,-ahl=filename.lst :- this option is really good, since you can browse the source alongside the corresponding machine code, but it is not very useful once I enable the -O3 option.
Dumping the optimized tree :- I am sure gcc is giving me a good amount of debug information, but I do not know how to decipher it. I would be glad if someone could point me to any available information.
Is there any other, better way to find out which parts of the code gcc optimized?
Thanks,
Madhur

You can compile the code twice, first with:
$ gcc -O0 -S yourfile.c -o yourfile_o0.s
Then with:
$ gcc -O3 -S yourfile.c -o yourfile_o3.s
Then you can diff the two resulting assembly files:
$ diff -u yourfile_o0.s yourfile_o3.s
$ vim -d yourfile_o0.s yourfile_o3.s
$ emacs --eval '(ediff "yourfile_o0.s" "yourfile_o3.s")'
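For example, with a trivial source file like the one below (yourfile.c is just an assumed name for this sketch), the -O3 output typically folds the entire loop into a constant, which stands out immediately in the diff:
/* yourfile.c: a loop that -O3 can usually fold away completely */
int sum(void)
{
    int s = 0;
    for (int i = 0; i < 100; i++)
        s += i;
    return s;   /* at -O3 this typically compiles down to "return 4950" */
}
If you would rather read GCC's own dumps than raw assembly, compiling with -fdump-tree-optimized writes a C-like dump of each function after the tree-level optimizers have run, which can be easier to map back to the source than the .s file.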

Look at the assembler code or decompile your compiled application. C decompilers produce ugly C code, but for analyzing which code was generated, it has to suffice.


Use custom stdlib and libc with GCC

I am using GCC and I want to read and load the stdlib/libc stuff from a location other than /usr/include and /usr/lib. I tried to copy them to another place and compile like this, but it doesn't work. I am not surprised that this naive approach didn't work, but it was worth a try.
gcc -nostdlib -nolibc -I<custompath>/include -L<custompath>/lib -xc test.c
Could someone nudge me in the right direction here?
With this command:
gcc -nostdlib -nolibc ...
you are asking GCC to not link with libc.
Of course it doesn't work (if your program is using libc functions). What did you expect?
Start by dropping these two flags. And if the result doesn't work then, tell us exactly what doesn't work (by editing your question).
See also the documentation for the --sysroot option.
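A rough sketch of that approach, assuming <custompath> is laid out like a normal root (i.e. it contains usr/include and usr/lib underneath it) and your GCC was built with sysroot support:
gcc --sysroot=<custompath> test.c -o test
With --sysroot, GCC resolves its default header and library search paths relative to <custompath> instead of /, so there is no need to disable the standard library with -nostdlib/-nolibc at all.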

GCC entrypoint address

I am compiling a very basic "hello world" program with gcc, with this command line:
gcc -m32 prog_cible.c -o prog_cible
I am very surprised of the entry point address:
readelf -h prog_cible
...
Entry point: 0x420
I have turned off ASLR with this command:
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
I think this cannot be the real entry point.
I suppose a base address is added to 0x420?
In the past, 10 years ago, readelf gave me the real entry point. What has changed since then?
Thanks
I think this cannot be the real entry point.
You are correct. Your gcc is likely configured to build PIE binaries by default. A PIE binary is really a special form of a shared library.
If you look at the type of the binary (which readelf -h also printed), you'll see that it is DYN, not EXEC.
You can disable PIE with gcc -m32 -no-pie ..., and then your entry point will look something like 0x8048420.
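A quick way to see both behaviours side by side (the _pie/_nopie names are only for this comparison, and the exact addresses will vary with your toolchain):
gcc -m32 prog_cible.c -o prog_cible_pie
readelf -h prog_cible_pie          # Type: DYN, entry point around 0x420
gcc -m32 -no-pie prog_cible.c -o prog_cible_nopie
readelf -h prog_cible_nopie        # Type: EXEC, entry point around 0x8048420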

When should I use ld instead of gcc?

I want to know when I should use the ld linker instead of gcc. I just wrote a simple hello world in C++; of course I include the iostream library. If I want to make a binary file with gcc, I just use:
g++ -o hello hello.cpp
and I've got my binary file.
Later I tried to use the ld linker. To get the object file I use:
g++ -c hello.cpp. OK, that was easy, but the link command was horribly long:
ld -o hello.out hello.o \
-L /usr/lib/gcc/x86_64-linux-gnu/4.8.4/ \
/usr/lib/gcc/x86_64-linux-gnu/4.8.4/crtbegin.o \
/usr/lib/gcc/x86_64-linux-gnu/4.8.4/crtend.o \
/usr/lib/x86_64-linux-gnu/crti.o \
/usr/lib/x86_64-linux-gnu/crtn.o \
/usr/lib/x86_64-linux-gnu/crt1.o \
-dynamic-linker /lib64/ld-linux-x86-64.so.2 -lstdc++ -lc
I know for a fact that gcc uses ld.
Is using gcc better in all cases, or just in most cases? Please tell me something about the cases where the ld linker has an advantage.
As you mentioned, gcc merely acts as a front-end to ld at link time; it passes all the linker directives (options, default/system libraries, etc.) and makes sure everything fits together nicely by taking care of all these toolchain-specific details for you.
I believe it's best to consider the GNU toolchain as a whole, tightly integrated environment (as anyone with an experience of building toolchains for some exotic embedded platforms with, say, dietlibc integration will probably agree).
Unless you have some very specific platform integration requirements, or have reasons not to use gcc, I can hardly think of any advantage of invoking ld directly for linking. Any extra linker-specific option you may require could easily be specified with the -Wl, prefix on the gcc command line (if not already available as a plain gcc option).
It is mostly a matter of taste: you would use ld directly when the command-lines are simpler than using gcc. That would be when you are just using the linker to manipulate a small number of shared objects, e.g., to create a shared library with few dependencies.
Because you can pass options to ld via the -Wl option, often people will recommend just using gcc to manage the command-line.
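If you want to see the exact link command gcc constructs for you, which is a good way to learn what a hand-written ld invocation needs to contain, you can ask the driver to print its steps; a minimal sketch, assuming hello.o already exists:
g++ -v -o hello hello.o      # performs the link and prints the collect2/ld command line it used
g++ -### -o hello hello.o    # only prints the commands it would run, without executing them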

What's the "correct" way to determine target and architecture for GNU binutils?

In my build chain, I need to do this:
objcopy -I binary -O $BFDNAME -B $BFDARCH <this> <that>
in order to get a binary file into library form. Because I want other people to be able to use this, I need to know how to get $BFDNAME and $BFDARCH from their toolchain when they run the build. I can get the values locally by running objdump -f against a file I've already built, but is there a better way which won't leave me compiling throw-away files just to get configuration values?
Thank you for pointing this out, regularfry! Your answer helped me to find another solution which works without specifying the architecture at all:
ld -r -b binary -o data.o data.txt
On my system (Ubuntu Linux, binutils 2.22) both objcopy and ld approaches produce identical object files.
All credit goes to:
http://stupefydeveloper.blogspot.de/2008/08/cc-embed-binary-data-into-elf.html
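For completeness, a minimal sketch of how the object produced by the ld -r -b binary command above can be consumed from C; the _binary_data_txt_* symbol names are derived from the input file name (data.txt here), so adjust them if your file is named differently:
#include <stdio.h>

/* symbols created by: ld -r -b binary -o data.o data.txt */
extern const char _binary_data_txt_start[];
extern const char _binary_data_txt_end[];

int main(void)
{
    size_t len = (size_t)(_binary_data_txt_end - _binary_data_txt_start);
    fwrite(_binary_data_txt_start, 1, len, stdout);   /* dump the embedded file */
    return 0;
}
Build it with something like: gcc main.c data.o -o dump_data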
For future reference, the answer seems to be this: the first entry in the output of objdump -i is the default, native format of the system.

using openmp with a makefile and g++

I am building a large project with a makefile that was originally built with icpc, and now I need to get it running with g++.
When it compiles the file that uses OpenMP, it uses the -c flag and doesn't use any libraries, so it ends up being serial instead of parallel. None of the examples I am seeing use this -c flag.
Is there some way to compile without linking, but using openmp?
edit:
I've been using the -lgomp flag (and the library is on the library path):
g++ -lgomp -c -w -O4 mainS.cpp
g++: -lgomp: linker input file unused because linking not done
Edit: my boss made several mistakes in the code, the makefile, and the documentation. Sorry to have wasted your time; at least it was less than the 5 hours I spent on it =/
Are you passing the flag to enable OpenMP (IIRC it's something like -fopenmp)? If you don't, chances are the compiler will ignore the OpenMP-related primitives and just produce serial code.
I don't think that -c (i.e., compile only, don't link) has anything to do with your problem.
Perhaps the documentation helps...
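For reference, a minimal sketch of the split compile/link with OpenMP enabled, assuming mainS.cpp is the only translation unit involved; the key point is that -fopenmp must be passed at both the compile and the link step (at link time it pulls in libgomp for you, so the explicit -lgomp is unnecessary):
g++ -fopenmp -c -w -O3 mainS.cpp
g++ -fopenmp mainS.o -o mainS
If you want to check that OpenMP is actually active, a tiny test like the one below prints one line per thread when built with -fopenmp, and only a single line if you link against libgomp but forget -fopenmp, which is essentially the situation in the question:
#include <omp.h>
#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    printf("hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());
    return 0;
}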
