Compiler flags which make reverse engineering harder - gcc

Recently I read that certain compiler flags can prevent reverse engineering, or at least make it much more complicated. I'm using these flags:
-s -O3 -Os -fdata-sections -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -Wl,--gc-sections
Is this enough protection, or have I maybe used too many flags?
I'm using MinGW-W64 x86_64-posix 11.3.0
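For reference, -O3 and -Os are both optimization levels, and GCC honors whichever -O option comes last on the command line, so the set above effectively compiles with -Os. A deduplicated invocation might look like this (a sketch; the toolchain name and file names are placeholders, and g++ is assumed since -fvisibility-inlines-hidden is a C++-only option):
x86_64-w64-mingw32-g++ -s -Os -fdata-sections -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -Wl,--gc-sections -o app.exe main.cpp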

Related

Problem with autoconf not calling gcc with -Wall warnings

I have a simple project with a simple configure.ac script:
AC_INIT(...)
AM_INIT_AUTOMAKE([-Wall -Werror foreign])
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES(...)
AC_OUTPUT
I'm using GNU Autoconf version 2.69 (OpenSUSE Linux with gcc 9.2.1), but gcc is being called with no warning flags:
gcc -DHAVE_CONFIG_H -I. -I.. -g -O2 -MT aprog.o -MD -MP -MF .deps/aprog.Tpo -c -o aprog.o aprog.c
mv ...
gcc -g -O2 -o aprog aprog.o -lgmp
In particular, I found that -Wformat wasn't working. Shouldn't -Wall include -Wformat? And shouldn't the warning flags appear on the make command line? If I run the gcc line directly with -Wformat, the warning shows up during compilation, but it doesn't when I go through autoconf, configure, and make.
What am I doing wrong?
The -Wall flag in the AM_INIT_AUTOMAKE(...) invocation refers to warnings from automake and related tools like aclocal, not to compiler warnings. You will see these warnings when you are running autoreconf.
Note that while you can also add -Werror to AM_INIT_AUTOMAKE(...) to make your autoreconf run fail on warnings, many common macros (like those shipped with gettext or libtool) still use deprecated macros, which generates a warning. With -Werror you therefore cannot use that standard set of tools, so in many cases -Werror is not very useful.
If you want to add compiler options, there are third-party macros (e.g. AX_CHECK_COMPILE_FLAG from the autoconf-archive) that test whether the compiler recognizes a given option; you can then append the accepted options to a variable and use that variable where needed. That is a different Stack Overflow question, though.
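For completeness, a stock Autotools project also lets you pass warning flags straight through CFLAGS at configure time, with no extra macros involved (a sketch; the exact flags are just examples):
./configure CFLAGS="-g -O2 -Wall -Wformat"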

Negate previous -D[efine] flag for GCC

Under GNUStep on Arch Linux, I'm running into an interesting error on a fresh install.
Using my build system I run
gcc `gnustep-config --debug-flags` [other command line args]
in order to build up the command line per the operating system's necessary flags.
This works fine under Ubuntu, but on Arch Linux I'm getting a rather random error:
/usr/include/features.h:328:4: error: #warning _FORTIFY_SOURCE requires compiling with optimization (-O) [-Werror=cpp]
Well, gnustep-config --debug-flags spits out the following:
-MMD -MP -D_FORTIFY_SOURCE=2 -DGNUSTEP -DGNUSTEP_BASE_LIBRARY=1 -DGNU_GUI_LIBRARY=1 -DGNU_RUNTIME=1 -DGNUSTEP_BASE_LIBRARY=1 -fno-strict-aliasing -pthread -fPIC -g -DDEBUG -fno-omit-frame-pointer -Wall -DGSWARN -DGSDIAGNOSE -Wno-import -march=x86-64 -mtune=generic -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -fgnu-runtime -fconstant-string-class=NSConstantString -fexec-charset=UTF-8 -I. -I/home/qix/GNUstep/Library/Headers -I/usr/include -D_FORTIFY_SOURCE=2 -I/usr/include -I/usr/include -I/usr/include -I/usr/lib/libffi-3.1/include/ -I/usr/lib/libffi-3.1/include -I/usr/include/libxml2 -I/usr/include/p11-kit-1
Also, I don't want optimizations in my debug builds (and later on I even override GNUStep's -g parameter with -g2).
Is there a way to explicitly undefine -D_FORTIFY_SOURCE later on in the command line, after the call to gnustep-config?
For example, something like
gcc `gnustep-config --debug-flags` -U_FORTIFY_SOURCE ...
where the -U undefines the previously defined macro?
Something to mention: I have -Werror enabled on purpose, and I'd like to keep it.
For now, using sed works around this. It appears to be a known issue with _FORTIFY_SOURCE, and there isn't a straightforward fix.
`gnustep-config --debug-flags | sed 's/-D_FORTIFY_SOURCE=2//g'`
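In context, the workaround slots into the original invocation like this (a sketch; the source file name and the trailing arguments are placeholders):
gcc `gnustep-config --debug-flags | sed 's/-D_FORTIFY_SOURCE=2//g'` -g2 -Werror -c MyClass.m -o MyClass.o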

About the -ffunction-sections, -fdata-sections and --gc-sections options

In my ARM project, I use the following to build an OS-less (bare-metal) application binary:
arm-linux-gcc -c -Os -ffunction-sections -fdata-sections -o boot.o boot.S
arm-linux-gcc -c -Os -ffunction-sections -fdata-sections -o main.o main.c
arm-linux-ld -T link.lds --gc-sections -o target.bin boot.o main.o
This works fine: if I remove the -ffunction-sections, -fdata-sections and --gc-sections options, the target.bin file size nearly doubles.
But on the x86 platform, using the same method, I found that if I don't use those gcc and ld options the output is normal, but the output file is 0 bytes if I use the same options as on the ARM platform.
-Os -ffunction-sections -fdata-sections and --gc-sections should work on an x86 system. Are you sure your program and your linker script are suitable for x86? As your program is meant for bare-metal ARM, it probably does not provide an entry point for your x86 OS, and if there is no entry point, everything is garbage-collected by the --gc-sections option.
By the way, your "question" doesn't actually contain a question.
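One way to check is to give the linker an explicit entry symbol and ask it to report everything it discards (a sketch; the entry symbol and file names are assumptions):
ld -T link.lds --gc-sections -e _start --print-gc-sections -o target.elf boot.o main.o
If the entry symbol is unknown to the linker, nothing is reachable, and every section shows up in the --print-gc-sections listing.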

Bootloader issues due to GCC-4.7.0

This is a weird problem. I have a custom bootloader for the MIPS 34Kc processor that was consistently booting my target. It was compiled with GCC-4.2.4. Recently we moved to GCC-4.7.0, and the bootloader now fails to boot the target every time.
The compiler options are as follows:
W_OPTS = -Wimplicit -Wformat -Werror
CC_OPTS = -c -O -mips32r2 $(W_OPTS) -fomit-frame-pointer -fno-pic -nostdinc -mno-abicalls
CC_OPTS_16 = -c -O -mips16 $(W_OPTS) -fomit-frame-pointer -fno-pic -nostdinc -mno-abicalls
CC_OPTS_A = $(CC_OPTS) -D_ASSEMBLER_
Any pointers to debug this issue would be helpful.
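A common first step for a regression like this is to diff the disassembly produced by the two compilers and inspect what changed in the early boot path (a sketch; the toolchain prefix and file names are assumptions):
mips-linux-gnu-objdump -d boot-gcc424.elf > gcc424.dis
mips-linux-gnu-objdump -d boot-gcc470.elf > gcc470.dis
diff -u gcc424.dis gcc470.dis | less
Newer GCC releases optimize more aggressively, so code relying on undefined behavior (strict aliasing violations, missing volatile on device registers) is a frequent culprit.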

Does my CUDA kernel really run on the device, or is it mistakenly being executed by the host in emulation?

I just got my GPU-enabled video card and started playing with CUDA. Just to get my head straight about blocks and threads, I wrote a simple kernel that stores its identifiers to shared memory that I later copy back to the host and print. But then I thought, why not simply use printf inside the kernel function? I tried that, even though I believed it was impossible. Here is what my attempt looked like:
__global__ void
printThreadXInfo (int *data)
{
    int i = threadIdx.x;
    data[i] = i;
    printf ("%d\n", i);
}
...but all of a sudden I saw the output in the console. Then I searched the developer's manual and found printf mentioned in the section about device emulation. It said that device emulation provides the benefit of running host-specific code in the kernel, like calling printf.
I don't really need to call printf. But now I am a little bit confused. I have two assumptions. The first is that NVidia's developers implemented some device-specific printf that, transparently to the developer, reaches the calling process, executes the standard printf function, and takes care of the memory copying, etc. That sounds a bit crazy. The other assumption is that the code I compiled somehow runs in emulation rather than on a real device. But that doesn't sound right either, because I measured the performance of adding two numbers over a 1-million-element array, and the CUDA kernel does it about 200 times faster than I can on a CPU. Or does it run in emulation only when it detects host-specific code? If that is true, why am I not given a warning?
Please help me sort it out. I am using an NVidia GeForce GTX 560 Ti on Linux (Intel Xeon, 1 CPU with 4 physical cores, 8 GB of RAM, if that matters). Here is my nvcc version:
$ /usr/local/cuda/bin/nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2011 NVIDIA Corporation
Built on Thu_May_12_11:09:45_PDT_2011
Cuda compilation tools, release 4.0, V0.2.1221
And here is how I compile my code:
/usr/local/cuda/bin/nvcc -gencode=arch=compute_20,code=\"sm_21,compute_20\" -m64 --compiler-options -fno-strict-aliasing -isystem /opt/boost_1_46_1/include -isystem /usr/local/cuda/include -I../include --compiler-bindir "/usr/local/cuda/bin" -O3 -DNDEBUG -o build_linux_release/ThreadIdxTest.cu.o -c ThreadIdxTest.cu
/usr/local/cuda/bin/nvcc -gencode=arch=compute_20,code=\"sm_21,compute_20\" -m64 --compiler-options -fno-strict-aliasing -isystem /opt/boost_1_46_1/include -isystem /usr/local/cuda/include -I../include --compiler-bindir "/usr/local/cuda/bin" -O3 -DNDEBUG --generate-dependencies ThreadIdxTest.cu | sed -e "s;ThreadIdxTest.o;build_linux_release/ThreadIdxTest.cu.o;g" > build_linux_release/ThreadIdxTest.d
g++ -pipe -m64 -ftemplate-depth-1024 -fno-strict-aliasing -fPIC -pthread -DNDEBUG -fomit-frame-pointer -momit-leaf-frame-pointer -fno-tree-pre -falign-loops -Wuninitialized -Wstrict-aliasing -ftree-vectorize -ftree-loop-linear -funroll-loops -fsched-interblock -march=native -mtune=native -g0 -O3 -ffor-scope -fuse-cxa-atexit -fvisibility-inlines-hidden -Wall -Wextra -Wreorder -Wcast-align -Winit-self -Wmissing-braces -Wmissing-include-dirs -Wswitch-enum -Wunused-parameter -Wredundant-decls -Wreturn-type -isystem /opt/boost_1_46_1/include -isystem /usr/local/cuda/include -I../include -L/opt/boost_1_46_1/lib -L/usr/local/cuda/lib64 -lcudart -lgtest -lgtest_main build_linux_release/ThreadIdxTest.cu.o ../src/build_linux_release/libspartan.a -o build_linux_release/ThreadIdxTest
... and by the way, both host code and kernel code are mixed in one source file with a .cu extension (maybe I am not supposed to do that, but I saw this style in the SDK examples).
Your help is highly appreciated. Thank you!
As of CUDA 3.1 (if I remember the version correctly), there is no more device emulation; printf is now supported in kernels.
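For what it's worth, device-side printf requires a GPU with compute capability 2.0 or higher, which the compile line in the question already targets; a minimal sketch of such an invocation (the file name is a placeholder):
nvcc -arch=sm_21 -o ThreadIdxTest ThreadIdxTest.cu
Note that the output buffer is flushed at synchronization points such as cudaDeviceSynchronize() or normal program exit, so a kernel launch alone may not show the output immediately.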
