trouble building universal binary with autotools - macos

I have a project that I'm building on OS X using autotools. I'd like to build a universal binary, but putting multiple -arch options in OBJCFLAGS conflicts with gcc's -M (which automake uses for dependency tracking). I can see a couple workarounds, but none seems straightforward.
Is there a way to force preprocessing to be separate from compilation (so -M is given to CPP, while -arch is handed to OBJC)?
I can see that automake supports options for disabling dependency tracking, and for enabling an older style of tracking when it can't be done as a side effect of compilation. Is there a way to force the use of the older style of tracking even when the side-effect-based tracking is available?
I don't have any experience with lipo. Is there a good way to tie it into the autotools workflow?

This Apple Technical Note looks promising, but it's something I haven't done. I would think you'd only need to do a universal build when preparing a release, so perhaps you can do without dependency tracking?

There are a few solutions here, and likely one that has evaded me.
The easiest and fastest is to add --disable-dependency-tracking to your run of ./configure.
This tells configure not to generate dependencies at all. The dependency phase is what's killing you: the -M dependency options are used during code generation, which can't be done when multiple architectures are being targeted.
So this is 'fine' if you are doing a clean build of someone else's package, or if you don't mind doing a 'make clean' before each build. If you are hacking at the source, especially header files, this is no good: make probably won't know what to rebuild and will leave you with stale binaries.
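For example, a one-shot universal release build might look like this (a sketch; the arch list is illustrative, and OBJCFLAGS is assumed from the question):
./configure --disable-dependency-tracking OBJCFLAGS="-arch i386 -arch x86_64"
make clean && make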
Better, but more dangerous, is to do something like:
CC=clang CXX=clang++ ./configure
This will make the compiler clang instead of gcc. You have clang if you have a recent Xcode. configure will realize that clang meets the compilation requirements, but will also decide that it isn't safe for automatic dependency generation. Instead of disabling dependency generation, it will fall back to old-style two-pass generation.
One caveat: this may or may not work as I described, depending on how you set your architecture flags. If you have flags you want passed to all compiler invocations (e.g. -I for include paths), set CPPFLAGS. For code generation, set CFLAGS and CXXFLAGS for C and C++ (and OBJCFLAGS for Objective-C). Typically you would add $CPPFLAGS to those. I typically whip up a shell script something like:
#!/bin/bash
export CC=clang
export CXX=clang++
# CPPFLAGS reach every compile; arch and codegen flags live in CFLAGS/CXXFLAGS
export CPPFLAGS="-isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5 -fvisibility=hidden"
export CFLAGS="-arch i386 -arch x86_64 -O3 -fomit-frame-pointer -momit-leaf-frame-pointer -ffast-math $CPPFLAGS"
export CXXFLAGS=$CFLAGS
./configure
You probably don't want these exact flags, but it should get you the idea.
lipo. It sounds like you've been down this road. I've found the best way is as follows:
a. Make top-level directories like .x86_64 and .i386. Note the '.' in front: if you target a build into the source directory, the name usually needs to start with a dot to avoid screwing up 'make clean' later.
b. Run ./configure with something like --prefix=$(pwd)/.i386 and however you set the architecture (in this case to i386).
c. Run make and make install, and assuming it all went well, run make clean and make sure the stuff is still in .i386. Repeat for each architecture (a loop sketch follows the lipo script below). The make clean at the end of each phase is pretty important, as the reconfigure may change what gets cleaned, and you really want to make sure you don't pollute one architecture with another architecture's files.
d. Assuming you have all your builds the way you want them, I usually write a shell script something like the following to run at the end, which will lipo things up for you.
#!/bin/bash
# move the working builds for posterity and debugging
mkdir -p ./Build/lib
mv .i386 ./Build/i386
mv .x86_64 ./Build/x86_64
for path in ./Build/i386/lib/*
do
    file=${path##*/}
    # only convert 'real' files, not symlinks
    if [ -f "$path" -a ! -L "$path" ]; then
        partner="./Build/x86_64/lib/$file"
        if [ -f "$partner" -a ! -L "$partner" ]; then
            target="./Build/lib/$file"
            lipo -create "$path" "$partner" -output "$target" || { echo "Lipo failed to get phat"; exit 5; }
            echo "Universal binary created: $target"
        else
            echo "Skipping: $file, no valid architecture pairing at: $partner"
        fi
    else
        # this is a pretty common case, openssl creates symlinks
        # echo "Skipping: $file, NOT a regular file"
        true
    fi
done
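For reference, steps (b) and (c) can be wrapped up in a loop like this (a sketch; the architecture list and flags are illustrative, adjust for your project):
#!/bin/bash
for ARCH in i386 x86_64; do
    ./configure --prefix="$(pwd)/.$ARCH" CFLAGS="-arch $ARCH" || exit 1
    # build and install into the per-arch prefix, then scrub the tree
    make && make install || exit 1
    make clean
done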
What I have NOT figured out is the magic that would let me use gcc and old-school two-pass dependency generation. Frankly, I care less and less each day as I become more and more impressed with clang/llvm.
Good luck!

Related

Why does a target archive behave like a .PHONY target in a Makefile?

I have a simple Makefile that builds an archive, libfoo.a, from a single object file, foo.o, like this:
CC=gcc
CFLAGS=-g -Wall
AR=ar
libfoo.a: libfoo.a(foo.o)
foo.o: foo.c
The first time I run make, it compiles the C file, then creates an archive with the object file:
$ make
gcc -g -Wall -c -o foo.o foo.c
ar rv libfoo.a foo.o
ar: creating libfoo.a
a - foo.o
However, if I run make again immediately (without touching foo.o), it still tries to update the archive with ar r (insert foo.o with replacement):
$ make
ar rv libfoo.a foo.o
r - foo.o
Why does Make do this when it shouldn't have to? (If another target depends on libfoo.a, that target will be rebuilt as well, etc.)
According to the output of make -d, it seems to be checking for the non-existent file named libfoo.a(foo.o), and apparently decides to rerun ar r because of that. But is this supposed to happen? Or am I missing something in my Makefile?
You are seeing this because the people who put together your Linux distribution (in particular the people that built the ar program you're using) made a silly decision.
An archive file like libfoo.a contains within it a manifest of the object files contained in the archive, along with the time that the object was added to the archive. That's how make can know if the object is out of date with respect to the archive (make works by comparing timestamps, it has no other way to know if a file is out of date).
In recent times it's become all the rage to have "deterministic builds", where after a build is complete you can do a byte-for-byte comparison between it and some previous build, to tell if anything has changed. When you want to perform deterministic builds it's obviously a non-starter to have your build outputs (like archive files) contain timestamps since these will never be the same.
So, the GNU binutils folks added a new option to ar, the -D option, to enable a "deterministic mode" where a timestamp of 0 is always put into the archive so that file comparisons will succeed. Obviously, doing this will break make's handling of archives since it will always assume the object is out of date.
That's all fine: if you want deterministic builds you add that extra -D option to ar, and you can't use the archive feature in make, and that's just the way it is.
But unfortunately, it went further than that. The GNU binutils developers unwisely (IMO) provided a configuration parameter that allowed the "deterministic mode" to be specified as the default mode, instead of requiring it to be specified via an extra flag.
Then the maintainers of some Linux distros made an even bigger mistake, by adding that configuration option when they built binutils for their distributions.
You are apparently the victim of one of these incorrect Linux distributions and that's why make's archive management doesn't work for your distribution.
You can fix it by adding the -U option, to force timestamps to be used in your archives, when you invoke ar:
ARFLAGS += -U
Or, you could get your Linux distribution to undo this bad mistake and remove that special configuration parameter from their binutils build. Or you could use a different distribution that doesn't have this mistake.
I have no problem with deterministic builds; I think they're a great thing. But deterministic mode loses features, and so it should be an opt-in capability, not an on-by-default capability.
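A quick way to see which mode your ar defaults to (a sketch, assuming GNU ar): list the archive verbosely and look at the member dates.
ar tv libfoo.a          # member dates of Jan 1 1970 mean deterministic mode
ar rvU libfoo.a foo.o   # the U modifier re-adds foo.o with its real timestamp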

how to check if binary is "runnable"

In my CI-setup I'm compiling my C-code to a number of different architectures (x86_64-linux-gnu, i386-linux-gnu, aarch64-linux-gnu, arm-linux-gnueabihf, x86_64-apple-darwin, i386-apple-darwin, i686-w64-mingw32, x86_64-w64-mingw32,...).
I can add new architectures (at least for the *-linux-gnu) simply by "enabling them".
The same CI-configuration is used for a number of projects (with different developers), and strives to be practically "zero config" (as in: "drop this CI-configuration in your project and forget about it, the rest will be taken care of for you").
Some of the targets are compiled natively, others are cross-compiled. Some cross-compiled architectures are runnable on the build machines (e.g. I could run the i386-apple-darwin binaries on the x86_64-apple-darwin builder), others are incompatible (e.g. I cannot run aarch64-linux-gnu binaries on the x86_64-linux-gnu builder).
Everything works great so far.
However, I would also like to run unit-tests during the CI - but only if the unit-tests can actually be executed on the build machine.
I'm not interested at all in getting a lot of failed tests simply because I'm cross-building binaries.
To complicate things a bit, what I'm building are not self-contained executables, but plugins that are dlopen()ed (or whatever is the equivalent on the target platform) by a host application. The host application is typically slow to startup, so I'd like to avoid running it if it cannot use the plugins anyhow.
Building plugins also means that I cannot just try-run them.
I'm using the GNU toolchain (make, gcc), or at least something compatible (like clang).
In my first attempt to check whether I am cross-compiling, I compare the target-architecture of the build process (as returned by ${CC} -dumpmachine) with the architecture of GNU make (GNU make will output the architecture triplet used to build make itself when invoked with the -v flag).
Something like the following works surprisingly well, at least for the *-linux-gnu targets:
if make --version | egrep "${TARGETARCH:-$(${CC:-cc} -dumpmachine)}" >/dev/null; then
    echo "native compilation"
else
    echo "cross compiling"
fi
However, it doesn't work at all on Windows/MinGW (when doing a native build, gcc targets x86_64-w64-mingw32 but make was built for x86_64-pc-msys; and it's worse when building 32-bit binaries, which are of course fully runnable) or on macOS (gcc says x86_64-apple-darwin18.0.0, make says i386-apple-darwin11.3.0; don't ask me why).
It's becoming even more of an issue: while writing this and doing some checks, I noticed that even on Linux I get differences like x86_64-pc-linux-gnu vs x86_64-linux-gnu. These differences haven't emerged on my CI builders yet, but I'm sure it's only a matter of time.
So, I'm looking for a more robust solution to detect whether my build-host will be able to run the produced binaries, and skip unit-tests if it does not.
From what I understand of your requirements (I will remove this answer if I have missed the point), you could proceed in three steps:
Instrument your build procedure so that it will produce the exact list of all (gcc 'dumpmachine', make 'built for') pairs you are using.
Keep in the list only the pairs that allow executing the program.
Determine from bash whether you can execute the binary, given the pair reflecting your system and the information you collected:
#!/bin/bash
# Credits (some borrowed code):
# https://stackoverflow.com/questions/12317483/array-of-arrays-in-bash/35728122
# bash 4 could use associative arrays, but darwin probably only has bash3 (and zsh)
# pairs of (gcc -dumpmachine, make 'Built for') target triplets
# ----- collected/formatted/filtered information begins -----
entries=(
'x86_64-w64-mingw32 x86_64-pc-msys'
'x86_64-pc-linux-gnu x86_64-linux-gnu'
'x86_64-apple-darwin18.0.0 i386-apple-darwin11.3.0'
)
# ----- collected/formatted/filtered information ends -----
is_executable()
{
    local gcc
    local make
    if [ $# -ne 2 ]
    then
        echo "is_executable() requires two parameters - terminating."
        exit 1
    fi
    for pair in "${entries[@]}"
    do
        read -r -a arr <<< "${pair}"
        gcc="${arr[0]}"
        make="${arr[1]}"
        if [ "$1" == "${gcc}" ] && [ "$2" == "${make}" ]
        then
            # known-good pair: binaries built by this gcc run on this host
            return 0
        fi
    done
    return 1
}
# main
MAKE_BUILT_FOR=$(make --version | sed -n 's/Built for //p')
GCC_DUMPMACHINE=$(gcc -dumpmachine)
# check the current host's pair
is_executable "${GCC_DUMPMACHINE}" "${MAKE_BUILT_FOR}"
echo $?
# expect 0: this pair is in the list (runnable)
is_executable x86_64-w64-mingw32 x86_64-pc-msys
echo $?
# expect 1: this pair is not in the list (not runnable)
is_executable arm-linux-gnueabihf x86_64-pc-msys
echo $?
As an extra precautionary measure, you should probably verify that the gcc 'dumpmachine' and the make 'built for' values you are actually using appear in the collected list at all, and log an error message and/or exit if they do not.
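That check could look something like this (a sketch reusing entries, GCC_DUMPMACHINE and MAKE_BUILT_FOR from the script above; the message text is illustrative):
gcc_known=false; make_known=false
for pair in "${entries[@]}"; do
    read -r -a arr <<< "${pair}"
    [ "${arr[0]}" == "${GCC_DUMPMACHINE}" ] && gcc_known=true
    [ "${arr[1]}" == "${MAKE_BUILT_FOR}" ] && make_known=true
done
if ! $gcc_known || ! $make_known; then
    echo "error: current gcc/make pair was never collected - update the list" >&2
    exit 1
fi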
Perhaps include an extra unit-test that is directly runnable, just a "hello world" or return EXIT_SUCCESS;, and if it fails, skip all the other plugin tests of that architecture?
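In a shell-driven test harness that could look like this (a sketch; the canary path and target layout are hypothetical):
if "./build/$TARGETARCH/canary"; then
    make -C "build/$TARGETARCH" check
else
    echo "skipping tests for $TARGETARCH (binaries not runnable on this host)"
fi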
Fun fact: on Linux at least, a shared library (ELF shared object) can have an entry point and be executable. (That's how PIE executables are made; what used to be a silly compiler / linker trick is now the default.) IDK if it would be useful to bake a main into one of your existing plugin tests.
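You can see that in action on a glibc system, where the C library itself is a shared object with an entry point; executing it directly prints its version banner (the path is distro-specific, an assumption here):
/lib/x86_64-linux-gnu/libc.so.6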

Disable optimizations for a specific file with autotools

I'm working on setting up autotools for a large code base that was once built by a bash script and later by hand-written Makefiles.
We have a set of files that require that compiler optimizations be turned off. These files are already in their own subdirectory, so they will have their own Makefile.am.
What's the proper way to drop any existing compiler optimizations and force a -O0 flag on the compiler for these specific files?
I went with Brett Hale's comment to use subpackages. I was able to insert
: ${CFLAGS="-O0"}
before AC_PROG_CC, which sets the appropriate optimization level. The other solutions do not work, since the -g -O2 was getting added very last; you can never get another -O flag in after it.
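So the top of the subpackage's configure.ac ends up looking like this (a minimal sketch): the default only applies when the user supplied no CFLAGS, and it must come before AC_PROG_CC, which would otherwise default to -g -O2.
: ${CFLAGS="-O0"}
AC_PROG_CC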
You don't have to remove existing optimizations: the last value of -O on the compiler invocation will be used, so it's good enough to just add -O0 at the end.
This is not directly supported by automake, but there's a trick you can use that is described in the documentation.
Otherwise, if you know you'll only ever invoke your Makefile with GNU make, you can play other tricks that are GNU make-specific; you may have to disable automake warnings about non-portable content.
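One such trick might look like this in the subdirectory's Makefile.am (a sketch, GNU make only, and not taken from the automake manual): strip the -O flags back out of the user's CFLAGS, which is expanded last on the compile line, so the -O0 in AM_CFLAGS survives.
AM_CFLAGS = -O0
CFLAGS := $(filter-out -O1 -O2 -O3 -Os,$(CFLAGS))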

Suppressing GCC warnings on a per directory basis

I'm dealing with a C/C++ codebase that includes some third-party sources which produce large numbers of GCC warnings that I'd like to hide. The third-party code can't be modified or compiled into a library (due to shortcomings of the build system). The project is compiled with -Werror.
How do I ask GCC to ignore all warnings in a part of the codebase (contained in a subdirectory), or at least make these warnings non-fatal?
I'm aware of the flag -isystem, but it doesn't work for me because:
It doesn't suppress warnings in the source files, only in headers.
It forces C linkage, so it can't be used with C++ headers.
GCC version is 4.7 or 4.8, build is make powered.
GCC can't help with this directly. The correct fix would be to tweak your build system (recursive make, perhaps).
However, you could write a little wrapper script that scans the parameters, and strips -Werror if it finds the right pattern.
E.g.
#!/bin/bash
newargs=()
werror=true
for arg; do
    case "$arg" in
        *directory-name-i-care-about* )
            # a file from the noisy directory: keep it, and drop -Werror
            newargs+=("$arg")
            werror=false
            ;;
        -Werror )
            # strip -Werror for now; re-added below if still wanted
            ;;
        * )
            newargs+=("$arg")
            ;;
    esac
done
if $werror; then
    newargs+=("-Werror")
fi
exec gcc "${newargs[@]}"
Then run your build with CC=my-wrapper-script.sh and you're done. You could call the script gcc and place it earlier on the PATH than the real gcc, but be careful to invoke the correct gcc at the end of the script.
(I've not actually tested that script, so it might be buggy.)
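Usage is just a matter of pointing the build at the wrapper (the path is illustrative):
make CC=/path/to/my-wrapper-script.sh
# or, for configure-based projects:
CC=/path/to/my-wrapper-script.sh ./configure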

How to compile OpenSSL with relative rpath

I have been trying to compile openssl 1.0.0g with the following rpath:
$ORIGIN/../lib64
Every time I run readelf -d apps/openssl, I get results like the following, depending on which escaping variation I tried:
\RIGIN/../lib64
RIGIN/../lib64
ORIGIN/../lib64
I want to set up my rpath without using external tools like chrpath. Is it at all possible? I will basically accept anything that does not involve external tools like chrpath (though with chrpath I would already be done).
Ideally, I would like to do it by passing options on the command line (any form of -Wl,-rpath,$ORIGIN/../lib64).
I don't mind editing the generated Makefile, which is what I have been trying last. If only I could get it to print a stupid dollar sign!!! I tried modifying LIBRPATH under the BUILDENV= block with no luck. My best results so far:
LIBRPATH=$$'ORIGIN/../lib64 # result: /../lib64
LIBRPATH=$$$$'ORIGIN/../lib64 # result: 12345<pid>/../lib64
I have read various rpath related questions and tried various escaping and quoting tricks but nothing worked so far!
In your makefile try:
-Wl,-rpath,${ORIGIN}/../lib64
I am assuming that ORIGIN is a shell variable.
EDIT
I have just found an answer to your question (better late than never):
You need to prevent make from interpolating variables; to do that you need to use $$ (double dollar sign):
-Wl,-rpath,'$$ORIGIN/../lib64'
I know that it works because I have tested it with my own application, enjoy :)
I went the chrpath way.
http://enchildfone.wordpress.com/2010/03/23/a-description-of-rpath-origin-ld_library_path-and-portable-linux-binaries/
It is quite complicated to counter shell expansion of `$$ORIGIN` in openssl. Sooner or later it gets expanded because of the dollar sign. If you really want to go this way, you can do it. I have found the following to work with openssl 1.0.1g on Linux. In Makefile.shared, look for this line:
DO_GNU_APP=LDFLAGS="$(CFLAGS) -Wl,-rpath,$(LIBRPATH)"
Replace it with the following. This quoting-fu neutralizes the expansion of $. The double $$ is the way to get a single dollar sign in makefiles.
DO_GNU_APP=LDFLAGS="$(CFLAGS) -Wl,-rpath,'"'$$'"ORIGIN/../lib64'"
After compiling:
readelf -d apps/openssl | grep RPATH
0x000000000000000f (RPATH) Library rpath: ['$ORIGIN/../lib64']
OK, I spent several hours fighting with this same issue and trying all manner of crazy escaping; at one point I was up to eight $ signs, at which point I decided that there must be another way.
In fact, it appears that there is, at least with GNU ld.
Instead of -Wl,-rpath,\\$$$\$\$\$$$\$\\\\$ or some other elder-god-invoking monstrosity, just do this:
echo '-rpath=$ORIGIN/../lib64' > rpathorigin
./config -Wl,@$(pwd)/rpathorigin ...
I don't see that ld.gold documents the @file option, and I have no idea about, say, lld. But if you are using GCC and it is invoking BFD ld, the above may just work for you.
Of course, the actual path used with origin should be customized as needed, and I have no opinion on ./config vs ./Configure. But using the response file trick seems to entirely sidestep the shell/make escaping nightmare.
I don't mind editing the generated Makefile, which is what I have been trying last...
I'm not sure you can set it with a shell variable and relative path. I don't think ldd expands the $ORIGIN in $ORIGIN/../lib64. In this case, I think you need to use ldconfig to add $ORIGIN/../lib64 to the library search paths. See finding ldd search path on Server Fault for more details.
Since I'm not sure, I'll provide the instructions anyway. You don't need to change the Makefiles. As a matter of fact, I did not have any luck doing so in the past because things get overwritten, and other things like CFLAGS and LDFLAGS get ignored.
Also see Build OpenSSL with RPATH? Your question and the cited question are different questions that converge on similar answers (no duplicates between them). But it provides the OpenSSL devs' position on RPATHs. It was a private email, so I shared the relevant details rather than the whole message.
If you manage to embed $ORIGIN/../lib64 in the ELF section and it works, then please report back. Below, I am using /usr/local/ssl/lib for my RPATH. You should substitute $ORIGIN/../lib64 for /usr/local/ssl/lib.
OpenSSL supports RPATHs out of the box for BSD targets (but not others). From Configure:
# Unlike other OSes (like Solaris, Linux, Tru64, IRIX) BSD run-time
# linkers (tested OpenBSD, NetBSD and FreeBSD) "demand" RPATH set on
# .so objects. Apparently application RPATH is not global and does
# not apply to .so linked with other .so. Problem manifests itself
# when libssl.so fails to load libcrypto.so. One can argue that we
# should engrave this into Makefile.shared rules or into BSD-* config
# lines above. Meanwhile let's try to be cautious and pass -rpath to
# linker only when --prefix is not /usr.
if ($target =~ /^BSD\-/)
{
$shared_ldflag.=" -Wl,-rpath,\$(LIBRPATH)" if ($prefix !~ m|^/usr[/]*$|);
}
The easiest way to do it for OpenSSL 1.0.2 appears to be adding it to the linker flags during configuration:
./config -Wl,-rpath=/usr/local/ssl/lib
You can also edit the Configure line and hard-code the rpath. For example, I am working on Debian x86_64, so I opened the file Configure in an editor, copied linux-x86_64, named the copy linux-x86_64-rpath, and made the following change to add the -rpath option:
"linux-x86_64-rpath", "gcc:-m64 -DL_ENDIAN -O3 -Wall -Wl,-rpath=/usr/local/ssl/lib::
-D_REENTRANT::-Wl,-rpath=/usr/local/ssl/lib -ldl:SIXTY_FOUR_BIT_LONG RC4_CHUNK DES_INT DES_UNROLL:
${x86_64_asm}:elf:dlfcn:linux-shared:-fPIC:-m64:.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR):::64",
Above, fields 2 and 6 were changed. They correspond to $cflag and $ldflag in OpenSSL's build system.
Then, Configure with the new configuration:
$ ./Configure linux-x86_64-rpath shared no-ssl2 no-ssl3 no-comp \
--openssldir=/usr/local/ssl enable-ec_nistp_64_gcc_128
Finally, after make, verify the settings stuck:
$ readelf -d ./libssl.so | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
$ readelf -d ./libcrypto.so | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
$ readelf -d ./apps/openssl | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
Once you perform make install, then ldd will produce expected results:
$ ldd /usr/local/ssl/lib/libssl.so
linux-vdso.so.1 => (0x00007ffceff6c000)
libcrypto.so.1.0.0 => /usr/local/ssl/lib/libcrypto.so.1.0.0 (0x00007ff5eff96000)
...
$ ldd /usr/local/ssl/bin/openssl
linux-vdso.so.1 => (0x00007ffc30d3a000)
libssl.so.1.0.0 => /usr/local/ssl/lib/libssl.so.1.0.0 (0x00007f9e8372e000)
libcrypto.so.1.0.0 => /usr/local/ssl/lib/libcrypto.so.1.0.0 (0x00007f9e832c0000)
...
Don't ask me why, but this worked for me with OpenSSL 1.1.1i for getting around the $-sign issue:
\$\$\$$ORIGIN
Example:
./Configure linux-x86_64 '-Wl,-rpath,\$\$\$$ORIGIN'
Alternatively, if this command-line hack doesn't sit well with you, you can always use chrpath after building, as others have suggested:
./Configure linux-x86_64 '-Wl,-rpath,XORIGIN'
make depend
make all
chrpath -r "\$ORIGIN" libssl.so
