In my CI setup I'm compiling my C code for a number of different architectures (x86_64-linux-gnu, i386-linux-gnu, aarch64-linux-gnu, arm-linux-gnueabihf, x86_64-apple-darwin, i386-apple-darwin, i686-w64-mingw32, x86_64-w64-mingw32, ...).
I can add new architectures (at least for the *-linux-gnu) simply by "enabling them".
The same CI-configuration is used for a number of projects (with different developers), and strives to be practically "zero config" (as in: "drop this CI-configuration in your project and forget about it, the rest will be taken care of for you").
Some of the targets are compiled natively, others are cross-compiled. Some cross-compiled architectures are runnable on the build machines (e.g. I could run the i386-apple-darwin binaries on the x86_64-apple-darwin builder), others are incompatible (e.g. I cannot run aarch64-linux-gnu binaries on the x86_64-linux-gnu builder).
Everything works great so far.
However, I would also like to run unit-tests during the CI - but only if the unit-tests can actually be executed on the build machine.
I'm not interested at all in getting a lot of failed tests simply because I'm cross-building binaries.
To complicate things a bit, what I'm building are not self-contained executables, but plugins that are dlopen()ed (or whatever is the equivalent on the target platform) by a host application. The host application is typically slow to startup, so I'd like to avoid running it if it cannot use the plugins anyhow.
Building plugins also means that I cannot just try-run them.
I'm using the GNU toolchain (make, gcc), or at least something compatible (like clang).
In my first attempt to check whether I am cross-compiling, I compare the target architecture of the build process (as returned by ${CC} -dumpmachine) with the architecture of GNU make (when invoked with the -v flag, GNU make outputs the triplet it was built for).
Something like the following works surprisingly well, at least for the *-linux-gnu targets:
if make --version | egrep "${TARGETARCH:-$(${CC:-cc} -dumpmachine)}" >/dev/null; then
    echo "native compilation"
else
    echo "cross compiling"
fi
However, it doesn't work at all on Windows/MinGW (when doing a native build, gcc targets x86_64-w64-mingw32 but make was built for x86_64-pc-msys; it's even worse when building 32-bit binaries, which are of course fully runnable) or on macOS (gcc says x86_64-apple-darwin18.0.0, make says i386-apple-darwin11.3.0; don't ask me why).
It's becoming even more of an issue: while writing this and doing some checks, I noticed that even on Linux I get differences like x86_64-pc-linux-gnu vs x86_64-linux-gnu. These differences haven't emerged on my CI builders yet, but I'm sure it's only a matter of time.
So, I'm looking for a more robust solution to detect whether my build-host will be able to run the produced binaries, and skip unit-tests if it does not.
From what I understand of your requirements (I will remove this answer in case I missed the point), you could proceed in three steps:
Instrument your build procedure so that it produces the exact list of all (gcc 'dumpmachine', make 'built for') pairs you are using (a collection sketch follows after the script below).
Keep in the list only the pairs that would allow executing the program.
Determine from bash whether you can execute the binary, given the pair reflecting your system and the information you collected:
#!/bin/bash
# Credits (some borrowed code):
# https://stackoverflow.com/questions/12317483/array-of-arrays-in-bash/35728122
# bash 4 could use associative arrays, but darwin probably only has bash3 (and zsh)
# pairs of (gcc -dumpmachine, make 'Built for') triplets
# ----- collected/formatted/filtered information begins -----
entries=(
    'x86_64-w64-mingw32 x86_64-pc-msys'
    'x86_64-pc-linux-gnu x86_64-linux-gnu'
    'x86_64-apple-darwin18.0.0 i386-apple-darwin11.3.0'
)
# ----- collected/formatted/filtered information ends -----
is_executable()
{
    local gcc
    local make
    local found=1    # 1 = not found (failure), 0 = found (success)
    if [ $# -ne 2 ]
    then
        echo "is_executable() requires two parameters - terminating."
        exit 1
    fi
    for pair in "${entries[@]}"
    do
        read -r -a arr <<< "${pair}"
        gcc="${arr[0]}"
        make="${arr[1]}"
        if [ "$1" == "${gcc}" ] && [ "$2" == "${make}" ]
        then
            found=0
            break
        fi
    done
    return ${found}
}
# main
MAKE_BUILT_FOR=$(make --version | sed -n 's/^Built for //p')
GCC_DUMPMACHINE=$(gcc -dumpmachine)
# expect 0 on a native build (pair found), 1 when cross-compiling
is_executable "${GCC_DUMPMACHINE}" "${MAKE_BUILT_FOR}"
echo $?
# expect 0: this pair is in the list
is_executable x86_64-w64-mingw32 x86_64-pc-msys
echo $?
# expect 1: this pair is not in the list
is_executable arm-linux-gnueabihf x86_64-pc-msys
echo $?
As an extra precautionary measure, you should probably verify that the (gcc 'dumpmachine', make 'built for') pair you observe at build time actually appears in the full list you collected, and log an error message and/or exit if it does not.
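For the collection step, each builder can simply print its own pair in the same format as the entries above; the output file name and the CI wiring are assumptions of this sketch, not part of the original setup:
#!/bin/bash
# Hypothetical CI fragment: print this builder's (gcc 'dumpmachine', make 'Built for')
# pair in the same format as the entries array, so the pairs from all builders
# can be collected, de-duplicated and filtered by hand.
gcc_triplet=$("${CC:-cc}" -dumpmachine)
make_triplet=$(make --version | sed -n 's/^Built for //p')
echo "'${gcc_triplet} ${make_triplet}'" >> collected-pairs.txt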
Perhaps include an extra unit-test that is directly runnable, just a "hello world" or return EXIT_SUCCESS;, and if it fails, skip all the other plugin tests of that architecture?
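A minimal sketch of that canary idea, assuming a shell step in the CI job (the file names and the SKIP_TESTS variable are inventions of this sketch):
#!/bin/sh
# Build a trivial program with the same compiler (and flags) as the plugins,
# then try to run it; if it does not run here, the build host cannot execute
# the produced binaries and the plugin tests should be skipped.
cat > canary.c <<'EOF'
int main(void) { return 0; }
EOF
if ${CC:-cc} ${CFLAGS} -o canary canary.c && ./canary; then
    echo "canary runs - enabling unit tests"
else
    echo "canary does not run - skipping unit tests"
    SKIP_TESTS=yes
fi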
Fun fact: on Linux at least, a shared library (ELF shared object) can have an entry point and be executable. (That's how PIE executables are made; what used to be a silly compiler / linker trick is now the default.) IDK if it would be useful to bake a main into one of your existing plugin tests.
I'm developing an open source application where I'd like to include Perl conditionally (for different text processing purposes - that's just for information, not to be criticized as a concept :-). How would you normally check for availability of Perl headers using autoconf?
In my configure.ac I use the following for stuff that has pkg-config files:
PKG_CHECK_MODULES(GTK, gtk+-3.0, [AC_DEFINE([HAVE_GTK_3], 1, [Define to 1 if GTK+ 3 is present])])
PKG_CHECK_MODULES(SQLITE, sqlite3, [AC_DEFINE([HAVE_SQLITE], 1, [Define to 1 if SQLite is present])])
Unfortunately, AFAIU, Perl doesn't ship any .pc files. In my Makefile.in, to generate compiler flags, I use perl -MExtUtils::Embed -e ccopts -e ldopts instead of executing pkg-config.
Hence the question: how would you do this in a prettier way?
I tried this:
AC_CHECK_HEADER([perl.h], AC_DEFINE(HAVE_PERL, 1, [Define to 1 if Perl headers are present]))
But it doesn't work unfortunately:
checking for perl.h... no
On my system (and probably pretty much everywhere else) it's not just in /usr/include:
gforgx@shinjitsu nf $ locate perl.h | tail -n 1
/usr/lib64/perl5/CORE/perl.h
Is there a 'legal' way at all to extend the search path for AC_CHECK_HEADER without using automake and AM_ macros?
So far I've tried manipulating CPPFLAGS, and that's much better, but it still fails (probably due to other headers included by perl.h):
configure: WARNING: perl.h: present but cannot be compiled
configure: WARNING: perl.h: check for missing prerequisite headers?
configure: WARNING: perl.h: see the Autoconf documentation
configure: WARNING: perl.h: section "Present But Cannot Be Compiled"
configure: WARNING: perl.h: proceeding with the compiler's result
configure: WARNING: ## ------------------------------------ ##
configure: WARNING: ## Report this to gforgx@protonmail.com ##
configure: WARNING: ## ------------------------------------ ##
checking for perl.h... no
Many thanks!
Update
Finally this works:
PERL_CPPFLAGS=`perl -MExtUtils::Embed -e ccopts`
PERL_LIBS=`perl -MExtUtils::Embed -e ldopts`
old_CPPFLAGS="$CPPFLAGS"
old_LIBS="$LIBS"
CPPFLAGS="$CPPFLAGS $PERL_CPPFLAGS"
LIBS="$LIBS $PERL_LIBS"
# TODO: figure out why first option doesn't work
#AC_CHECK_HEADER([perl.h], AC_DEFINE(HAVE_PERL, 1, [Define to 1 if Perl headers are present]))
AC_CHECK_FUNCS(perl_construct, AC_DEFINE(HAVE_PERL, 1, [Define to 1 if Perl headers are present]))
CPPFLAGS="$old_CPPFLAGS"
LIBS="$old_LIBS"
I'm not much of an autoconf expert, but I think you can put plain shell snippets like
PERL_CFLAGS=`perl -MExtUtils::Embed -e ccopts`
PERL_LDFLAGS=`perl -MExtUtils::Embed -e ldopts`
into your configure.ac. Probably the right way to do it is to use AC_ARG_WITH to let the user specify those vars, and only get them from EU::E if the user hasn't overridden them. (Likewise, you can use one to have --with-perl override the HAVE_PERL check entirely.)
Then you can use AC_SUBST to make the values from configure-time available in the Makefile (so you don't need to call EU::E in Makefile.in).
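A rough configure.ac fragment combining those two ideas might look like this; the --with-perl option name and the check logic are assumptions of this sketch, not something your project already has:
AC_ARG_WITH([perl],
  [AS_HELP_STRING([--with-perl], [build with embedded Perl support])],
  [], [with_perl=check])
AS_IF([test "x$with_perl" != xno], [
  # Only ask ExtUtils::Embed if the user has not supplied the flags already.
  test -n "$PERL_CFLAGS"  || PERL_CFLAGS=`perl -MExtUtils::Embed -e ccopts`
  test -n "$PERL_LDFLAGS" || PERL_LDFLAGS=`perl -MExtUtils::Embed -e ldopts`
])
AC_SUBST([PERL_CFLAGS])
AC_SUBST([PERL_LDFLAGS])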
And finally, to the heart of the issue: I don't think there's a nice way to make AC_CHECK_HEADER aware that it needs some nonstandard flags, but you can do
old_CFLAGS="${CFLAGS}"
CFLAGS="${PERL_CFLAGS}"
AC_CHECK_HEADER(...)
CFLAGS="${old_CFLAGS}"
to run AC_CHECK_HEADER with PERL_CFLAGS in effect.
Note that you need Perl's C header(s) only if you want to build a Perl extension or embed a Perl interpreter in your binary. The latter seems more likely to be what you have in mind, but in that case, do consider whether it would work as well or better to simply use an external Perl interpreter, launched programmatically by your application at need. Use of an external Perl interpreter would not involve the C header at all.
However, you seem already committed to binary integration with Perl. In that case, configure is the right place to test for the availability and location of Perl's development headers, and to determine the appropriate compilation and linker flags. Putting it there also gives you the ability to use Automake conditionals to help you configure for and manage both with-Perl and without-Perl builds, if you should want to do that.
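If you do want that, a one-line sketch of such a conditional (the WITH_PERL name and the have_perl shell variable are inventions of this sketch):
# In configure.ac, after your Perl check has set a shell variable have_perl:
AM_CONDITIONAL([WITH_PERL], [test "x$have_perl" = xyes])
Makefile.am can then guard the Perl-specific sources and flags inside an if WITH_PERL ... endif block.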
Back to detecting Perl itself: even though Autoconf does not provide built-in macros for Perl detection / configuration, the Autoconf Archive has a few. In particular, ax_perl_ext_flags describes its own behavior as ...
Fetches the linker flags and C compiler flags for compiling and linking programs that embed a Perl interpreter.
... which I take to be appropriate for your purposes. After adding that macro to your project, you might incorporate it into your configure.ac like so:
PERL_CFLAGS=
PERL_LDFLAGS=
AX_PERL_EXT_FLAGS([PERL_CFLAGS], [PERL_LDFLAGS])
# ...
AC_SUBST([PERL_CFLAGS])
AC_SUBST([PERL_LDFLAGS])
That macro uses a technique similar to what you describe doing in your Makefile.in, but in a rather more robust way.
As for checking on the header, once you have the appropriate C compiler flags for Perl, you put those into effect (just) for the scope of the header check. This is necessary because configure uses the compiler to test for the presence of the header, and if the compiler requires extra options (say an -I option) to find the header at compile time, then it will need the same at configuration time. Something like this, then:
CFLAGS_SAVE=$CFLAGS
# Prepend the Perl CFLAGS to any user-specified CFLAGS
CFLAGS="${PERL_CFLAGS} ${CFLAGS}"
# This will automatically define HAVE_PERL_H if the header is found:
AC_CHECK_HEADERS([perl.h])
# Restore the user-specified CFLAGS
CFLAGS=$CFLAGS_SAVE
I'm dealing with a C/C++ codebase that includes some third-party sources which produce large amounts of GCC warnings that I'd like to hide. The third-party code can't be modified or compiled into a library (due to shortcomings of the build system). The project is compiled with -Werror.
How do I ask GCC to ignore all warnings in a part of the codebase (contained in a subdirectory), or at least make these warnings non-fatal?
I'm aware of the flag -isystem, but it doesn't work for me because:
It doesn't suppress warnings in the source files, only in headers.
It forces C linkage, so it can't be used with C++ headers.
GCC version is 4.7 or 4.8, build is make powered.
GCC can't help with this, directly. The correct fix would be to tweak your build system (recursive make, perhaps).
However, you could write a little wrapper script that scans the parameters, and strips -Werror if it finds the right pattern.
E.g.
#!/bin/bash
newargs=()
werror=true
for arg; do
    case "$arg" in
        *directory-name-i-care-about* )
            newargs=("${newargs[@]}" "$arg")
            werror=false
            ;;
        -Werror )
            ;;
        * )
            newargs=("${newargs[@]}" "$arg")
            ;;
    esac
done
if $werror; then
    newargs=("${newargs[@]}" "-Werror")
fi
exec gcc "${newargs[@]}"
Then, run your build with CC=my-wrapper-script.sh, and you're done. You could call the script gcc and place it earlier on the path than the real gcc, but be careful to select the correct "gcc" at the end of the script.
(I've not actually tested that script, so it might be buggy.)
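For completeness, a usage sketch for the CC= suggestion above (the PATH details are illustrative, not prescriptive):
# Point the build at the wrapper instead of the real compiler:
chmod +x my-wrapper-script.sh
make CC=./my-wrapper-script.sh

# Alternatively, name the script "gcc" and put it early on the PATH:
#   PATH="$HOME/bin:$PATH" make
# In that case, the final exec in the script must name the real compiler by
# absolute path (e.g. /usr/bin/gcc) so the wrapper does not call itself.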
In this episode of "let's be stupid", we have the following problem: a C++ library has been wrapped with a layer of code that exports its functionality in a way that allows it to be called from C. This results in a separate library that must be linked (along with the original C++ library and some object files specific to the program) into a C program to produce the desired result.
The tricky part is that this is being done in the context of a rigid build system that was built in-house and consists of literally dozens of include makefiles. This system has a separate step for the linking of libraries and object files into the final executable but it insists on using gcc for this step instead of g++ because the program source files all have a .c extension, so the result is a profusion of undefined symbols. If the command line is manually pasted at a prompt and g++ is substituted for gcc, then everything works fine.
There is a well-known (to this build system) make variable that allows flags to be passed to the linking step, and it would be nice if there were some incantation that could be added to this variable that would force gcc to act like g++ (since both are just driver programs).
I have spent quality time with the gcc documentation searching for something that would do this but haven't found anything that looks right, does anybody have suggestions?
Considering such a terrible build system, write a wrapper around gcc that execs gcc or g++ depending on the arguments. Replace /usr/bin/gcc with this script, or modify your PATH so the script is found in preference to the real binary.
#!/bin/sh
if [ "$1" = "wibble wobble" ]
then
    exec /usr/bin/gcc-4.5 "$@"
else
    exec /usr/bin/g++-4.5 "$@"
fi
The problem is that C linkage produces object files whose symbol names are not mangled, while C++ linkage produces object files with C++ name mangling.
Your best bet is to use
extern "C"
before declarations in your C++ builds, and no prefix on your C builds.
You can detect C++ using
#if __cplusplus
Many thanks to bmargulies for his comment on the original question. By comparing the output of running the link line with both gcc and g++ using the -v option and doing a bit of experimenting, I was able to determine that "-lstdc++" was the magic ingredient to add to my linking flags (in the appropriate order relative to other libraries) in order to avoid the problem of undefined symbols.
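For illustration, the shape of the resulting link line (the object and library names here are placeholders, not the real build system's):
# gcc stays as the link driver; the C++ runtime is added explicitly,
# after the libraries that reference its symbols:
gcc -o myprog main.o other.o -lc_wrapper -lwrapped_cxx -lstdc++ -lm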
For those of you who wish to play "let's be stupid" at home, I should note that I have avoided any use of static initialization in the C++ code (as is generally wise), so I wasn't forced to compile the translation unit containing the main() function with g++ as indicated in item 32.1 of FAQ-Lite (http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html).
I have a project that I'm building on OS X using autotools. I'd like to build a universal binary, but putting multiple -arch options in OBJCFLAGS conflicts with gcc's -M (which automake uses for dependency tracking). I can see a couple workarounds, but none seems straightforward.
Is there a way to force preprocessing to be separate from compilation (so -M is given to CPP, while -arch is handed to OBJC)?
I can see that automake supports options for disabling dependency tracking, and enabling it when it can't be done as a side-effect. Is there a way to force the use of the older style of tracking even when the side-effect based tracking is available?
I don't have any experience with lipo. Is there a good way to tie it into the autotools work flow?
This Apple Technical Note looks promising, but it's something I haven't done. I would think you'd only need to do a universal build when preparing a release, so perhaps you can do without dependency tracking?
There are a few solutions here, and likely one that has evaded me.
The easiest and fastest is to add --disable-dependency-tracking to your run of ./configure.
This tells configure not to set up automatic dependency tracking at all. The dependency phase is what's killing you, because the -M dependency options are used during code generation, which can't be done when multiple architectures are being targeted.
So this is 'fine' if you are doing a clean build of someone else's package, or if you don't mind doing a 'make clean' before each build. If you are hacking at the source, especially header files, this is no good, as make probably won't know what to rebuild and will leave you with stale binaries.
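For example, a one-shot universal release build might be configured like this (the exact -arch pair is just an assumption of this sketch):
./configure --disable-dependency-tracking OBJCFLAGS="-arch i386 -arch x86_64"
make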
Better, but more dangerous is to do something like:
CC=clang CXX=clang++ ./configure
This will make the compiler clang instead of gcc. You have clang if you have a recent Xcode. Configure will realize that clang meets the compilation requirements, but will also decide that it isn't safe for auto-dependency generation. Instead of disabling auto dependency gen, it will do old-style 2 pass generation.
One caveat: this may or may not work as I described, depending on how you set your architecture flags. If you have flags you want to pass to all compiler invocations (e.g. -I for include paths), you should set CPPFLAGS. For code generation, set CFLAGS and CXXFLAGS for C and C++ (and OBJCFLAGS for Objective-C). Typically you would add $CPPFLAGS to those. I typically whip up a shell script something like:
#!/bin/bash
export CC=clang
export CXX=clang++
export CPPFLAGS="-isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5 -fvisibility=hidden"
export CFLAGS="-arch i386 -arch x86_64 -O3 -fomit-frame-pointer -momit-leaf-frame-pointer -ffast-math $CPPFLAGS"
export CXXFLAGS=$CFLAGS
./configure
You probably don't want these exact flags, but it should get you the idea.
lipo. It sounds like you've been down this road already. I've found the best way is as follows:
a. Make top-level directories like .x86_64 and .i386. Note the '.' in front: if you target a build into the source directories, the name usually needs to start with a dot to avoid screwing up 'make clean' later.
b. Run ./configure with something like --prefix=`pwd`/.i386 and however you set the architecture (in this case to i386); a concrete invocation is sketched after the lipo script below.
c. Do the make and make install and, assuming it all went well, make clean, then check that the installed files are still in .i386. Repeat for each architecture. The make clean at the end of each phase is pretty important, as the reconfigure may change what gets cleaned, and you really want to make sure you don't pollute one architecture with files from another.
d. Assuming you have all your builds the way you want them, I usually write a shell script, something like the following, to run at the end and lipo everything together:
# move the working builds for posterity and debugging
mkdir -p ./Build/lib
mv .i386 ./Build/i386
mv .x86_64 ./Build/x86_64
for path in ./Build/i386/lib/*
do
    file=${path##*/}
    # only convert 'real' files
    if [ -f "$path" -a ! -L "$path" ]; then
        partner="./Build/x86_64/lib/$file"
        if [ -f "$partner" -a ! -L "$partner" ]; then
            target="./Build/lib/$file"
            lipo -create "$path" "$partner" -output "$target" || { echo "Lipo failed to get phat"; exit 5; }
            echo "Universal Binary Created: $target"
        else
            echo "Skipping: $file, no valid architecture pairing at: $partner"
        fi
    else
        # this is a pretty common case, openssl creates symlinks
        # echo "Skipping: $file, NOT a regular file"
        true
    fi
done
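Here is the per-architecture configure invocation promised in step b; the prefix and flags are assumptions of this sketch, not your actual settings:
# One architecture pass; repeat with i386 replaced by x86_64 (and .i386 by .x86_64):
mkdir -p .i386
./configure --prefix="`pwd`/.i386" CFLAGS="-arch i386" CXXFLAGS="-arch i386" LDFLAGS="-arch i386"
make && make install && make clean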
What I have NOT figured out is the magic that would let me use gcc and old-school two-pass dependency generation. Frankly I care less and less each day as I become more and more impressed with clang/llvm.
Good luck!
Is it possible to include makefiles dynamically, for example depending on some environment variable? I have the following makefiles:
makefile
app1.1.mak
app1.2.mak
And there is an environment variable APP_VER which could be set to 1.1.0.1, 1.1.0.2, 1.2.0.1, 1.2.0.2.
But there will only be two different makefiles, one for the 1.1 line and one for the 1.2 line.
I have tried to write the following Makefile:
MAK_VER=$$(echo $(APP_VER) | sed -e 's/^\([0-9]*\.[0-9]*\).*$$/\1/')
include makefile$(MAK_VER).mak
all: PROD
echo MAK_VER=$(MAK_VER)
But it does not work:
$ make all
"makefile$(echo", line 0: make: Cannot open makefile$(echo
make: Fatal errors encountered -- cannot continue.
UPDATE:
As far as I understand, make includes files before it evaluates the macros.
That's why it tries to execute the following statement
include makefile.mak
instead of
include makefile1.1.mak
You have two problems: your method of obtaining the version is too complicated, and your include line has a flaw. Try this:
include app$(APP_VER).mak
If APP_VER is an environment variable, then this will work. If you also want to include the makefile called makefile (that is, if makefile is not the one we're writing), then try this:
include makefile app$(APP_VER).mak
Please note that this is generally considered a bad idea: if the makefile depends on environment variables, it will work for some users and not others, which is bad behavior.
EDIT:
This should do it:
MAK_VER := $(subst ., ,$(APP_VER))
MAK_VER := $(word 1, $(MAK_VER)).$(word 2, $(MAK_VER))
include makefile app$(MAK_VER).mak
Try this:
MAK_VER=$(shell echo $(APP_VER) | sed -e 's/^\([0-9]*\.[0-9]*\).*$$/\1/')
MAK_FILE=makefile$(MAK_VER).mak
include $(MAK_FILE)
all:
	echo $(MAK_VER)
	echo $(MAK_FILE)
Modifying the outline solution
Have four makefiles:
makefile
app1.1.mak
app1.2.mak
appdummy.mak
The appdummy.mak makefile can be empty - a symlink to /dev/null if you like. Both app1.1.mak and app1.2.mak are unchanged from their current content.
The main makefile changes a little:
MAK_VER = dummy
include app$(MAK_VER).mak
dummy:
	${MAKE} MAK_VER=$$(echo $(APP_VER) | sed -e 's/^\([0-9]*\.[0-9]*\).*$$/\1/') all
all: PROD
	...as now...
If you type make, it will read the (empty) dummy makefile, and then try to build the dummy target because it appears first. To build the dummy target, it will run make again, with MAK_VER=1.1 or MAK_VER=1.2 on the command line:
make MAK_VER=1.1 all
Macros set on the command line cannot be changed within the makefile, so this overrides the line in the makefile. The second invocation of make, therefore, will read the correct version-specific makefile, and then build all.
This technique has limitations, most notably that it is fiddly to arrange for each and every target to be treated like this. There are ways around that, but they are usually not worth it.
Project organization
More seriously, I think you need to review what you're doing altogether. You are, presumably, using a version control system (VCS) to manage the source code. Also, presumably, there are some (significant) differences between the version 1.1 and 1.2 source code. So, to be able to do a build for version 1.1, you have to switch to the version 1.1 maintenance branch from the version 1.2 development branch, or something along those lines. So, why isn't the makefile just versioned for 1.1 or 1.2? If you switch between versions, you need to clean out all the derived files (object files, libraries, executables, etc.) that may have been built from the wrong source. You have to change the source code over. So why not change the makefile too?
A build script to invoke make
I also observe that, since you have the environment variable APP_VER driving your process, you can finesse the problem by requiring a standardized 'make invoker' that sorts out the APP_VER value and invokes make correctly. Imagine that the script is called build:
#!/bin/sh
: ${APP_VER:=1.2.0.1} # Latest version is default
case $APP_VER in
[0-9].[0-9].*)
    MAK_VER=`echo $APP_VER | sed -e 's/^\(...\).*/\1/'`
    ;;
*)  echo "`basename $0 .sh`: APP_VER ($APP_VER) should start with two digits followed by dots" 1>&2
    exit 1;;
esac
exec make MAK_VER=$MAK_VER "$@"
This script validates that APP_VER is set, giving an appropriate default if it is not. It then processes that value to derive the MAK_VER (or errors out if it is incorrect). You'd need to modify that test after you reach version 10, of course, since you are planning to be so successful that you will reach double-digit version numbers in due course.
Given the correct version information, you can now invoke your makefile with any command line arguments.
The makefile can be quite simple:
MAK_VER = dummy
include app$(MAK_VER).mak
all: PROD
	...as now...
The appdummy.mak file now contains a rule:
error:
	echo "You must invoke this makefile via the build script" 1>&2
	exit 1
It simply points out the correct way to do the build.
Note that you can avoid the APP_VER environment variable entirely if you keep the product version number in a file under the VCS and have the script read the version number from that file. And there could be all sorts of other work done by the script: ensuring that the correct tools are installed, that other environment variables are set, and so on.
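A sketch of that variant (the VERSION file name is an assumption of this sketch):
# At the top of the build script, instead of relying on the environment:
APP_VER=`cat VERSION` || { echo "`basename $0 .sh`: cannot read VERSION" 1>&2; exit 1; }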