I run the following configure code (from Tcl's build system) on an AIX box and it fails.
The reason for the failure is that 'gcc' somehow ends up enabled by default on that AIX LPAR.
I want to disable gcc. How can I do that?
AC_DEFUN(SC_ENABLE_GCC, [
    AC_ARG_ENABLE(gcc,
        [  --enable-gcc            allow use of gcc if available [--disable-gcc]],
        [ok=$enableval], [ok=no])
    if test "$ok" = "yes"; then
        CC=gcc
        AC_PROG_CC
    else
        CC=${CC-cc}
    fi
])
Please help me resolve this issue.
You probably need to do two things:
Specify the --disable-gcc option to configure.
Set the exported CC environment variable to the compiler you actually want to use prior to running configure.
This might be combined into:
CC=cc ./configure --disable-gcc ...
(I commonly have my CC variable set to a clang variant on my platform…)
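Since this is AIX specifically, the compiler you actually want is probably the system cc or IBM XL C (xlc), assuming one of them is installed and on your PATH; for example:

CC=xlc ./configure --disable-gcc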
In my CI-setup I'm compiling my C-code to a number of different architectures (x86_64-linux-gnu, i386-linux-gnu, aarch64-linux-gnu, arm-linux-gnueabihf, x86_64-apple-darwin, i386-apple-darwin, i686-w64-mingw32, x86_64-w64-mingw32,...).
I can add new architectures (at least for the *-linux-gnu) simply by "enabling them".
The same CI-configuration is used for a number of projects (with different developers), and strives to be practically "zero config" (as in: "drop this CI-configuration in your project and forget about it, the rest will be taken care of for you").
Some of the targets are being compiled natively, others are cross-compiled. Some cross-compiled architectures are runnable on the build machines (e.g. I could run the i386-apple-darwin binaries on the x86_64-apple-darwin builder), others are incompatible (e.g. I cannot run aarch64-linux-gnu binaries on the x86_64-linux-gnu builder).
Everything works great so far.
However, I would also like to run unit-tests during the CI - but only if the unit-tests can actually be executed on the build machine.
I'm not interested at all in getting a lot of failed tests simply because I'm cross-building binaries.
To complicate things a bit, what I'm building are not self-contained executables, but plugins that are dlopen()ed (or whatever is the equivalent on the target platform) by a host application. The host application is typically slow to startup, so I'd like to avoid running it if it cannot use the plugins anyhow.
Building plugins also means that I cannot just try-run them.
I'm using the GNU toolchain (make, gcc), or at least something compatible (like clang).
In my first attempt to check whether I am cross-compiling, I compare the target architecture of the build process (as returned by ${CC} -dumpmachine) with the architecture of GNU make (GNU make prints the triplet it was built for when invoked with the -v/--version flag).
Something like the following works surprisingly well, at least for the *-linux-gnu targets:
if make --version | egrep "${TARGETARCH:-$(${CC:-cc} -dumpmachine)}" >/dev/null; then
    echo "native compilation"
else
    echo "cross compiling"
fi
However, it doesn't work at all on Windows/MinGW (for a native build, gcc targets x86_64-w64-mingw32 but make was built for x86_64-pc-msys; it is even worse when building 32-bit binaries, which are of course fully runnable) or on macOS (gcc says x86_64-apple-darwin18.0.0, make says i386-apple-darwin11.3.0; don't ask me why).
It's becoming even more of an issue: while writing this and doing some checks, I noticed that even on Linux I get differences like x86_64-pc-linux-gnu vs x86_64-linux-gnu. These differences haven't emerged on my CI builders yet, but I'm sure it's only a matter of time.
So, I'm looking for a more robust solution to detect whether my build-host will be able to run the produced binaries, and skip unit-tests if it does not.
From what I understand of your requirements (I will remove this answer in case I missed the point), you could proceed in three steps:
Instrument your build procedure so that it produces the exact list of all (gcc 'dumpmachine', make 'Built for') pairs you are using.
Keep in the list only the pairs that would allow executing the program.
Determine from bash whether you can execute the binary, given the pair reflecting your system and the information you collected:
#!/bin/bash
# Credits (some borrowed code):
# https://stackoverflow.com/questions/12317483/array-of-arrays-in-bash/35728122
# bash 4 could use associative arrays, but darwin probably only has bash3 (and zsh)

# pairs of (gcc 'dumpmachine', make 'Built for') that allow execution
# ----- collected/formatted/filtered information begins -----
entries=(
    'x86_64-w64-mingw32 x86_64-pc-msys'
    'x86_64-pc-linux-gnu x86_64-linux-gnu'
    'x86_64-apple-darwin18.0.0 i386-apple-darwin11.3.0'
)
# ----- collected/formatted/filtered information ends -----

# returns 1 (as a boolean, not as a shell exit status) if the
# (gcc, make) pair passed as "$1" "$2" is in the list, 0 otherwise
is_executable()
{
    local gcc
    local make
    local entry
    local found=0
    if [ $# -ne 2 ]
    then
        echo "is_executable() requires two parameters - terminating."
        exit 1
    fi
    for entry in "${entries[@]}"
    do
        read -r -a arr <<< "${entry}"
        gcc="${arr[0]}"
        make="${arr[1]}"
        if [ "$1" == "${gcc}" ] && [ "$2" == "${make}" ]
        then
            found=1
            break
        fi
    done
    return ${found}
}

# main
MAKE_BUILT_FOR=$(make --version | sed -n 's/Built for //p')
GCC_DUMPMACHINE=$(gcc -dumpmachine)

# pass (prints 1)
is_executable "${GCC_DUMPMACHINE}" "${MAKE_BUILT_FOR}"
echo $?
# pass (prints 1)
is_executable x86_64-w64-mingw32 x86_64-pc-msys
echo $?
# fail (prints 0)
is_executable arm-linux-gnueabihf x86_64-pc-msys
echo $?
As an extra precautionary measure, you should probably verify that the gcc 'dumpmachine' and the make 'Built for' values you are actually using appear in the collected list, and log an error message and/or exit if that is not the case.
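A minimal sketch of that sanity check, assuming it runs against the full collected list rather than the filtered one (the helper name known_triplet is made up here; unlike is_executable above, it uses the usual shell exit-status convention):

# succeeds (exit status 0) if the given triplet occurs in any field of 'entries'
known_triplet()
{
    local entry
    for entry in "${entries[@]}"
    do
        case " ${entry} " in
            *" $1 "*) return 0 ;;
        esac
    done
    return 1
}

if ! known_triplet "${GCC_DUMPMACHINE}" || ! known_triplet "${MAKE_BUILT_FOR}"; then
    echo "error: unknown pair (${GCC_DUMPMACHINE}, ${MAKE_BUILT_FOR})" >&2
    exit 1
fi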
Perhaps include an extra unit-test that is directly runnable, just a "hello world" or return EXIT_SUCCESS;, and if it fails, skip all the other plugin tests of that architecture?
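A minimal sketch of such a probe in shell, assuming ${CC} is the compiler used for the architecture under test (the file names here are made up):

# build a trivial program for the target and try to run it; if it runs,
# the build host can execute binaries for this architecture
cat > try-run.c <<'EOF'
int main(void) { return 0; }
EOF
if ${CC:-cc} -o try-run try-run.c && ./try-run; then
    echo "build host can run target binaries - running unit-tests"
else
    echo "cross build detected - skipping unit-tests"
fi
rm -f try-run try-run.c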
Fun fact: on Linux at least, a shared library (ELF shared object) can have an entry point and be executable. (That's how PIE executables are made; what used to be a silly compiler / linker trick is now the default.) IDK if it would be useful to bake a main into one of your existing plugin tests.
Here is some code in my configure.ac:
THIS="h5cc"
AC_MSG_WARN([$THIS])
AC_MSG_WARN(m4_bmatch([h5pcc],
    [h5pcc], [parallel],
    [h5cc], [serial],
    [neither]))
AC_MSG_ERROR(m4_bmatch([$THIS],
    [h5pcc], [parallel],
    [h5cc], [serial],
    [neither]))
I run autoconf and then configure, which results in this:
configure: WARNING: h5cc
configure: WARNING: parallel
configure: error: neither
As far as I can tell, that's not supposed to happen, right? What am I missing?
You're mixing M4 code into your configure.ac, but m4 is executed at expansion time (i.e. when you run autoconf), while THIS=h5cc is a shell assignment that only gets executed by your shell later (when you run ./configure).
So what m4_bmatch sees is the literal string $THIS, which indeed matches neither pattern.
Short version: don't use m4_* macros for values that are only known at configure time.
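As a sketch, the configure-time equivalent of your m4_bmatch would be a plain shell case statement in configure.ac (AS_CASE is the autoconf-idiomatic spelling of the same thing):

THIS="h5cc"
case "$THIS" in
    h5pcc) AC_MSG_WARN([parallel]) ;;
    h5cc)  AC_MSG_WARN([serial]) ;;
    *)     AC_MSG_WARN([neither]) ;;
esac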
Is there a way to have a conditional passed through automake so it is passed on to the resulting Makefile.in and Makefile later on?
I check whether JAVA_HOME is defined in the environment in a Makefile using
ifeq (undefined,$(origin JAVA_HOME))
#CALL with defaults
else
#CALL according to the variable
endif
But when I process this in a Makefile.am with automake I get two errors:
else without if
endif without if
Looks like automake does not digest the ifeq. Is there a way to pass this through it (if it makes sense doing so), or is there another autotools-friendly way of getting the same result?
The idea is also to allow setting/changing the variable just before running make to easily target different JDKs.
What I think is the right way:
Rely on $(JAVA_HOME) being set in Makefile.am and make sure a sensible value for it is set by configure.
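For instance, a sketch in configure.ac (the fallback path here is purely illustrative):

if test -z "$JAVA_HOME"; then
    JAVA_HOME=/usr/lib/jvm/default-java   # hypothetical default - adjust to taste
fi
AC_SUBST([JAVA_HOME])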
Answering the Question as Written:
Because automake wants to generate Makefiles that work on POSIX make, it doesn't work too well with GNU-make conditionals.
So you do the test in configure.ac:
AC_SUBST([JAVA_HOME])
AM_CONDITIONAL([JAVA_HOME_SET], [test ! -z "$JAVA_HOME"])
Then in Makefile.am:
if JAVA_HOME_SET
## Something that relies on JAVA_HOME
else
## Defaults
endif
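For instance, the two branches might pick the compiler (the variable name JAVAC is just illustrative):

if JAVA_HOME_SET
JAVAC = $(JAVA_HOME)/bin/javac
else
JAVAC = javac
endif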
Is there a way to force '-m64' without overriding CXXFLAGS/CFLAGS? I want an automatic x64 build environment, like on Linux/BSD amd64.
Why do I need this?
The problem is the complexity of the project I need built as x64 on Solaris. It contains several parts, and each may use specific C/C++ compiler flags. So I can't just run:
CXXFLAGS="-m64 -O2" \
CFLAGS="-m64 -O2" \
./configure
because there are no common C/C++ flags.
All I need is the way to transparently append '-m64' to every gcc/g++ call.
You can write a wrapper (e.g. ~/bin/gcc) that adds the required option(s) and put ~/bin first in your PATH, e.g.:
#!/bin/ksh
exec /usr/sfw/bin/gcc -m64 "$@"
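Since the project also builds C++, a matching ~/bin/g++ wrapper would presumably be needed too (the /usr/sfw path assumes the same Solaris gcc layout as above):

#!/bin/ksh
exec /usr/sfw/bin/g++ -m64 "$@"

Then configure with the wrappers first in PATH:

PATH=~/bin:$PATH ./configure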
CPPFLAGS is used for the C preprocessor. It should be picked up by both gcc and g++.
Reference: http://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html
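So, as a sketch, passing the flag once for the whole tree might look like this (note that -m64 may also be needed at link time, hence LDFLAGS):

CPPFLAGS=-m64 LDFLAGS=-m64 ./configure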
On HP-UX I need to use the +h link option to get the Boost 1.39.0 shared libraries to contain correct paths.
-Wl,+h$(SPACE)-Wl,$(<[-1]:D=)
(From http://www.nabble.com/HPUX-aCC:-Howto-avoid-building-boost-libraries-containing-absolute-library-path-references-when-calling-bjam-install-td17619511.html)
I've tested that this works by hacking the gcc.jam toolset file:
796c796
< "$(CONFIG_COMMAND)" -L"$(LINKPATH)" -Wl,$(RPATH_OPTION:E=-R)$(SPACE)-Wl,"$(RPATH)" "$(.IMPLIB-COMMAND)$(<[1])" -o "$(<[-1])" $(HAVE_SONAME)-Wl,$(SONAME_OPTION)$(SPACE)-Wl,$(<[-1]:D=) -shared $(START-GROUP) "$(>)" "$(LIBRARIES)" $(FINDLIBS-ST-PFX) -l$(FINDLIBS-ST) $(FINDLIBS-SA-PFX) -l$(FINDLIBS-SA) $(END-GROUP) $(OPTIONS) $(USER_OPTIONS)
---
> "$(CONFIG_COMMAND)" -L"$(LINKPATH)" -Wl,+h$(SPACE)-Wl,$(<[-1]:D=) -Wl,$(RPATH_OPTION:E=-R)$(SPACE)-Wl,"$(RPATH)" "$(.IMPLIB-COMMAND)$(<[1])" -o "$(<[-1])" $(HAVE_SONAME)-Wl,$(SONAME_OPTION)$(SPACE)-Wl,$(<[-1]:D=) -shared $(START-GROUP) "$(>)" "$(LIBRARIES)" $(FINDLIBS-ST-PFX) -l$(FINDLIBS-ST) $(FINDLIBS-SA-PFX) -l$(FINDLIBS-SA) $(END-GROUP) $(OPTIONS) $(USER_OPTIONS)
But now I want a permanent solution, and I can't work out how.
First I tried putting a bjam conditional in the actions link.dll section, but that section contains shell commands.
Then I tried adding the extra section to the OPTIONS variable for those targets. But that didn't seem to have any effect on the link.
Finally I tried creating a separate toolset as a copy of gcc.jam (hpuxgcc.jam), but I couldn't get that to work at all. I guess there are more places I need to change variable names, but the Jam syntax is beyond what I understand.
Does anyone have some better idea how to get this to work? Or should I just convert the hacky version into a patch I run before building Boost? Surely there's a better way?
I guess the question is either:
a) How do I (conditionally on the platform) add the text to the linker command in gcc.jam?
Or:
b) How do I create a new toolset based on gcc.jam?
Whichever is easier...
What does the +h option do? Does it set the "soname"? If so, note the HAVE_SONAME and SONAME_OPTION usage in the same action. Then note the block of code in gcc.jam where they are set:
if [ os.name ] != NT && [ os.name ] != OSF && [ os.name ] != HPUX && [ os.name ] != AIX
{
    # OSF does have an option called -soname but it does not seem to work as
    # expected, therefore it has been disabled.
    HAVE_SONAME   = "" ;
    SONAME_OPTION = -h ;
}
You can tweak this according to your platform; presumably, letting it apply to HPUX as well, with SONAME_OPTION set to +h there, would have the same effect as your hack.
I suggest you follow up on the boost@lists.boost.org mailing list, which is a much better place for Boost.Build questions than Stack Overflow.