On HPUX I need to use the +h link option to get the boost 1.39.0 shared libraries to contain correct paths.
-Wl,+h$(SPACE)-Wl,$(<[-1]:D=)
(From http://www.nabble.com/HPUX-aCC:-Howto-avoid-building-boost-libraries-containing-absolute-library-path-references-when-calling-bjam-install-td17619511.html)
I've tested that this works by hacking the gcc.jam toolset file:
796c796
< "$(CONFIG_COMMAND)" -L"$(LINKPATH)" -Wl,$(RPATH_OPTION:E=-R)$(SPACE)-Wl,"$(RPATH)" "$(.IMPLIB-COMMAND)$(<[1])" -o "$(<[-1])" $(HAVE_SONAME)-Wl,$(SONAME_OPTION)$(SPACE)-Wl,$(<[-1]:D=) -shared $(START-GROUP) "$(>)" "$(LIBRARIES)" $(FINDLIBS-ST-PFX) -l$(FINDLIBS-ST) $(FINDLIBS-SA-PFX) -l$(FINDLIBS-SA) $(END-GROUP) $(OPTIONS) $(USER_OPTIONS)
---
> "$(CONFIG_COMMAND)" -L"$(LINKPATH)" -Wl,+h$(SPACE)-Wl,$(<[-1]:D=) -Wl,$(RPATH_OPTION:E=-R)$(SPACE)-Wl,"$(RPATH)" "$(.IMPLIB-COMMAND)$(<[1])" -o "$(<[-1])" $(HAVE_SONAME)-Wl,$(SONAME_OPTION)$(SPACE)-Wl,$(<[-1]:D=) -shared $(START-GROUP) "$(>)" "$(LIBRARIES)" $(FINDLIBS-ST-PFX) -l$(FINDLIBS-ST) $(FINDLIBS-SA-PFX) -l$(FINDLIBS-SA) $(END-GROUP) $(OPTIONS) $(USER_OPTIONS)
But now I want a permanent solution, and I can't work out how.
First I tried putting a bjam conditional in the actions link.dll section, but that section contains shell commands.
Then I tried adding the extra section to the OPTIONS variable for those targets. But that didn't seem to have any effect on the link.
Finally I tried creating a separate toolset as a copy of gcc.jam (hpuxgcc.jam), but I couldn't get that to work at all. I guess there are more places I need to change variable names, but the Jam syntax is beyond what I understand.
Does anyone have some better idea how to get this to work? Or should I just convert the hacky version into a patch I run before building Boost? Surely there's a better way?
I guess the question is either:
a) How do I (conditional on the platform) add the text to the linker command in gcc.jam?
Or:
b) How do I create a new toolset based on gcc.jam?
Whichever is easier...
What does the +h option do? Does it set the "soname"? If so, note the HAVE_SONAME and SONAME_OPTION use in the same action. Then note the block of code in gcc.jam where they are set:
if [ os.name ] != NT && [ os.name ] != OSF && [ os.name ] != HPUX && [ os.name ] != AIX
{
    # OSF does have an option called -soname but it does not seem to work as
    # expected, therefore it has been disabled.
    HAVE_SONAME   = "" ;
    SONAME_OPTION = -h ;
}
You can tweak this according to your platform.
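If you do end up scripting the tweak as a pre-build patch, as you considered, a minimal sketch, assuming the Boost 1.39.0 tree layout, GNU sed, and that the patched tree is used only for the HP-UX build (verify the exact patterns against your gcc.jam before relying on this):

# remove HPUX from the exclusion list so HAVE_SONAME/SONAME_OPTION get set,
# then switch the option to HP-UX's +h (path and patterns are assumptions)
sed -i.bak \
    -e 's/\[ os.name \] != HPUX && //' \
    -e 's/SONAME_OPTION = -h ;/SONAME_OPTION = +h ;/' \
    tools/build/v2/tools/gcc.jam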
I suggest you follow up with this on boost-build@lists.boost.org, which is a much better place for Boost.Build questions than Stack Overflow.
In my CI-setup I'm compiling my C-code to a number of different architectures (x86_64-linux-gnu, i386-linux-gnu, aarch64-linux-gnu, arm-linux-gnueabihf, x86_64-apple-darwin, i386-apple-darwin, i686-w64-mingw32, x86_64-w64-mingw32,...).
I can add new architectures (at least for the *-linux-gnu) simply by "enabling them".
The same CI-configuration is used for a number of projects (with different developers), and strives to be practically "zero config" (as in: "drop this CI-configuration in your project and forget about it, the rest will be taken care of for you").
Some of the targets are compiled natively; others are cross-compiled. Some cross-compiled architectures are runnable on the build machines (e.g. I can run the i386-apple-darwin binaries on the x86_64-apple-darwin builder), others are incompatible (e.g. I cannot run aarch64-linux-gnu binaries on the x86_64-linux-gnu builder).
Everything works great so far.
However, I would also like to run unit-tests during the CI - but only if the unit-tests can actually be executed on the build machine.
I'm not interested at all in getting a lot of failed tests simply because I'm cross-building binaries.
To complicate things a bit, what I'm building are not self-contained executables, but plugins that are dlopen()ed (or whatever is the equivalent on the target platform) by a host application. The host application is typically slow to startup, so I'd like to avoid running it if it cannot use the plugins anyhow.
Building plugins also means that I cannot just try-run them.
I'm using the GNU toolchain (make, gcc), or at least something compatible (like clang).
In my first attempt to check whether I am cross-compiling, I compare the target-architecture of the build process (as returned by ${CC} -dumpmachine) with the architecture of GNU make (GNU make will output the architecture triplet used to build make itself when invoked with the -v flag).
Something like the following works surprisingly well, at least for the *-linux-gnu targets:
if make --version | egrep "${TARGETARCH:-$(${CC:-cc} -dumpmachine)}" >/dev/null; then
    echo "native compilation"
else
    echo "cross compiling"
fi
However, it doesn't work at all on Windows/MinGW (when doing a native build, gcc targets x86_64-w64-mingw32 but make was built for x86_64-pc-msys; and it's worse when building 32-bit binaries, which are of course fully runnable) or on macOS (gcc says x86_64-apple-darwin18.0.0, make says i386-apple-darwin11.3.0; don't ask me why).
It's becoming even more of an issue: while writing this and doing some checks, I noticed that even on Linux I get differences like x86_64-pc-linux-gnu vs x86_64-linux-gnu. These differences haven't shown up on my CI builders yet, but I'm sure that's only a matter of time.
So, I'm looking for a more robust solution to detect whether my build-host will be able to run the produced binaries, and skip unit-tests if it does not.
From what I understand of your requirements (I will remove this answer if I have missed the point), you could proceed in three steps:
Instrument your build procedure so that it will produce the exact list of all (gcc 'dumpmachine', make 'built for') pairs you are using.
Keep in the list only the pairs that would allow executing the program.
Determine from bash whether you can execute the binary, given the pair reflecting your system and the information you collected:
#!/bin/bash
# Credits (some borrowed code):
# https://stackoverflow.com/questions/12317483/array-of-arrays-in-bash/35728122
# bash 4 could use associative arrays, but darwin probably only has bash3 (and zsh)
# pairs of (gcc -dumpmachine, make "Built for") values
# ----- collected/formatted/filtered information begins -----
entries=(
'x86_64-w64-mingw32 x86_64-pc-msys'
'x86_64-pc-linux-gnu x86_64-linux-gnu'
'x86_64-apple-darwin18.0.0 i386-apple-darwin11.3.0'
)
# ----- collected/formatted/filtered information ends -----
is_executable()
{
    local gcc
    local make
    local found=0

    if [ $# -ne 2 ]
    then
        echo "is_executable() requires two parameters - terminating."
        exit 1
    fi

    for pair in "${entries[@]}"
    do
        read -r -a arr <<< "${pair}"
        gcc="${arr[0]}"
        make="${arr[1]}"
        if [ "$1" == "${gcc}" ] && [ "$2" == "${make}" ]
        then
            found=1
            break
        fi
    done
    # note: deliberately returns 1 when the pair was found (the inverse of the
    # usual shell convention), so 'echo $?' prints 1 for runnable pairs
    return ${found}
}
# main
MAKE_BUILT_FOR=$(make --version | sed -n 's/Built for //p')
GCC_DUMPMACHINE=$(gcc -dumpmachine)

# pass (prints 1) - note: entries store (gcc, make), so the gcc triplet goes first
is_executable "${GCC_DUMPMACHINE}" "${MAKE_BUILT_FOR}"
echo $?

# pass (prints 1)
is_executable x86_64-w64-mingw32 x86_64-pc-msys
echo $?

# fail (prints 0)
is_executable arm-linux-gnueabihf x86_64-pc-msys
echo $?
As an extra precautionary measure, you should probably verify that the gcc 'dumpmachine' and make 'built for' values you are using actually appear in the collected list, and log an error message and/or exit if they do not.
Perhaps include an extra unit test that is directly runnable, just a "hello world" or return EXIT_SUCCESS;, and if it fails, skip all the other plugin tests for that architecture?
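A minimal sketch of that check, assuming the CI exports the same CC and CFLAGS it uses for the real build:

echo 'int main(void){return 0;}' > conftest.c
if ${CC:-cc} ${CFLAGS} -o conftest conftest.c && ./conftest >/dev/null 2>&1; then
    echo "build host can run the produced binaries"
else
    echo "cross build - skipping the unit tests"
fi
rm -f conftest conftest.c conftest.exe   # .exe covers MinGW's automatic suffix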
Fun fact: on Linux at least, a shared library (ELF shared object) can have an entry point and be executable. (That's how PIE executables are made; what used to be a silly compiler / linker trick is now the default.) IDK if it would be useful to bake a main into one of your existing plugin tests.
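Since the real question for plugins is loadability, another option (a swapped-in technique, not anything from the question) is a tiny natively-built helper that just dlopen()s each artifact, so you never have to start the slow host application. A sketch for the POSIX targets; the file and plugin names are illustrative:

cat > tryload.c <<'EOF'
/* exit 0 if this host can load the given plugin, 1 otherwise */
#include <dlfcn.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <plugin>\n", argv[0]); return 2; }
    void *h = dlopen(argv[1], RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    dlclose(h);
    return 0;
}
EOF
cc -o tryload tryload.c -ldl    # build with the NATIVE compiler, not $CC
./tryload ./myplugin.so && echo "plugin loadable on this host"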
I have a Verifone terminal (Vx520 and Vx820).
I want to create a makefile to compile an app for this terminal.
I have "VRXSDK" version 1.2.0.
How do I do it?
Or: how do I compile a file like "main.c" or "main.cpp" until I have an executable file for a Verifone POS terminal?
Wherever you got your "VRXSDK" from, you should also be able to get some sample project files. You will find a make file in there (likely with a .smk extension). I would recommend you start with that and as you read through it, that should give you more specific questions to ask and things you can look up using your favorite search engine.
A make file is, in essence, a program that invokes the compiler with input parameters you determine based on various factors. It will also invoke the linker to tie it all together. How you go about this varies wildly from one implementation to another, so "How do I create a make file?" is just about as broad as "How do I write a program?", which makes answering it here rather challenging. However, to get you started...
I use Visual Studio as my IDE and I am using NMake. I actually have 2 layers of make files. The outer layer is what is called when I say "build" from my IDE and it is very short:
# Pick one of the following (for LOG_PRINTF messages)
#CompileWithLogSys = -DLOGSYS_FLAG
CompileWithLogSys =

all:
    $(MAKE) /i /f coreBuild.smk /a TerminalType=$(Configuration) CompileWithLogSys=$(CompileWithLogSys) VMACMode=Multi
    $(MAKE) /i /f coreBuild.smk /a TerminalType=$(Configuration) CompileWithLogSys=$(CompileWithLogSys) VMACMode=Single
Lines starting with # are comments.
This outer file makes it really easy for me to toggle a few things when I want to build different variations. You can see that I have turned OFF compilation with LogSys. The more important thing the 2-layer approach gives me is an easy way to compile 2 different versions with a single build command. This runs nmake with "VMACMode" set to "Multi" and then runs it again with it set to "Single". The inner make file will see that parameter and compile to a different root folder for each, so in the end I wind up with 2 folders, each with a different version.
You can do a web search on "nmake parameters" to see what things like /i and /f do, as well as other options I'm not using here. However, I would like to direct your attention to TerminalType=$(Configuration). In Visual Studio, you can select from a drop-down box whether you want "Debug" or "Release". Those 2 options are the defaults, but you can change them; in my case, I have modified them to "eVo" and "Vx". Now I just select in my drop-down box which version I want to compile for, and that gets passed in. Alternately, I could just hard-code both into my outer make file. That's just preference.
My inner make file (which is named "coreBuild.smk") gets much more interesting.
Generally, you start by defining variables, such as "include paths":
# Includes
SDKIncludes = -I$(EVOSDK)\include
ACTIncludes = -I$(EVOACT)include
VCSIncludes = -I$(EVOVCS)include
EOSIncludes = -I$(EOSSDK)\include\ssl2
And/Or libraries:
#Libraries
ACTLibraries = $(EVOACT)OutPut\RV\Files\Static\Release
#Others you may want to include, but I am not using:
#LOGSYSLibraries = $(EVOVMAC)\Output\RV\Lib\Files\Debug
#EOSLibraries = $(EOSSDK)\lib
As well as the path(s) to your files
# App Paths
AppIncludes = .\include
SrcDir = .\source
ObjDir = .\obj
OutDir = .\Output\$(TerminalType)\$(VMACMode)\Files
ResDir = .\Resource
I also like to define my project name here:
ProjectName = MakeFileTest
Note that OutDir uses the TerminalType and VMACMode that we passed in in order to go to a unique folder.
Next, you would generally set your compiler options
# Compiler Options
# Switch based on terminal type
!IF "$(TerminalType)"=="eVo"
CompilerCompatibility=-p
DefineTerminalType = -DEVO_TERMINAL
!ELSE
CompilerCompatibility=
DefineTerminalType = -DVX_TERMINAL
!ENDIF
# Switch based on Multi or Single mode (VMACMode)
!if "$(VMACMode)"=="Multi"
VMACIncludes = -I$(EVOVMAC)include
DefineMulti = -DMULTI_APP_ENABLED
!else
VMACIncludes =
DefineMulti =
!endif
An interesting thing to note above is the -DMULTI_APP_ENABLED. The program I wrote has some blocks that are dependent on #ifdef MULTI_APP_ENABLED. This is not any special name--it's just one I came up with, but the compiler will define it right before it starts compiling my code, so I can turn those code blocks on and off right here.
Next, we are going to kinda' gather everything together. We will start by defining a new var, "Includes" and it will have the flag "-I" (to indicate "include" and then all the things we said above that we wanted to include:
Includes = -I$(AppIncludes) $(SDKIncludes) $(ACTIncludes) $(VMACIncludes) $(VCSIncludes)
Note that you could just type everything long hand here and not go through the extra steps of defining the vars in the first place, but it makes it easier to read, so I think this is pretty normal.
We do pretty much the same thing with compiler options, although note that the specific flags (ex, "-D" "-p") were already included in the original var declarations, so we leave them out here:
COptions =$(CompilerCompatibility) $(CompileWithLogSys) $(DefineTerminalType) $(DefineMulti) -DDEV_TOGGLES_FOR_SYNTAX
Next we set a variable that will tell the linker where the object files are that it needs to stitch together. Note that if you insert new lines, as I have, you need the '\' to tell it that the list continues on the next line.
# Dependencies
AppObjects = \
    $(ObjDir)\$(ProjectName).o \
    $(ObjDir)\Base.o \
    $(ObjDir)\printer.o \
    $(ObjDir)\UI.o \
    $(ObjDir)\Comm.o
We will also set one for any libraries we want to link in:
Libs = $(ACTLibraries)\act2000.a
OK, next we have to sign the file(s). Here we are telling nmake that we will also be creating the resource file and compiling the actual code.
If we are doing a multi-app build, then pseudoOut depends on the .res and the .out files. If not, then it just depends on the .out, because there is no .res. If the dependent file(s) have changed more recently than pseudoOut, then run the vrxhdr..., filesignature..., and move... commands. NOTE that the indentations seen below are required for nMake to work properly.
!if "$(VMACMode)"=="Multi"
pseudoOut : $(ResDir)\$(ProjectName).res $(OutDir)\$(ProjectName).out
!else
pseudoOut : $(OutDir)\$(ProjectName).out
!endif
# This calls vrxhdr: the utility program that fixes the executable program’s header required to load and run the program. Vrxhdr is needed when you want to move a shared library around on the terminal.
$(EVOSDK)\bin\vrxhdr -s 15000 -h 5000 $(OutDir)\$(ProjectName).out
# do the signing using the file signature tool and the .fst file associated with this TerminalType.
"$(VSFSTOOL)\filesignature" $(TerminalType)$(VMACMode).fst -nogui
#echo __________________ move files to out directory __________________
# rename the .p7s file we just created
move $(OutDir)\$(ProjectName).out.p7s $(OutDir)\$(ProjectName).p7s
!if "$(VMACMode)"=="Multi"
copy $(ResDir)\imm.ini $(OutDir)\imm.ini
copy $(ResDir)\$(ProjectName).INS $(OutDir)\$(ProjectName).INS
copy $(ResDir)\$(ProjectName).res $(OutDir)\$(ProjectName).res
!endif
#echo *****************************************************************
Note that the "echo" commands are just to help me read my output logs, as needed.
OK, now we link.
"WAIT!" I can hear you saying, "We haven't compiled yet, we already issued a command to sign, and now we are linking? This is totally out of order!" Yes and no. We haven't actually issued the command to sign yet, we have merely told nmake that we may want to do that and how to do it if we decide so to do. Similarly, we aren't issuing the command to link yet, we are just telling nmake how to do it when we are ready.
# Link object files
$(OutDir)\$(ProjectName).out : $(AppObjects)
    $(EVOSDK)\bin\vrxcc $(COptions) $(AppObjects) $(Libs) -o $(OutDir)\$(ProjectName).out
Remember that multi-app programs need a .res file. Single apps don't. The following will actually build the .res file, as needed.
!if "$(VMACMode)"=="Multi"
# compile resource file
$(ResDir)\$(ProjectName).res : $(ResDir)\$(ProjectName).rck
$(EVOTOOLS)rck2 -S$(ResDir)\$(ProjectName) -O$(ResDir)\$(ProjectName) -M
!endif
Remember those AppObjects? We are finally ready to make them. I'm using the following flags:
-c = compile only
-o = output file name
-e"-" = redirect error output from the sub-tools; "-" sends it to stdout (which is then piped, via |, to the error formatter)
Again, wherever you got your VRXSDK, you should also be able to get some documentation from VeriFone. See "Verix_eVo_volume 3", page 59 for more details on the flags
$(ObjDir)\$(ProjectName).o : $(SrcDir)\$(ProjectName).c
!IF !EXISTS($(OutDir))
    !mkdir $(OutDir)
!ENDIF
    -$(EVOSDK)\bin\vrxcc -c $(COptions) $(Includes) -o $(ObjDir)\$(ProjectName).o $(SrcDir)\$(ProjectName).c -e"-" | "$(EVOTOOLS)fmterrorARM.exe"

$(ObjDir)\Base.o : $(SrcDir)\Base.c
    $(EVOSDK)\bin\vrxcc -c $(COptions) $(Includes) -o $(ObjDir)\Base.o $(SrcDir)\Base.c -e"-" | "$(EVOTOOLS)fmterrorARM.exe"

$(ObjDir)\printer.o : $(SrcDir)\printer.c
    $(EVOSDK)\bin\vrxcc -c $(COptions) $(Includes) -o $(ObjDir)\printer.o $(SrcDir)\printer.c -e"-" | "$(EVOTOOLS)fmterrorARM.exe"

$(ObjDir)\UI.o : $(SrcDir)\UI.c
    $(EVOSDK)\bin\vrxcc -c $(COptions) $(Includes) -o $(ObjDir)\UI.o $(SrcDir)\UI.c -e"-" | "$(EVOTOOLS)fmterrorARM.exe"

$(ObjDir)\Comm.o : $(SrcDir)\Comm.c
    $(EVOSDK)\bin\vrxcc -c $(COptions) $(Includes) -o $(ObjDir)\Comm.o $(SrcDir)\Comm.c -e"-" | "$(EVOTOOLS)fmterrorARM.exe"
I execute the following code on an AIX box using Tcl, and it fails.
The reason for failure is that somehow 'gcc' is ENABLED by DEFAULT on that AIX LPAR.
I want to DISABLE gcc. How can I do that?
AC_DEFUN(SC_ENABLE_GCC, [
    AC_ARG_ENABLE(gcc, [  --enable-gcc    allow use of gcc if available [--disable-gcc]],
        [ok=$enableval], [ok=no])
    if test "$ok" = "yes"; then
        CC=gcc
        AC_PROG_CC
    else
        CC=${CC-cc}
    fi
])
Please help me resolve the issue.
You probably need to do two things:
Specify the --disable-gcc option to configure.
Set the exported CC environment variable to the compiler you actually want to use prior to running configure.
This might be combined into:
CC=cc ./configure --disable-gcc ...
(I commonly have my CC variable set to a clang variant on my platform…)
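On AIX, for example, that might look like the following; the xlc path is an assumption, so substitute whatever C compiler actually lives on your LPAR:

CC=/usr/vac/bin/xlc ./configure --disable-gcc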
I've got some large make files for a third-party project that are not building due to linker issues.
From looking at the make files, I think it should be executing something like:
LIBS = -lm
CC = gcc
bin = bin

myapp: $(bin)/main.o $(bin)/other.o $(bin)/etc.o
    $(CC) $(bin)/main.o $(bin)/other.o $(bin)/etc.o $(LIBS) -o myapp

which should expand to:

gcc bin/main.o bin/other.o bin/etc.o -lm -o myapp
Instead, from the error, it seems to be failing on something like:

cc main.o -o myapp

It also didn't put any of the .o files in the expected bin/ location, but just left them in the source directory...
But I can't locate anywhere that might come from. Is there some way to get some kind of stack trace through the make files?
I am aware of -n and -d, but neither seems to tell me which target, line, and file yielded that command, or which series of targets led there, or the values of any $() expansions. (The one I'm expecting is the only myapp: I can find in any of the makefiles...)
Check out the --debug option. From my manpage:
--debug[=FLAGS]
Print debugging information in addition to normal processing. If the
FLAGS are omitted, then the behavior is the same as if -d was specified.
FLAGS may be a for all debugging output (same as using -d), b for basic
debugging, v for more verbose basic debugging, i for showing implicit
rules, j for details on invocation of commands, and m for debugging
while remaking makefiles.
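For example, basic debugging output is usually enough to see which prerequisite chain triggers each command (myapp stands in for your real target):

make --debug=b myapp 2>&1 | less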
remake is a very good choice, but in a pinch something like the following (saved as debug.mk) can be a great help too. It won't tell you as much as remake, but it might tell you enough to start with.
# Use as: MAKEFILES=debug.mk make
OLD_SHELL := $(SHELL)
ifneq (undefined,$(origin X))
override X = -x
endif
SHELL = $(if $@,$(warning Running $@$(if $<, (from: $<))$(if $?, (newer: $?))))$(OLD_SHELL) $(X)
You can print out the other automatic variables there too if you wanted to see a bit more about what was going on.
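For example (myapp stands in for your target; passing any value for X on the command line also turns on the shell's -x tracing, via the $(origin X) test above):

MAKEFILES=debug.mk make myapp
MAKEFILES=debug.mk make myapp X=1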
I have a project that I'm building on OS X using autotools. I'd like to build a universal binary, but putting multiple -arch options in OBJCFLAGS conflicts with gcc's -M (which automake uses for dependency tracking). I can see a couple workarounds, but none seems straightforward.
Is there a way to force preprocessing to be separate from compilation (so -M is given to CPP, while -arch is handed to OBJC)?
I can see that automake supports options for disabling dependency tracking, and enabling it when it can't be done as a side-effect. Is there a way to force the use of the older style of tracking even when the side-effect based tracking is available?
I don't have any experience with lipo. Is there a good way to tie it into the autotools workflow?
This Apple Technical Note looks promising, but it's something I haven't done. I would think you'd only need to do a universal build when preparing a release, so perhaps you can do without dependency tracking?
There are a few solutions here, and likely one that has evaded me.
The easiest and fastest is to add --disable-dependency-tracking to your run of ./configure.
This will tell it not to generate dependencies at all. The dependency phase is what's killing you, because the -M dependency options are being used during code generation, which can't be done if multiple architectures are being targeted.
So this is 'fine' if you are doing a clean build of someone else's package, or you don't mind doing a 'make clean' before each build. If you are hacking at the source, especially header files, this is no good, as make probably won't know what to rebuild and will leave you with stale binaries.
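In other words, for a one-shot release build, the whole invocation might look something like this sketch (the arch flags are illustrative, not prescriptive):

./configure --disable-dependency-tracking \
    CFLAGS="-arch i386 -arch x86_64" \
    OBJCFLAGS="-arch i386 -arch x86_64"
make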
Better, but more dangerous, is to do something like:
CC=clang CXX=clang++ ./configure
This will make the compiler clang instead of gcc. You have clang if you have a recent Xcode. Configure will realize that clang meets the compilation requirements, but will also decide that it isn't safe for auto-dependency generation. Instead of disabling auto-dependency generation, it will do old-style two-pass generation.
One caveat: This may or may not work as I described, depending on how you set your architecture flags. If you have flags you want to pass to all compiler invocations (i.e. -I for include paths), you should set CPPFLAGS. For code generation, set CFLAGS and CXXFLAGS for C and C++ (and OBJCFLAGS for Objective-C). Typically you would add $CPPFLAGS to those. I typically whip up a shell script, something like:
#!/bin/bash
export CC=clang
export CXX=clang++
export CPPFLAGS="-isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5 -fvisibility=hidden"
export CFLAGS="-arch i386 -arch x86_64 -O3 -fomit-frame-pointer -momit-leaf-frame-pointer -ffast-math $CPPFLAGS"
export CXXFLAGS="$CFLAGS"
./configure
You probably don't want these exact flags, but it should get you the idea.
lipo. You've been down this road it sounds like. I've found the best way is as follows:
a. Make top-level directories like .x86_64 and .i386. Note the '.' in front. If you target a build into the source directories, it usually needs to start with a dot to avoid screwing up 'make clean' later.
b. Run ./configure with something like --prefix=`pwd`/.i386 and however you set the architecture (in this case to i386).
c. Do the make and make install, and assuming it all went well, make clean and make sure the stuff is still in .i386. Repeat for each architecture; a scripted sketch of this loop appears after the script below. The make clean at the end of each phase is pretty important, as the reconfigure may change what gets cleaned, and you really want to make sure you don't pollute an architecture with old architecture files.
d. Assuming you have all your builds the way you want, I usually make a shell script that looks and feels something like this to run at the end that will lipo things up for you.
# move the working builds for posterity and debugging
mv .i386 ./Build/i386
mv .x86_64 ./Build/x86_64
mkdir -p ./Build/lib

for path in ./Build/i386/lib/*
do
    file=${path##*/}
    # only convert 'real' files
    if [ -f "$path" -a ! -L "$path" ]; then
        partner="./Build/x86_64/lib/$file"
        if [ -f "$partner" -a ! -L "$partner" ]; then
            target="./Build/lib/$file"
            lipo -create "$path" "$partner" -output "$target" || { echo "Lipo failed to get phat"; exit 5; }
            echo "Universal Binary Created: $target"
        else
            echo "Skipping: $file, no valid architecture pairing at: $partner"
        fi
    else
        # this is a pretty common case, openssl creates symlinks
        # echo "Skipping: $file, NOT a regular file"
        true
    fi
done
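As referenced in step c, the per-architecture passes of steps b and c can themselves be scripted; a rough sketch (the architectures and flags are illustrative):

for arch in i386 x86_64; do
    ./configure --prefix="$(pwd)/.$arch" CC=clang CFLAGS="-arch $arch" CXXFLAGS="-arch $arch"
    make && make install && make clean || exit 1
done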
What I have NOT figured out is the magic that would let me use gcc and old-school two-pass dependency generation. Frankly, I care less and less each day as I become more and more impressed with clang/llvm.
Good luck!