How to compile OpenSSL with relative rpath - makefile

I have been trying to compile openssl 1.0.0g with the following rpath:
$ORIGIN/../lib64
Every time I run readelf -d apps/openssl, I get results like the following, depending on which escaping variation I tried:
\RIGIN/../lib64
RIGIN/../lib64
ORIGIN/../lib64
I want to set up my rpath without using external tools like chrpath. Is it at all possible? I will basically accept anything that does not involve external tools like chrpath (though with chrpath I would already be done).
Ideally, I would like to do it by passing options on the command line (any form of -Wl,-rpath,$ORIGIN/../lib64).
I don't mind editing the generated Makefile, which is what I tried last. If only I could get it to print a literal dollar sign! I tried modifying LIBRPATH under the BUILDENV= block, with no luck. My best results so far:
LIBRPATH=$$'ORIGIN/../lib64 # result: /../lib64
LIBRPATH=$$$$'ORIGIN/../lib64 # result: 12345<pid>/../lib64
I have read various rpath related questions and tried various escaping and quoting tricks but nothing worked so far!

In your makefile try:
-Wl,-rpath,${ORIGIN}/../lib64
I am assuming that ORIGIN is a shell variable.
EDIT
I have just found an answer to your question (better late than never):
You need to prevent make from interpolating variables; to do that, use $$ (a double dollar sign):
-Wl,-rpath,'$$ORIGIN/../lib64'
I know that it works because I have tested it with my own application, enjoy :)
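For what it's worth, the $$ behaviour is easy to confirm with a throwaway Makefile (my own demo, not from the original answer; assumes GNU make is on the PATH):

```shell
# Throwaway Makefile: the recipe contains $$ORIGIN; make collapses $$ to one
# literal $, and the single quotes stop the shell from expanding $ORIGIN.
printf "show:\n\t@echo -Wl,-rpath,'\$\$ORIGIN/../lib64'\n" > Makefile.demo
make -s -f Makefile.demo show
# prints: -Wl,-rpath,$ORIGIN/../lib64
```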

I went the chrpath way.
http://enchildfone.wordpress.com/2010/03/23/a-description-of-rpath-origin-ld_library_path-and-portable-linux-binaries/
It is quite complicated to counter shell expansion of `$$ORIGIN`` in openssl. Sooner or later, it gets expanded because of the dollar sign. If you really want to go this way, you can do it. I have found the following to work with openssl 1.0.1g on Linux. In Makefile.shared, look for this line:
DO_GNU_APP=LDFLAGS="$(CFLAGS) -Wl,-rpath,$(LIBRPATH)"
Replace it with the following. This quoting-fu neutralizes the expansion of $. The double $$ is how you get a single dollar sign in makefiles.
DO_GNU_APP=LDFLAGS="$(CFLAGS) -Wl,-rpath,'"'$$'"ORIGIN/../lib64'"
After compiling:
readelf -d apps/openssl | grep RPATH
0x000000000000000f (RPATH) Library rpath: ['$ORIGIN/../lib64']

OK, I spent several hours fighting with this same issue and trying all manner of crazy escaping; at one point I was up to eight $ signs, at which point I decided that there must be another way.
In fact, it appears that there is, at least with GNU ld.
Instead of -Wl,-rpath,\\$$$\$\$\$$$\$\\\\$ or some other elder god invoking monstrosity, just do this:
echo '-rpath=$ORIGIN/../lib64' > rpathorigin
./config -Wl,@$(pwd)/rpathorigin ...
I don't see that ld.gold documents the @ flag, and I have no idea about, say, lld. But if you are using GCC and it is invoking BFD ld, the above may just work for you.
Of course, the actual path used with origin should be customized as needed, and I have no opinion on ./config vs ./Configure. But using the response file trick seems to entirely sidestep the shell/make escaping nightmare.

I don't mind editing the generated Makefile, which is what I have been trying last...
I'm not sure you can set it with a shell variable and a relative path. I don't think ldd expands the $ORIGIN in $ORIGIN/../lib64. In this case, I think you need to use ldconfig to add $ORIGIN/../lib64 to the library search paths. See finding ldd search path on Server Fault for more details.
Since I'm not sure, I'll provide the instructions anyway. You don't need to change the Makefiles. As a matter of fact, I did not have any luck doing so in the past because things get overwritten, and other things like CFLAGS and LDFLAGS get ignored.
Also see Build OpenSSL with RPATH? Your question and the cited question are different questions that converge on similar answers (neither is a duplicate of the other). But it provides the OpenSSL devs' position on RPATHs. It was a private email, so I shared the relevant details rather than the whole message.
If you manage to embed $ORIGIN/../lib64 in the ELF section and it works, then please report back. Below, I am using /usr/local/ssl/lib for my RPATH. You should substitute $ORIGIN/../lib64 for /usr/local/ssl/lib.
OpenSSL supports RPATHs out of the box for BSD targets (but not others). From Configure:
# Unlike other OSes (like Solaris, Linux, Tru64, IRIX) BSD run-time
# linkers (tested OpenBSD, NetBSD and FreeBSD) "demand" RPATH set on
# .so objects. Apparently application RPATH is not global and does
# not apply to .so linked with other .so. Problem manifests itself
# when libssl.so fails to load libcrypto.so. One can argue that we
# should engrave this into Makefile.shared rules or into BSD-* config
# lines above. Meanwhile let's try to be cautious and pass -rpath to
# linker only when --prefix is not /usr.
if ($target =~ /^BSD\-/)
{
$shared_ldflag.=" -Wl,-rpath,\$(LIBRPATH)" if ($prefix !~ m|^/usr[/]*$|);
}
The easiest way to do it for OpenSSL 1.0.2 appears to be to add it to the linker flags during configuration:
./config -Wl,-rpath=/usr/local/ssl/lib
You can also edit the Configure file and hard-code the rpath. For example, I am working on Debian x86_64, so I opened the file Configure in an editor, copied the linux-x86_64 entry, named it linux-x86_64-rpath, and made the following change to add the -rpath option:
"linux-x86_64-rpath", "gcc:-m64 -DL_ENDIAN -O3 -Wall -Wl,-rpath=/usr/local/ssl/lib::
-D_REENTRANT::-Wl,-rpath=/usr/local/ssl/lib -ldl:SIXTY_FOUR_BIT_LONG RC4_CHUNK DES_INT DES_UNROLL:
${x86_64_asm}:elf:dlfcn:linux-shared:-fPIC:-m64:.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR):::64",
Above, fields 2 and 6 were changed. They correspond to $cflag and $ldflag in OpenSSL's build system.
Then, Configure with the new configuration:
$ ./Configure linux-x86_64-rpath shared no-ssl2 no-ssl3 no-comp \
--openssldir=/usr/local/ssl enable-ec_nistp_64_gcc_128
Finally, after make, verify the settings stuck:
$ readelf -d ./libssl.so | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
$ readelf -d ./libcrypto.so | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
$ readelf -d ./apps/openssl | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
Once you run make install, ldd will produce the expected results:
$ ldd /usr/local/ssl/lib/libssl.so
linux-vdso.so.1 => (0x00007ffceff6c000)
libcrypto.so.1.0.0 => /usr/local/ssl/lib/libcrypto.so.1.0.0 (0x00007ff5eff96000)
...
$ ldd /usr/local/ssl/bin/openssl
linux-vdso.so.1 => (0x00007ffc30d3a000)
libssl.so.1.0.0 => /usr/local/ssl/lib/libssl.so.1.0.0 (0x00007f9e8372e000)
libcrypto.so.1.0.0 => /usr/local/ssl/lib/libcrypto.so.1.0.0 (0x00007f9e832c0000)
...

Don't ask me why, but this worked for me with OpenSSL 1.1.1i for getting around the $-sign issue:
\$\$\$$ORIGIN
Example:
./Configure linux-x86_64 '-Wl,-rpath,\$\$\$$ORIGIN'
Alternatively, if this command-line hack doesn't sit well with you, you can always use chrpath after building, as others have suggested:
./Configure linux-x86_64 '-Wl,-rpath,XORIGIN'
make depend
make all
chrpath -r "\$ORIGIN" libssl.so


configure - check for availability of Perl headers (solved)

I'm developing an open source application where I'd like to include Perl conditionally (for different text-processing purposes - that's just for information, not a concept to be criticized :-). How would you normally check for the availability of Perl headers using autoconf?
In my configure.ac I use the following for stuff that has pkg-config files:
PKG_CHECK_MODULES(GTK, gtk+-3.0, [AC_DEFINE([HAVE_GTK_3], 1, [Define to 1 if GTK+ 3 is present])])
PKG_CHECK_MODULES(SQLITE, sqlite3, [AC_DEFINE([HAVE_SQLITE], 1, [Define to 1 if SQLite is present])])
Unfortunately, AFAIU, Perl doesn't ship any .pc files. In my Makefile.in, to generate compiler flags, I use perl -MExtUtils::Embed -e ccopts -e ldopts instead of executing pkg-config.
Hence the question: how would you do this in a prettier way?
I tried this:
AC_CHECK_HEADER([perl.h], AC_DEFINE(HAVE_PERL, 1, [Define to 1 if Perl headers are present]))
But it doesn't work unfortunately:
checking for perl.h... no
On my system (and probably almost everywhere else) it's not just in /usr/include:
gforgx@shinjitsu nf $ locate perl.h | tail -n 1
/usr/lib64/perl5/CORE/perl.h
Is there at all a 'legal' way to extend search path for AC_CHECK_HEADER without using automake and AM_ macros?
So far I have tried manipulating CPPFLAGS, which is much better, but it still fails (probably due to other inclusions in perl.h):
configure: WARNING: perl.h: present but cannot be compiled
configure: WARNING: perl.h: check for missing prerequisite headers?
configure: WARNING: perl.h: see the Autoconf documentation
configure: WARNING: perl.h: section "Present But Cannot Be Compiled"
configure: WARNING: perl.h: proceeding with the compiler's result
configure: WARNING: ## ------------------------------------ ##
configure: WARNING: ## Report this to gforgx@protonmail.com ##
configure: WARNING: ## ------------------------------------ ##
checking for perl.h... no
Many thanks!
Update
Finally this works:
PERL_CPPFLAGS=`perl -MExtUtils::Embed -e ccopts`
PERL_LIBS=`perl -MExtUtils::Embed -e ldopts`
old_CPPFLAGS="$CPPFLAGS"
old_LIBS="$LIBS"
CPPFLAGS="$CPPFLAGS $PERL_CPPFLAGS"
LIBS="$LIBS $PERL_LIBS"
# TODO: figure out why first option doesn't work
#AC_CHECK_HEADER([perl.h], AC_DEFINE(HAVE_PERL, 1, [Define to 1 if Perl headers are present]))
AC_CHECK_FUNCS(perl_construct, AC_DEFINE(HAVE_PERL, 1, [Define to 1 if Perl headers are present]))
CPPFLAGS="$old_CPPFLAGS"
LIBS="$old_LIBS"
I'm not much of an autoconf expert, but I think you can put plain shell snippets like
PERL_CFLAGS=`perl -MExtUtils::Embed -e ccopts`
PERL_LDFLAGS=`perl -MExtUtils::Embed -e ldopts`
into your configure.ac. Probably the right way to do it is to use AC_ARG_WITH to let the user specify those variables, and only get them from EU::E if the user hasn't overridden them. (Likewise, you can use one to have --with-perl override the HAVE_PERL check entirely.)
Then you can use AC_SUBST to make the values from configure-time available in the Makefile (so you don't need to call EU::E in Makefile.in).
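Sketching that out (my own untested fragment, not from the thread; macro and variable names are illustrative), the AC_ARG_WITH / AC_SUBST plumbing in configure.ac could look roughly like:

```m4
AC_ARG_WITH([perl],
  [AS_HELP_STRING([--without-perl], [build without embedded Perl support])],
  [], [with_perl=yes])

AS_IF([test "x$with_perl" = xyes], [
  dnl Respect user-supplied values; otherwise ask ExtUtils::Embed.
  AS_IF([test -z "$PERL_CFLAGS"],  [PERL_CFLAGS=`perl -MExtUtils::Embed -e ccopts`])
  AS_IF([test -z "$PERL_LDFLAGS"], [PERL_LDFLAGS=`perl -MExtUtils::Embed -e ldopts`])
  AC_DEFINE([HAVE_PERL], [1], [Define to 1 if Perl embedding is enabled])
])

AC_SUBST([PERL_CFLAGS])
AC_SUBST([PERL_LDFLAGS])
```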
And finally, the heart of the issue, I don't think there's a nice way to make AC_CHECK_HEADER aware that it needs some nonstandard flags, but you can do
old_CFLAGS="${CFLAGS}"
CFLAGS="${PERL_CFLAGS}"
AC_CHECK_HEADER(...)
CFLAGS="${old_CFLAGS}"
to run AC_CHECK_HEADER with PERL_CFLAGS in effect.
Note that you need Perl's C header(s) only if you want to build a Perl extension or embed a Perl interpreter in your binary. The latter seems more likely to be what you have in mind, but in that case, do consider whether it would work as well or better to simply use an external Perl interpreter, launched programmatically by your application at need. Use of an external Perl interpreter would not involve the C header at all.
However, you seem already committed to binary integration with Perl. In that case, configure is the right place to test for the availability and location of Perl's development headers, and to determine the appropriate compilation and linker flags. Putting it there also gives you the ability to use Automake conditionals to help you configure for and manage both with-Perl and without-Perl builds, if you should want to do that.
To that end, even though Autoconf does not provide built-in macros for Perl detection / configuration, the Autoconf Archive has a few of them. In particular, ax_perl_ext_flags describes its own behavior as ...
Fetches the linker flags and C compiler flags for compiling and linking programs that embed a Perl interpreter.
... which I take to be appropriate for your purposes. After adding that macro to your project, you might incorporate it into your configure.ac like so:
PERL_CFLAGS=
PERL_LDFLAGS=
AX_PERL_EXT_FLAGS([PERL_CFLAGS], [PERL_LDFLAGS])
# ...
AC_SUBST([PERL_CFLAGS])
AC_SUBST([PERL_LDFLAGS])
That macro uses a technique similar to what you describe doing in your Makefile.in, but in a rather more robust way.
As for checking on the header, once you have the appropriate C compiler flags for Perl, you put those into effect (just) for the scope of the header check. This is necessary because configure uses the compiler to test for the presence of the header, and if the compiler requires extra options (say an -I option) to find the header at compile time, then it will need the same at configuration time. Something like this, then:
CFLAGS_SAVE=$CFLAGS
# Prepend the Perl CFLAGS to any user-specified CFLAGS
CFLAGS="${PERL_CFLAGS} ${CFLAGS}"
# This will automatically define HAVE_PERL_H if the header is found:
AC_CHECK_HEADERS([perl.h])
# Restore the user-specified CFLAGS
CFLAGS=$CFLAGS_SAVE

Does "-Wl,-soname" work on MinGW or is there an equivalent?

I'm experimenting a bit with building DLLs on windows using MINGW.
A very good summary (in my opinion) can be found at:
https://www.transmissionzero.co.uk/computing/building-dlls-with-mingw/
There is even a basic project which can be used for the purpose of this discussion:
https://github.com/TransmissionZero/MinGW-DLL-Example/releases/tag/rel%2Fv1.1
Note there is a small mistake in this project which will make it fail out of the box: the Makefile does not create an "obj" directory. Either adjust the Makefile or create the directory manually.
So here is the real question.
How do you change the Windows DLL name so that it differs from the actual DLL file name?
Essentially I'm trying to achieve on Windows, the effect which is very well described here on Linux:
https://www.man7.org/conf/lca2006/shared_libraries/slide4b.html
Initially I tried changing "InternalName" and "OriginalFilename" in the resource file used to create the DLL, but that does not work.
In a second step, I tried adding "-Wl,-soname,SoName.dll" on the command that performs the final link, to change the Windows DLL name.
However, that does not seem to have the expected effect (I'm using MingW 7.3.0, x86_64-posix-seh-rev0).
Two things make me say that:
1/ The test executable still works (I would expect it to fail, because it tries to locate SoName.dll but can't find it).
2/ "pexports.exe AddLib.dll" produces the output below, where the library name hasn't changed:
LIBRARY "AddLib.dll"
EXPORTS
Add
bar DATA
foo DATA
Am I doing anything wrong? Are my expectations wrong, perhaps?
Thanks for your help!
David
First of all, I would like to say it's important to use either a .def file for specifying the exported symbols or use __declspec(dllexport) / __declspec(dllimport), but never mix these two methods. There is also another method using the -Wl,--export-all-symbols linker flag, but I think that's ugly and should only be used when quick and dirty is what you want.
It is possible to tell MinGW to use a DLL filename that does not match the library name. In the link step use -o to specify the DLL and use -Wl,--out-implib, to specify the library file.
Let me illustrate by showing how to build chebyshev as both a static and a shared library. Its sources consist of only 2 files: chebyshev.h and chebyshev.c.
Compile
gcc -c -o chebyshev.o chebyshev.c -I. -O3
Create static library
ar cr libchebyshev.a chebyshev.o
Create a .def file (as one wasn't supplied and __declspec(dllexport) / __declspec(dllimport) wasn't used either). Note that this file doesn't contain a LIBRARY line, allowing the linker to specify the DLL filename later.
There are several ways to do this if the .def file wasn't supplied by the project:
3.1. Get the symbols from the .h file(s). This may be hard as sometimes you need to distinguish for example between type definitions (like typedef, enum, struct) and actual functions and variables that need to be exported;
echo "EXPORTS" > chebyshev.def
sed -n -e "s/^.* \**\(chebyshev_.*\) *(.*$/\1/p" chebyshev.h >> chebyshev.def
3.2. Use nm to list symbols in the library file and filter out the type of symbols you need.
echo "EXPORTS" > chebyshev.def
nm -f posix --defined-only -p libchebyshev.a | sed -n -e "s/^_*\([^ ]*\) T .*$/\1/p" >> chebyshev.def
Link the static library into the shared library.
gcc -shared -s -mwindows -def chebyshev.def -o chebyshev-0.dll -Wl,--out-implib,libchebyshev.dll.a libchebyshev.a
If you have a project that uses __declspec(dllexport) / __declspec(dllimport) things are a lot easier. And you can even have the link step generate a .def file using the -Wl,--output-def, linker flag like this:
gcc -shared -s -mwindows -o myproject.dll -Wl,--out-implib,myproject.dll.a -Wl,--output-def,myproject.def myproject.o
This answer is based on my experiences with C. For C++ you really should use __declspec(dllexport) / __declspec(dllimport).
I believe I have found one mechanism to achieve on Windows, the effect described for Linux in https://www.man7.org/conf/lca2006/shared_libraries/slide4b.html
This involves dlltool.
In the example Makefile there was originally this line:
gcc -o AddLib.dll obj/add.o obj/resource.o -shared -s -Wl,--subsystem,windows,--out-implib,libaddlib.a
I simply replaced it with the 2 lines below instead:
dlltool -e obj/exports.o --dllname soname.dll -l libAddLib.a obj/resource.o obj/add.o
gcc -o AddLib.dll obj/resource.o obj/add.o obj/exports.o -shared -s -Wl,--subsystem,windows
Really, the key seems to be creating, with dlltool, an exports file in conjunction with --dllname. This exports file is linked with the object files that make up the body of the DLL, and it handles the interface between the DLL and the outside world. Note that dlltool also creates the "import library" at the same time.
Now I get the expected effect, and I can see that the "Internal DLL name" (not sure what the correct terminology is) has changed:
First evidence:
>> dlltool.exe -I libAddLib.a
soname.dll
Second evidence:
>> pexports.exe AddLib.dll
LIBRARY "soname.dll"
EXPORTS
Add
bar DATA
foo DATA
Third evidence:
>> AddTest.exe
Error: the code execution cannot proceed because soname.dll was not found.
Although the desired effect is achieved, this still seems to be some sort of workaround. My understanding (but I could well be wrong) is that the gcc option "-Wl,-soname" should achieve exactly the same thing. At least it does on Linux; is this broken on Windows, perhaps?

rpath=$ORIGIN not having desired effect?

I've got a binary "CeeloPartyServer" that needs to find libFoundation.so at runtime, on a FreeBSD machine. They're both in the same directory. I compile (on another platform, using a cross compiler) CeeloPartyServer using linker flag -rpath=$ORIGIN.
> readelf -d CeeloPartyServer |grep -i rpath
0x0000000f (RPATH) Library rpath: [$ORIGIN]
> ls
CeeloPartyServer Contents Foundation.framework libFoundation.so
> ./CeeloPartyServer
/libexec/ld-elf.so.1: Shared object "libFoundation.so" not found, required by "CeeloPartyServer"
Why isn't it finding the library when I try to run it?
My exact linker line is: -lm -lmysql -rpath=$ORIGIN.
I am pretty sure I don't have to escape $ or anything like that since my readelf analysis does in fact show that library rpath is set to $ORIGIN. What am I missing?
I'm assuming you are using gcc and binutils.
If you do
readelf -d CeeloPartyServer | grep ORIGIN
You should get back the RPATH line you found above, but you should also see some entries about flags. The following is from a library that I built.
0x000000000000000f (RPATH) Library rpath: [$ORIGIN/../lib]
0x000000000000001e (FLAGS) ORIGIN
0x000000006ffffffb (FLAGS_1) Flags: ORIGIN
If you aren't seeing some sort of FLAGS entries, you probably haven't told the linker to mark the object as requiring origin processing. With binutils ld, you do this by passing the -z origin flag.
I'm guessing you are using gcc to drive the link, though, so in that case you will need to pass the flag through the compiler by adding -Wl,-z,origin to your gcc link line.
Depending on how many layers this flag passes through before the linker sees it, you may need to use $$ORIGIN or even \$$ORIGIN. You will know that you have it right when readelf shows an RPATH header that looks like $ORIGIN/../lib or similar. The extra $ and the backslash are just to prevent the $ from being processed by other tools in the chain.
Use \$ORIGIN if you are using chrpath, and \$\$ORIGIN if you are providing it directly in LDFLAGS.
Use ldd CeeloPartyServer to check whether the dependency .so is recorded with a ./ prefix or not (e.g. libFoundation.so vs ./libFoundation.so).
In the common case it should be libFoundation.so, without the ./ prefix.
If the ./ prefix is present for some uncommon reason, make sure the CWD is the same folder as libFoundation.so, because in that case $ORIGIN will not be applied.
For example:
g++ --shared -Wl,--rpath="\$ORIGIN" ./libFoundation.so -o lib2.so
produces a library lib2.so whose dependency is recorded as ./libFoundation.so, whereas
g++ --shared -Wl,--rpath="\$ORIGIN" libFoundation.so -o lib2.so
produces one recorded as libFoundation.so instead.

trouble building universal binary with autotools

I have a project that I'm building on OS X using autotools. I'd like to build a universal binary, but putting multiple -arch options in OBJCFLAGS conflicts with gcc's -M (which automake uses for dependency tracking). I can see a couple workarounds, but none seems straightforward.
Is there a way to force preprocessing to be separate from compilation (so -M is given to CPP, while -arch is handed to OBJC)?
I can see that automake supports options for disabling dependency tracking, and enabling it when it can't be done as a side-effect. Is there a way to force the use of the older style of tracking even when the side-effect based tracking is available?
I don't have any experience with lipo. Is there a good way to tie it into the autotools work flow?
This Apple Technical Note looks promising, but it's something I haven't done. I would think you'd only need to do a universal build when preparing a release, so perhaps you can do without dependency tracking?
There are a few solutions here, and likely one that has evaded me.
The easiest and fastest is to add --disable-dependency-tracking to your run of ./configure.
This will tell it not to generate dependencies at all. The dependency phase is what's killing you, because the -M dependency options are being used during code generation, which can't be done if multiple architectures are being targeted.
So this is 'fine' if you are doing a clean build of someone else's package, or if you don't mind doing a 'make clean' before each build. If you are hacking at the source, especially header files, this is no good, as make probably won't know what to rebuild and will leave you with stale binaries.
Better, but more dangerous is to do something like:
CC=clang CXX=clang++ ./configure
This will make the compiler clang instead of gcc. You have clang if you have a recent Xcode. Configure will realize that clang meets the compilation requirements, but will also decide that it isn't safe for auto-dependency generation. Instead of disabling auto dependency gen, it will do old-style 2 pass generation.
One caveat: this may or may not work as I described, depending on how you set your architecture flags. If you have flags you want to pass to all compiler invocations (e.g. -I for include paths), you should set CPPFLAGS. For code generation, set CFLAGS and CXXFLAGS for C and C++ (and OBJCFLAGS for Objective-C). Typically you would add $CPPFLAGS to those. I typically whip up a shell script something like:
#!/bin/bash
export CC=clang
export CXX=clang++
export CPPFLAGS="-isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5 -fvisibility=hidden"
export CFLAGS="-arch i386 -arch x86_64 -O3 -fomit-frame-pointer -momit-leaf-frame-pointer -ffast-math $CPPFLAGS"
export CXXFLAGS=$CFLAGS
./configure
You probably don't want these exact flags, but it should get you the idea.
lipo. It sounds like you've been down this road. I've found the best way is as follows:
a. Make top-level directories like .x86_64 and .i386. Note the '.' in front. If you target a build into the source directories it usually needs to start with a dot to avoid screwing up 'make clean' later.
b. Run ./configure with something like --prefix=`pwd`/.i386 and however you set the architecture (in this case to i386).
c. Do the make, and make install, and, assuming it all went well, make clean, and make sure the stuff is still in .i386. Repeat for each architecture. The make clean at the end of each phase is pretty important, as the reconfig may change what gets cleaned, and you really want to make sure you don't pollute one architecture with another architecture's files.
d. Assuming you have all your builds the way you want, I usually make a shell script that looks and feels something like this to run at the end that will lipo things up for you.
# move the working builds for posterity and debugging
mkdir -p ./Build ./Build/lib
mv .i386 ./Build/i386
mv .x86_64 ./Build/x86_64
for path in ./Build/i386/lib/*
do
file=${path##*/}
# only convert 'real' files (test the full path, not the bare name)
if [ -f "$path" -a ! -L "$path" ]; then
partner="./Build/x86_64/lib/$file"
if [ -f "$partner" -a ! -L "$partner" ]; then
target="./Build/lib/$file"
lipo -create "$path" "$partner" -output "$target" || { echo "Lipo failed to get phat"; exit 5; }
echo "Universal Binary Created: $target"
else
echo "Skipping: $file, no valid architecture pairing at: $partner"
fi
else
# this is a pretty common case, openssl creates symlinks
# echo "Skipping: $file, NOT a regular file"
true
fi
done
What I have NOT figured out is the magic that would let me use gcc, and old school 2-pass dep gen. Frankly I care less and less each day as I become more and more impressed with clang/llvm.
Good luck!

What's the "correct" way to determine target and architecture for GNU binutils?

In my build chain, I need to do this:
objcopy -I binary -O $BFDNAME -B $BFDARCH <this> <that>
in order to get a binary file into library form. Because I want other people to be able to use this, I need to know how to get $BFDNAME and $BFDARCH from their toolchain when they run the build. I can get the values locally by running objdump -f against a file I've already built, but is there a better way which won't leave me compiling throw-away files just to get configuration values?
Thank you for pointing this out, regularfry! Your answer helped me find another solution, which works without specifying the architecture at all:
ld -r -b binary -o data.o data.txt
On my system (Ubuntu Linux, binutils 2.22) both objcopy and ld approaches produce identical object files.
All credit goes to:
http://stupefydeveloper.blogspot.de/2008/08/cc-embed-binary-data-into-elf.html
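A minimal demonstration of the ld approach (my own addition; any binutils install should do):

```shell
# Embed data.txt into a relocatable object; ld synthesizes start/end/size
# symbols derived from the input path.
printf 'hello' > data.txt
ld -r -b binary -o data.o data.txt
nm data.o   # lists _binary_data_txt_start, _binary_data_txt_end, _binary_data_txt_size
```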
For future reference, the answer seems to be this: the first entry in the output of objdump -i is the default, native format of the system.
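If that holds on your toolchain, one hedged way to script it (my own sketch; the exact layout of objdump -i output may vary across binutils versions, so verify before relying on it):

```shell
# Grab the first target name objdump -i prints; on typical binutils builds it
# appears on the second line, right after the version banner.
BFDNAME=$(objdump -i | awk 'NR==2 {print $1; exit}')
echo "BFDNAME=$BFDNAME"
```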
