Build FFMPEG 4.2 with Android NDK r20

I am trying to build FFMPEG 4.2 using Android NDK r20 and I am having an issue with configure.
I followed Ilia Kosynkin's blog post (https://medium.com/@ilja.kosynkin/building-ffmpeg-4-0-for-android-with-clang-642e4911c31e) and, with a few minor changes to build.sh, I successfully built FFMPEG 4.0.2 using Android NDK r17c for API level 14 on an Ubuntu 16 VM.
I updated FFMPEG to 4.2 and the Android NDK to r20 and got these and other compiler errors.
~/android-ndk/sysroot/usr/include/stdlib.h:61:7: error: expected identifier or '('
char* getenv(const char* __name);
^
./config.h:17:19: note: expanded from macro 'getenv'
#define getenv(x) NULL
^
~/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/8.0.7/include/stddef.h:105:18: note: expanded from macro 'NULL'
# define NULL ((void*)0)
and many like this:
./libavutil/libm.h:54:32: error: static declaration of 'cbrt' follows non-static declaration
static av_always_inline double cbrt(double x)
^
~/android-ndk/sysroot/usr/include/math.h:191:8: note: previous declaration is here
double cbrt(double __x);
^
In addition to cbrt, about a dozen other functions were redefined the same way, mostly math functions (e.g. lrint, round, trunc) plus inet_aton. I opened the generated config.h, commented out #define getenv(x) NULL and changed a number of defines like #define HAVE_CBRT 0 to #define HAVE_CBRT 1. I ran make and make install and the build was successful.
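For the record, the manual fix-ups amounted to roughly the following (a sketch of what I did by hand; the exact HAVE_* names depend on which checks failed in your configure run):

# Run after configure and before make: undo the bogus getenv stub and
# re-enable the libm functions whose checks wrongly failed.
sed -i 's|^#define getenv(x) NULL|/* #define getenv(x) NULL */|' config.h
for fn in CBRT CBRTF LRINT LRINTF ROUND ROUNDF TRUNC TRUNCF; do
    sed -i "s|^#define HAVE_${fn} 0|#define HAVE_${fn} 1|" config.h
done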
So my question is, are there ffmpeg options that I can pass to configure that will generate a config.h that I don't have to modify in order to get a successful build?
EDIT: Additional information from config.log.
It appears that check_mathfunc in configure is failing for NDK r20 but I can't tell why. Here is an example of a link command failing for the check for truncf. The error does not make any sense to me.
~/android-ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-ld
-L~/android-ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/lib/gcc/arm-linux-androideabi/4.9.x
-L~/android-ndk/platforms/android-29/arch-arm/usr/lib
--fix-cortex-a8 -lc --sysroot=~/android-ndk/sysroot -fPIE -pie
-o /tmp/ffconf.1OTX8pa8/test /tmp/ffconf.1OTX8pa8/test.o -lgcc
~/android-ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-ld:
fatal error: -f/--auxiliary may not be used without -shared
EDIT 2: The error seems to be caused by the -pie option (create a position independent executable). Previous versions of FFMPEG did not have these two lines in the configure script for Android:
add_cflags -fPIE
add_ldexeflags -fPIE -pie
If I add -shared to add_ldexeflags I get the error "-shared and -pie are incompatible". If I replace -pie with -shared, check_mathfunc succeeds but I don't know if that is the correct thing to do. It seems odd that -fPIE requires -shared but -pie cannot be used with it.
Replacing -pie with -shared seems to fix the config.h file, but now I get 'sys/sysctl.h' file not found during the build of libavutil. NDK r20 has two instances of sysctl.h and NDK r17c has one, but none of them are in a directory named sys.
EDIT 3: I am going to chalk this one up to a bug in the FFMPEG configure script. configure checks for the existence of a function by generating a small source file that uses the function and then compiling and linking the generated file; if anything fails, the function is considered unavailable. For NDK r17c, check_func sysctl fails and the build excludes sysctl functionality. The test wrongly succeeds for NDK r20 because check_func does not verify whether or not sys/sysctl.h exists; it just prototypes sysctl() and calls it. I solved this issue by adding a function to configure named check_sysfunc that tries to include sys/sysctl.h. I now get past this error and have a new one about an implicit declaration being invalid. I have to assume it is also a deficiency in configure and hopefully there won't be too many more of these.
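The helper I added looks roughly like this (a sketch modeled on configure's existing check_func/check_ld helpers; I am paraphrasing from memory, so treat the exact code as approximate):

# Like check_func, but the test program includes the named header,
# so a missing sys/sysctl.h now fails the check as it should.
check_sysfunc(){
    func=$1
    header=$2
    shift 2
    disable $func
    check_ld "cc" "$@" <<EOF && enable $func
#include <$header>
int main(void){ void *p = (void*)$func; return !!p; }
EOF
}
# usage: check_sysfunc sysctl sys/sysctl.h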

I'm currently trying to do the same thing as you. Apparently, this guy (not me) managed to compile ffmpeg 4.2 with NDK r20. So far it seems to work with ARCH=armv7a.
[Youtube] https://www.youtube.com/watch?v=RP8SEAhcq5M
[Github] https://github.com/binglingziyu/ffmpeg-android-build
[P.s. I don't have enough reputation to post as a comment... so yeah...]

Passing -fPIE to ld directly is wrong. That's a compiler flag. The linker flag is spelled -pie. -fPIE is -f PIE, which is a completely different argument:
https://linux.die.net/man/1/ld
-f name
--auxiliary=name
When creating an ELF shared object, set the internal DT_AUXILIARY field to the specified name. This tells the dynamic linker that the symbol table of the shared object should be used as an auxiliary filter on the symbol table of the shared object name. If you later link a program against this filter object, then, when you run the program, the dynamic linker will see the DT_AUXILIARY field. If the dynamic linker resolves any symbols from the filter object, it will first check whether there is a definition in the shared object name. If there is one, it will be used instead of the definition in the filter object. The shared object name need not exist. Thus the shared object name may be used to provide an alternative implementation of certain functions, perhaps for debugging or for machine specific performance.
This option may be specified more than once. The DT_AUXILIARY entries will be created in the order in which they appear on the command line.
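To make the difference concrete, here is a sketch (hypothetical file names) of where each flag belongs; the commented-out last command reproduces the failure from the question:

# -fPIE is a compiler flag, consumed at compile time:
clang -fPIE -c test.c -o test.o
# at link time, go through the compiler driver, which forwards -pie to ld:
clang -fPIE -pie test.o -o test
# bare ld instead parses -fPIE as "-f PIE" (set DT_AUXILIARY to "PIE"):
#   ld -fPIE -pie test.o -o test   ->  fatal error: -f/--auxiliary may not be used without -shared

So the robust fix on the FFmpeg side is to make sure configure's link checks run through the compiler driver instead of bare ld, e.g. by pointing both --cc and --ld at the NDK's clang wrapper.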
As for the other issue:
NDK r20 has 2 instances of sysctl.h and NDK r17c has 1 but none of them are in a directory named sys.
I just filed https://github.com/android-ndk/ndk/issues/1068, but I'm not sure if we'll fix that one. Apparently its use is strongly discouraged. Fixing the source to not use that is probably the better call.

Related

How to make compiler generate a "elf32-x86-64" format object file?

First, some background info about the elf32-x86-64 format.
It is a format that leverages 64-bit hardware while enforcing 32-bit pointers (see Ref1 and Ref2).
Question
I am trying to link the Google Test framework binaries to my project.
I use objdump -f to check the format of the Google Test binaries and my own.
Google Test's format is elf64-x86-64; mine is elf32-x86-64. So they cannot be linked together.
Then I added the content below to Google Test's internal_utils.cmake file:
set(ZEPHYR_LINK_FLAGS "-Wl,--oformat=elf32-x86-64")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${ZEPHYR_LINK_FLAGS}")
I hoped the linker flag would change the output format to elf32-x86-64, but the Google Test build failed with the error below:
/usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so: error adding symbols: File in wrong format
The /usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so is also in elf64-x86-64 format.
And I checked the generated object file, such as:
./googletest/CMakeFiles/gtest_main.dir/src/gtest_main.cc.o
It is still elf64-x86-64.
So it seems the linker flag doesn't affect the object file format.
I remember that the linker ld chooses the output format based on the first object file it encounters. So I guess I need to tell the compiler to output elf32-x86-64 object files.
How can I ask the compiler to output a elf32-x86-64 object file?
ADD 1 - 3:29 PM 11/1/2019
I have managed to compile Google Test as elf32-x86-64 with the following tuning:
Add the compile flag -mx32
Add the link flag -Wl,--oformat=elf32-x86-64
Now the output binaries libgtest.a and libgtest_main.a are elf32-x86-64. But they need to be linked against libstdc++.so, which so far is elf64-x86-64 on my system, and I haven't found an elf32-x86-64 one. Thus the error below:
/usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so: error adding symbols: File in wrong format
ADD 2 - 3:47 PM 11/1/2019
After installing gcc-multilib and g++-multilib (sudo apt-get install gcc-multilib g++-multilib, ref), I got an elf32-x86-64 version of libstdc++.so at the location below:
/usr/lib/gcc/x86_64-linux-gnu/7/x32/libstdc++.so
And it ultimately points to /usr/libx32/libstdc++.so.6.0.25
Now it seems I just need to find a way to tell the linker to use it... So close!
ADD 3 - 2:44 PM 11/4/2019
Thanks to Florian and EmployedRussian, I changed Google Test's internal_utils.cmake file to add the four lines below:
set(MY_COMPILE_FLAGS "-mx32")
set(cxx_base_flags "${cxx_base_flags} ${MY_COMPILE_FLAGS}")
set(MY_LINK_FLAGS "-mx32")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${MY_LINK_FLAGS}")
Now the generated executables are in elf32-x86-64 format.
So basically, I added -mx32 to both the compile and link flags.
And in the generated rules.ninja file, the link rule goes like this:
command = $PRE_LINK && /usr/bin/c++ $FLAGS $LINK_FLAGS $in -o $TARGET_FILE $LINK_PATH $LINK_LIBRARIES && $POST_BUILD
The $FLAGS and $LINK_FLAGS are defined in the build.ninja file as below:
FLAGS = -Wall -Wshadow -Werror -mx32 ...
LINK_FLAGS = -mx32 ...
So essentially, there are two -mx32 options in the ninja command definition, contributed by $FLAGS and $LINK_FLAGS respectively.
So why do I need to specify -mx32 twice?
And I don't understand why I can specify -mx32 in CMAKE_EXE_LINKER_FLAGS at all.
First, -mx32 is only a compile option (ref), not a linker option.
Second, from the link rule definition, the $LINK_FLAGS are passed to /usr/bin/c++ without a -Wl, prefix, so even if the option could be understood by the linker, it is never handed to it.
GCC will adjust the linker command line accordingly if you invoke it as gcc -mx32. It is more than just a compiler flag.
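You can see this with a quick experiment (hypothetical file names; the emulation name comes from the x32 ABI):

g++ -mx32 -c hello.cc -o hello.o   # compile for the x32 ABI
g++ -mx32 hello.o -o hello         # driver adds -m elf32_x86_64 and the libx32 paths for ld
objdump -f hello                   # reports: file format elf32-x86-64
g++ -mx32 -v hello.o -o hello      # -v shows the exact link command the driver builds

Because the "linker" in CMake's link rule is really the compiler driver, putting -mx32 in CMAKE_EXE_LINKER_FLAGS is both legal and sufficient, and the explicit -Wl,--oformat=elf32-x86-64 becomes unnecessary.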

Why is it possible to override symbols from some static libraries but not others?

I'm working on a tool that links against Clang, and I need to implement a small number of changes to some operations. To improve development times, instead of rebuilding Clang, I decided to redefine the symbols of interest in my program code and let the linker take care of the rest: in most cases, the program version of a symbol that is defined in both program code and a static library takes precedence at link time without a fuss. (The linked answer relates to Linux, but I found that to work on macOS too, usually.)
This worked great when I was using the stock Clang build for macOS that can be downloaded from the LLVM website. However, I am currently trying to switch to my company's customized Clang (which I built once from source, and hoped to further modify in the same way), and now I get duplicate symbol errors.
I don't know what is causing this issue. My project's linker flags have remained unchanged (save for one new static library): importantly, they do not contain -all_load or its -force_load cousin, which tell the linker to try to include every symbol defined in static libraries. The symbols that I'm trying to override appear to be defined the same way when I check them with nm in the stock archive and in the custom archive. The difference has to be in how I built LLVM, but just knowing that doesn't really help me figure out what I need to change.
For instance, say that I want to redefine clang::Qualifiers::getAsString() const. I could do that just fine using the stock LLVM libraries, but now I would get a duplicate symbol error:
duplicate symbol __ZNK5clang10Qualifiers11getAsStringEv in:
.../Objects-normal/x86_64/TypePrinter.o
clang+llvm-internal/lib/libclangAST.a(TypePrinter.cpp.o)
Using nm -f darwin to inspect both archives, I would get very similar results for __ZNK5clang10Qualifiers11getAsStringEv:
# clang+llvm-6.0.0/lib/libclangAST.a
(undefined) external __ZNK5clang10Qualifiers11getAsStringEv
0000000000000bb0 (__TEXT,__text) external __ZNK5clang10Qualifiers11getAsStringEv
# clang+llvm-internal/lib/libclangAST.a
(undefined) external __ZNK5clang10Qualifiers11getAsStringEv
0000000000000d00 (__TEXT,__text) external __ZNK5clang10Qualifiers11getAsStringEv
So, assuming more or less identical symbol definitions, and identical linker flags, why was I able to override static library symbols this way before, and why am I no longer able to?
This part of the premise isn't quite correct:
In most cases, the program version of a symbol that is defined in both program code and a static library takes precedence at link time without a fuss. (The linked answer relates to Linux, but I found that to work on macOS too, usually.)
The linked answer appears correct, but I originally misunderstood it. The actual behavior, as evidenced by passing -Wl,-why_load to Clang (or -why_load to the linker), goes as follows:
if a symbol is referenced, try to find its definition in the program code.
if it is defined in the program code, you're done; do not search static libraries.
if it is not defined in the program code, look up the __.SYMDEF file in the static library to find out which object file defines it.
use all the definitions from that object file.
The issue was that, switching to the custom Clang, I accidentally pulled in references to symbols that were defined in the same object file as symbols that I am redefining, causing the linker to see both definitions. I was able to solve the problem by using the -why_load argument to the linker and then looking for the symbol that caused the problem object file to be loaded. I then duplicated the definition of that symbol in my program, and now the linker doesn't complain anymore.
The moral of the story is that this technique isn't as reliable on macOS as it is on Linux, and that if you do it, you kind of have to go all in. It's better to take the entire source file and copy it into your project than to try to pick symbols piecewise.
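For anyone debugging a similar situation, the invocation looks roughly like this (target and object names are placeholders for my real ones):

# Ask ld64 to log why each archive member gets loaded, then look for
# the member that contains your duplicate definition:
clang++ -o mytool main.o -Wl,-why_load clang+llvm-internal/lib/libclangAST.a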
Actually this behavior is the same on Linux; see this reproducer:
First case: build the library with the symbols in different object files:
//val.cpp - contains needed symbol
int val=42;
//wrong_main.cpp - contains duplicate symbol
int main(){
    return 21;
}
>>> g++ -c val.cpp -o val.o
>>> g++ -c wrong_main.cpp -o wrong.o
>>> ar rcs libsingle.a val.o wrong.o
Linking against this library works; no multiple-definition-of-main error is issued, because no symbols from the object file wrong_main.o are used at all:
//main.cpp
extern int val;
int main(){
    return val;
}
>>> g++ main.cpp -L. -lsingle -o works
Second case: both symbols are in the same object file:
//together.cpp - contains both the needed and the duplicate symbol
#include "val.cpp"
#include "wrong_main.cpp"
>>> g++ -c together.cpp -o together.o
>>> ar rcs libtogether.a together.o
Linking against libtogether.a doesn't work:
>>> g++ main.cpp -L. -ltogether -o doesntwork
./libtogether.a(together.o): In function `main':
together.cpp:(.text+0x0): multiple definition of `main'
/tmp/cc38isDb.o:main.cpp:(.text+0x0): first defined here
collect2: ld returned 1 exit status
The linker takes either the whole object file from a static library or nothing. In this case val is needed and so the object file together.o will be taken, but it also contains the duplicate symbol main and thus the linker issues an error.
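You can watch this happen by asking GNU ld to trace the archive members it pulls in (same files as above; the exact trace formatting varies by binutils version):

>>> g++ main.cpp -L. -ltogether -Wl,--trace -o doesntwork

Among the traced inputs you should see libtogether.a(together.o) listed: pulled in because main.cpp needs val, and bringing the duplicate main along with it.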
A great description of how the linker works on Linux (and very similarly on macOS) is this article.

Compiling pulseaudio on Mac OS X with CoreServices.h

I'm trying to compile pulseaudio on Mac OS X; however, by default I get lots of errors about not finding standard files like inttypes.h, errno.h or stdio.h. Putting -isystem/usr/include in CPPFLAGS fixes those errors, but then later on I get fatal error: 'CoreServices/CoreServices.h' file not found.
I've also tried adding -framework CoreServices and/or
-I/System/Library/Frameworks/CoreServices.framework/Headers, but neither works.
What's the proper way of making the compiler find it?
I think I'm using clang; gcc produces even more errors.
You are on the right track; those are the right framework and include flags, but if you use the correct configuration options you will find that even the system includes are picked up properly.
The Makefiles will attempt to set the framework appropriately based on the --with-mac-sysroot and --with-mac-version-min attributes.
Example configuration option to specify the SDK location:
--with-mac-sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/
--with-mac-version-min=10.7
If you are building on Mountain Lion (10.8) you still need to use the 10.7 minimum compatibility setting, as there are headers missing from the 10.8 SDK which PulseAudio references.
You can pass the configure options to autogen.sh, which will run configure once autoconf has completed. You can try the following command, which has been tested on the master branch:
./autogen.sh --prefix=/usr/local --disable-jack --disable-hal --disable-bluez --disable-avahi --with-mac-sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/ --with-mac-version-min=10.7 --disable-dbus
If you get m4 macro errors, copy the m4 macros from aclocal into the m4 sub-directory and try again.
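That copy is just the following (a sketch assuming the macros live in an aclocal directory at the top of the tree; adjust the source path to wherever your checkout keeps them):

mkdir -p m4
cp aclocal/*.m4 m4/
# then re-run the ./autogen.sh command above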
There are a few other problems, but these are bound to be cleared up, so the notes below may date quickly. I am adding them here as they may help someone trying to get this built.
error: Multiprocessing.h cannot be found:
Multiprocessing has been deprecated since 10.7, but the headers are still included in CoreServices and the file will build; just change the include directive in src/pulsecore/semaphore-osx.c:
-#include <Multiprocessing.h>
+#include <CoreServices/CoreServices.h>
error: ‘lt_PROGRAM_LTX_preloaded_symbols’ undeclared.
This may show up when compiling src/daemon/dumpmodules.c and can be fixed by declaring the symbol as external:
extern const lt_dlsymlist lt_preloaded_symbols[];
error: gdbm.h: No such file or directory
For some reason the default include directory is not considered by the compiler; you can add the path in src/Makefile by looking for and setting the variable GDBM_CFLAGS:
GDBM_CFLAGS=-I/usr/local/include
nJoy!

Unable to build Boost libraries with GCC

I am using Windows 7 64-bit, and want to compile the non-precompiled libraries (specifically, I need Filesystem) from the command line (I do not use MSVC). I have MinGW, but read on the Boost website that the MSYS shell is not supported, so I'm trying to compile the libraries from the Windows command prompt.
First of all, running bootstrap.bat results in the following error:
Building Boost.Jam build engine
'cl' is not recognized as an internal or external command,
operable program or batch file.
Failed to build Boost.Jam build engine.
Please consult bjam.log for furter diagnostics.
You can try to obtain a prebuilt binary from
http://sf.net/project/showfiles.php?group_id=7586&package_id=72941
Also, you can file an issue at http://svn.boost.org
Please attach bjam.log in that case.
Plus, there is no bjam.log file anywhere in the boost root directory.
Disregarding this error, and trying to run the downloaded bjam.exe file, I get another error:
c:/boost_1_45_0/tools/build/v2/build\configure.jam:145: in builds-raw
*** argument error
* rule UPDATE_NOW ( targets * : log ? : ignore-minus-n ? )
* called with: ( <pbin.v2\libs\regex\build\gcc-mingw-4.5.2\debug\address-model64\architecture-x86>has_icu.exe : : ignore-minus-n : ignore-minus-q )
* extra argument ignore-minus-q
(builtin):see definition of rule 'UPDATE_NOW' being called
c:/boost_1_45_0/tools/build/v2/build\configure.jam:179: in configure.builds
c:/boost_1_45_0/tools/build/v2/build\configure.jam:216: in object(check-target-builds-worker)#409.check
etc. with quite a lot of complaints. Setting the 'architecture' and 'address-model' options doesn't help.
Any suggestions?
Following Andre's suggestion, I created "minGW-bjam", which ran for an hour and a half and built most of the libraries, but not the one I need at this moment: Filesystem.
Trying to compile only Filesystem, specifying version 2 with define="BOOST_FILESYSTEM_VERSION=2" and --disable-filesystem3 does not help. I get the following error:
gcc.compile.c++ bin.v2\libs\filesystem\build\gcc-mingw-4.5.2\debug\v3\src\operations.o
In file included from ./boost/filesystem/v3/operations.hpp:24:0,
from libs\filesystem\v3\src\operations.cpp:48:
./boost/filesystem/v3/config.hpp:16:5: error: #error Compiling Filesystem version 3 file with BOOST_FILESYSTEM_VERSION defined != 3
libs\filesystem\v3\src\operations.cpp:647:26: warning: '<unnamed>::create_symbolic_link_api' defined but not used
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -g -DBOOST_ALL_NO_LIB=1 -DBOOST_FILESYSTEM_DYN_LINK=1 -DBOOST_FILESYSTEM_VERSION=2 -DBOOST_SYSTEM_DYN_LINK=1 -I"." -c -o "bin.v2\libs\filesystem\build\gcc-mingw-4.5.2\debug\v3\src\operations.o" "libs\filesystem\v3\src\operations.cpp"
etc. with a lot of ...failed statements.
Any hints here?
It's easy. Just use "bootstrap.bat gcc" to select GCC.
The bootstrap script assumes the msvc compiler is available. But you can build bjam by hand without the bootstrap script:
Step into the tools\build\v2\engine\src directory and call "build.bat mingw". It will create a bjam.exe. You can then put it in your %PATH% or perhaps in the root boost directory...
To be honest, I usually build bjam like this with the msvc compiler and use this "msvc-bjam" to build my mingw boost libraries.
So... the first part of the problem was solved by Andre's suggestion.
The second part was solved by setting the variable BOOST_FILESYSTEM_VERSION to 3 everywhere (the error above complains about incompatibility with what is set in file user.hpp). Although this is not the default option for Boost 1.45 that I'm using, it's the only thing that works (i.e. bjam wants to compile version 3 no matter what). So now I have version 3 of the filesystem library, and version 2 for all others, but that doesn't seem to be an issue for the moment.
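For the record, once everything agreed on version 3, rebuilding just Filesystem was along these lines (a sketch; define= is a standard Boost.Build property, but check the spelling against your version):

bjam toolset=gcc define=BOOST_FILESYSTEM_VERSION=3 --with-filesystem stage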
I do have a problem with using Boost with OpenCV and Eigen libraries, though... off to the next challenge ;)
Since I can't comment yet, I want to add that I ran
bootstrap mingw
to generate b2 properly and then
b2 --build-dir="c:\boost_release" toolset=gcc --build-type=complete "c:\boost_release\stage"
The includes will be located at your boost root folder (boost_1_58_00/boost) and your binaries at the specified build folder.

Passing a gcc flag through makefile

I am trying to build a pass using llvm and I have finished building llvm and its associated components. However, when I run make after following all the steps to build a pass, including the makefile, I get the following error:
relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
After trying to find a fix by googling the error message, I came to know that this is not specific to llvm. A few solutions suggested that I should use "--enable-shared" while running configure, but that didn't help my case. Now I want to re-build llvm with -fPIC, as the error says. But how do I do this using the makefile?
Looks like you could add the -fPIC (for position-independent code, something you want for a shared library that could be loaded at any address) by setting shell variables:
export CFLAGS="$CFLAGS -fPIC"
export CXXFLAGS="$CXXFLAGS -fPIC"
Looking at Makefile.rules, these will be picked up and used. Seems strange that it wasn't there to begin with.
EDIT:
Actually, reading more in the makefiles, I found this link to the LLVM Makefile Guide. From Makefile.rules, setting either SHARED_LIBRARY=1 or LOADABLE_MODULE=1 (which implies SHARED_LIBRARY) in Makefile will put -fPIC in the compiler flags.
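As a concrete illustration, a conventional pass Makefile using that knob would look roughly like this (a sketch; LEVEL and the library name are placeholders following the LLVM Makefile Guide conventions of that era):

# Write a minimal pass Makefile; LOADABLE_MODULE=1 makes the build
# system add -fPIC and produce a loadable shared object.
cat > Makefile <<'EOF'
LEVEL = ../../..
LIBRARYNAME = MyPass
LOADABLE_MODULE = 1
include $(LEVEL)/Makefile.common
EOF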
If you are moderately convinced that you should use '-fPIC' everywhere (or '-m32' or '-m64', which I need more frequently), then you can use the 'trick':
CC="gcc -fPIC" ./configure ...
This assumes a Bourne/Korn/POSIX/Bash shell and sets the environment variable CC to 'gcc -fPIC' before running the configure script. This (usually) ensures that all compilations are done with the specified flags. For setting the correct 'bittiness' of the compilation, this sometimes works better than the various other mechanisms you find - it is hard for a compilation to wriggle around it except by completely ignoring the fact you specified the C compiler to use.
Another option is to pass -fPIC directly to make in the following way:
make CFLAGS='-fPIC' CXXFLAGS='-fPIC'
