I'm trying to get hardware acceleration to work on my Raspberry Pi 4 (64-bit). I'm using FFmpeg, and as far as I know, hardware acceleration can be reached through OpenMAX (OMX) or V4L2-M2M.
After configuring FFmpeg with '--enable-omx' and '--enable-omx-rpi', the build fails with the error 'OMX_Core.h not found'. If I provide the OMX headers manually, it compiles, but FFmpeg encoding fails due to the missing libraries bcm_host.so and libopenmaxil.so.
I have tried reverting to userland with DISABLE_VC4GRAPHICS = "1"; that produced bcm_host.so, but not libopenmaxil.so. I have also tried different combinations of virtual providers and graphics settings, without success.
Is it possible to access OMX hardware acceleration on the RPi 4 (64-bit)?
Steps to reproduce the issue:
1. Download the latest Poky distro, meta-openembedded, and meta-raspberrypi
2. Enable omx and omx-rpi support for FFmpeg
3. Link the headers for FFmpeg
4. Build and try to use h264_omx
How do I get the missing library libopenmaxil.so and everything else I need for hardware acceleration?
poky master: commit 5d47cdf448b6cff5bb7cc5b0ba0426b8235ec478
meta-openembedded master: commit daa50331352c1f75da3a8ef6458ae3ddf94ef863
meta-raspberrypi master: commit 8d163dd
By the way, when using V4L2-M2M I get green shadows in the resulting video. Can someone point me in the right direction?
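For reference, the kind of V4L2-M2M command I'm testing (filenames are placeholders):

```shell
# Placeholder filenames; h264_v4l2m2m is FFmpeg's V4L2 memory-to-memory encoder.
# Green output is often a pixel-format mismatch, so force yuv420p explicitly.
ffmpeg -i input.mp4 -vf format=yuv420p -c:v h264_v4l2m2m -b:v 4M output.mp4
```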
You have to provide some extra flags to point ffmpeg to the right header and library locations, both at compile time and at run time.
This is what I used to cross-compile ffmpeg for AArch64:
./configure \
--arch="${HOST_ARCH}" \
--target-os="linux" \
--prefix="/usr/local" \
--sysroot="${RPI_SYSROOT}" \
--enable-cross-compile \
--cross-prefix="${HOST_TRIPLE}-" \
--toolchain=hardened \
--enable-gpl --enable-nonfree \
--enable-avresample \
--enable-libvpx --enable-libx264 --enable-libxvid \
--enable-omx --enable-omx-rpi --enable-mmal --enable-neon \
--enable-shared \
--disable-static \
--disable-doc \
--extra-cflags="$(pkg-config --cflags mmal) \
-I${RPI_SYSROOT}/usr/local/include \
-I${RPI_SYSROOT}/opt/vc/include/IL" \
--extra-ldflags="$(pkg-config --libs-only-L mmal) \
-Wl,-rpath-link,${RPI_SYSROOT}/opt/vc/lib \
-Wl,-rpath,/opt/vc/lib"
Note that pkg-config is configured for cross-compilation as well: it looks in the Raspberry Pi sysroot, not in the build machine's root. This is done by setting the right environment variables before running configure.
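The cross setup amounts to pointing pkg-config at the sysroot; a sketch (the exact .pc directories depend on your sysroot layout):

```shell
# Make pkg-config resolve .pc files from the Pi sysroot instead of the host.
export PKG_CONFIG_SYSROOT_DIR="${RPI_SYSROOT}"
export PKG_CONFIG_LIBDIR="${RPI_SYSROOT}/usr/lib/pkgconfig:${RPI_SYSROOT}/usr/share/pkgconfig:${RPI_SYSROOT}/opt/vc/lib/pkgconfig"
# Clear the host search path so host libraries can't leak in.
export PKG_CONFIG_PATH=""
```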
The -I flags specify the include paths, and the -L flags returned by pkg-config --libs-only-L are the library paths. -Wl passes a comma-separated list of arguments to the linker: -rpath-link is used to find shared libraries required by other shared libraries at link time, and -rpath is used to find the libraries at run time. This is required because the userland libraries are in a nonstandard location; ld will not search /opt/vc/lib by default.
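To confirm the rpath actually made it into the binary, you can inspect its dynamic section (a quick check, assuming binutils is available):

```shell
# Look for an RPATH/RUNPATH entry such as [/opt/vc/lib] in the built binary.
readelf -d ffmpeg | grep -iE 'rpath|runpath'
```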
You can find the toolchains, Dockerfiles and install scripts I used on my GitHub: https://github.com/tttapa/RPi-Cpp-Toolchain/tree/master/toolchain/docker/rpi3/aarch64/aarch64-cross-build
The userland script is here: https://github.com/tttapa/RPi-Cpp-Toolchain/blob/76ac03741bc7b7da106ae89884c7bada96768a07/toolchain/docker/rpi3/aarch64/aarch64-cross-build/install-scripts/userland.sh
And the ffmpeg script is here: https://github.com/tttapa/RPi-Cpp-Toolchain/blob/76ac03741bc7b7da106ae89884c7bada96768a07/toolchain/docker/rpi3/aarch64/aarch64-cross-build/install-scripts/ffmpeg.sh
There's some more documentation about the compilation process and the files used in the repository here (though not specifically about ffmpeg).
Related
I'm building ffmpeg version 5.0.1 from source, but for some reason I can't get it to cross-compile for AArch64. I want to run ffmpeg on a Khadas VIM4, which has an ARMv8 CPU, specifically an Amlogic A311D2 with non-free codecs.
To build ffmpeg from source, I have set up a build suite put together from various sources; it currently works fine for x86_64 and can be obtained here: https://github.com/venomone/ffmpeg_build_suite
Simply run trigger.sh from the terminal (make sure Docker is installed on your host). The script will output ffprobe and ffmpeg in a folder called /build.
From my understanding, the ffmpeg configure step must be extended with the following lines:
--enable-cross-compile \
--target-os=linux \
--arch=arm64 \
--cross-prefix=aarch64-linux-gnu- \
--enable-shared \
But even with these settings, I'm running into the following error:
#11 938.3 aarch64-linux-gnu-gcc is unable to create an executable file.
#11 938.3 C compiler test failed.
Can somebody give me a hint as to what the problem might be here?
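A minimal sanity check for the toolchain itself, and where configure hides the real error (paths assume the ffmpeg source tree):

```shell
# The actual compiler error is recorded here after configure fails:
tail -n 40 ffbuild/config.log

# Verify the cross compiler can build anything at all:
echo 'int main(void){return 0;}' > hello.c
aarch64-linux-gnu-gcc hello.c -o hello && echo "cross compiler OK"
```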
If I configure ffmpeg this way:
./configure --disable-everything --enable-static --disable-shared \
--enable-gpl --enable-nonfree --enable-encoder=h264_videotoolbox,aac \
--enable-muxer=mp4 --enable-protocol=file --enable-libfdk-aac \
--enable-videotoolbox --disable-autodetect
it works for my purposes (it allows encoding H.264 video with AAC audio via the Mac's VideoToolbox, Apple's hardware-accelerated encoding framework), but if I send it to any computer other than the one it was built on, it fails with something like this:
dyld: Symbol not found: _kCVImageBufferTransferFunction_ITU_R_2100_HLG
Referenced from: /Users/admin/Downloads/./ffmpeg
Expected in: /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo
in /Users/admin/Downloads/./ffmpeg
Abort trap: 6
if I rebuild it this way:
./configure --disable-everything --enable-static --disable-shared \
--enable-gpl --enable-nonfree --enable-encoder=aac \
--enable-muxer=mp4 --enable-protocol=file --enable-libfdk-aac \
--disable-autodetect
so with nothing removed except VideoToolbox, it runs successfully on any other computer. Apparently ffmpeg needs to carry something along for VideoToolbox that it doesn't.
I am actually building a C++ app with ffmpeg's static libraries, but explaining what I do there would be a very long story, and the error message produced is exactly the same when I run it on different machines, so I'll illustrate it with the ffmpeg console utility itself.
What configure switches do I need to make the ffmpeg build portable?
The problem turned out to be my macOS version (10.14): the API mentioned in the error has existed since 10.13, so the binary didn't work on the earlier versions I tried. Fixed by rebuilding ffmpeg on 10.10.
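An alternative to rebuilding on an older macOS may be to set a deployment target so the build targets the older API level; a sketch only (the version number is illustrative, and whether each VideoToolbox symbol is guarded by an availability check depends on the FFmpeg version):

```shell
# Same configure line as before, but targeting macOS 10.11 so the binary
# does not hard-require symbols introduced in later SDKs.
./configure --disable-everything --enable-static --disable-shared \
    --enable-gpl --enable-nonfree --enable-encoder=h264_videotoolbox,aac \
    --enable-muxer=mp4 --enable-protocol=file --enable-libfdk-aac \
    --enable-videotoolbox --disable-autodetect \
    --extra-cflags="-mmacosx-version-min=10.11" \
    --extra-ldflags="-mmacosx-version-min=10.11"
```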
I need to compile a static FFmpeg on macOS and add the build to an Xcode project. The full version downloaded from the official website works, but it is huge, and I only need a few formats to convert. So I need to compile it myself.
I've tried to compile it, and it works, but I am not sure how to choose the configure parameters.
For instance, I need to convert ogg, flac, opus, and webm files to mp3 files with the minimum binary size. My configure parameters:
./configure --enable-ffmpeg --enable-small --enable-static --enable-protocol=file,http,https --enable-libvorbis \
--enable-libopus --disable-ffplay --disable-ffprobe --enable-demuxer=mp3,mp4,webm_dash_manifest,opus,flac,ogg \
--enable-decoder=mp3*,vp*,mpeg4*,opus,flac --enable-libmp3lame --disable-autodetect --disable-network --enable-pthreads
But it doesn't seem to work; I can't convert files. The error is dyld: Library not loaded: /usr/local/opt/lame/lib/libmp3lame.0.dylib, even though I used the parameter --enable-static.
So what should I do? When I need to support a format, what do I need to pay attention to? Thanks.
--enable-static applies to the ffmpeg libraries themselves, not to their dependencies. You need to download and compile lame as a static library as well.
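A sketch of what that looks like inside the lame source tree (the prefix is an example; pick one that matches the library path you give ffmpeg):

```shell
# Build lame as a static-only library so ffmpeg links libmp3lame.a
# instead of picking up the Homebrew dylib at run time.
./configure --enable-static --disable-shared --prefix=/usr/local
make
make install
```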
All I am trying to do is enable the SRT protocol (--enable-protocol=SRT) in ffmpeg. What I did:
1. Check the current configuration of ffmpeg, which shows it doesn't support the SRT protocol.
2. Try to compile ffmpeg under msys64 with --enable-protocol=SRT, using the command:
$ ./configure --toolchain=msvc --arch=x64 --enable-yasm --enable-asm --enable-shared --enable-protocol=SRT
but the result is as follows:
It shows the flag has no effect. Can you help me? Thanks!
SRT is provided via an external library, so you'll need that library available for linking via pkg-config.
configure flags are --enable-protocol=libsrt --enable-libsrt. The former flag is only needed if you have disabled all components or protocols. Won't hurt to keep it, though.
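A quick way to verify libsrt is visible before configuring (assuming the library installs a srt.pc file, as upstream libsrt does):

```shell
# If this prints a version, configure's --enable-libsrt check should pass;
# otherwise adjust PKG_CONFIG_PATH to include the directory holding srt.pc.
pkg-config --modversion srt || echo "srt.pc not found; check PKG_CONFIG_PATH"
```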
I'm trying to compile ffmpeg on Windows with the NVIDIA libraries for hardware acceleration, using MinGW/msys. I tried to follow the instructions on NVIDIA's website (section: Getting Started with FFmpeg/libav using NVIDIA GPUs). I configured with --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-Ilocal/include --extra-cflags=-I../common/inc --extra-ldflags=-L../common/lib/x64 --prefix=ffmpeg, but it stops at "ERROR: libnpp not found." The common folder comes from the NVIDIA Video Codec SDK download, but it contains no npp libraries or header files. Is there any solution for this? Thanks for the advice.
I managed to successfully cross-compile ffmpeg under Linux targeting 64-bit Windows with --enable-libnpp included.
My environment is Ubuntu Server 16.10 64bit.
After a fresh installation I installed MinGW using the command:
sudo apt-get install mingw-w64
First, I successfully compiled the Linux version with the --enable-libnpp option activated, following the instructions on the NVIDIA dev site (Compile FFmpeg with NVIDIA Video Codec SDK).
To do that you need to install the CUDA Toolkit. Just follow the instructions and the package installer will create the symbolic links (I have CUDA Toolkit 8.0):
/usr/local/cuda/include/ -> /usr/local/cuda-8.0/targets/x86_64-linux/include
/usr/local/cuda/lib64/ -> /usr/local/cuda-8.0/targets/x86_64-linux/lib
This gives configure the right paths to find the correct libraries and headers.
The command line I used to compile the Linux version of ffmpeg is:
./configure --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include/ --extra-ldflags=-L/usr/local/cuda/lib64/
The problem you have is that, when cross-compiling, you need to give configure the right paths to the headers and libraries of the Windows version of the libnpp library.
From the CUDA Toolkit download page mentioned above, I simply downloaded the exe (local) version of the Windows package.
Under the root of my working folder I created a folder called tmp, into which I copied the subfolders found under npp_dev inside the package cuda_8.0.61_win10.exe:
cuda_8.0.61_win10.exe\npp_dev\lib -> tmp/lib
cuda_8.0.61_win10.exe\npp_dev\include -> tmp/include
As a final step, I launched configure once again with the following parameters:
./configure --arch=x86_64 --target-os=mingw32 --cross-prefix=x86_64-w64-mingw32- --pkg-config=pkg-config --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/include --extra-cflags=-I/usr/local/cuda/include/ --extra-ldflags=-L/usr/local/cuda/lib64/ --extra-cflags=-I../tmp/include/ --extra-ldflags=-L../tmp/lib/x64/
The compilation completed successfully. When I copied the ffmpeg.exe file to Windows and tried to execute it, I got an error message saying the executable was missing some npp_*.dll files.
From the package cuda_8.0.61_win10.exe I copied all the DLLs in the folder npp\bin to the directory containing ffmpeg.exe.
After that, the application ran normally, and a simple conversion of a 4K file completed as expected.
I also went nuts over ffmpeg not building, with the same problem. I finally managed to get it working under Windows 10 x64:
Download msys2 from https://www.msys2.org/ and install the required packages with pacman:
pacman -Su
pacman -S make
pacman -S diffutils
pacman -S yasm
pacman -S mingw-w64-x86_64-gcc
pacman -S mingw-w64-x86_64-toolchain
add pkgconfig to environment variable PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
Add additional installed toolchain to path: PATH=$PATH:/opt/bin
Start mingw64 version: C:\msys64\msys2_shell.cmd -mingw64
Download and install Cuda from nVidia https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exenetwork
Extract the downloaded file e.g. cuda_11.2.2_461.33_win10.exe with 7zip locally
Copy cuda_nvcc\nvcc\include to your msys2 e.g. C:\msys64\tmp\nvidia_include
Copy libnpp\npp_dev\lib\x64 to your C:\msys64\tmp\nvidia_lib\x64
Copy libnpp\npp_dev\include to C:\msys64\tmp\nvidia_npp_include
git clone https://github.com/FFmpeg/FFmpeg.git to C:\msys64\home\<user>
git clone https://github.com/libav/libav to C:\msys64\home\<user>
Maybe optional step: git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git to C:\msys64\home\<user>
make
make install
Optional because make install should have done this for you: Copy ffnvcodec.pc to C:\msys64\usr\local\lib\pkgconfig
Build libav; avconv.exe and avprobe.exe are needed for ffmpeg later:
cd C:\msys64\home\<user>\libav
./configure
make
make install
Finally build ffmpeg:
cd C:\msys64\home\<user>\ffmpeg
./configure --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/tmp/nvidia_npp_include --extra-cflags=-I/tmp/nvidia_include --extra-ldflags=-L/tmp/nvidia_lib/x64
make
make install
Copy avconv.exe and avprobe.exe to ffmpeg directory
Done.
Bugfixing:
Missing DLLs: locate the missing x64 DLLs on your hard disk or on the internet.
Use Dependency Walker to analyze errors.
Download the newest NVIDIA drivers and use Nsight to make sure CUVID is supported by your graphics card.
This would seem to be caused by a broken configure script in the FFmpeg code base. There is no library called npp in recent CUDA distributions; instead, on Windows platforms you will have
nppc.lib
nppi.lib
npps.lib
and on Linux
libnppc.so
libnppi.so
libnpps.so
You will either need to modify the configuration system yourself or file a bug request with the project developers to do it for you.
There might still be additional problems building the project with MinGW, but that is way beyond the scope of a Stack Overflow question.
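One way to see what the linker can actually find, assuming a default Linux CUDA install path (adjust for your toolkit location):

```shell
# List the split npp libraries shipped with the toolkit; expect entries
# like libnppc.so, libnppi.so, libnpps.so rather than a single libnpp.so.
ls /usr/local/cuda/lib64/libnpp*
```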
If you check config.log, you may see a lot of link warnings:
LINK : warning LNK4044: unrecognized option '/L...'; ignored
which cause
ERROR: libnpp not found.
Since /L is not a valid argument for the MSVC linker, to add a library path the argument should be as follows:
./configure .... --extra-cflags=-I/usr/local/cuda/... --extra-ldflags=-LIBPATH:/usr/local/cuda/...
This should solve the libnpp not found issue.
FYI, the linker options are listed at the following link (including LIBPATH):
Linker Options
2022 Update
This weekend I also managed to build the latest ffmpeg with a working scale_npp filter, without any missing npp library errors during compilation and building, but with some caveats (see below).
I followed this guide by NVIDIA, with NVIDIA GPU Computing Toolkit v11.7 and the latest display driver 473.47 installed, for my GeForce GT 710 video card on Windows 10 21H2 x64.
Changes (adaptations) for steps in the guide
I copied all headers, including subfolders, from the directory path_to_CUDA_toolkit/include
I excluded pkg-config from the pacman packages, because after the recommended MSYS2 installation steps (step 7 in particular) it conflicts with the already-installed pkgconf package; i.e., use this command instead:
pacman -S diffutils make yasm
I added the Visual Studio C compiler directories to the PATH environment variable in advance (using the Windows GUI), in addition to declaring them in the MinGW64 terminal as specified in the guide:
export PATH="/c/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Tools/MSVC/14.16.27023/bin/Hostx64/x64/":$PATH
export PATH="/d/NVIDIA GPU Computing Toolkit/CUDA/v11.7/bin/":$PATH
After making (building) the ffnvcodec headers, define PKG_CONFIG_PATH (pointing to where the compiled ffnvcodec.pc file is located) before the configure command.
Use absolute paths for the --extra-cflags and --extra-ldflags options of the configure command. This is probably the main thing in solving the "not found" errors. But don't forget that these paths will be printed in the ffmpeg banner along with the other explicit build options.
PKG_CONFIG_PATH="/d/_makeit/nv-codec-headers/" ./configure --enable-nonfree --disable-shared --enable-cuda-nvcc --enable-libnpp --toolchain=msvc --extra-cflags="-I/d/_makeit/ffmpeg/nv_sdk/" --extra-ldflags="-LIBPATH:/d/_makeit/ffmpeg/nv_sdk/"
And that's it. At least -vf scale_npp should work.
In my case, the following things from the guide still DO NOT work:
The CUDA built-in resizer and cropper, i.e. -hwaccel_output_format cuda -resize 1280x720 and -hwaccel_output_format cuda -crop 16x16x32x32. I bet this is because my old video card is not in the GPU Support Matrix. But NVENC and NVDEC work fine for me, almost without crutches. And it seems I'm not alone.
UPD: the resizer and cropper work! BUT the commands in the mentioned guide are incorrect. I found the correct way in another NVIDIA FFmpeg Transcoding Guide. The decoder h264_cuvid was missing; the commands must be:
ffmpeg.exe -y -vsync passthrough -hwaccel cuda -hwaccel_output_format cuda -c:v h264_cuvid -resize 1280x720 -i input.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4
ffmpeg.exe -y -vsync passthrough -hwaccel cuda -hwaccel_output_format cuda -c:v h264_cuvid -crop 16x16x32x32 -i input.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4
-vf scale_cuda fails with an error. Maybe I used the wrong C compiler version, didn't install the DirectX SDK from here, or installed the wrong packages after installing MSYS2 while ignoring pkg-config:
[Parsed_scale_cuda_0 # 000001A461479DC0] cu->cuModuleLoadData(cu_module, data) failed -> CUDA_ERROR_UNSUPPORTED_PTX_VERSION: the provided PTX was compiled with an unsupported toolchain.
there is no way to use the -preset option for h264_nvenc with the latest ffmpeg version, where the presets (enum) were updated. I noticed from the ffmpeg report file that this is because using any preset "auto"-enables lookahead mode, with this log line:
[h264_nvenc # 00000158EFC6E500] Lookahead enabled: depth 28, scenecut enabled, B-adapt enabled.
even though the options -rc-lookahead and -temporal-aq are not supported by my device (video card). I have to use only the default preset, p4 (medium). I don't know how to work around this issue; the value 0 for -rc-lookahead does not help either.
specifying -bf 2 only works with the option -extra_hw_frames 6 (six in my case; the number of extra frames can differ for your card), or with -bf 0. But this is due to constraints of my old video card.
ffmpeg.exe -v verbose -y -vsync passthrough -hwaccel cuda -hwaccel_output_format cuda -extra_hw_frames 6 -i input-1080p.mkv -map 0:v -map 0:a -c:a copy -c:v h264_nvenc -b:v 1M -bf 2 -bufsize 1M -maxrate 2M -qmin 0 -g 250 -b_ref_mode middle -i_qfactor 0.75 -b_qfactor 1.1 output.mp4
I hope my notes will help future Google and SO users.