FFmpeg hardware acceleration on Raspberry Pi

I am building a program that uses FFmpeg to stream webcam content over the internet. I would like to know if it is possible to use the GPU for the streaming part on the Raspberry Pi model 3. If yes, how could I implement this with FFmpeg?

You'll need some additional configure options:
--enable-mmal – Enable Broadcom Multi-Media Abstraction Layer (Raspberry Pi) via MMAL. For hardware decoding of H.264, VC-1, MPEG-2, MPEG-4. As a dependency you'll need the linux-raspberrypi-headers (Arch Linux) or linux-headers-*-raspi2 (Ubuntu) package which provides the required header file mmal.h.
--enable-omx-rpi – Enable OpenMAX IL code for Raspberry Pi. For hardware encoding of H.264 (encoder is named h264_omx) and MPEG-4 (mpeg4_omx). As a dependency you'll need the libomxil-bellagio (Arch Linux) or libomxil-bellagio-dev (Ubuntu) package which provides the required header file OMX_Core.h.
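For a manual build, the relevant part of the configure line could look roughly like this (a sketch; any other options such as --enable-gpl depend on your needs):
./configure --enable-mmal --enable-omx-rpi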
For Arch Linux users:
Copy the PKGBUILD file for the ffmpeg package (perhaps via the ABS if you prefer). Add the two new configure options shown above, and add the two mentioned packages to the depends line. Compile/install with the makepkg command.
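Once built, a minimal streaming sketch using the h264_omx hardware encoder might look like this (the device path, bitrate, and RTMP URL are placeholders for your own setup):
ffmpeg -f video4linux2 -i /dev/video0 -c:v h264_omx -b:v 2M -f flv rtmp://example.com/live/stream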
Disclaimer: I don't have one of these devices to test any of this. Most of this info was acquired from the FFmpeg configure file.

The ffmpeg package from apt now comes with hardware codecs enabled, so you can just install it using:
sudo apt install ffmpeg
There are a few hardware enabled codecs on the Pi depending on which model you've got. Here's an excerpt from this detailed post/thread on the Raspberry Pi Forum:
Pi0-3 have hardware accelerated decode for H264, MPEG4, H263, and through optional codec licences for MPEG2 and VC1.
Pi4 has the same hardware accelerated decode for H264, but not the other codecs. It also has a separate block for HEVC.
There are a few APIs (v4l2m2m, VAAPI, OMX, MMAL, ...) for accessing the hardware codecs, but the main one is now the Video Memory-To-Memory Interface based h264_v4l2m2m; there is also the [older] OMX-based h264_omx, among others. For the full list of codecs for encode and decode, run:
ffmpeg -codecs
Note: If you have changed the gpu_mem setting in /boot/config.txt, it needs to be greater than 16; otherwise you will get an error with all hardware codecs.
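For example, a hedged sketch of a hardware encode with h264_v4l2m2m (the device path and bitrate are placeholders; this encoder generally wants an explicit bitrate):
ffmpeg -f video4linux2 -i /dev/video0 -c:v h264_v4l2m2m -b:v 2M output.mp4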

Related

Generate valid ASF file for WMAsfReader

My legacy software breaks after migrating it to Windows 10, since the WMV Encoder 9 SDK is no longer supported.
I've tried other approaches, and I can generate the ASF file I need using FFmpeg.
I only need to mux audio and video into an ASF container, and this command does it:
ffmpeg -y -i audio.mp3 -i video.asf -vcodec copy -acodec copy output.asf
It works well, and the file can be played using VLC or Windows Media Player.
But it can't be played by DirectShow. I get an ASF_E_INVALIDHEADER error when I set it as the source of WMAsfReader. Any idea how I can generate a valid ASF file for the WMAsfReader?
Thanks!
You might be unable to use the legacy SDK, but the current Windows APIs for producing ASF files (with DirectShow and Media Foundation) are in good standing in Windows 10:
DirectShow: WM ASF Writer Filter
Media Foundation: ASF Support in Media Foundation, ASF Media Sinks
This content should be acceptable to WMAsfReader. FFmpeg has always generated "almost good" output, so it was acceptable for a long time. However, the format-consistency checks in OS components have grown stricter over time, and every so often FFmpeg output is no longer considered valid.

Create fake webcam with ffmpeg on Windows?

FFmpeg has all kinds of options to record video from a webcam, transcode video, and send video up to streaming servers. Is there a way to loop over a file and make it look like a webcam?
I found this for Linux:
https://gist.github.com/zburgermeiszter/42b651a11f17578d9787
I've searched around a lot to try to find something for Windows, but have not yet found anything.
No, that's not part of FFmpeg so you'll need to create this "virtual video device" yourself. See e.g. How to create virtual webcam in Windows 10?.

How to use Wowza media server with Jack Audio Connection Kit as input on Mac OS?

I need to live broadcast multiple RTSP streams out of the audio mixing software StudioOne. For this I am using Jack Audio Connection Kit as the connector. I've already tried using IceCast with Darkice, but the latency went up to 6+ seconds, which won't work for the project I'm working on. That's why I'm using the Wowza media server, which does RTSP streaming instead of HTTP.
That's where I'm stuck, as I need some way of getting the streams from Jack Audio to Wowza on a Mac OS machine. I've tried using FFmpeg, but FFmpeg doesn't have the feature to get input from Jack Audio in its OS X version. I could port my whole setup to Ubuntu, but the mixing software StudioOne isn't available there. I could try using Wine to run StudioOne on Linux, but I'm not sure that's a good idea for a real-time mixer, especially when latency is involved.
Is there some other way I can get input from Jack Audio to Wowza Media Server on my Mac?
JACK on OS X is now in FFmpeg as of this commit (67f8a0be545).
Once you have JACK installed, you can compile FFmpeg from source and support should be automatically compiled into FFmpeg.
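A capture sketch could then look like this (the client name, codec, and Wowza URL are placeholders; FFmpeg registers a JACK client with the given name, and you then connect your StudioOne output ports to it, e.g. with jack_connect):
ffmpeg -f jack -channels 2 -i ffmpeg_capture -c:a aac -f rtsp rtsp://wowza.example.com:1935/live/stream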

FFmpeg support for RTMP broadcasts from OS X with audio

I've successfully gotten ffmpeg to stream live video from the built-in webcam on my macbook pro to my rtmp server but I cannot figure out how to get it to also send audio from the built-in microphone.
I've tried both the qtkit device and avfoundation. It appears that neither supports an audio stream.
Does FFmpeg support audio capture on a Mac?
All of the examples I can find only show audio capture working with the DirectShow device.
Turns out it isn't supported at this time. With the help of some of the ffmpeg-devel folks I was able to get it working with a patch. I've applied the patch to a fork available here:
https://github.com/realrunner/FFmpeg
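For reference, avfoundation audio capture was later merged into mainline FFmpeg; with a recent build, a combined capture sketch might look like this (the device indices and RTMP URL are placeholders; list your devices with ffmpeg -f avfoundation -list_devices true -i ""):
ffmpeg -f avfoundation -framerate 30 -i "0:0" -c:v libx264 -c:a aac -f flv rtmp://example.com/live/stream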

FFmpeg streaming using H.264 (with audio) - Red5 media server (Ubuntu OS)

I'm trying to stream my webcam with FFmpeg to my Red5 server using RTMP. I've done this successfully using FLV format with the following line:
ffmpeg -f video4linux2 -i /dev/video0 -f flv rtmp://localhost/live/livestream
I'm new to FFmpeg and live streaming, and I've tried to stream using H.264/MPEG-4, but my knowledge of the FFmpeg options is a bit limited (I found them here: http://man.cx/ffmpeg%281%29).
So, my questions would be:
How can I use H.264/MPEG-4 to stream to my Red5 server?
What are the options to stream audio as well?
And one final issue:
I'm having a delay of about 5 seconds when I play the content with JWPlayer in Mozilla Firefox (on Ubuntu). Can you help me solve this? Any suggestions as to why this might be?
Many thanks
There is no need to use FFmpeg for streaming H.264/MPEG-4 files, because Red5 has built-in support for this. Using FFmpeg will only put unnecessary load on your CPU. Red5 will recognize the file type automatically; you only have to specify the mp4 file in your JWPlayer.
About the delay: as far as I know, JWPlayer has a buffer of 3 seconds by default. You can try to lower this (the property is bufferlength or something like that). JWPlayer may also have a "live" property for streaming with minimal delay, but I am not sure about that. Removing FFmpeg will probably speed things up as well.
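That said, for the live webcam case in the question, a hedged sketch of an H.264 + audio RTMP command could look like this (the ALSA device, preset, and tuning are placeholders to adapt):
ffmpeg -f video4linux2 -i /dev/video0 -f alsa -i default -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -f flv rtmp://localhost/live/livestream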
