It appears that ffmpeg now has a segmenter in it, or at least there is a command line option
-f segment
in the documentation.
Does this mean I can use ffmpeg to transcode a video to H.264 in real time and deliver segmented, iOS-compatible .m3u8 streams using ffmpeg alone? If so, what would a command look like that transcodes an arbitrary video file into a segmented, iOS-compatible 640x480 H.264/AAC stream?
Absolutely - you can use -f segment to chop the video into pieces and serve them to iOS devices. ffmpeg will create the .ts segment files, and you can serve those with any web server.
Working example (with audio disabled), using ffmpeg version N-39494-g41a097a:
./ffmpeg -v 9 -loglevel 99 -re -i sourcefile.avi -an \
-c:v libx264 -b:v 128k -vpre ipod320 \
-flags -global_header -map 0 -f segment -segment_time 4 \
-segment_list test.m3u8 -segment_format mpegts stream%05d.ts
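For the 640x480 H.264/AAC case asked about above, a minimal sketch of an adapted command (untested; assumes a build with libx264 and an AAC encoder, and input.mp4 is a placeholder for your source):
ffmpeg -re -i input.mp4 \
-c:v libx264 -profile:v baseline -s 640x480 -b:v 800k \
-c:a aac -b:a 128k -ac 2 \
-flags -global_header -map 0 \
-f segment -segment_time 4 -segment_list stream.m3u8 \
-segment_format mpegts stream%05d.ts
On older builds the native aac encoder may additionally require -strict experimental.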
Tips:
make sure you compile ffmpeg from the most recent git repository
compile with the libx264 codec
-map 0 is needed
How I compiled ffmpeg, with extra RTMP support to pull feeds from Flash Media Server:
export PKG_CONFIG_PATH="/usr/lib/pkgconfig/:../rtmpdump-2.3/librtmp"
./configure --enable-librtmp --enable-libx264 \
--libdir='../x264/:/usr/local/lib:../rtmpdump-2.3' \
--enable-gpl --enable-pthreads --enable-libvpx \
--disable-ffplay --disable-ffserver --disable-shared --enable-debug
This is found in the ffmpeg documentation: https://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
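To sanity-check the result locally before serving it, you can point ffplay at the generated playlist (ffplay is only present if the build did not use --disable-ffplay, unlike the configure line above):
ffplay test.m3u8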
Related
I'm trying to create a website to stream some videos. For each video, I extract the video, audio and subtitles into 3 different folders. A video may have multiple audio tracks and multiple subtitle tracks. I did a lot of research, and I don't know how to add all of them to the manifest. Right now, I use this command:
ffmpeg -f webm_dash_manifest \
-i video1.mp4 -f webm_dash_manifest \
-i video2.mp4 -f webm_dash_manifest \
-i audio1.webm -f webm_dash_manifest \
-i audio2.webm -f webm_dash_manifest \
-i subtitles.vtt \
-c copy -map 0 -map 1 -map 2 -map 3 \
-f webm_dash_manifest -adaptation_sets "id=0,streams=v id=1,streams=a" manifest.mpd
My two videos have different resolutions and bitrates, and that part works perfectly. But I don't get any subtitles, and my two audio tracks are treated as one audio track available at two different bitrates (just like the videos). I think I need several adaptation_sets, but I don't know how to create them.
How can I create that manifest the right way?
After a few days, I found the solution.
My goal is to convert a video into MPEG-DASH content, which is really great for streaming.
I will encode the video to H.264, the audio to AAC, and the subtitles to WebVTT.
These settings give broad browser compatibility.
VP9 is really nice too, but it encodes too slowly for my needs.
Tools required:
ffmpeg: https://www.ffmpeg.org/download.html
mp4dash & mp4fragment: https://www.bento4.com/downloads/
Let's suppose we have a 1080p video file "video.mkv" with these streams:
0: video stream
1: audio stream, Italian (it)
2: audio stream, English (en)
3: subtitle stream, Italian (it)
4: subtitle stream, English (en)
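These indices are what the -map options below rely on; you can confirm the layout of your own file first with ffprobe (a quick check, assuming ffprobe is installed alongside ffmpeg):
ffprobe -loglevel error -show_entries stream=index,codec_type:stream_tags=language -of csv video.mkv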
1. Extracting the different streams
1.1 Video
I extract and transcode the video stream to different resolutions and bitrates:
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 5300k -maxrate 5300k -bufsize 2650k -vf 'scale=-1:1080' tmp/video/video-1080.mp4
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 2400k -maxrate 2400k -bufsize 1200k -vf 'scale=-1:720' tmp/video/video-720.mp4
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 600k -maxrate 600k -bufsize 300k -vf 'scale=-1:360' tmp/video/video-360.mp4
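The fixed keyframe interval (keyint=24:min-keyint=24:no-scenecut) matters because DASH segment boundaries must fall on keyframes that are aligned across all renditions. A sketch of a quick check of the keyframe spacing:
ffprobe -loglevel error -select_streams v:0 -skip_frame nokey -show_entries frame=pts_time -of csv=p=0 tmp/video/video-1080.mp4
The printed timestamps should be evenly spaced and identical across the three renditions.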
1.2 Audio
ffmpeg -i video.mkv -map 0:1 -ac 2 -ab 192k -vn -sn tmp/audio/audio-it.mp4
ffmpeg -i video.mkv -map 0:2 -ac 2 -ab 192k -vn -sn tmp/audio/audio-en.mp4
1.3 Subtitles
ffmpeg -i video.mkv -map 0:3 -vn -an tmp/subtitle/subtitle-it.vtt
ffmpeg -i video.mkv -map 0:4 -vn -an tmp/subtitle/subtitle-en.vtt
You can use the "-loglevel warning" option to see less information.
2. Fragment video and audio
2.1 Video
mp4fragment tmp/video/video-1080.mp4 tmp/video/f-video-1080.mp4
mp4fragment tmp/video/video-720.mp4 tmp/video/f-video-720.mp4
mp4fragment tmp/video/video-360.mp4 tmp/video/f-video-360.mp4
2.2 Audio
mp4fragment tmp/audio/audio-it.mp4 tmp/audio/f-audio-it.mp4
mp4fragment tmp/audio/audio-en.mp4 tmp/audio/f-audio-en.mp4
3. Split the files and create the DASH manifest
mp4dash --mpd-name=manifest.mpd tmp/video/f-video-1080.mp4 tmp/video/f-video-720.mp4 tmp/video/f-video-360.mp4 tmp/audio/f-audio-it.mp4 tmp/audio/f-audio-en.mp4 \[+format=webvtt,+language=it\]tmp/subtitle/subtitle-it.vtt \[+format=webvtt,+language=en\]tmp/subtitle/subtitle-en.vtt
You can now delete the tmp folder
rm -rf tmp
(and your source file if you don't need it anymore)
You now have your MPEG-DASH content, ready to be streamed. You have to serve the files yourself (allow CORS and enable byte-range requests).
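A quick way to verify that byte-range requests work (the URL is a placeholder for wherever you host the content):
curl -I -H "Range: bytes=0-1023" http://localhost/dash/manifest.mpd
A server with byte ranges enabled answers with 206 Partial Content instead of 200.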
I use Angular and rx-player as the player. I can switch languages and subtitles, and the video quality adapts to the client's bandwidth!
Rx-player: https://github.com/canalplus/rx-player
I'm successfully streaming silent video with music added from my Raspberry Pi (Raspbian) to YouTube via ffmpeg, with the help of this GitHub gist and this post:
raspivid -o - -t 0 -vf -hf -w 1280 -h 720 -fps 25 -b 4000000 | \
ffmpeg -i music.wav \
-f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY
The last step of my project is to add a transparent, full width/height PNG overlay to the video (1280x720 in my case). I've seen a few related answers such as this one and this one.
With the added complexity of piping in a camera feed, mixing in an audio source and outputting to a video stream, I haven't succeeded in adding the image overlay. Where/how would I add a transparent image overlay in the example above?
The ffmpeg part will be
ffmpeg -i music.wav \
-f h264 -i - -i overlay.png \
-filter_complex "[1][2]overlay" \
-vcodec libx264 -preset ultrafast -tune zerolatency -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY
Since you're altering the video contents, copy can't be used, and the video has to be re-encoded.
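If ffmpeg's automatic stream selection ever picks the wrong streams, the mapping can be made explicit; a sketch under the same input order (the [out] label is illustrative):
ffmpeg -i music.wav \
-f h264 -i - -i overlay.png \
-filter_complex "[1:v][2:v]overlay[out]" \
-map "[out]" -map 0:a \
-vcodec libx264 -preset ultrafast -tune zerolatency -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY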
I hope one of you can tell me why this ffmpeg command of mine does not draw the desired text - the produced video doesn't have it. Here you go:
ffmpeg -f image2 -thread_queue_size 64 -framerate 15.1 -i /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/frames/%d.webp -y -an -vcodec libvpx -filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2 -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 -crf 12 -deadline realtime -cpu-used 4 -pix_fmt yuv420p -loglevel warning -movflags +faststart /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/videomail_preview.webm
the crucial part is this video filter:
-filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2
Does it seem correct to you? If so, why am I not seeing any text in the videomail_preview.webm video file?
I'm using ffmpeg v2.8.6 here, built with --enable-libfreetype, --enable-libfontconfig and --enable-libfribidi.
Furthermore, the above command was produced with fluent-ffmpeg.
So, any ideas?
Combine all filters into a single graph. -vf is an alias for -filter:v, and only the last such option given for a stream takes effect, so your scale filter was silently replacing drawtext. So
-filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2 -vf scale=trunc(iw/2)*2:trunc(ih/2)*2
becomes
-filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2,scale=trunc(iw/2)*2:trunc(ih/2)*2
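Note that fluent-ffmpeg passes arguments without shell interpretation; if you ever run the combined command in a shell yourself, quote the filter so the commas, colons and parentheses survive, along the lines of (font path shortened here for readability):
-filter:v "drawtext=fontfile=/path/to/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2,scale=trunc(iw/2)*2:trunc(ih/2)*2"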
I'm trying to use FFmpeg to generate MKV video. By default, FFmpeg will use h264 & libvorbis. But when I use doc/examples/muxing.c from the ffmpeg source folder, there is always an error:
[libvorbis # 002e52a0] Specified sample format s16 is invalid or not supported
Could not open audio codec: Error number -22 occurred
I used the Zeranoe FFmpeg build and it showed this error. I also tried to compile ffmpeg from source under MinGW, enabling libvorbis with the following configuration:
$ ./configure --prefix=/mingw --enable-libvpx --enable-libvorbis --enable-shared --enable-static
Before running make, I also installed libvorbis, libogg, yasm, etc. But the error is still there.
If I use ffmpeg.exe to convert a video to WebM format, it works. The command is like the following:
ffmpeg -i test.h264 -vcodec libvpx -acodec libvorbis output.webm
The generated output.webm can be played in Firefox or elsewhere, so I think the compiled ffmpeg library is OK. But why can't I generate a WebM file with the muxing.c code?
As can be seen in the file libvorbisenc.c, the libvorbis encoder supports only AV_SAMPLE_FMT_FLTP (planar float) input.
You have to convert the audio data, for example with the libswresample library from ffmpeg.
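You can confirm which sample formats an encoder accepts from the command line:
ffmpeg -h encoder=libvorbis
For libvorbis this lists fltp as the only supported sample format, so the s16 samples produced in muxing.c have to be converted (with an SwrContext from libswresample) before being sent to the encoder.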
Try compiling the libvorbis package with the following:
LDFLAGS="-static" \
LIBS="-logg" \
./configure \
--prefix=$INSTALL_PREFIX \
--with-gnu-ld \
--with-pic \
--enable-static \
--disable-shared \
--disable-oggtest
I'm trying to decode an FLV's audio to a playable format. I attempted to use this SO post as an example: FMS FLV to mp3 - but my FLV is encoded in Speex.
I have compiled ffmpeg with --enable-libspeex on a Fedora 15 machine.
I believe this can be done with ffmpeg but I'm having a hard time figuring out how to do it.
Any thoughts? Thanks
Your ffmpeg needs to be configured with --enable-libspeex to support Speex decoding, which you've already done on your Fedora machine. With a build of ffmpeg that can decode Speex, the simplest command is:
ffmpeg -i input.flv output.wav
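To double-check that Speex decoding is actually available in your build:
ffmpeg -decoders | grep -i speex
(On older builds without the -decoders option, ffmpeg -codecs lists the same information.)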
While re-encoding the FLV file (Speex to MP3), if you get a sample-rate error, try this:
ffmpeg -i c:\in.flv -acodec libmp3lame -ar 44100 -vcodec copy c:\out.flv
It does not matter what your input is. As long as you have the decoder and encoder enabled in your ffmpeg build,
ffmpeg -i inputfile.flv -acodec libmp3lame any_other_parameters_you_want -vcodec copy out.flv
will do the trick.
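If you want a standalone MP3 instead of an FLV with MP3 audio (as in the linked FMS FLV to mp3 post), a minimal sketch:
ffmpeg -i input.flv -vn -acodec libmp3lame -ar 44100 -ab 128k output.mp3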
Run ffmpeg -codecs to see the codecs supported and ffmpeg -formats to see the formats supported in your install.