How to create a DASH VOD for Chromecast with ffmpeg?

I need to serve long videos (~2 hours) from a web server to mobile clients, and the clients should be able to play the videos via Chromecast. I have chosen MPEG-DASH for this purpose: the video codec is H.264 (level 4.1), the audio is AAC (although I've tried different ones).
I've tried ffmpeg, MP4Box and some other tools to generate the videos; most of the time I succeeded in playing them in VLC or on a mobile client (locally), but not with Chromecast.
I've tried Amazon's Elastic Transcoder and it worked, but it gave me one big file whereas I need many small segments.
CORS headers are set.
Chromecast remote debugging didn't help much.
Do you know how to do this?

Finally, I managed to do it. This is the script that converts a video file to DASH with many segments that can be played by Chromecast:
ffmpeg -y -threads 8 \
-i input.ts \
-c:v libx264 \
-x264-params keyint=60:scenecut=0 \
-keyint_min 60 -g 60 \
-flags +cgop \
-pix_fmt yuv420p \
-coder 1 \
-bf 2 \
-level 41 \
-s:v 1920x1080 \
-b:v 6291456 \
-vf bwdif \
-r 30 \
-aspect 16:9 \
-profile:v high \
-preset slow \
-acodec aac \
-ab 384k \
-ar 48000 \
-ac 2 \
output.mp4 2> output/output1_ffmpeg.log \
&& MP4Box -dash 2000 \
-rap \
-out output/master.mpd \
-profile simple \
output.mp4#video output.mp4#audio 2> output/output2_mp4box.log
As you can see, first I encode the input file; then I use MP4Box to segment it into DASH. Note that Chromecast can fail to play videos with more than two audio channels (I downmix to two with -ac 2).
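If you are not sure how many channels your source carries, ffprobe can tell you before you encode. A quick check, assuming the audio you care about is the first audio stream:
ffprobe -v error -select_streams a:0 \
-show_entries stream=channels -of csv=p=0 input.ts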

Related

What filters affect ffmpeg encoding speed?

What are the options in this command that would cause my encoding speed to be 0.999x instead of 1.0x or higher?
ffmpeg -y \
-loop 1 -framerate 30 -re \
-i ./1280x720.jpg \
-stream_loop -1 -re \
-i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 2500k -maxrate 2500k -bufsize 10000k \
-preset slow -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-g 60 \
-f flv tmp.flv
I am trying to figure out why this would only be encoding at 0.999x speed. Is there anything that I could do to speed this up? Two-pass encoding? I cannot understand why the encoding speed is so slow.
Also, please note I've tried presets from slow to ultrafast, and the encoding speed stays relatively unchanged.
The -re is the rate-limiting factor. It feeds the input only in real time, so the encoder can't progress any faster.
Remove the -re before the inputs. It is needed only when simulating a real-time input, or when streaming to an output that expects its input in real time.
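Applied to the command above, that just means dropping both -re flags and leaving everything else unchanged:
ffmpeg -y \
-loop 1 -framerate 30 \
-i ./1280x720.jpg \
-stream_loop -1 \
-i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 2500k -maxrate 2500k -bufsize 10000k \
-preset slow -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-g 60 \
-f flv tmp.flv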

Live streaming from FFmpeg: output a sliding-window m3u8 and also an all-segments m3u8

I've been experimenting with using FFMPEG to take an incoming RTMP stream, transcode into a selection of bitrates, and output it as HLS. It works.
I wanted to store the live stream as a VOD, and found that by adding the -hls_list_size 0 flag, sure enough, all segments are in the .m3u8, making it super easy to turn into a VOD afterwards. So far, so good.
But the obvious consequence of using -hls_list_size 0 is that now the m3u8 is huge during the live stream. That's fine for a VOD where it is only requested once, but less good during a live stream where it is requested over and over.
So ... my question: without re-transcoding, can FFmpeg output both an all-segments all.m3u8 (to keep internally for making a VOD afterwards, i.e. using -hls_list_size 0) and also a sliding-window style latest.m3u8 (of only the last X segments, i.e. using -hls_list_size 3)?
That way, viewers of the live stream could be served that little latest.m3u8, as a tiny file, with only the last few segments in. And after the event ends, I'd ditch that little latest.m3u8 and only keep the all.m3u8 to make a VOD version of the stream?
Thanks!
Here are my two cents. As @Gyan suggested in the comments above, I made use of the tee muxer. It reads the input a single time, and my system's performance stays almost the same as when doing live only. The only drawback is that it creates a duplicate copy of the segments that are kept in the live HLS window.
ffmpeg -y \
-hide_banner \
-i "$input_url" \
-preset veryfast -g 48 -sc_threshold 0 \
-map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0 \
-c:v:0 copy \
-filter:v:1 "scale=-2:360" -c:v:1 libx264 -b:v:1 365k \
-filter:v:2 "scale=-2:480" -c:v:2 libx264 -b:v:2 1600k \
-c:a aac -b:a 128k \
-f tee \
"[f=hls:hls_time=10:hls_list_size=60:hls_flags=delete_segments:var_stream_map='v:0,a:0 v:1,a:1 v:2,a:2':master_pl_name=master-live.m3u8:hls_segment_filename=$stream_key-live-v%v/%d.ts]$stream_key-live-v%v/live.m3u8|\
[f=hls:hls_time=10:hls_playlist_type=vod:var_stream_map='v:0,a:0 v:1,a:1 v:2,a:2':master_pl_name=master-record.m3u8:hls_segment_filename=$stream_key-record-v%v/%d.ts]$stream_key-record-v%v/record.m3u8"
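Two practical notes on the command above: $input_url and $stream_key are shell variables set elsewhere in my script (the values below are hypothetical), and if your ffmpeg build does not create the per-variant segment directories on its own, create them before starting:
# hypothetical values, adjust to your setup
input_url="rtmp://localhost/live/mystream"
stream_key="mystream"
for v in 0 1 2; do
  mkdir -p "$stream_key-live-v$v" "$stream_key-record-v$v"
done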

HLS with FFmpeg (separated track sync)

I'm looking for a way to transcode a video file to multi-bitrate HLS with the audio track separated.
Basically, I have a video file that I transcode into 4 resolutions + 1 audio track:
180p
360p
720p
1080p
2160p (maybe)
Audio1
Audio2 (maybe)
But for the example, here is my 180p command:
ffmpeg -i ${source} \
-pix_fmt yuv420p \
-c:v libx264 \
-b:v 230k -minrate:v 230k -maxrate:v 230k -bufsize:v 200k \
-profile:v baseline -level 3.0 \
-x264opts scenecut=0:keyint=75:min-keyint=75 \
-hls_time 3 \
-hls_playlist_type vod \
-r 25 \
-vf scale=w=320:h=180:force_original_aspect_ratio=decrease \
-an \
-f hls \
-hls_segment_filename ../OUT/${base_name}/180p/180p_%06d.ts ../OUT/${base_name}/180p/180p.m3u8
and the audio track:
ffmpeg -i ${source} \
-vn \
-c:a aac \
-b:a 128k \
-ar:a 48000 \
-ac:a 2 \
-hls_time 3 \
-hls_playlist_type vod \
-hls_segment_filename ../OUT/${base_name}/audio1/audio1_%06d.ts ../OUT/${base_name}/audio1/audio1.m3u8
For convenience, I launch a separate ffmpeg command for each resolution, depending on the source video quality; the other renditions follow the same pattern (see the 720p sketch below).
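For instance, a 720p rendition would be the same command with the scale, bitrate, profile and level adjusted; the numbers here are placeholders inferred from the master playlist below:
ffmpeg -i ${source} \
-pix_fmt yuv420p \
-c:v libx264 \
-b:v 3150k -minrate:v 3150k -maxrate:v 3150k -bufsize:v 3150k \
-profile:v main -level 4.0 \
-x264opts scenecut=0:keyint=75:min-keyint=75 \
-hls_time 3 \
-hls_playlist_type vod \
-r 25 \
-vf scale=w=1280:h=720:force_original_aspect_ratio=decrease \
-an \
-f hls \
-hls_segment_filename ../OUT/${base_name}/720p/720p_%06d.ts ../OUT/${base_name}/720p/720p.m3u8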
Then I create a standard Master Playlist
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=230000,RESOLUTION=320x180,CODECS="avc1.42001e"
180p/180p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360,CODECS="avc1.42e00a"
360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3150000,RESOLUTION=1280x720,CODECS="avc1.4d0028"
720p/720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,CODECS="avc1.4d0029"
1080p/1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
audio1/audio1.m3u8
When I try to play the Master Playlist:
I don't get any sound
In VLC, the audio track is played before the video tracks
So, how can I sync the audio track with the video tracks?
Thanks
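For reference, HLS expects a separated audio track to be declared as an alternative rendition with #EXT-X-MEDIA and attached to each video variant through an AUDIO group attribute, rather than listed as its own #EXT-X-STREAM-INF entry; players then fetch audio and video in parallel and sync them by timestamp. A sketch of the master playlist above rewritten that way (GROUP-ID and NAME are arbitrary labels, and BANDWIDTH now includes the 128k audio):
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud1",NAME="Audio1",DEFAULT=YES,AUTOSELECT=YES,URI="audio1/audio1.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=358000,RESOLUTION=320x180,CODECS="avc1.42001e,mp4a.40.2",AUDIO="aud1"
180p/180p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=728000,RESOLUTION=640x360,CODECS="avc1.42e00a,mp4a.40.2",AUDIO="aud1"
360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3278000,RESOLUTION=1280x720,CODECS="avc1.4d0028,mp4a.40.2",AUDIO="aud1"
720p/720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5128000,RESOLUTION=1920x1080,CODECS="avc1.4d0029,mp4a.40.2",AUDIO="aud1"
1080p/1080p.m3u8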

Encode HEVC/H.265/HDR Video for YouTube from 10bit Pro-Res using FFmpeg

I want to publish an HDR YouTube video. My source file is either an Apple ProRes or a DNxHR with 4:4:4 chroma subsampling or full RGB, both 10-bit, so the original source file has everything needed to be encoded into 10-bit 4:2:0 H.265/HEVC (HDR).
I have followed some answers listed here, reviewed lots of different approaches, and tried out many different commands without success. The colors aren't right when using only FFmpeg (too much red); when using only Adobe to encode into H.264 with the recommended settings on their support page, the result is darker. Here are the commands I've been using:
I have tried this:
ffmpeg \
-i input.mov \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-pix_fmt yuv420p10le \
-x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
And this:
ffmpeg \
-y \
-hide_banner \
-i input.mov \
-pix_fmt yuv420p10le \
-vf "scale=out_color_matrix=bt2020:out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10" \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-x265-params 'crf=12:colorprim=bt2020:transfer=smpte-st-2084:colormatrix=bt2020nc:master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)":max-cll="1000,400"' \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
I have also tried using MKVToolNix in order to insert the metadata into the encoded HEVC/H.265 file with the following command:
/Applications/MKVToolNix-9.7.1.app/Contents/MacOS/mkvmerge \
-o output.mkv \
--colour-matrix 0:9 \
--colour-range 0:1 \
--colour-transfer-characteristics 0:16 \
--colour-primaries 0:9 \
--max-content-light 0:1000 \
--max-frame-light 0:300 \
--max-luminance 0:1000 \
--min-luminance 0:0.01 \
--chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
--white-colour-coordinates 0:0.3127,0.3290 \
input.mp4
But the result is the same: YouTube doesn't recognize that file as an HDR file. It does only with the first FFmpeg command and with the file encoded by Adobe Premiere, but the colors don't look right. So maybe I'm getting some concept wrong. Thanks for your help.
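One thing worth checking (a hedged suggestion based on the first command above, not a confirmed fix): the x265-params only reach the encoder, so it can help to also tag the output stream itself, ensuring the color metadata ends up in the MP4 container that YouTube inspects:
ffmpeg \
-i input.mov \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-pix_fmt yuv420p10le \
-color_primaries bt2020 \
-color_trc smpte2084 \
-colorspace bt2020nc \
-x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
ffprobe -show_streams output.mp4 should then report color_primaries=bt2020, color_transfer=smpte2084 and color_space=bt2020nc for the video stream; if it doesn't, YouTube is unlikely to detect HDR either.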

Can I stream a "webpage" from ffmpeg?

Hello everybody,
I just need to know if I can stream a web page (HTML) via FFmpeg.
I have a script on my server that I use to stream a live poll to Facebook; I just need to know if I can stream any HTML or web page.
This is my stream code:
ffmpeg \
-re -y \
-loop 1 \
-f image2 \
-i images/stream.jpg \
-i /home/sounds/silence-loop.wav \
-acodec libfdk_aac \
-ac 1 \
-ar 44100 \
-b:a 128k \
-vcodec libx264 \
-pix_fmt yuv420p \
-vf scale=640:480 \
-r 30 \
-g 60 \
-f flv \
"rtmp://rtmp-api.facebook.com:80/rtmp/1270000000015267?ds=1&s_l=1&a=ATh1XXXXXXXXXXXuX"
You can do this using PHP GD or ImageMagick.
Check out this git repo for an example of how to do it.
https://github.com/JamesTheHacker/Facebook-Live-Reactions
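The trick such scripts rely on: because of -loop 1, ffmpeg keeps re-reading images/stream.jpg, so a side process that repeatedly re-renders the page over that file effectively streams the page. A rough sketch, assuming the wkhtmltoimage tool is installed and using a placeholder URL (any HTML-to-image renderer, like the PHP GD approach in the repo above, works the same way):
while true; do
  # render the page to a temp file, then move it into place atomically
  # so ffmpeg never reads a half-written image
  wkhtmltoimage --width 640 --height 480 "https://example.com/poll" /tmp/stream_next.jpg
  mv /tmp/stream_next.jpg images/stream.jpg
  sleep 1
done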
