Hello everybody. I need to know if I can stream a web page (HTML) via FFmpeg. I have a script on my server that I use to stream a live poll to Facebook; I just need to know whether I can stream any HTML page.
This is my streaming command:
ffmpeg \
-re -y \
-loop 1 \
-f image2 \
-i images/stream.jpg \
-i /home/sounds/silence-loop.wav \
-acodec libfdk_aac \
-ac 1 \
-ar 44100 \
-b:a 128k \
-vcodec libx264 \
-pix_fmt yuv420p \
-vf scale=640:480 \
-r 30 \
-g 60 \
-f flv \
"rtmp://rtmp-api.facebook.com:80/rtmp/1270000000015267?ds=1&s_l=1&a=ATh1XXXXXXXXXXXuX"
You can do this using PHP GD or ImageMagick.
Check out this Git repo for an example of how to do it:
https://github.com/JamesTheHacker/Facebook-Live-Reactions
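The trick is to keep regenerating the image that FFmpeg is looping over. A minimal sketch of that loop using a web page as the source, assuming wkhtmltoimage (or any headless-browser screenshot tool) is installed; the URL and refresh interval are placeholders:
#!/bin/sh
# Re-render the web page into the image FFmpeg is looping over.
# Because the stream command above uses "-loop 1 -f image2", FFmpeg
# re-reads images/stream.jpg on each loop, so replacing the file
# updates the live video.
while true; do
  wkhtmltoimage --width 640 --height 480 \
    "https://example.com/poll.html" images/stream.tmp.jpg
  mv images/stream.tmp.jpg images/stream.jpg  # atomic swap, no torn frames
  sleep 1
done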
I have three videos: let's call them intro, recording, and outro. My ultimate goal is to stitch them together, with the intro and outro overlapping the start and end of the recording.
Both intro and outro have alpha (ProRes 4444) and a "wipe" to transition, so when overlaying, they must be on top of the recording. The recording is H.264, and ultimately I'm encoding for YouTube with these recommended settings.
I've figured out how to make the thing work correctly for intro + recording:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[rv][0:v]overlay[v];[0:a][ra]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
However I can't use the tpad trick for the outro because it would render black frames over everything.
I've tried various iterations with setpts/asetpts, as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. The following tries to start the outro 16 seconds into the recording (10s start + 16s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]asetpts=PTS+26/TB[outa]; \
[rv][0:v]overlay[v4]; \
[0:a][ra]amix[a4]; \
[v4][outv]overlay[v]; \
[a4][outa]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
I think the right solution lies in the direction of using setpts correctly but I haven't been able to wrap my brain fully around it. Or, maybe I'm making life complicated and there's an easier approach?
In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths. It would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
Thank you!
Use adelay for all audio adjustments. Perform all mixing in a single amix.
Set the outro overlay to start only at the correct timestamp.
Use:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=10s:all=1[maina]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]adelay=delays=26s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,26)'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y
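For the nice-to-have (specifying the outro start relative to the end of the recording), a minimal sketch: probe the recording's duration with ffprobe and compute the offset from it. The 6-second overlap before the end is an illustrative placeholder, and the filtergraph is the one above with the hard-coded 26 replaced by the computed value:
#!/bin/sh
# Derive the outro offset: 10 s of intro padding plus the recording's
# duration, minus a 6 s overlap (placeholder value).
DUR=$(ffprobe -v error -show_entries format=duration \
      -of default=noprint_wrappers=1:nokey=1 recording.mp4)
OFFSET=$(awk "BEGIN { printf \"%d\", 10 + $DUR - 6 }")
ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=10s:all=1[maina]; \
[2:v]setpts=PTS+$OFFSET/TB[outv]; \
[2:a]adelay=delays=${OFFSET}s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,$OFFSET)'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y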
I'm trying to encode and package uploaded videos for an LMS website, where video size may differ. How can I write an sh script that converts and packages a given video based on its resolution? (For example, if the input resolution is at least 720p but less than 1080p, FFmpeg should produce two renditions [360p, 720p], and then shaka-packager should package them.)
So far I have this script, assuming the input video resolution is 1080p (or 1080p <= size < 4K):
#!/bin/sh
pwd
URL="$1"
ID="$2"
FOLDER="$3"
if [ -z "$URL" ];then
echo "Must input a file"
$SHELL
exit
fi
DIR="$FOLDER/$ID"
OUTDIR="$DIR/cmaf"
mkdir -p -v $DIR
mkdir -p -v $OUTDIR
GOP_SIZE=50
FPS=25
CRF=28
INPUT="$DIR/input"
wget -c -O $INPUT $URL &&
if [ ! -f "$INPUT" ]; then
echo "$INPUT does not exist"
$SHELL
exit
fi
ffmpeg -i $INPUT -y \
-threads 1 \
-c:v libx264 -crf $CRF -profile:v high -pix_fmt yuv420p \
-keyint_min $GOP_SIZE -g $GOP_SIZE -sc_threshold 0 \
-color_primaries 1 -color_trc 1 -colorspace 1 -movflags +faststart \
-c:a aac -b:a 128k -ar 44100 \
-r $FPS \
"$DIR/input.mp4" &&
ffmpeg -i "$DIR/input.mp4" -y \
-threads 1 \
-vn -acodec copy "$DIR/a.mp4" \
-vf scale=640:360 -an "$DIR/360p.mp4" \
-vf scale=1280:720 -an "$DIR/720p.mp4" \
-vf scale=1920:1080 -an "$DIR/1080p.mp4" &&
rm -R $OUTDIR
packager \
in="$DIR/a.mp4",stream=audio,output="$OUTDIR/a.mp4",drm_label=AUDIO \
in="$DIR/360p.mp4",stream=video,output="$OUTDIR/360p.mp4",drm_label=SD \
in="$DIR/720p.mp4",stream=video,output="$OUTDIR/720p.mp4",drm_label=HD \
in="$DIR/1080p.mp4",stream=video,output="$OUTDIR/1080p.mp4",drm_label=HD \
--enable_raw_key_encryption \
--keys label=AUDIO:key_id=f3c5e0761e6654b28f8049c778b23947:key=a4637a153a443df9eed0593043db7517,label=SD:key_id=abba277e8bcf552bbd2e86a434a9a5d7:key=69eaa807a6763af979e8d1940fb88397,label=HD:key_id=6d76f25cb17f5e76b8eaef6b7f582d87:key=cb541784c99737aef4fff74500c12ea7 \
--pssh 000000377073776800000000EDEF8BA979D64ACEA3C877DCD51D21ED00000071220F7465737420636F6E74656E74206967 \
--mpd_output "$OUTDIR/h264.mpd" \
--hls_master_playlist_output "$OUTDIR/h264_master.m3u8"
The above script first downloads a video from a given URL, then converts it to an appropriate format before resizing and packaging. I assumed that converting the video once before scaling would be more performant than converting and resizing every time. I also assumed that resizing to all resolutions in one command would be much faster, but I think that is not how FFmpeg works. I'm stuck in the world of FFmpeg, not knowing how to write an sh (or bash) script that encodes and packages videos for online streaming in a better, cleaner, and more dynamic way. I think others have the same problem or use case, so any help, fix, or recommendation is appreciated.
For the sake of clarity, I stripped some arguments from your commands (yuv420p and -profile:v high are defaults, and -r 25 doesn't change the frame rate here):
ffmpeg -i <input> -y \
-c:v libx264 -crf 28 -g 50 \
-c:a aac -b:a 128k -ar 44100 \
-movflags +faststart \
<output> &&
ffmpeg -i <output> -y \
-vn -c:a copy "$DIR/a.mp4" \
-vf scale=640:360 -an "$DIR/360p.mp4" \
-vf scale=1280:720 -an "$DIR/720p.mp4" \
-vf scale=1920:1080 -an "$DIR/1080p.mp4"
The first run will decode your input and re-encode it using libx264 with quality-target 28 and a keyframe every 50 frames.
The second instance will decode it again, guessing an encoder by the .mp4 extension -- defaulting to libx264 --, and re-encodes everything three times by using the default values -g 250 -crf 23 (I'm not sure about -movflags +faststart).
So you are (1) overwriting your settings from the first-run, (2) having an additional decode process and (3) having a certain quality loss due to multiple lossy encodings.
What you want is to combine these into one invocation:
ffmpeg -i <input> -y \
-vn -c:a aac -b:a 128k -ar 44100 "$DIR/a.mp4" \
-c:v libx264 -crf 28 -g 50 -s 640x360 -movflags +faststart -an "$DIR/360p.mp4" \
-c:v libx264 -crf 28 -g 50 -s 1280x720 -movflags +faststart -an "$DIR/720p.mp4" \
-c:v libx264 -crf 28 -g 50 -s 1920x1080 -movflags +faststart -an "$DIR/1080p.mp4"
Additionally, I would stay away from special arguments unless you really know what they do and why you are choosing them.
P.S.
Here is a command that runs at 15% CPU utilization on my laptop (it uses Intel Quick Sync hardware acceleration):
ffmpeg \
-hwaccel qsv -c:v h264_qsv -i 'rtsp://109.98.78.106' \
-an -c:v h264_qsv -global_quality 30 -vf "scale_qsv=h=360:w=-1" "/tmp/360p.mp4" \
-an -c:v h264_qsv -global_quality 30 -vf "scale_qsv=h=720:w=-1" "/tmp/720p.mp4" \
-an -c:v h264_qsv -global_quality 30 -vf "scale_qsv=h=1080:w=-1" "/tmp/1080p.mp4"
It might have some color and / or quality issues but this is a performance trade-off.
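As for the resolution-dependent branching the question asks about, a minimal sketch: probe the input height with ffprobe and only emit renditions at or below it. The thresholds and encoder settings mirror the command above and are illustrative, not canonical:
#!/bin/sh
INPUT="$1"
# Probe the height of the first video stream, e.g. 1080.
HEIGHT=$(ffprobe -v error -select_streams v:0 \
         -show_entries stream=height -of csv=p=0 "$INPUT")

# Always produce audio and 360p; add 720p/1080p only if the source is
# large enough. scale=-2:<h> keeps the aspect ratio with an even width.
OUTS="-vn -c:a aac -b:a 128k -ar 44100 a.mp4"
OUTS="$OUTS -c:v libx264 -crf 28 -g 50 -vf scale=-2:360 -an 360p.mp4"
if [ "$HEIGHT" -ge 720 ]; then
  OUTS="$OUTS -c:v libx264 -crf 28 -g 50 -vf scale=-2:720 -an 720p.mp4"
fi
if [ "$HEIGHT" -ge 1080 ]; then
  OUTS="$OUTS -c:v libx264 -crf 28 -g 50 -vf scale=-2:1080 -an 1080p.mp4"
fi

# Word-splitting of $OUTS is intentional: no argument contains spaces.
ffmpeg -i "$INPUT" -y $OUTS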
I need to serve long videos (~2 hours) from a web server to mobile clients, and the clients should be able to play the videos via Chromecast. I have chosen MPEG-DASH for this purpose: the video codec is H.264 (level 4.1) and the audio is AAC (although I've tried different ones).
I've tried ffmpeg, MP4Box, and some other tools to generate the videos; most of the time I succeeded in playing them in VLC or locally on a mobile client, but not via Chromecast.
I've tried Amazon's Elastic Transcoder and it worked, but it gave me one big file whereas I need many small segments.
CORS headers are set.
Chromecast remote debugging didn't help much.
Do you know how to do this?
Finally, I have managed to do it. This is the script that converts a video file to dash with many segments which can be played by Chromecast:
ffmpeg -y -threads 8 \
-i input.ts \
-c:v libx264 \
-x264-params keyint=60:scenecut=0 \
-keyint_min 60 -g 60 \
-flags +cgop \
-pix_fmt yuv420p \
-coder 1 \
-bf 2 \
-level 41 \
-s:v 1920x1080 \
-b:v 6291456 \
-vf bwdif \
-r 30 \
-aspect 16:9 \
-profile:v high \
-preset slow \
-acodec aac \
-ab 384k \
-ar 48000 \
-ac 2 \
output.mp4 2> output/output1_ffmpeg.log \
\
&& MP4Box -dash 2000 \
-rap \
-out output/master.mpd \
-profile simple \
output.mp4#video output.mp4#audio 2> output/output2_mp4box.log
As you can see, first I encode the input file; then I use MP4Box to convert it to DASH. Note that Chromecast can fail to play video with more than 2 audio channels (I use 2, via -ac 2).
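One thing worth verifying (my addition, not part of the original answer): MP4Box's 2000 ms segments line up with the 60-frame GOP at 30 fps, so a quick ffprobe over the first seconds should show an I-frame every 2 seconds:
ffprobe -v error -select_streams v:0 -read_intervals "%+20" \
-show_entries frame=pts_time,pict_type -of csv output.mp4 | grep ',I'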
I am creating a manifest for adaptive WebM playback using DASH. Everything works pretty well, but I need the language/track name instead of the bitrate. Is that supported? How can I update/optimize the manifest to support this?
Manifest creation:
ffmpeg \
-f webm_dash_manifest -i webm240.webm \
-f webm_dash_manifest -i webm360.webm \
-f webm_dash_manifest -i webm480.webm \
-f webm_dash_manifest -i webm720.webm \
-f webm_dash_manifest -i audio1.webm \
-f webm_dash_manifest -i audio2.webm \
-f webm_dash_manifest -i audio3.webm \
-f webm_dash_manifest -i audio4.webm \
-c copy -map 0 -map 1 -map 2 -map 3 -map 4 -map 5 -map 6 -map 7 \
-f webm_dash_manifest \
-adaptation_sets "id=0,streams=0,1,2,3 id=1,streams=4,5,6,7" \
manifest.mpd
Player audio track selection:
Finally, after changing a couple of DASH players and encoders, this is how I solved it.
The problem was not in the manifest creation but in the input file preparation. I added metadata to the input files as shown below, and it worked.
Tested in Shaka Player; it works like a charm.
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=hin audiohindi.mp4
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=tam audiotamil.mp4
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=kan audiokannada.mp4
ffmpeg -i input.mp4 -y -vn -acodec aac -ab 96k -dash 1 -metadata:s:a:0 language=tel audiotelugu.mp4
The language field uses ISO 639-2 language codes (see Wikipedia: ISO 639-2 language codes).
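A quick way to confirm the tag landed in the prepared input before building the manifest (a sketch, using one of the files from above):
ffprobe -v error -select_streams a:0 \
-show_entries stream_tags=language \
-of default=noprint_wrappers=1 audiohindi.mp4
# expected output: TAG:language=hin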
I want to publish an HDR video on YouTube. My source file is either Apple ProRes or DNxHR, using 4:4:4 chroma subsampling or full RGB, both 10-bit, so the original source has everything needed to be encoded into 10-bit 4:2:0 H.265/HEVC (HDR).
I have followed some answers listed here, reviewed lots of different approaches, and tried many different commands without success. The colors aren't right when using only FFmpeg (too much red), and when using only Adobe to encode into H.264 with the recommended settings on their support page, the result is darker. Here are the commands I've been using:
I have tried this:
ffmpeg \
-i input.mov \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-pix_fmt yuv420p10le \
-x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
And this:
ffmpeg \
-y \
-hide_banner \
-i input.mov \
-pix_fmt yuv420p10le \
-vf "scale=out_color_matrix=bt2020:out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10" \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-x265-params 'crf=12:colorprim=bt2020:transfer=smpte-st-2084:colormatrix=bt2020nc:master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)":max-cll="1000,400"' \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
I have also tried using MKVToolNix in order to insert the metadata into the encoded HEVC/H.265 file with the following command:
/Applications/MKVToolNix-9.7.1.app/Contents/MacOS/mkvmerge \
-o output.mkv \
--colour-matrix 0:9 \
--colour-range 0:1 \
--colour-transfer-characteristics 0:16 \
--colour-primaries 0:9 \
--max-content-light 0:1000 \
--max-frame-light 0:300 \
--max-luminance 0:1000 \
--min-luminance 0:0.01 \
--chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
--white-colour-coordinates 0:0.3127,0.3290 \
input.mp4
But the result is the same: YouTube doesn't recognize the file as HDR. It only does so with the first FFmpeg command and with the file encoded by Adobe Premiere, but then the colors don't look right. Maybe I'm getting some concept wrong. Thanks for your help.
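One debugging step that may help here (my suggestion, not from the original post): inspect which color metadata actually landed in the encoded stream. For YouTube to flag a file as HDR, the video stream should report bt2020 primaries/matrix and the smpte2084 (PQ) transfer:
ffprobe -v error -select_streams v:0 \
-show_entries stream=pix_fmt,color_space,color_transfer,color_primaries \
-of default=noprint_wrappers=1 output.mp4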