I am building a surveillance camera for a school project, based on a Raspberry Pi and an infrared Raspberry Pi camera.
I am capturing the camera's video stream and outputting it as an HLS stream directly from ffmpeg. However, the resulting video has a very low frame rate (~5 fps at most).
Strangely, raspivid manages to output a 60 fps 720p stream without any problem, but when piped through ffmpeg for streaming, the video is cropped in half and I cannot get it to show up entirely.
Here is the ffmpeg command I use:
#!/bin/bash
ffmpeg -v verbose \
-re \
-i /dev/video0 \
-c:v libx264 \
-an \
-f hls \
-g 10 \
-sc_threshold 0 \
-hls_time 1 \
-hls_list_size 4 \
-hls_delete_threshold 1 \
-hls_flags delete_segments \
-hls_start_number_source datetime \
-hls_wrap 15 \
-preset superfast \
-start_number 1 \
/home/pi/serv/assets/stream.m3u8
And the resulting log output (notice the fps)
Here is the raspivid command I tested, based on a blog post I read:
raspivid -n \
-t 0 \
-w 960 \
-h 540 \
-fps 25 \
-o - | ffmpeg \
-v verbose \
-i - \
-vcodec copy \
-an \
-f hls \
-g 10 \
-sc_threshold 0 \
-hls_time 1 \
-hls_list_size 4 \
-hls_delete_threshold 1 \
-hls_flags delete_segments \
-hls_start_number_source datetime \
-hls_wrap 15 \
-preset superfast \
-start_number 1 \
/home/pi/serv/assets/stream.m3u8
I am not an ffmpeg expert and am open to any suggestions that would help improve the stream's quality and stability :)
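One direction worth trying (a sketch, not a tested answer): the low frame rate is usually the Pi's CPU falling behind software x264, so either hand the encode to the hardware encoder or pin the capture rate and size on the input. This assumes /dev/video0 is the camera and that your ffmpeg build includes the h264_v4l2m2m encoder (both assumptions):
#!/bin/bash
# Sketch only: hardware H.264 encode instead of libx264.
# Assumes /dev/video0 is the Pi camera and h264_v4l2m2m is available.
ffmpeg -v verbose \
-f v4l2 \
-framerate 30 \
-video_size 1280x720 \
-i /dev/video0 \
-c:v h264_v4l2m2m \
-b:v 2M \
-an \
-f hls \
-g 30 \
-hls_time 1 \
-hls_list_size 4 \
-hls_flags delete_segments \
/home/pi/serv/assets/stream.m3u8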
I have three videos: let's call them intro, recording and outro. My ultimate goal is to stitch them together like so:
Both intro and outro have alpha (ProRes 4444) and a "wipe" to transition, so when overlaying, they must be on top of the recording. The recording is H.264, and ultimately I'm encoding out for YouTube with its recommended settings.
I've figured out how to make this work correctly for intro + recording:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[rv][0:v]overlay[v];[0:a][ra]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
However I can't use the tpad trick for the outro because it would render black frames over everything.
I've tried various iterations with setpts/asetpts as well as passing -itsoffset for the input, but haven't come up with a solution that works correctly for both video and audio. This attempt tries to start the outro 16 seconds into the recording (10 s of start padding + 16 s of recording is how I got to setpts=PTS+26/TB), but it doesn't work correctly: I get both intro and outro audio from the first frame, and the recording audio cuts out when the outro overlay begins:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[rv]; \
[1:a]adelay=delays=10s:all=1[ra]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]asetpts=PTS+26/TB[outa]; \
[rv][0:v]overlay[v4]; \
[0:a][ra]amix[a4]; \
[v4][outv]overlay[v]; \
[a4][outa]amix[a]" \
-map "[a]" -map "[v]" \
-movflags faststart -c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
out.mp4 -y
I think the right solution lies in the direction of using setpts correctly but I haven't been able to wrap my brain fully around it. Or, maybe I'm making life complicated and there's an easier approach?
In the nice-to-have realm, I'd love to be able to specify the start of the outro relative to the end of the recording. I will be doing this to a bunch of recordings of varying lengths. It would be nice to have one command to invoke on everything rather than figuring out a specific timestamp for each one.
Thank you!
Use adelay for all audio adjustments. Perform all mixing in a single amix.
Set the outro overlay to start only at the correct timestamp.
Use:
$ ffmpeg \
-i intro.mov \
-i recording.mp4 \
-i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=10:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=10s:all=1[maina]; \
[2:v]setpts=PTS+26/TB[outv]; \
[2:a]adelay=delays=26s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,26)'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y
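For the nice-to-have part of the question (placing the outro relative to the end of the recording), a rough sketch that derives the offset with ffprobe before building the same filtergraph; INTRO_PAD and OUTRO_LEAD are placeholder names:
#!/bin/bash
# Sketch: outro start = intro padding + recording duration - lead time.
INTRO_PAD=10    # seconds of black padding before the recording
OUTRO_LEAD=4    # start the outro this many seconds before the recording ends
# Whole seconds only, to keep the adelay/enable expressions simple.
DUR=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 recording.mp4 | cut -d. -f1)
START=$((INTRO_PAD + DUR - OUTRO_LEAD))

ffmpeg -i intro.mov -i recording.mp4 -i outro.mov \
-filter_complex \
"[1:v]tpad=start_duration=${INTRO_PAD}:start_mode=add:color=black[mainv]; \
[1:a]adelay=delays=${INTRO_PAD}s:all=1[maina]; \
[2:v]setpts=PTS+${START}/TB[outv]; \
[2:a]adelay=delays=${START}s:all=1[outa]; \
[mainv][0:v]overlay=eof_action=pass[previd]; \
[previd][outv]overlay=enable='gte(t,${START})'[v]; \
[maina][0:a][outa]amix=inputs=3[a]" \
-map "[v]" -map "[a]" \
-c:v libx264 -profile:v high -bf 2 -g 30 -crf 18 -pix_fmt yuv420p \
-movflags +faststart \
out.mp4 -y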
I need to serve long videos (~2 hours) from a web server to mobile clients, and the clients should be able to play the videos via Chromecast. I have chosen MPEG-DASH for this purpose: the video codec is H.264 (level 4.1), the audio is AAC (although I've tried different ones).
I've tried ffmpeg, MP4Box and some other tools to generate the videos; most of the time I succeeded in playing them in VLC or on a mobile client (locally), but not with Chromecast.
I've tried Amazon's Elastic Transcoder and it worked, but it gave me one big file, whereas I need many small segments.
CORS is set.
Chromecast remote debugging didn't help much.
Do you know how to do this?
Finally, I have managed to do it. This is the script that converts a video file to DASH with many segments that can be played by Chromecast:
ffmpeg -y -threads 8 \
-i input.ts \
-c:v libx264 \
-x264-params keyint=60:scenecut=0 \
-keyint_min 60 -g 60 \
-flags +cgop \
-pix_fmt yuv420p \
-coder 1 \
-bf 2 \
-level 41 \
-s:v 1920x1080 \
-b:v 6291456 \
-vf bwdif \
-r 30 \
-aspect 16:9 \
-profile:v high \
-preset slow \
-acodec aac \
-ab 384k \
-ar 48000 \
-ac 2 \
output.mp4 2> output/output1_ffmpeg.log \
\
&& MP4Box -dash 2000 \
-rap \
-out output/master.mpd \
-profile simple \
output.mp4#video output.mp4#audio 2> output/output2_mp4box.log
As you can see, first I encode the input file; then I use MP4Box to segment it into DASH. Note that Chromecast can fail to play video with more than two audio channels (I use 2 via -ac 2).
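As a side note: newer ffmpeg builds also ship a native dash muxer, so the MP4Box step can in principle be folded into a single command. A sketch only, with illustrative segment settings, and untested against Chromecast:
# Sketch: ffmpeg's built-in dash muxer as an alternative to the MP4Box step.
ffmpeg -y -i input.ts \
-c:v libx264 -profile:v high -level 41 -pix_fmt yuv420p \
-x264-params keyint=60:scenecut=0 -keyint_min 60 -g 60 \
-b:v 6M -s:v 1920x1080 -r 30 \
-c:a aac -b:a 384k -ar 48000 -ac 2 \
-f dash \
-seg_duration 2 \
-use_template 1 -use_timeline 1 \
output/master.mpd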
I am using ffmpeg and mpv to stream audio/video between two hosts. One of the hosts sends the stream with ffmpeg:
ffmpeg -f pulse \
-thread_queue_size 0 \
-i audioInput \
-f video4linux2 \
-thread_queue_size 0 \
-standard PAL \
-i videoInput \
-vcodec mpeg4 \
-r 10 \
-s 176x144 \
-maxrate 256K \
-acodec pcm_s16le \
-ar 8000 \
-b:a 32k \
-af aresample=async=1000 \
-f rtsp \
-rtsp_transport tcp \
url
and the second host receives it with mpv:
mpv url --rtsp-transport=tcp \
--profile=low-latency \
--demuxer-lavf-o=rtsp_flags=listen \
--no-cache \
--autosync=30 \
--no-demuxer-thread \
--demuxer-lavf-analyzeduration=0 \
--demuxer-lavf-probesize=32
I have tried a lot of options and combinations to reduce latency as much as possible. The above commands work nicely; the latency on startup is < 1 s. Unfortunately, a delay sometimes appears during streaming and can even increase over time. My goal is to ensure that the delay stays more or less constant (close to 1 s) and that, if a delay does appear, the delayed frames are dropped (even if it affects audio or video quality).
How can I force ffmpeg/mpv to drop frames that are delayed by, e.g., more than 1 s?
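One knob worth trying on the receiving side (a sketch, not a verified fix): tell mpv to drop frames that arrive too late instead of letting the delay accumulate, keeping everything else as in the command above:
# Sketch: the same mpv invocation plus decoder+vo frame dropping.
mpv url --rtsp-transport=tcp \
--profile=low-latency \
--framedrop=decoder+vo \
--demuxer-lavf-o=rtsp_flags=listen \
--no-cache \
--autosync=30 \
--no-demuxer-thread \
--demuxer-lavf-analyzeduration=0 \
--demuxer-lavf-probesize=32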
I want to publish an HDR video on YouTube. My source file is either Apple ProRes or DNxHR with 4:4:4 chroma subsampling or full RGB, both 10-bit, so the original source has everything needed to be encoded into a 10-bit 4:2:0 H.265/HEVC (HDR) file.
I have followed some answers listed here and reviewed lots of different approaches, trying many different commands without success. The colors aren't right when using only FFmpeg (too much red), and when using only Adobe to encode into H.264 with the recommended settings on their support page, the result is darker. Here are the commands I've been using.
I have tried this:
ffmpeg \
-i input.mov \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-pix_fmt yuv420p10le \
-x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
And this:
ffmpeg \
-y \
-hide_banner \
-i input.mov \
-pix_fmt yuv420p10le \
-vf "scale=out_color_matrix=bt2020:out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10" \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-x265-params 'crf=12:colorprim=bt2020:transfer=smpte-st-2084:colormatrix=bt2020nc:master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)":max-cll="1000,400"' \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
I have also tried using MKVToolNix to insert the metadata into the encoded HEVC/H.265 file with the following command:
/Applications/MKVToolNix-9.7.1.app/Contents/MacOS/mkvmerge \
-o output.mkv \
--colour-matrix 0:9 \
--colour-range 0:1 \
--colour-transfer-characteristics 0:16 \
--colour-primaries 0:9 \
--max-content-light 0:1000 \
--max-frame-light 0:300 \
--max-luminance 0:1000 \
--min-luminance 0:0.01 \
--chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
--white-colour-coordinates 0:0.3127,0.3290 \
input.mp4
But the result is the same: YouTube doesn't recognize the file as HDR. It only does so with the first FFmpeg command and with the file encoded with Adobe Premiere, but then the colors don't look right, so maybe I'm getting some concept wrong. Thanks for your help.
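For what it's worth, a guess rather than a known fix: when an FFmpeg HEVC encode isn't flagged as HDR, the usual suspects are missing container-level color flags and x265 not repeating its headers on every keyframe. A sketch that adds both to the first command, keeping the same mastering-display string; the native aac encoder is used here only to avoid the libfdk dependency:
# Sketch: container color flags plus repeat-headers so HDR10 metadata is detectable.
ffmpeg -i input.mov \
-c:v libx265 -tag:v hvc1 -crf 21 -preset fast \
-pix_fmt yuv420p10le \
-color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
-x265-params "repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" \
-c:a aac -b:a 128k -ac 2 -ar 44100 \
-movflags +faststart \
output.mp4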
Hello everybody,
I just need to know whether I can stream a web page (HTML) via FFmpeg.
I have a script on my server that I use to stream a live poll to Facebook; I just need to know if I can stream any HTML or web page.
This is my streaming command:
ffmpeg \
-re -y \
-loop 1 \
-f image2 \
-i images/stream.jpg \
-i /home/sounds/silence-loop.wav \
-acodec libfdk_aac \
-ac 1 \
-ar 44100 \
-b:a 128k \
-vcodec libx264 \
-pix_fmt yuv420p \
-vf scale=640:480 \
-r 30 \
-g 60 \
-f flv \
"rtmp://rtmp-api.facebook.com:80/rtmp/1270000000015267?ds=1&s_l=1&a=ATh1XXXXXXXXXXXuX"
You can do this using PHP GD or ImageMagick.
Check out this git repo for an example of how to do it.
https://github.com/JamesTheHacker/Facebook-Live-Reactions
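If it helps, a minimal sketch of that idea in shell: render the page to the JPEG that the ffmpeg command above is already looping over, and refresh it periodically. It assumes wkhtmltoimage is installed; the URL and the one-second interval are placeholders:
#!/bin/bash
# Sketch: keep regenerating the image ffmpeg reads with -loop 1 -f image2,
# so the stream follows the page's content. URL and interval are placeholders.
while true; do
  wkhtmltoimage --width 640 --height 480 "https://example.com/poll.html" images/stream_new.jpg
  mv images/stream_new.jpg images/stream.jpg   # swap in one step so ffmpeg never reads a half-written file
  sleep 1
done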