I am using ffmpeg and mpv to stream audio/video between two hosts. One host sends the stream with ffmpeg:
ffmpeg -f pulse \
-thread_queue_size 0 \
-i audioInput \
-f video4linux2 \
-thread_queue_size 0 \
-standard PAL \
-i videoInput \
-vcodec mpeg4 \
-r 10 \
-s 176x144 \
-maxrate 256K \
-acodec pcm_s16le \
-ar 8000 \
-b:a 32k \
-af aresample=async=1000 \
-f rtsp \
-rtsp_transport tcp \
url
and the second host receives it with mpv:
mpv url --rtsp-transport=tcp \
--profile=low-latency \
--demuxer-lavf-o=rtsp_flags=listen \
--no-cache \
--autosync=30 \
--no-demuxer-thread \
--demuxer-lavf-analyzeduration=0 \
--demuxer-lavf-probesize=32
I have tried a lot of options and combinations to reduce latency as much as possible. The above commands work nicely; the latency on startup is < 1 s. Unfortunately, a delay sometimes appears during streaming and can even grow over time. My goal is to keep the delay more or less constant (close to 1 s): if extra delay appears, the delayed frames should be dropped, even at the cost of audio or video quality.
How can I force ffmpeg/mpv to drop frames that are delayed by, say, more than 1 s?
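One hedged direction (these options all exist in current mpv and ffmpeg, but whether they hold the delay near 1 s on a given network is not guaranteed): tell the receiver to drop late frames outright, and shrink the sender's muxer buffering.

# Receiver: drop frames that arrive late instead of letting delay build up.
# --framedrop=decoder+vo drops late frames both before and after decoding;
# --untimed displays each frame as soon as it is decoded (trades smoothness
# and A/V sync for latency).
mpv url --rtsp-transport=tcp \
    --profile=low-latency \
    --framedrop=decoder+vo \
    --untimed \
    --no-cache

# Sender: shrink ffmpeg's muxer buffering (values are illustrative;
# the rest of the command stays as above).
ffmpeg ... -muxdelay 0 -muxpreload 0 -f rtsp -rtsp_transport tcp url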
Trying to transcode two streams into one gives me poor/unstable encoding speeds, ranging from x0.400 to x0.988 and sometimes above x1.
ffmpeg \
-thread_queue_size 15 -rtbufsize 100M -i "https://.../stream.m3u8" \
-thread_queue_size 15 -rtbufsize 100M -i "http://.../video.mjpg" \
-filter_complex \
"[0:v]setpts=PTS-STARTPTS [bg]; \
[1:v]scale=200:-1,setpts=PTS-STARTPTS [fg]; \
[bg][fg]overlay=W-w-10:10" \
-c:v mpeg1video \
-b:v 1000k \
-r 25 \
-threads 1 \
-f mpegts "udp://127.0.0.1:1235?pkt_size=1316"
Hardware specs:
CPU is Intel Core 2 Duo
Mechanical hard drive
I chose the mpeg1video encoder because of its low CPU usage; it seems my Core 2 Duo can't keep up with libx264.
I have played with output bitrates, fps, threads, and -re, but nothing seems to stabilize the encoding speed at x1. Which parameters do I need to change, add, or remove to achieve a reliable x1 encoding speed?
The input streams are not reliable, and my download internet connection is slow and unreliable.
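One hedged direction, since both inputs arrive over flaky HTTP: let ffmpeg's http reader reconnect instead of stalling the whole filter graph, and give the input queues more room than 15 packets. The -reconnect* options are http protocol options that exist in ffmpeg; whether they smooth out the speed dips depends on the servers:

ffmpeg \
    -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5 \
    -thread_queue_size 512 -rtbufsize 100M -i "https://.../stream.m3u8" \
    -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5 \
    -thread_queue_size 512 -rtbufsize 100M -i "http://.../video.mjpg" \
    -filter_complex \
      "[0:v]setpts=PTS-STARTPTS [bg]; \
       [1:v]scale=200:-1,setpts=PTS-STARTPTS [fg]; \
       [bg][fg]overlay=W-w-10:10" \
    -c:v mpeg1video -b:v 1000k -r 25 -threads 1 \
    -f mpegts "udp://127.0.0.1:1235?pkt_size=1316"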
I need to serve long videos (~2 hours) from a web server to mobile clients, and the clients should be able to play the videos via Chromecast. I have chosen MPEG-DASH for this purpose: the video codec is H.264 (level 4.1), the audio is AAC (although I've tried different ones).
I've tried ffmpeg, MP4Box and some other tools to generate the videos; most of the time I succeeded in playing them on VLC or on a mobile client (locally), but not with Chromecast.
I've tried Amazon's Elastic Transcoder and it worked, but it gave me one big file, whereas I need many small segments.
CORS headers are set.
Chromecast remote debugging didn't help much.
Do you know how to do this?
Finally, I have managed to do it. This is the script that converts a video file to DASH with many segments, which can be played by Chromecast:
ffmpeg -y -threads 8 \
-i input.ts \
-c:v libx264 \
-x264-params keyint=60:scenecut=0 \
-keyint_min 60 -g 60 \
-flags +cgop \
-pix_fmt yuv420p \
-coder 1 \
-bf 2 \
-level 41 \
-s:v 1920x1080 \
-b:v 6291456 \
-vf bwdif \
-r 30 \
-aspect 16:9 \
-profile:v high \
-preset slow \
-acodec aac \
-ab 384k \
-ar 48000 \
-ac 2 \
output.mp4 2> output/output1_ffmpeg.log \
\
&& MP4Box -dash 2000 \
-rap \
-out output/master.mpd \
-profile simple \
output.mp4#video output.mp4#audio 2> output/output2_mp4box.log
As you can see, first I encode the input file; then I use MP4Box to convert it to DASH. Note that Chromecast can fail to play video with more than 2 audio channels (hence -ac 2).
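A quick sanity check on the intermediate file, using ffprobe (which ships with ffmpeg), can confirm the encode matches what Chromecast expects before the MP4Box step:

# Video: expect h264 / High / level 41 / yuv420p
ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name,profile,level,pix_fmt output.mp4

# Audio: expect aac with 2 channels
ffprobe -v error -select_streams a:0 \
    -show_entries stream=codec_name,channels output.mp4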
I am building a surveillance camera for a school project, based on a Raspberry Pi and an infrared Raspberry Pi camera.
I am capturing the camera's video stream and outputting it as an HLS stream directly from ffmpeg. However, the resulting video has a really low fps (~5 at most).
Strangely, raspivid manages to output a 60 fps 720p stream without any problem, but when put through ffmpeg for streaming, the video is cropped in half and I cannot get it to show up entirely.
Here is the ffmpeg command I use:
#!/bin/bash
ffmpeg -v verbose \
-re \
-i /dev/video0 \
-c:v libx264 \
-an \
-f hls \
-g 10 \
-sc_threshold 0 \
-hls_time 1 \
-hls_list_size 4 \
-hls_delete_threshold 1 \
-hls_flags delete_segments \
-hls_start_number_source datetime \
-hls_wrap 15 \
-preset superfast \
-start_number 1 \
/home/pi/serv/assets/stream.m3u8
And the resulting log output (notice the fps)
Here is the raspivid command I tested, based on a blog post I read:
raspivid -n \
-t 0 \
-w 960 \
-h 540 \
-fps 25 \
-o - | ffmpeg \
-v verbose \
-i - \
-vcodec copy \
-an \
-f hls \
-g 10 \
-sc_threshold 0 \
-hls_time 1 \
-hls_list_size 4 \
-hls_delete_threshold 1 \
-hls_flags delete_segments \
-hls_start_number_source datetime \
-hls_wrap 15 \
-preset superfast \
-start_number 1 \
/home/pi/serv/assets/stream.m3u8
I am not an ffmpeg expert and am open to any suggestions that would help improve the stream's quality and stability :)
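Since raspivid proves the camera's hardware H.264 encoder can keep up, one hedged option is to request H.264 from the V4L2 driver directly, so that ffmpeg only remuxes into HLS instead of re-encoding with libx264 on the Pi's CPU. The device, size and rate below are assumptions; v4l2-ctl --list-formats shows what the driver actually offers:

#!/bin/bash
# Pull hardware-encoded H.264 straight from the driver and remux to HLS.
ffmpeg -v verbose \
    -f v4l2 -input_format h264 -video_size 960x540 -framerate 25 \
    -i /dev/video0 \
    -c:v copy \
    -an \
    -f hls \
    -hls_time 1 \
    -hls_list_size 4 \
    -hls_flags delete_segments \
    /home/pi/serv/assets/stream.m3u8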
I want to publish an HDR video on YouTube. My source file is either Apple ProRes or DNxHR with 4:4:4 chroma subsampling or full RGB, both 10-bit, so the original source has everything needed to be encoded into 10-bit 4:2:0 H.265/HEVC (HDR).
I have followed some answers listed here, reviewed lots of different approaches, and tried out many commands without success. The colors aren't right when using only FFmpeg (too much red), and when using only Adobe to encode into H.264 with the recommended settings from their support page, the result is darker. Here are the commands I've been using:
I have tried this:
ffmpeg \
-i input.mov \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-pix_fmt yuv420p10le \
-x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
And this:
ffmpeg \
-y \
-hide_banner \
-i input.mov \
-pix_fmt yuv420p10le \
-vf "scale=out_color_matrix=bt2020:out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10" \
-c:v libx265 \
-tag:v hvc1 \
-crf 21 \
-preset fast \
-x265-params 'crf=12:colorprim=bt2020:transfer=smpte-st-2084:colormatrix=bt2020nc:master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)":max-cll="1000,400"' \
-c:a libfdk_aac \
-b:a 128k \
-ac 2 \
-ar 44100 \
-movflags +faststart \
output.mp4
I have also tried using MKVToolNix to insert the metadata into the encoded HEVC/H.265 file, with the following command:
/Applications/MKVToolNix-9.7.1.app/Contents/MacOS/mkvmerge \
-o output.mkv \
--colour-matrix 0:9 \
--colour-range 0:1 \
--colour-transfer-characteristics 0:16 \
--colour-primaries 0:9 \
--max-content-light 0:1000 \
--max-frame-light 0:300 \
--max-luminance 0:1000 \
--min-luminance 0:0.01 \
--chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
--white-colour-coordinates 0:0.3127,0.3290 \
input.mp4
But the result is the same: YouTube doesn't recognize the file as HDR. It does only with the first FFmpeg command and with the file encoded by Adobe Premiere, but the colors don't look right, so maybe I'm getting some concept wrong. Thanks for your help.
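One hedged debugging step: inspect the stream-level color metadata that ffprobe reports, and compare a file YouTube accepts as HDR with one it rejects; HDR detection depends on this metadata being present in the video stream:

# For PQ HDR you would expect roughly:
#   pix_fmt=yuv420p10le, color_space=bt2020nc,
#   color_transfer=smpte2084, color_primaries=bt2020
ffprobe -v error -select_streams v:0 \
    -show_entries stream=pix_fmt,color_space,color_transfer,color_primaries \
    output.mp4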
Hello everybody,
I just need to know if I can stream an HTML web page via FFmpeg. I have a script on my server that I use to stream a live poll to Facebook; can I stream any HTML or web page the same way?
This is my stream code:
ffmpeg \
-re -y \
-loop 1 \
-f image2 \
-i images/stream.jpg \
-i /home/sounds/silence-loop.wav \
-acodec libfdk_aac \
-ac 1 \
-ar 44100 \
-b:a 128k \
-vcodec libx264 \
-pix_fmt yuv420p \
-vf scale=640:480 \
-r 30 \
-g 60 \
-f flv \
"rtmp://rtmp-api.facebook.com:80/rtmp/1270000000015267?ds=1&s_l=1&a=ATh1XXXXXXXXXXXuX"
You can do this using PHP GD or ImageMagick.
Check out this git repo for an example of how to do it: https://github.com/JamesTheHacker/Facebook-Live-Reactions
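Roughly, the idea behind that repo is to keep re-rendering the page into the JPEG that ffmpeg is looping over: with -loop 1 -f image2, the image file is re-read continuously, so replacing it updates the live stream. A minimal sketch, assuming headless Chromium and ImageMagick are available (the URL and paths are placeholders):

#!/bin/bash
while true; do
    # Render the page to an image (any renderer works: PHP GD, wkhtmltoimage, ...)
    chromium --headless --screenshot=/tmp/page.png "https://example.com/poll"
    convert /tmp/page.png images/stream_next.jpg
    # Rename is atomic on the same filesystem, so ffmpeg never sees a half-written file
    mv images/stream_next.jpg images/stream.jpg
    sleep 1
done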