I am trying to convert a source VBR SPTS MPEG-2 TS file into CBR using ffmpeg. The code I am using is the following:
#!/bin/bash
pkill ffmpeg
ffmpeg \
-re -i source.ts -c copy \
-muxrate 18000K \
-f mpegts \
udp://destination_ip:1234?pkt_size=1316
The source VPID bitrate is ~10 Mbps and the APID is 296 Kbps. So, as I understand it, this command should deliver 18 Mbps CBR, where the difference between the muxrate and the combined bitrate of all the PIDs is filled with null packets.
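As a rough sanity check, the expected amount of stuffing can be computed directly (treating PSI/PCR overhead as negligible):

# null-packet bandwidth = muxrate - combined PID bitrates
# 18,000,000 - (10,000,000 + 296,000) = 7,704,000 bits/s of stuffing
echo $((18000000 - 10000000 - 296000))   # prints 7704000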
The problem is that the output is far from perfect. The overall bitrate is semi-CBR at best: it ranges between 12 Mbps and 15 Mbps, and I see a lot of PCR accuracy and PCR repetition errors, along with CC errors on both the VPID and the APID.
Some ideas:
make sure you have a recent version of ffmpeg, because at some point there was a bug which messed up PCR insertion when stream copying
if you want constant UDP output you must use the bitrate option, like:
-flush_packets 0 -f mpegts "udp://destination_ip:1234?pkt_size=1316&bitrate=18000000"
UDP is an unreliable protocol and you might experience packet loss (unfortunately the bitrate option only works for UDP for now, AFAIK)
if you have a dedicated connection but still experience CC errors, check the destination OS's maximum UDP buffer sizes and make sure it can handle 18 Mbps; see the sketch below
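On Linux, the buffer check mentioned above usually comes down to the net.core receive-buffer limits; a quick inspection and an illustrative raise might look like this:

# show the current default/maximum socket receive buffer sizes
sysctl net.core.rmem_default net.core.rmem_max
# raise the ceiling (8 MB here is an arbitrary example) so bursts at 18 Mbps fit
sudo sysctl -w net.core.rmem_max=8388608
# the receiver can then request a bigger buffer via ffmpeg's udp option:
# udp://destination_ip:1234?pkt_size=1316&buffer_size=8388608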
specify -minrate and -maxrate too.
use a -bufsize bigger than the bitrate.
set -muxrate to the same value as -bufsize.
The final command:
ffmpeg \
-re -i source.ts \
-b:v 10500k \
-minrate 10500k \
-maxrate 10500k \
-bufsize 18000k \
-muxrate 18000k \
-f mpegts \
udp://destination_ip:1234?pkt_size=1316
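If constant UDP pacing is still needed, the bitrate URL option from the suggestions above can be folded into this final command; an untested sketch with the same values:

ffmpeg \
-re -i source.ts \
-b:v 10500k \
-minrate 10500k \
-maxrate 10500k \
-bufsize 18000k \
-muxrate 18000k \
-flush_packets 0 \
-f mpegts \
"udp://destination_ip:1234?pkt_size=1316&bitrate=18000000"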
Trying to transcode two streams into one gives me poor/unstable encoding speeds, ranging from x0.400 to x0.988 and sometimes above x1.
ffmpeg \
-thread_queue_size 15 -rtbufsize 100M -i "https://.../stream.m3u8" \
-thread_queue_size 15 -rtbufsize 100M -i "http://.../video.mjpg" \
-filter_complex \
"[0:v]setpts=PTS-STARTPTS [bg]; \
[1:v]scale=200:-1,setpts=PTS-STARTPTS [fg]; \
[bg][fg]overlay=W-w-10:10" \
-c:v mpeg1video \
-b:v 1000k \
-r 25 \
-threads 1 \
-f mpegts udp://127.0.0.1:1235?pkt_size=1316
Hardware specs:
CPU is Intel Core 2 Duo
Mechanical hard drive
I chose the mpeg1video encoder because of its low CPU usage; it seems that my Core 2 Duo can't keep up with libx264.
I played with output bitrates, fps, threads, and -re, but nothing seems to improve and stabilize the encoding speed at x1. Which parameters do I need to change/add/remove to achieve a reliable x1 encoding speed?
The input streams are not reliable, and my download internet connection is slow and unreliable.
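One thing I have not tried yet is shrinking the background stream before the overlay, so the encoder has fewer pixels to process; that variant would look like this (the 960:-1 size is just an illustration):

ffmpeg \
-thread_queue_size 15 -rtbufsize 100M -i "https://.../stream.m3u8" \
-thread_queue_size 15 -rtbufsize 100M -i "http://.../video.mjpg" \
-filter_complex \
"[0:v]scale=960:-1,setpts=PTS-STARTPTS [bg]; \
[1:v]scale=200:-1,setpts=PTS-STARTPTS [fg]; \
[bg][fg]overlay=W-w-10:10" \
-c:v mpeg1video \
-b:v 1000k \
-r 25 \
-threads 1 \
-f mpegts udp://127.0.0.1:1235?pkt_size=1316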
I'm trying to use the concat protocol in ffmpeg as described in the ffmpeg docs:
https://trac.ffmpeg.org/wiki/Concatenate
However I'm getting lots of errors about corrupt packets when running the concat, so I'm worried that this isn't the best approach. My actual use case will involve running unsupervised with a ton of different source videos, so I want to be sure that it's solid.
The concat demuxer approach succeeds without errors but takes about 10 times as long.
Steps to reproduce
Download Big Buck Bunny:
wget https://download.blender.org/demo/movies/BBB/bbb_sunflower_1080p_30fps_normal.mp4
Transcode a 30 second chunk:
ffmpeg -i bbb_sunflower_1080p_30fps_normal.mp4 -ss '00:06:30' -t 30 -c:v libx264 -crf 18 bbb30.mp4
Create one second parts:
mkdir -p parts;
for i in $(seq -f "%02g" 0 29); do \
ffmpeg \
-i bbb30.mp4 \
-ss "00:00:$i" -t 1 \
-c:v libx264 -pix_fmt yuv420p -crf 18 \
-bsf:v h264_mp4toannexb \
-f mpegts \
-y parts/$i.ts;
done
Combine all the parts into a new output mp4:
ffmpeg -y \
-i "concat:parts/00.ts|parts/01.ts|parts/02.ts|parts/03.ts|parts/04.ts|parts/05.ts|parts/06.ts|parts/07.ts|parts/08.ts|parts/09.ts|parts/10.ts|parts/11.ts|parts/12.ts|parts/13.ts|parts/14.ts|parts/15.ts|parts/16.ts|parts/17.ts|parts/18.ts|parts/19.ts|parts/20.ts|parts/21.ts|parts/22.ts|parts/23.ts|parts/24.ts|parts/25.ts|parts/26.ts|parts/27.ts|parts/28.ts|parts/29.ts" \
-c copy \
output.mp4
Stderr has lots of warnings about corrupt packets (in this case always at dts = 213000):
[mpegts @ 0x555d172daa00] Packet corrupt (stream = 0, dts = 213000).
concat:parts/00.ts|parts/01.ts|parts/02.ts|parts/03.ts|parts/04.ts|parts/05.ts|parts/06.ts|parts/07.ts|parts/08.ts|parts/09.ts|parts/10.ts|parts/11.ts|parts/12.ts|parts/13.ts|parts/14.ts|parts/15.ts|parts/16.ts|parts/17.ts|parts/18.ts|parts/19.ts|parts/20.ts|parts/21.ts|parts/22.ts|parts/23.ts|parts/24.ts|parts/25.ts|parts/26.ts|parts/27.ts|parts/28.ts|parts/29.ts: corrupt input packet in stream 0
The resulting mp4 looks ok to my eye, but obviously ffmpeg isn't happy about something. Any ideas?
You can ignore these warnings.
Packets in an MPEG-TS container have a counter field which increments with each packet; these counters are expected to be continuous. Since you encoded your TS files in separate instances, that counter starts at 0 again when switching from one TS input to the next, but this has no bearing in this use case. The warning appears with the concat protocol because a single TS demuxer instance is used to read all inputs as one amalgamated whole. The concat demuxer, by contrast, opens a fresh TS demuxer for each input and then stitches all packets together as a single stream.
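For reference, the warning-free concat demuxer route mentioned in the question would look something like this (a sketch; the list file simply enumerates the parts):

# build a file list for the concat demuxer
for i in $(seq -f "%02g" 0 29); do echo "file 'parts/$i.ts'"; done > list.txt
# each input gets a fresh TS demuxer, stream-copied into the output
ffmpeg -y -f concat -i list.txt -c copy output.mp4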
At the outset, I will say that I am trying to establish a connection via WebSocket and RTSP. I am using a forked node-rtsp-stream.
I have a problem with huge latency, about 10-15 seconds. I have come to the conclusion that ffmpeg is to blame.
My observations:
When I use
ffmpeg -rtsp_transport tcp \
-fflags discardcorrupt \
-f mpeg1video \
-i rtsp://{ip-address}
at the start there is a delay of about 6-8 seconds, after which the stream gently accelerates and the delay finally settles at around 1-2 seconds. However, with the newest version of jsmpeg, all I get in the frontend player is jsmpeg's "possible garbage data. skipping" message.
When I use
ffmpeg -rtsp_transport tcp \
-fflags discardcorrupt \
-i rtsp://{ip-address} \
-f mpegts \
-codec:v mpeg1video \
-b:v 1000k \
-bf 0
I have a latency of about 10-18 seconds at startup and I can't go below that, even if I try to speed up the whole stream with startup arguments (for example, flags like the ones sketched below).
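The kind of startup arguments I mean are along these lines (illustrative, not an exact recipe; these are common low-latency input flags):

ffmpeg -rtsp_transport tcp \
-fflags nobuffer -flags low_delay \
-probesize 32 -analyzeduration 0 \
-i rtsp://{ip-address} \
-f mpegts \
-codec:v mpeg1video \
-b:v 1000k \
-bf 0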
What's wrong?
I'm working on a robot (Raspberry Pi 4 based) that is accessible from anywhere. My robot currently streams at a 3-second latency. I also use OvenMediaEngine (RTMP to WebRTC) to transmit my stream to the client (on a website). Here is my command:
raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 3500000 -g 50 -fl -o - | \
ffmpeg -thread_queue_size 1024 -i - \
-itsoffset 6 -f alsa -channels 1 -thread_queue_size 1024 -i hw:2 \
-preset ultrafast -tune zerolatency -vcodec libx264 -r 25 -b:v 512k -s 1280x720 \
-acodec aac -ac 2 -ab 32k -ar 44100 \
-f flv rtmp://xxxxxxxx:1935/app/stream
Does anyone know why it won't stream at subsecond latency?
Thanks in advance!
I am not exactly sure where you are incurring latency, but it usually happens either during transport or encoding.
If possible, I would see if you can avoid re-encoding to H.264; you are going to pay a penalty for decoding (or just parsing?) and then encoding. A sketch of the stream-copy idea follows below.
I would also see if you can ingest into OME with something other than RTMP. WebRTC and RTSP will both give you better latency.
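For instance, since raspivid already emits H.264, the encode could in principle be skipped entirely with a stream copy; an untested sketch, carrying over the question's parameters:

raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 3500000 -g 50 -fl -o - | \
ffmpeg -f h264 -framerate 25 -thread_queue_size 1024 -i - \
-itsoffset 6 -f alsa -channels 1 -thread_queue_size 1024 -i hw:2 \
-c:v copy \
-c:a aac -ac 2 -b:a 32k -ar 44100 \
-f flv rtmp://xxxxxxxx:1935/app/stream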
The goal is to create multiple output files that differ only in bitrate from a single source file. The solutions for this that were documented worked, but had inefficiencies. The solution that I discovered to be most efficient was not documented anywhere that I could see. I am posting it here for review and asking if others know of additional optimizations that can be made.
Source file: MPEG-2 Video (letterboxed) 1920x1080 @ >10 Mbps
MPEG-1 Audio @ 384 Kbps
Destination files: H.264 Video 720x400 @ multiple bitrates
AAC Audio @ 128 Kbps
Machine: multi-core processor
The video quality at each bitrate is important, so we are running in 2-pass mode with the 'medium' preset:
VIDEO_OPTIONS_P2="-vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -vf scale=720:-1,crop=720:400"
The first approach was to encode them all in parallel processes
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4 &
The obvious inefficiencies are that the source file is read, decoded, scaled, and cropped identically for each process. How can we do this once and then feed the encoders with the result?
The hope was that generating all the encodes in a single ffmpeg command would optimize out the duplicate steps.
ffmpeg -y -i $INPUT_FILE \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4
However, the encoding time was nearly identical to the previous multi-process approach. This leads me to believe that all the steps are again being performed in duplicate.
To force ffmpeg to read, decode, and scale only once, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding. This improved the overall processing time by 15%-20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
Does anyone see potential problems with doing it this way, or know of a better method?
If you apply the audio/video options to the piped output of the first process, you could save some CPU, since it would replace three encodings with a single one.
ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -f yuv4mpegpipe - \
| ffmpeg -y -f yuv4mpegpipe -i - \
-b:v 250k out-250.mp4 \
-b:v 500k out-500.mp4 \
-b:v 700k out-700.mp4
This is the recommended way for older versions of ffmpeg. There's a newer method (I haven't tested it) available since earlier this month: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
I think what the OP really wants is to use the filters once and encode several times. The method used is good, though you might get more speed with the "tee" muxer; see also the recent addition at the bottom of http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs, "Multiple encodings for same input". A sketch of that idea follows below.
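To make that concrete: the "Multiple encodings for same input" pattern relies on the split filter, so the filter graph runs once and each branch is encoded separately. An untested sketch reusing the options from above (the -vf from VIDEO_OPTIONS_P2 moves into the filter graph, so the remaining x264 options are written out):

ffmpeg -y -i $INPUT_FILE \
-filter_complex "scale=720:-1,crop=720:400,split=3[v1][v2][v3]" \
-map "[v1]" -map 0:a $AUDIO_OPTIONS_P2 -vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -b:v 250k out-250.mp4 \
-map "[v2]" -map 0:a $AUDIO_OPTIONS_P2 -vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -b:v 500k out-500.mp4 \
-map "[v3]" -map 0:a $AUDIO_OPTIONS_P2 -vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -b:v 700k out-700.mp4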