FFmpeg, how to skip late input?

I'm running ffmpeg to display an incoming stream on a Blackmagic DeckLink card with the following command line:
ffmpeg -y -f ourFmt -probesize 32 -i - -f decklink -preset ultrafast \
    -pix_fmt uyvy422 -s 1920x1080 -r 30 -af volume=0.1 -max_delay 10000 \
    'DeckLink Mini Monitor'
Basically I get the video over the internet via UDP and stream it to ffmpeg's stdin. Both the audio and video streams have pts and dts and are fully in sync; if the connection is good there are no problems.
However, if there are issues with the connection I start getting errors, sometimes the video delay grows significantly, and the audio stops working.
The errors I get are:
ffmpeg: [decklink @ 0x26cc600] There are not enough buffered video frames. Video may misbehave!
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
ffmpeg:     Last message repeated 4 times
ffmpeg: [decklink @ 0x26cc600] There are not enough buffered video frames. Video may misbehave!
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
ffmpeg:     Last message repeated 3 times
ffmpeg: frame= 5204 fps= 30 q=-0.0 size=N/A time=00:02:53.76 bitrate=N/A dup=385 drop=5 speed=0.993x
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
ffmpeg:     Last message repeated 18 times
ffmpeg: [decklink @ 0x26cc600] There are not enough buffered video frames. Video may misbehave!
ffmpeg: [decklink @ 0x26cc600] There's no buffered audio. Audio will misbehave!
The problem is that when the connection returns to normal, the video keeps misbehaving until I restart the stream. What I want is for FFmpeg to skip to the content of the last second and play synchronized video from there, dropping all the late data in between. Is that possible?
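For reference, a hedged low-latency variant that is sometimes tried is to disable input buffering and reduce probing; the flags below are standard FFmpeg options, but whether they actually make the decklink output resynchronize after a stall is an assumption, not something verified here:
ffmpeg -y -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -f ourFmt -i - \
    -f decklink -pix_fmt uyvy422 -s 1920x1080 -r 30 -af volume=0.1 'DeckLink Mini Monitor'
If FFmpeg itself cannot drop the backlog, another route is a supervising script that restarts the process once the delay is detected.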

Related

When I use the hevc_videotoolbox encoder in ffmpeg it always gives me the same error

When I use
ffmpeg -i BabyShark.mp4 -c:v hevc_videotoolbox -b:v 6000k BabyShark1.mp4
it always returns an error like
[hevc_videotoolbox @ 0x7fec79206e00] Error encoding frame: -12905
[hevc_videotoolbox @ 0x7fec79206e00] popping: -542398533
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[aac @ 0x7fec79208a80] Qavg: 13693.864
[aac @ 0x7fec79208a80] 2 frames left in the queue on closing
Conversion failed!
but
ffmpeg -i BabyShark1.mp4 -c:v h264_videotoolbox -b:v 6000k BabyShark2.mp4
works fine.
Other tutorials say that the above error is caused by out-of-sync audio and video, but I have tried many different videos, even ones I recorded myself, and the error still appears.
How can I fix it?
My computer is a 2019 MacBook Pro, and I want to use the GPU to encode video to H.265.
The same problem occurs on another computer with an M1 chip, also using GPU encoding.
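For what it's worth, here is a hedged variant to try; it assumes the -12905 error comes from the hardware encoder rejecting the session or input format rather than from A/V sync. -pix_fmt, -allow_sw and -tag:v are real ffmpeg/videotoolbox options, but that they cure this specific error is unverified:
ffmpeg -i BabyShark.mp4 -c:v hevc_videotoolbox -allow_sw 1 -pix_fmt yuv420p -b:v 6000k -tag:v hvc1 BabyShark1.mp4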

How to convert MP4 frame rate like 14.939948fps to 15fps

Description
I pushed a USB camera stream with ffmpeg to an RTMP stream server called SRS.
SRS saved an MP4 file for me. The frame rate shown in VLC is not a common value - it's 14.939948. I've checked, and it seems to be related to the 'ntsc' format.
Meanwhile, I received the stream with OpenCV and saved it as another MP4 file. The two files are not synchronized.
I have tried to convert the frame rate with ffmpeg, but the result was still not synchronized. The only way I've made it work is to put the file in Adobe Premiere and modify the frame rate there. Here is the ffmpeg command I executed:
ffmpeg -i 1639444871684_copy.mp4 -filter:v fps=15 out.mp4
Leaving the stream server aside, how can I convert the frame rate to a normal value and keep everything synchronized at the same time?
Note: For live streaming, you should never depend on the FPS, because RTMP/FLV always uses a fixed TBN of 1k, so some deviation is always introduced when publishing a stream as RTMP or recording it to another format like TS/MP4.
Note: For WebRTC, the fps is variable; see "Would WebRTC use a constant frame rate to capture video frames" or read about Variable Frame Rate (VFR).
It's not a problem with SRS or the fps conversion; you can reproduce it with FFmpeg alone:
Use FFmpeg to transcode doc/source.flv from 25fps to 15fps, then publish it to SRS over RTMP (15fps).
Use FFmpeg to record the RTMP stream (15fps) as output.mp4 (15fps).
Use VLC to play output.mp4 (15fps); it shows that the fps IS NOT 15fps.
First, start SRS with the config below; note that DVR is disabled:
# ./objs/srs -c test.conf
listen 1935;
daemon off;
srs_log_tank console;
vhost __defaultVhost__ {
}
Run FFmpeg to transcode and publish to SRS, changing the fps to 15:
cd srs/trunk
ffmpeg -re -i doc/source.flv -c:v libx264 -r 15 -c:a copy \
-f flv rtmp://localhost/live/livestream
Record the RTMP stream (15fps) to output.mp4; note that in the FFmpeg logs the fps is 15:
ffmpeg -f flv -i rtmp://localhost/live/livestream -c copy -y output.mp4
Use VLC to play output.mp4 (which should be 15fps) and open Window -> Media Information: you will find that the fps hovers around 14.8fps, not 15fps!
This is because the TBN of RTMP/FLV is fixed at 1000 (1k tbn; at 15fps each frame is about 66.67ms), so the deviation is introduced when the MP4 is published as an RTMP stream. It's not caused by the DVR, it's caused by the RTMP/FLV TBN.
Note: However, for SRS, using a fixed TBN of 1k is not a good choice, because it's not friendly for MP4 duration, so I reopened the issue srs#2790.
Ultimately, the framerate/fps is not a fixed property; it's just a number that gives some hints about the stream. The player always uses the DTS/PTS to decide when and how to render each picture.
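A hedged way to see this for yourself: ffprobe can dump the per-packet timestamps the player actually uses. The command below uses standard ffprobe options, with output.mp4 standing in for whichever file you want to inspect:
ffprobe -select_streams v:0 -show_entries packet=pts_time,dts_time,duration_time -of csv output.mp4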
Answering my own question. Here is my method: read the frames with OpenCV and write them to a new file at 15fps. Then they are synchronized.
With -r:
ffmpeg -i 1639444871684_copy.mp4 -r 15 out.mp4
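For completeness, a hedged CFR variant of the commands above: -fps_mode cfr (older builds use -vsync cfr) forces constant-frame-rate output, and aresample=async=1 lets the audio resampler absorb small timestamp drift. These options exist in recent FFmpeg, but whether they keep this particular recording in sync is untested:
ffmpeg -i 1639444871684_copy.mp4 -vf fps=15 -fps_mode cfr -af aresample=async=1 -c:v libx264 -c:a aac out_cfr.mp4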

How to make ffmpeg exit when rtmp input stream ends?

I am using ffmpeg to transcode a stream from my local nginx RTMP server and send the transcoded media back to the same local RTMP server. When the stream goes offline, ffmpeg stays active, and when the stream starts again, ffmpeg picks up the transcoding work.
I'd like ffmpeg to exit when the input RTMP stream stops, but I cannot find out how to configure it to do that. I've looked through the man page and the online documentation.
For completeness, these are the arguments I currently provide to ffmpeg:
ffmpeg -i 'rtmp://localhost/live/ijsbeer' -s '1280x720' -vcodec 'libx264' -preset 'veryfast' -crf '25' -maxrate '2000k' -bufsize '1000.0k' -force_key_frames '0:00:02' -max_muxing_queue_size '4096' -acodec 'copy' -copyts -copytb '0' -f 'flv' 'rtmp://localhost/live/ijsbeer_720p'
Making ffmpeg exit by itself would solve two of my problems. Firstly, ffmpeg sometimes runs into non-monotonous DTS issues when the encoder (OBS) streaming to the RTMP server restarts. Secondly, I'd like to implement a basic monitoring system to notify me of issues in the transcoder. Making ffmpeg exit when the input stream stops seems like the most reliable way to determine whether ffmpeg is doing work; parsing the stderr/stdout isn't reliable as there is no clear output format.
When transcoding I receive these regular stderr lines:
frame= 241 fps= 30 q=30.0 size= 38kB time=00:00:09.74 bitrate= 32.0kbits/s speed= 1.2x
These pause when I stop the source stream; there is no log output stating that the source stream has ended.
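Two hedged ideas, neither verified against the native RTMP protocol: the generic -rw_timeout protocol option (in microseconds) is meant to abort an input when no data arrives within that window, and -progress writes machine-readable key=value status (out_time, progress=continue/end) to a file or URL, which is easier for a watchdog to poll than parsing stderr. A sketch with both, keeping the rest of the original arguments trimmed for brevity:
ffmpeg -rw_timeout 10000000 -progress /tmp/ijsbeer_progress.txt -i 'rtmp://localhost/live/ijsbeer' \
    -s '1280x720' -vcodec 'libx264' -preset 'veryfast' -crf '25' -acodec 'copy' -f 'flv' 'rtmp://localhost/live/ijsbeer_720p'
A small monitoring script can then kill and alert when out_time in the progress file stops advancing.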

Blackmagic DeckLink Quad 2 and Multiple Streams with FFmpeg

I am trying to stream video from 4 or more feeds onto a local display from a DeckLink Quad 2, using FFmpeg as my transcoder. I can play two different videos fine (I only have two sources I can use simultaneously at my desk), but I struggle to combine them into a single video when they are both on the DeckLink. The command I have for a single stream, run as a .bat, is below...
ffplay -video_size 1280x720 -framerate 60 -pixel_format uyvy422 -f dshow -i video="Decklink Video Capture" \ pause
Reading most forums, it would seem that combining them with a complex filter should work, like so:
ffmpeg -video_size 1280x720 -pixel_format uyvy422 -framerate 60 -vsync drop -f dshow -rtbufsize 150M -i video="Decklink Video Capture (5)" -i video="Decklink Video Capture" -i video="Decklink Video Capture (5)" -i video="Decklink Video Capture" -an -filter_complex "[0:v][1:v]hstack[t]; [2:v][3:v]hstack[b]; [t][b]vstack" -c:v libx264 -preset ultrafast -f mpegts pipe: | ffplay pipe: -vf scale=1280:720 \ pause
And with two videos that aren't both from the DeckLink (i.e. DeckLink plus file), it does work! But with both coming from the DeckLink I get the following in the console:
Input #0, dshow, from 'video=Decklink Video Capture (5)':
  Duration: N/A, start: 71582788.354257, bitrate: N/A
    Stream #0:0: Video: rawvideo (HDYC / 0x43594448), uyvy422(tv), 1280x720, 60 fps, 60 tbr, 10000k tbn, 10000k tbc
video=Decklink Video Capture: No such file or directory
pipe:: Invalid data found when processing input
And that second stream works fine running on its own. So my optimistic guess is that I'm just using the wrong naming scheme; my only other idea is that I can't read two streams from the DeckLink card simultaneously (though I feel like I've read that I can). One more wrinkle: one of my streams does not run with the frame rate set to 60fps; I need to set it to 59.94fps for it to work, otherwise I get a black screen.
Would I need to split these into multiple processes to run each stream simultaneously, save them to a temporary file or a pipeline, and then combine them in another process for display? I am concerned about the latency that approach would introduce, though. Thank you in advance!
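One hedged observation: in FFmpeg, per-input options such as -f dshow, -video_size, -pixel_format and -framerate only apply to the -i that immediately follows them, so in the command above only the first input is actually being opened as a dshow device. Listing the exact device names first and repeating the input options before every -i may be worth trying; the sketch below is trimmed to two inputs, and the device names are the ones from the question, not verified on your machine:
ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -video_size 1280x720 -pixel_format uyvy422 -framerate 59.94 -rtbufsize 150M -i video="Decklink Video Capture (5)" \
       -f dshow -video_size 1280x720 -pixel_format uyvy422 -framerate 59.94 -rtbufsize 150M -i video="Decklink Video Capture" \
       -an -filter_complex "[0:v][1:v]hstack" -c:v libx264 -preset ultrafast -f mpegts pipe: | ffplay pipe: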
You have not enabled USB Debugging on your phone.
Enable Developer Mode and USB Debugging, then run the command below:
adb shell screenrecord --output-format=h264 - | ffplay -
Wait 10 to 15 seconds and you should see your phone's screen on your PC.

FFmpeg hangs while transcoding HDHomeRun Prime on channel change

I'm using ffmpeg to transcode a live stream from an HDHomeRun Prime, and everything is working beautifully. However, one ability I would like, if possible, is to change the channel on the HDHomeRun without having to stop and restart the ffmpeg transcoding process.
I start the ffmpeg process to begin reading the UDP feed from the HDHomeRun. It writes the stream to a series of *.ts files with an m3u8 playlist.
The second I use hdhomerun_config to change the channel on the device, ffmpeg immediately reports the following and hangs:
[mpegts @ 0000018b4e05be60] New video stream 0:3 at pos:295211888 and DTS:40884.7s
[mpegts @ 0000018b4e05be60] New audio stream 0:5 at pos:295279568 and DTS:40884.4s
frame= 4488 fps= 29 q=23.0 size=N/A time=00:02:28.94 bitrate=N/A dup=108 drop=0 speed=0.959x
The command I am using to launch ffmpeg is:
ffmpeg.exe -t 03:00:00 -i "udp://192.168.1.150:5000?fifo_size=1000000&overrun_nonfatal=1" -vf yadif=0:-1:1 -y -threads 4 -c:v libx264 -s 1280x720 -r 30 -b:v 4500k -force_key_frames expr:gte(t,n_forced*2) -profile:v high -preset fast -x264opts level=41 -c:a libfdk_aac -b:a 96k -ac 2 -hls_time 10 -hls_list_size 6 -hls_wrap 6 -hls_base_url /stream/ -hls_flags temp_file -hls_playlist_type event "C:\temp\streams\4500-stream.m3u8"
Is there a specific option I can pass that allows ffmpeg to "recover" from this? Or is there an argument that prevents the hang in the first place? I'm using the latest v3.4 of ffmpeg cross-compiled for Windows from Ubuntu; the issue also occurred with the stable v3.4 Windows build from ffmpeg.org.
Edit:
A new discovery about the issue, though it is still not resolved:
If I change the channel BACK to the original channel, FFmpeg is able to continue writing the stream.
Example: I start on channel X and FFmpeg is recording to a file. I change to channel Y; FFmpeg outputs a message similar to the one posted above and "hangs." I change back to channel X and FFmpeg picks up where it left off, no problem.
No answer for your issue, and I'm too new to comment, but I had been trying to do this for a while based on another software package at https://github.com/bkirkman/hdhrtv
Basically, what that implementation does is open a new stream for each channel. I suspect you will need to do the same, which also explains why the channel resumes when you go "back" to it. Unfortunately, unless you make a playlist and buffer every channel, you may not be able to get a solution, since the input stream changes.
I'd be interested in taking a look and testing if you're willing to share your code.
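A hedged sketch of that per-channel approach: a wrapper that changes the channel, stops the running FFmpeg, and starts a fresh one so each channel gets its own input stream. The device id, tuner number and process-matching pattern are placeholders (on Windows, taskkill /IM ffmpeg.exe would replace pkill):
# hypothetical per-channel wrapper (shell syntax)
hdhomerun_config FFFFFFFF set /tuner0/vchannel "$1"   # FFFFFFFF and tuner0 are placeholders for your device and tuner
pkill -f "udp://192.168.1.150:5000"                   # stop the FFmpeg instance reading the old channel, if any
# ...then relaunch the same ffmpeg.exe command from the question against the same UDP target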
