Slow startup when extracting thumbnails from UDP live stream using FFMpeg - ffmpeg

I'm running the following command using FFmpeg to extract an image every second from a UDP stream:
ffmpeg -i "udp://224.1.2.123:9001" -s 256x144 -vf fps=1 -update 1 test.jpg -y
This works well, but it takes about 5 seconds to actually start producing images. Is there any way to lower the startup time?
The UDP stream uses mpegts format and is encoded with H264/AAC.
Thanks!
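One thing that may help (an assumption on my part, since the delay is most likely ffmpeg probing the stream before it starts decoding): lower the probing limits with -analyzeduration (microseconds) and -probesize (bytes), and optionally disable input buffering with -fflags nobuffer. A sketch, with the values picked arbitrarily:
ffmpeg -fflags nobuffer -analyzeduration 500000 -probesize 500000 -i "udp://224.1.2.123:9001" -s 256x144 -vf fps=1 -update 1 test.jpg -y
Setting these too low can make ffmpeg fail to detect the stream parameters (the "Could not find codec parameters" error quoted in one of the related threads below), so some experimenting may be needed.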

Related

FFmpeg and ffplay can access an RTSP stream from one IP, but not from another

The situation is kind of complex. I was archiving several CCTV camera feeds (RTSP, H264, no audio) through OpenCV, which worked, but the CPU utilization was too high and it started to lose frames from time to time.
To reduce the CPU utilization, I started to use FFmpeg to skip the decoding and encoding steps, which worked perfectly on my home machine. However, when I connected to my university VPN and tried to deploy it on our lab server, FFmpeg couldn't read any frames and ffplay couldn't get anything either, while OpenCV, VLC Player and IINA Player could still read and display the feed.
In summary:
1 FFmpeg/ffplay
1.1 can only read the feed from my home network (Wi-Fi, Optimum)
1.2 from the other two networks, the error message says: "Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options"
2 IINA/VLC Player, OpenCV
These tools can get the video all the time.
I'm wondering whether it's related to some specific port access that ffmpeg requires but the others don't. I'd appreciate it if anyone can provide any suggestions.
For reference, the tested ffplay command is simple:
ffplay 'the rtsp address'
Thanks
Update
More tests have been performed.
By specifying rtsp_transport as TCP, ffplay can play the video, but FFmpeg still can't access it. (In the beginning, when both FFmpeg and ffplay worked through my home network, the transport was UDP.)
The FFmpeg command is as follows:
ffmpeg -i rtsp://the_ip_address/axis-media/media.amp -hide_banner -c:v copy -s 1920x1080 -segment_time 00:30:00 -f segment -strftime 1 -reset_timestamps 1 -rtsp_transport tcp "%Y-%m-%d-%H-%M-%S_Test.mp4"
Please help...
Solved by forcing it to use "-rtsp_transport tcp" right before -i. In the command above it came after the input, so ffmpeg treated it as an output option and ignored it for the RTSP input; per-input options must appear before the -i they apply to.
ffmpeg -rtsp_transport tcp -i rtsp://the_ip_address/axis-media/media.amp -hide_banner -c:v copy -s 1920x1080 -segment_time 00:30:00 -f segment -strftime 1 -reset_timestamps 1 "%Y-%m-%d-%H-%M-%S_Test.mp4"

Connect to RTSP and create multiple files every 10 seconds based on computer/device time

I wanted to run ffmpeg to connect to an RTSP stream and create a new mkv file every 10 seconds, based on the time of the device where ffmpeg is running, not on ffmpeg's runtime.
So if ffmpeg were run at midnight, the file splits would be at 00:00:00, 00:00:10, 00:00:20, 00:00:30, 00:00:40, 00:00:50, 00:01:00, 00:01:10, etc.
An example file output would be:
stream_2022MAR07_00.00.00.mkv
stream_2022MAR07_00.00.10.mkv
stream_2022MAR07_00.00.20.mkv
etc.
And if ffmpeg wasn't started exactly on a 10-second boundary, it should still split at the next one.
For example, if ffmpeg was run at 09:24:43, it should split at 09:24:50 and at every succeeding 10-second mark of the device time. The expected file output would be:
stream_2022MAR07_09.24.43.mkv
stream_2022MAR07_09.24.50.mkv
stream_2022MAR07_09.25.00.mkv
etc.
The command I have found splits every 10 seconds of runtime, not based on the device time:
ffmpeg -rtsp_transport tcp -i <rtsp_url> -f segment -strftime 1 \
-segment_time 00:00:10 -segment_atclocktime 1 -segment_clocktime_offset 30 \
-segment_format mp4 -an -vcodec copy -reset_timestamps 1 \
stream_%Y-%m-%d-%H.%M.%S.mp4
Also, I haven't really found an explanation of the flags used. Can someone please help me here, as I am new to ffmpeg?
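For reference, a rough rundown of those flags (based on the ffmpeg segment muxer documentation; the mkv output name and timestamp pattern below are my own guesses, not from the thread): -f segment selects the segment muxer, -segment_time sets the target segment length, -segment_atclocktime 1 aligns the cut points to wall-clock multiples of the segment time instead of ffmpeg's runtime, -segment_clocktime_offset shifts those cut points, -strftime 1 lets the output pattern contain strftime date/time specifiers, and -reset_timestamps 1 makes each segment start at timestamp zero. A sketch for clock-aligned 10-second mkv segments:
ffmpeg -rtsp_transport tcp -i <rtsp_url> -an -c:v copy \
-f segment -segment_time 10 -segment_atclocktime 1 \
-segment_format matroska -reset_timestamps 1 -strftime 1 \
"stream_%Y%b%d_%H.%M.%S.mkv"
Note that %b expands to e.g. "Mar" rather than "MAR", so the month abbreviation won't exactly match the example filenames. The first file is named after the moment ffmpeg starts, and the following ones should fall close to the 10-second boundaries (cuts happen on keyframes, so they may be slightly off).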

Streaming RTMP to JANUS-Gateway only showing bitrate but no video

I'm currently using the streaming plugin as follows
Fancy architecture here:
OBS--------RTMP--------->NGINX-Server------FFMPEG(input RTMP output RTP)--------->JANUS---------webrtc-------->Client
When using the ffmpeg command (below), on the Janus streaming interface we only see the bitrate, which corresponds to that of the ffmpeg output in the console, but we don't see any video.
ffmpeg -i rtmp://localhost/live/test -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:8004
(using "-c:v copy" so that no encoding is used and hence reducing the
latency)
The video shows fine if I use "-c:v libx264", the only issue is that it is CPU intensive and adds latency.
Previously I had tried using RTSP as the input for FFmpeg, and in that case the video showed fine with almost no latency even though I used "-c:v copy".
So I don't really get why the copy works fine for RTSP, but for RTMP I have to use the libx264 codec. If anyone has an idea about this I am all ears :)
I had a similar issue, and my problem was that the stream/video I used had a large GOP size.
For WebRTC, latency is sub-second, so the input source should have a short I-frame interval. It's also better to remove B-frames, since they reference both backward and forward.
Here are commands you could use to get a small GOP size (4) and remove B-frames.
Using RTMP streaming src:
ffmpeg -i rtmp://<your_src> -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
Using a mp4 file:
ffmpeg -re -i test.mp4 -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
-c:v copy does not reduce latency. It merely tells ffmpeg not to transcode.
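If re-encoding turns out to be necessary, the CPU and latency cost can at least be reduced. A sketch, combining the short-GOP/no-B-frames settings from the answer above with libx264's low-latency options (the preset and tune choices here are my suggestions, not from the thread):
ffmpeg -i rtmp://localhost/live/test -an -c:v libx264 -preset ultrafast -tune zerolatency -g 4 -bf 0 -flags global_header -bsf:v dump_extra -f rtp rtp://localhost:8004
-preset ultrafast trades compression efficiency for much lower CPU use, and -tune zerolatency disables the encoder's lookahead and frame buffering.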

Option to generate a .m4s file every second

I am trying to stream my live recording from a camera (webcam / IP cam) to my web application. The streaming technique I use is MPEG-DASH, which has a manifest in MPD format. To generate the MPD output from the webcam, I use the FFmpeg tool from the shell command line:
ffmpeg -re -y -f dshow -i video="Logitech HD Webcam C525" -c:v libx264 -c:a libfdk_aac -f dash "manifest.mpd"
This command generates a video chunk in .m4s format every 5-8 seconds.
The question is: what FFmpeg option can I use to generate a .m4s file every second instead of every 5-8 seconds? I suppose it has something to do with segments?
-seg_duration 1 -ldash 1 -streaming 1 would help you.
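Applied to the command from the question, that would look roughly like this (a sketch: -seg_duration sets the target segment length in seconds, while -streaming 1 and -ldash 1 enable the dash muxer's low-latency/chunked output):
ffmpeg -re -y -f dshow -i video="Logitech HD Webcam C525" -c:v libx264 -c:a libfdk_aac -seg_duration 1 -streaming 1 -ldash 1 -f dash "manifest.mpd"
Segments can only be cut on keyframes, so you may also need to force a keyframe at least once per second (for example with libx264's -g set to the frame rate) for the muxer to actually produce 1-second chunks.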

Add multiple audio files to video at specific points using FFMPEG

I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files at specific points to the generated video.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20-second silent WAV file. After that you can mix the silent file with the other audio files.
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file we can use FFmpeg.
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
One approach I can think of is to create your audio file for the whole duration of the video first, and then mux the audio with the video file.
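For completeness, a sketch of an ffmpeg-only alternative using the adelay and amix filters (not something suggested in the answers above; it assumes the MP3s are stereo, and expresses the 10 s and 15 s offsets from the question in milliseconds):
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 -i audio2.mp3 \
-filter_complex "[1:a]adelay=10000|10000[a1];[2:a]adelay=15000|15000[a2];[a1][a2]amix=inputs=2[a]" \
-map 0:v -map "[a]" video.mpg
Note that amix scales its inputs down to avoid clipping, so the mixed audio may need a volume adjustment afterwards.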
