Fast seeking ffmpeg multiple times for screenshots

I came across https://askubuntu.com/questions/377579/ffmpeg-output-screenshot-gallery/377630#377630, and it's perfect; it does exactly what I wanted.
However, I'm using remote URLs to generate the screenshot timeline. I know it's possible to fast seek with remote files per https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg (placing -ss before the -i), but this only seeks once per invocation.
I'm looking for a way to use the
./ffmpeg -i input -vf "select=gt(scene\,0.4),scale=160:-1,tile,scale=600:-1" \
-frames:v 1 -qscale:v 3 preview.jpg
command but with the fast seek method, as it's currently very slow when used with a remote file. I use PHP, and I am aware that a C approach exists using av_seek_frame, but I barely know C, so I'm unable to implement it in the PHP script I'm writing. Hopefully it is possible to do this directly with ffmpeg called from PHP's system() function.
Currently, I run separate ffmpeg commands (with the -ss method) and then combine the screenshots in PHP. However, this refetches the metadata for each command; a more optimized method would be to have it all happen in a single command line, because I want to reduce the number of requests made to the remote URL so I can run more scripts in sequence with each other.
Thank you for your help.

Yes, it's because -ss is not before -i; you need to add it before each input.
So here's a working example that extracts the frames very quickly:
ffmpeg -ss 10 -i test.avi -frames:v 1 -f image2 -map 0:v:0 thumbnails/output_0.png \
-ss 800 -i test.avi -frames:v 1 -f image2 -map 1:v:0 thumbnails/output_1.png \
-ss 2400 -i test.avi -frames:v 1 -f image2 -map 2:v:0 thumbnails/output_2.png
The -map specifiers read as input:stream type:stream index:
0:v:0 means the 1st input, video streams, first video stream (0);
1:v:0 means the 2nd input, video streams, first video stream (0);
2:v:0 means the 3rd input, video streams, first video stream (0).
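Since the question mentions calling ffmpeg from PHP's system(), here is a minimal sketch (in shell; the input path and seek points are placeholders) of building such a multi-input command in a loop instead of hard-coding it:
#!/bin/bash
# Sketch: build one ffmpeg invocation with a fast-seeked input per thumbnail.
input="test.avi"
seeks=(10 800 2400)
mkdir -p thumbnails
cmd=(ffmpeg)
for ss in "${seeks[@]}"; do
  cmd+=(-ss "$ss" -i "$input")    # one fast-seeked input per seek point
done
i=0
for ss in "${seeks[@]}"; do
  cmd+=(-map "$i:v:0" -frames:v 1 "thumbnails/output_$i.png")
  i=$((i+1))
done
"${cmd[@]}"
The same command string can be assembled with a loop in PHP and passed to system().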

The main reason this is slow is that "select=gt(scene\,0.4)" requires every frame to be decoded and compared to the next so that scene changes can be detected.
I don't believe it is possible to do this any faster while keeping scene-change detection. Alternatively, you could take n screenshots at video_duration/n intervals through the video; additionally, you could check that each frame isn't black by verifying that its image intensity is above a threshold.
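A minimal sketch of that even-interval approach (the URL, count, and output names are placeholders; a black-frame check, e.g. with ffmpeg's blackframe filter, is left out for brevity):
#!/bin/bash
url="https://example.com/video.mp4"   # placeholder remote URL
n=6                                   # number of screenshots
# Probe the duration once (a single extra request to the remote file).
duration=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 "$url")
for i in $(seq 1 "$n"); do
  # Seek to i/(n+1) of the duration, avoiding the very start and end.
  ts=$(awk -v d="$duration" -v i="$i" -v n="$n" \
    'BEGIN { printf "%.2f", d * i / (n + 1) }')
  ffmpeg -y -ss "$ts" -i "$url" -frames:v 1 "shot_$i.jpg"
done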

$ffmpeg = "ffmpeg.exe";
// Fast-seek to 20 s and grab one frame from the first video stream.
$cmd = "$ffmpeg -ss 20 -i $Filename -frames:v 1 -f mjpeg -map 0:v:0 $Thumbnail";
$Return = `$cmd`;
This makes extremely fast thumbnails of videos. $Filename is the path to your video file, e.g. C:\videos\video_1.mp4, and $Thumbnail is the path and filename where you want your thumbnail stored, e.g. C:\Thumbnails\Thumbnail_1.jpg.

Related

How to force ffmpeg to refresh overlay image more often?

I am trying to do sports live-streaming using ffmpeg. The score of a streaming match is fetched from a server and converted to a PNG. This PNG must appear on top of the video.
ffmpeg allows putting an overlay on a video stream using the image2 demuxer. If I use -loop 1, this overlay updates approximately every 5 seconds. How can I force ffmpeg to read it from disk more often?
My current attempt, with the overlay updating once every 5 seconds (MP4 video for testing purposes):
nice -n -19 ffmpeg \
-re -y \
-i s.mp4 \
-f image2 -loop 1 -i http://127.0.0.1:3000/img \
-filter_complex "[0:v][1:v]overlay" \
-threads 4 \
-v 0 -f mpegts -preset ultrafast udp://127.0.0.1:23000 \
&
P.S.
I know that I could build a YouTube streaming widget on the website and put the score on top of it using just HTML/CSS/JS, but unfortunately it must be done directly in the video stream.
P.P.S.
I know that I can use ffmpeg's drawtext, but it is not what I want. I have a specially designed PNG which must be updated as frequently as possible (once every 1-2 seconds would be just great).
Three things:
1) -re is applied per input, so ffmpeg is currently reading your image at a rate asynchronous with respect to the video. Since the video is being read in real time, the image reader queues the packets of the looped image until the filtergraph can consume them, so the updated image will be consumed much later, with a greater timestamp assigned than when it was actually updated. Add -re before the image -i to correct this.
2) Skip -loop 1 and use -stream_loop -1 since the image2 demuxer can abort if the input is blocked or empty (due to update) when it's trying to read it. Although, since the input is read via a network protocol, this may not be an issue for you.
3) You've specified no encoder in the output options. Since the format is MPEG-TS, ffmpeg will choose mpeg2video with a default bitrate of 200 kbps. The ultrafast preset does not apply to this encoder. You probably want to add -c:v libx264.
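Putting the three changes together, a sketch of the corrected command might look like this (same addresses as above; libx264 assumed to be available):
nice -n -19 ffmpeg \
-re -y -i s.mp4 \
-re -stream_loop -1 -f image2 -i http://127.0.0.1:3000/img \
-filter_complex "[0:v][1:v]overlay" \
-threads 4 \
-c:v libx264 -preset ultrafast \
-v 0 -f mpegts udp://127.0.0.1:23000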
I have found that increasing the image2 framerate value to 90-100 makes the file-reading process faster, but the audio becomes throttled.

Sync file timestamps with ffmpeg

I'm capturing video from 4 cameras connected over HDMI through a capture card. I'm using ffmpeg to save the video feed from the cameras to multiple JPEG files (30 JPEGs per second per camera).
I want to be able to save the images with the capture time. Currently I'm using this command for one camera:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -i /dev/video0 -c:v mjpeg -t 60 -ts_from_file 2 camera0-%5d.jpeg
It saves the files with the names camera0-00001.jpeg, camera0-00002.jpeg, etc.
Then I rename each file to camera0-HH-mm-ss-(1-30).jpeg based on the file's modified time.
So in the end I have 4 files with the same time and same frame like this:
camera0-12-00-00-1.jpeg
camera1-12-00-00-1.jpeg
camera2-12-00-00-1.jpeg
camera3-12-00-00-1.jpeg
My issue is that the files may be offset by one or two frames: they may have the same name, but sometimes one or two cameras show a different frame.
Is there a way to be sure that the captured frames carry the actual capture time and not the file-creation time?
You can use the mkvtimestamp_v2 muxer:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -copyts -i /dev/video0 \
-vf setpts=PTS-STARTPTS -vsync 0 -vframes 1800 camera0-%5d.jpeg \
-c copy -vsync 0 -vframes 1800 -f mkvtimestamp_v2 timings.txt
timings.txt will have output like this
# timecode format v2
1521177189530
1521177189630
1521177189700
1521177189770
1521177189820
1521177189870
1521177189920
1521177189970
...
where each reading is the Unix epoch time in milliseconds.
I've switched to an output frame-count limit (-vframes 1800) to stop the process instead of -t 60. You can use -t 60 for the first output, since we reset timestamps there, but not for the second. If you do that, remember to use only the first N entries from the text file, where N is the number of images produced.
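As a sketch of how the two outputs can be joined afterwards (assuming the JPEGs and the timestamp entries correspond one to one, and zero-padded five-digit file names as in the question):
#!/bin/bash
# Rename each numbered frame after its capture time from timings.txt.
# Line 1 of timings.txt is the "# timecode format v2" header, so skip it.
i=1
tail -n +2 timings.txt | while read -r ms; do
  src=$(printf 'camera0-%05d.jpeg' "$i")
  [ -f "$src" ] || break        # stop once we run out of images
  mv "$src" "camera0-${ms}.jpeg"
  i=$((i+1))
done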

How to capture multiple screenshots from an online video stream using ffmpeg with specific seek times

I'm using ffmpeg to take screenshots from an online video stream. I want to seek to multiple points on the timeline. I've used the following command to capture one screenshot at a seek point:
ffmpeg -ss 00:02:10 -i "stream-url" -frames:v 1 out1.jpg
How can I take multiple screenshots at multiple seek times? I've searched for a solution but with no success.
I've used the following command to take multiple screenshots:
ffmpeg -noaccurate_seek -ss 00:01:10 -i "stream-url" -map 0:v:0 -vframes 1 -f mpeg "thumb/output_01.jpg" -ss 00:02:10 -i "stream-url" -map 1:v:0 -vframes 1 -f mpeg "thumb/output_02.jpg"
Is there any way to generate screenshots from the same input via seek commands? How can I make it faster? How can I avoid repeating the input (the -i parameter)? I've also tried other commands, but they were even slower. Can anyone help me?
There's no easy way I know of to specify a number of arbitrary seek points from which to extract frames (similar question here).
However, seeking is very fast the way you specified it. Instead of constructing a complex command, you could just download the YouTube video using youtube-dl (if you haven't done that already) and generate the commands like this:
ffmpeg -ss 00:01:10 -i input -frames:v 1 out1.jpg
ffmpeg -ss 00:02:05 -i input -frames:v 1 out2.jpg
ffmpeg -ss 00:03:20 -i input -frames:v 1 out3.jpg
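If the seek points live in a list, a small shell loop (a sketch; file names are placeholders) keeps this manageable:
i=1
for ts in 00:01:10 00:02:05 00:03:20; do
  ffmpeg -ss "$ts" -i input -frames:v 1 "out$i.jpg"
  i=$((i+1))
done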
Note that exporting JPG might lead to low quality. Using PNG is preferred; you will get lossless frames that you can handle with another program later (e.g. to resize or compress).
If you want to get frames from regular intervals, use the fps filter to drop the framerate:
ffmpeg -i input -filter:v fps=1/60 out%02d.jpg
This will output a frame every minute (1/60 frames per second = 1 frame per minute), with two zero-padded digits as output numbers. You could additionally offset the start by providing a -ss option before the input file.
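For instance, a sketch that skips the first five minutes and then saves one frame per minute (file names are placeholders):
ffmpeg -ss 00:05:00 -i input -filter:v fps=1/60 out%02d.jpg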

FFmpeg extracts a different number of frames when using -filter_complex together with the split filter

I am fiddling with ffmpeg, extracting JPG pictures from videos. I split the input stream into two output streams with -filter_complex, because I process my videos from a direct HTTP link (free space on the VPS is scarce), and I don't want to read through the whole video twice (traffic quota is also scarce). Furthermore, I need two series of pictures: one for applying some filters (fps change, scale, unsharp, crop, scale) and then selecting from them by eye, and the other untouched (except for the fps change and cropping the black borders), used for further processing after selecting from the first series. I call my ffmpeg command from a Ruby script, so it contains some string interpolation / substitution in the form #{}. My working command line looked like:
ffmpeg -y -fflags +genpts -loglevel verbose -i #{url} -filter_complex "[0:v]fps=fps=#{new_fps.round(5).to_s},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -f #{format} -c copy #{options} -map_chapters -1 - -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
#{options} is set when the output is MP4; its value is then "-movflags frag_keyframe+empty_moov", so I can send the output to standard output without seeking capability and upload the stream somewhere without creating huge temporary video files.
So I get two series of pictures: one of them filtered and sharpened, the other in fact untouched. And I also get an output stream of the video on standard output, which is handled by the Open3.popen3 library, connecting it to the input streams of two other commands.
The problem arises when I want to seek to a given point in the video and omit the streamed video output on STDOUT. I try to apply combined seeking: a fast seek to before the given time code, then a slow seek to the exact time code, given in floating-point seconds:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss #{(seek_to-seek_before).to_s} -i #{url} -ss #{seek_before.to_s} -t #{t_duration.to_s} -filter_complex "[0:v]fps=fps=#{pics_per_sec},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Running this command I get the needed two series of pictures, but they contain different numbers of images: 233 vs. 484.
Actual values can be read from this interpolated / substituted command line:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss 1619.0443599999999 -i fabf.avi -ss 50.0 -t 46.505879999999934 -filter_complex "[0:v]fps=fps=5,split=2[in1][in2];[in1]crop=iw-0:ih-0:0:0,scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]crop=iw-0:ih-0:0:0[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Detailed log can be found here: http://www.filefactory.com/file/1yih17k2hrmp/ffmpeg-20160610-223820.txt
Just before the last line it reports 188 duplicated frames.
I also tried passing the "-vsync 0" option, but it didn't help. When I generate the two series of images in two consecutive steps, with two different command lines, no problem arises; I get the same number of pictures in both series, of course. So my question is: how can I use the latter command line, generating the two series of images with only one read / parse of the remote video file?
You have to replicate the -ss and -t options for the 2nd output as well, i.e.
...-f image2 -q 1 %06d.jpg -map '[out2]' -ss 50 -t 46.5 -f image2 -q 1 big_%06d.jpg
Each output option (those not before -i) applies only to the output that immediately follows it.

ffmpeg capture current frame and overwrite the image output file

I am trying to extract an image from an RTSP stream URL every second (it could also be every minute) and overwrite this image file.
My code below works, but it outputs multiple JPG files: img1.jpg, img2.jpg, img3.jpg...
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -r 1 img%01d.jpg
How can I use ffmpeg, or perhaps a bash script on Linux, to overwrite the same image file while continuously extracting images at a low frequency, say every minute or every 10 seconds?
To elaborate a bit on the already accepted answer from pragnesh,
FFmpeg
As stated in the ffmpeg documentation:
ffmpeg command line options are specified as
ffmpeg [global_options] {[input_options] -i input_file} ... {[output_options] output_file} ...
So
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 -update 1 img.jpg
This uses the output option -f image2 (force the output format to image2) as part of the muxer stage.
Note that in ffmpeg, if the output file name specifies an image format the image2 muxer will be used by default, so the command could be shortened to:
ffmpeg -i rtsp://<rtsp_source_addr> -update 1 img.jpg
The image2 format muxer expects a filename pattern, such as img%01d.jpg to produce a sequentially numbered series of files. If the update option is set to 1, the filename will be interpreted as just a filename, not a pattern, thereby overwriting the same file.
Using the -r (set frame rate) video option works, but it generated a whole lot of frame-dropping messages, which was bugging me.
Thanks to another answer on the same topic, I found that the fps video filter does a better job.
So my version of the working command is
ffmpeg -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
For some reason still unknown to me, the minimum framerate I can achieve from my feed is 1/20, or 0.05.
There is also the thumbnail video filter, which selects a representative image from a series of frames, but it is more processing-intensive, so I would not recommend it.
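For completeness, a sketch of that approach (the batch size of 100 frames is the filter's default and an arbitrary choice here):
ffmpeg -i rtsp://<rtsp_source_addr> -vf thumbnail=100 -update 1 img.jpg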
Most of this and more I found on the FFMpeg Online Documentation
AVconv
For those of you who use avconv, it is very similar; they are, after all, forks of what was once a common codebase. The AVconv image2 documentation is found here.
avconv -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
As Xianlin pointed out, there may be a couple of other interesting options to use:
-an : disables audio recording (found in the Audio Options section)
-r <fps> : sets the frame rate (found in the Video Options section); used as an output option, it is in effect a substitute for the fps filter
This leads to an alternate version:
avconv -i rtsp://<rtsp_source_addr> -r 1/20 -an -update 1 img.jpg
Hope this helps as a starting point for further tweaking ;)
The following command line should work for you (note that in newer ffmpeg versions the -updatefirst option is called -update):
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -updatefirst 1 img.jpg
I couldn't get the -update option working to overwrite the .jpg. Some experimenting led to a working solution (at least for me) with the option -y at the end (upper-case -Y does not work). I also needed http:// instead of rtsp:// for this camera.
ffmpeg -i http://xx:yy#192.168.1.xx:yyy/snapshot.cgi /tmp/Capture2.jpg -y
Grab a snapshot from an RTSP video stream every 10 seconds.
#!/bin/bash
#fetch-snapshots.sh
url='rtsp://IP_ADDRESS/live.sdp'
avconv -i $url -r 0.1 -vsync 1 -qscale 1 -f image2 images%09d.jpg
-r sets the frame rate to 0.1 frames per second (which equals 1 frame every 10 seconds).
Thanks to westonruter, see https://gist.github.com/westonruter/4508842
Furthermore have a look at FFMPEG: Extracting 20 images from a video of variable length
ffmpeg -i rtsp://root:password#192.168.1.1/mpeg4 -ss 00:00:01 -f image2 -vframes 1 thumb.jpg
Replace the URL with your RTSP stream URL, and make sure to keep -ss 00:00:01; with other values the extracted image may come out corrupted.
