I am trying to extract an image file from an RTSP stream URL every second (it could also be every minute) and overwrite that image file.
The code below works, but it outputs multiple jpg files: img1.jpg, img2.jpg, img3.jpg...
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -r 1 img%01d.jpg
How can I use ffmpeg, or perhaps a bash script on Linux, to overwrite the same image file while continuously extracting images at a low frequency, say every 10 seconds or 1 minute?
To elaborate a bit on the already accepted answer from pragnesh:
FFmpeg
As stated in the ffmpeg documentation:
ffmpeg command line options are specified as
ffmpeg [global_options] {[input_options] -i input_file} ... {[output_options] output_file} ...
So
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 -update 1 img.jpg
This uses the output option -f image2 to force the output format to image2, as part of the muxer stage.
Note that in ffmpeg, if the output file name specifies an image format the image2 muxer will be used by default, so the command could be shortened to:
ffmpeg -i rtsp://<rtsp_source_addr> -update 1 img.jpg
The image2 format muxer expects a filename pattern, such as img%01d.jpg, in order to produce a sequentially numbered series of files. If the update option is set to 1, the filename is instead interpreted as a literal filename, not a pattern, so the same file is continually overwritten.
Using the -r (set frame rate) video option works, but it generated a whole lot of frame-dropping messages, which was bugging me.
Thanks to another answer on the same topic, I found the fps Video Filter to do a better job.
So my version of the working command is
ffmpeg -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
For some reason still unknown to me, the minimum frame rate I can achieve from my feed is 1/20, or 0.05.
There is also the thumbnail video filter, which selects an image from a series of frames, but it is more processing intensive, so I would not recommend it.
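For reference, a hedged sketch of the thumbnail filter (the batch size of 100 frames is the filter's default, written out here for illustration; the source address is a placeholder as above):
ffmpeg -i rtsp://<rtsp_source_addr> -vf thumbnail=100 -update 1 img.jpg
This would keep overwriting img.jpg with the most representative frame of each successive batch of 100 input frames.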
Most of this, and more, I found in the FFmpeg online documentation.
AVconv
For those of you who use avconv, it is very similar. They are, after all, forks of what was once a common project. The avconv image2 documentation is found here.
avconv -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
As Xianlin pointed out, there are a couple of other interesting options to use:
-an : disables audio recording (found in the audio options section).
-r <fps> : sets the frame rate (found in the video options section). Used as an output option, it is effectively a substitute for the fps filter, leading to an alternate version:
avconv -i rtsp://<rtsp_source_addr> -r 1/20 -an -update 1 img.jpg
Hope this helps with possible further tweaking ;)
The following command line should work for you:
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -updatefirst 1 img.jpg
I couldn't get the -update option to overwrite the .jpg. Some experimenting resulted in a working solution (at least for me) with the option -y at the end (the upper-case -Y does not work). I also needed http:// instead of rtsp:// for this camera.
ffmpeg -i http://xx:yy@192.168.1.xx:yyy/snapshot.cgi /tmp/Capture2.jpg -y
Grab a snapshot from an RTSP video stream every 10 seconds.
#!/bin/bash
#fetch-snapshots.sh
url='rtsp://IP_ADDRESS/live.sdp'
avconv -i "$url" -r 0.1 -vsync 1 -qscale 1 -f image2 images%09d.jpg
-r rate sets the frame rate to 0.1 frames per second (which equals 1 frame every 10 seconds).
Thanks to westonruter, see https://gist.github.com/westonruter/4508842
Furthermore, have a look at FFMPEG: Extracting 20 images from a video of variable length.
ffmpeg -i rtsp://root:password@192.168.1.1/mpeg4 -ss 00:00:01 -f image2 -vframes 1 thumb.jpg
Replace the URL with your own RTSP URL.
Make sure to keep -ss 00:00:01; with other values the extracted image may come out corrupted.
I'm using ffmpeg to take screenshots from an online video stream. I want to seek to multiple points on the timeline. I've used the following command to capture one screenshot with a seek:
ffmpeg -ss 00:02:10 -i "stream-url" -frames:v 1 out1.jpg
How can I take multiple screenshots at multiple seek times? I've searched for a solution but without success.
I've tried the following command to take multiple screenshots:
ffmpeg -noaccurate_seek -ss 00:01:10 -i "stream-url" -map 0:v:0 -vframes 1 -f mpeg "thumb/output_01.jpg" -ss 00:02:10 -i "stream-url" -map 1:v:0 -vframes 1 -f mpeg "thumb/output_02.jpg"
Is there any way to generate screenshots from the same input via seek commands? How can I make it faster? How can I avoid repeating the input (-i) parameter? I've also tried other commands, but they are even slower. Can anyone help me?
There's no easy way I know of to specify a number of arbitrary seek points from which to extract frames (there is a similar question here).
However, seeking is very fast done the way you specified. Instead of constructing a complex command, you could just download the YouTube video using youtube-dl (if you haven't done that already) and generate the commands like this:
ffmpeg -ss 00:01:10 -i input -frames:v 1 out1.jpg
ffmpeg -ss 00:02:05 -i input -frames:v 1 out2.jpg
ffmpeg -ss 00:03:20 -i input -frames:v 1 out3.jpg
Note that exporting JPG might lead to low quality. Using PNG is preferred; you will get lossless frames that you can process with another program later (e.g. to resize or compress).
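If you have more than a handful of seek points, a small bash loop can generate these calls. This is just a sketch, with the input name and timestamps as placeholders, writing PNG as suggested above:

#!/bin/bash
# Grab one frame per seek point from the same local input file.
input="input.mp4"                             # placeholder file name
timestamps=("00:01:10" "00:02:05" "00:03:20") # placeholder seek points
i=1
for ts in "${timestamps[@]}"; do
    ffmpeg -ss "$ts" -i "$input" -frames:v 1 "out$i.png"
    i=$((i+1))
done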
If you want to get frames from regular intervals, use the fps filter to drop the framerate:
ffmpeg -i input -filter:v fps=1/60 out%02d.jpg
This will output a frame every minute (1/60 frames per second = 1 frame per minute), with two zero-padded digits in the output numbers. You can additionally offset the start by providing a -ss option before the input file.
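For example (the 30-second offset here is arbitrary):
ffmpeg -ss 00:00:30 -i input -filter:v fps=1/60 out%02d.jpg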
I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files at specific points to the generated video.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20-second silent WAV file. After that, you can mix the silent file with other audio files:
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file we can use FFmpeg.
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
One approach I can think of is to create the audio track for the whole duration of the video first, and then mux the audio with the video file.
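A minimal sketch of that mux step, assuming the full-length audio track has already been built as out.wav (file names here are placeholders; -c:v copy avoids re-encoding the video and -shortest trims the output to the shorter input):
ffmpeg -i video.mpg -i out.wav -c:v copy -c:a mp2 -shortest muxed.mpg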
I want to take a bunch of images and make a video slideshow out of them. There'll be an app for that, right? Yup, quite a few, it seems. The problem is that I want the slides synced to a piece of music, and all the apps I've seen only allow you to show each slide for a multiple of a whole second. I want them to show for multiples of 1.714285714 seconds to fit with 140 bpm.
The tools I've seen generally seem to have ffmpeg under the hood, so presumably this kind of thing could be done with a script. But ffmpeg has sooo many options... I'm hoping someone has something close.
I'll have up to about 100 slides; the ones that have to show for 3.428571428 secs or whatever, I guess I can simply show twice.
For very recent versions of ffmpeg (roughly from the end of 2013)
The following will create a video slideshow (encoded with libx264, or as WebM) from all the png images in the current directory. The command accepts image names numbered and ordered in a series (img001.png, img002.png, img003.png) as well as a random bunch of images.
(each image will have a duration of 5 seconds)
ffmpeg -r 1/5 -pattern_type glob -i '*.png' -c:v libx264 out.mp4 # x264 video
ffmpeg -r 1/5 -pattern_type glob -i '*.png' out.webm # WebM video
For older versions of ffmpeg
This will create a video slideshow (encoded with libx264, or as WebM) from a series of png images named img001.png, img002.png, img003.png, …
(each image will have a duration of 5 seconds)
ffmpeg -f image2 -r 1/5 -i img%03d.png -vcodec libx264 out.mp4 # x264 video
ffmpeg -f image2 -r 1/5 -i img%03d.png out.webm # WebM video
You may have to slightly modify the following commands if you have a very recent version of ffmpeg
This will create a slideshow in which each image has a duration of 15 seconds:
ffmpeg -f image2 -r 1/15 -i img%03d.png out.webm
If you want to create a video out of just one image, this will do (output video duration is set to 30 seconds):
ffmpeg -loop 1 -f image2 -i img.png -t 30 out.webm
If you don't have images numbered and ordered in a series (img001.jpg, img002.jpg, img003.jpg) but rather a random bunch of images, you might try this:
cat *.jpg | ffmpeg -f image2pipe -r 1 -vcodec mjpeg -i - out.webm
or for png images:
cat *.png | ffmpeg -f image2pipe -r 1 -vcodec png -i - out.webm
That will read all the jpg/png images in the current directory and write them, one by one, through the pipe to ffmpeg's input, producing the video from them.
Important: All images in a series need to be of the same size (x and y dimensions) and format.
Explanation: By telling FFmpeg to set the input frame rate (frames per second) to a very low value, we make FFmpeg duplicate frames at the output, and thus each image is displayed on screen for some time. As you have seen, you can set any fraction as the frame rate; 140 beats per minute would be -r 140/60.
Source: The FFmpeg wiki
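Applied to the 140 bpm case from the question above (glob input as in the earlier commands), one image per beat would be:
ffmpeg -r 140/60 -pattern_type glob -i '*.png' -c:v libx264 out.mp4
For one image every four beats (the 1.714285714 seconds asked about), the rate works out to 35 images per minute, i.e. -r 35/60.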
For creating images from a video use
ffmpeg -i video.mp4 img%03d.png
This will create images named img001.png, img002.png, img003.png, …
You can extract images from a video, or create a video from many images.
For extracting images from a video:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and output them in files named 'foo-001.jpeg', 'foo-002.jpeg', etc. Images will be rescaled to fit the new WxH values. If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option, or in combination with -ss to start extracting from a certain point in time.
For creating a video from many images:
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
The syntax foo-%03d.jpeg specifies a decimal number composed of three digits padded with zeroes to express the sequence number. It is the same syntax supported by the C printf function, but only formats accepting a normal integer are suitable.
This is an excerpt from the documentation, for more info check on the documentation page of ffmpeg.
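For instance, combining -ss and -vframes as the excerpt suggests (the values here are illustrative), the following would extract five frames, one per second, starting 10 seconds in:
ffmpeg -ss 00:00:10 -i foo.avi -r 1 -vframes 5 -f image2 foo-%03d.jpeg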
I wound up using this:
mencoder "mf://html/*.png" -ovc x264 -mf fps=1.16666667 -o output.avi
and changing the sample rate afterwards in LiVES.
A load more details (and the end-result video) at: http://hyperdata.org/hackit/ (mirror)
I need to create a movie/stream with ffmpeg from a HTTP url that points to an image. This image gets updated 1 time per second.
I already know how to convert from MPEG-4 to flv, for example, using the ffmpeg command line, but now I need to start from this still image that gets updated. I would like ffmpeg to GET the URL once per second, for example.
regards,
Wim
The command line option needed is -loop_input. I am currently using this command line to do it:
ffmpeg -loop_input -analyzeduration 0 -r 3 -i http://ipaddress/current.jpg -an -re -copyts -f flv output.flv
The -loop_input option instructs ffmpeg to read the jpg at the given URL at the input frame rate (3 fps in this example). -analyzeduration 0 gives a quicker startup, showing the first frame of your movie faster. The output can be anything; in this example it is a Flash movie, but it can be anything ffmpeg supports.
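Note that -loop_input has since been deprecated and removed in newer FFmpeg releases; on those builds the image demuxer's -loop input option is the usual replacement. A hedged sketch (untested; whether each iteration re-fetches the URL may depend on your build and on protocol caching):
ffmpeg -loop 1 -analyzeduration 0 -r 3 -i http://ipaddress/current.jpg -an -re -copyts -f flv output.flv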
Does anyone know the trick?
And how do I install ffmpeg? yum install mpeg only returns this:
=== Matched: mpeg ===
libiec61883.i386 : Streaming library for IEEE1394
libiec61883.x86_64 : Streaming library for IEEE1394
qffmpeg-devel.i386 : Development package for qffmpeg
qffmpeg-devel.x86_64 : Development package for qffmpeg
qffmpeg-libs.i386 : Libraries for qffmpeg
qffmpeg-libs.x86_64 : Libraries for qffmpeg
I've cobbled together this command line from various answers, and it works great for me to get the absolute first frame out of a video. I use it to save a thumbnail screenshot for the video.
ffmpeg -i inputfile.mkv -vf "select=eq(n\,0)" -q:v 3 output_image.jpg
Explanation:
The select filter -vf "select=eq(n\,0)" is to select only frame #0.
-q:v allows you to set the quality of the output jpeg between 1 and 31. The lower the number, the higher the quality. 2-5 works well; I use 3.
Note: This will get you an image the same size as the video. To get a thumbnail, chain the scale filter after select (a second -vf option would override the first) to fit whatever width you need, like so:
ffmpeg -i inputfile.mkv -vf "select=eq(n\,0),scale=320:-2" -q:v 3 output_image.jpg
The above command will give you a jpeg thumbnail scaled to a width of 320, with the height calculated to match the aspect ratio.
It's on the manpage:
You can extract images from a video, or create a video from many images.
For extracting images from a video:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and output them in files named foo-001.jpeg, foo-002.jpeg, etc. Images will be rescaled to fit the new WxH values.
If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option, or in combination with -ss to start extracting from a certain point in time.
But of course you have to install it first. I'm on Debian and don't use yum.
[update for the other question]
i=1
for avi in *.avi; do
ffmpeg -i "$avi" -vframes 1 -f image2 "/tmp/$i.jpg"; i=$((i+1))
done
Tested and works.
[update for yet another question...]
for flv in *.flv; do
ffmpeg -i "$flv" -vframes 1 -f image2 "${flv%%.flv}.jpg"
done
An easy to grok solution that works for me is
ffmpeg -i <input> -vframes 1 <output>.jpeg
Note that I do get the error "[swscaler @ 0x111652000] deprecated pixel format used, make sure you did set range correctly", but according to a little reading (see for example https://stackoverflow.com/a/43038480/1241736) it can safely be ignored.
It works for me:
ffmpeg -i sample-mp4-file.mp4 -ss 1 -vframes 1 output.jpg