I have seen several other related questions, but they all seem to be about grabbing a still shot every X seconds. How can I grab one image when the command is run?
I was trying:
ffmpeg -y -i rtsp://admin:admin@192.168.10.113:554/live -f image2 -updatefirst 1 do.jpg
Try
ffmpeg -y -i rtsp://admin:admin@192.168.10.113:554/live -vframes 1 do.jpg
I've been using variations of this to have my Ubiquiti cameras produce a JPG for Weather Underground.
The tcp transport addition fixed everything. The modified command follows.
$FFMPEG -y -loglevel fatal -rtsp_transport tcp -i $URL1 -frames:v 2 -r 1 -s 320x240 $TMPFILE
My take on this command, though it's not perfect: about 20% of the time I get a corrupted (as in incomplete, or glitchy) image over a bad link:
avconv -rtsp_transport tcp -y -i rtsp://user:pass@192.168.0.1:554/live -vframes 1 do.jpg
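One way to paper over a flaky link is to retry until a plausible image arrives. A rough sketch (Linux; the URL, output name, and the 1 KB sanity threshold are placeholders):
#!/bin/bash
# Retry the grab up to 5 times; treat a missing or tiny file as a failed attempt.
URL='rtsp://user:pass@192.168.0.1:554/live'
OUT='do.jpg'
for attempt in 1 2 3 4 5; do
    avconv -rtsp_transport tcp -y -loglevel error -i "$URL" -vframes 1 "$OUT" \
        && [ "$(stat -c%s "$OUT" 2>/dev/null || echo 0)" -gt 1024 ] && break
    echo "attempt $attempt failed, retrying..." >&2
    sleep 1
done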
First, download the ffmpeg archive to your computer and unzip it.
Second, open Windows Terminal, PowerShell, or CMD in the unzipped folder, enter the bin directory, and run the following command:
.\ffmpeg -i rtsp://username:password@192.168.1.1:554/media/video0 -ss 1 -f image2 C:\Users\Desktop\1.jpg
You can also use a "proxy" app like https://github.com/gallofeliz/snapshot-proxy-cam that handles fallbacks and centralizes your cams.
I have an ffmpeg version built with VMAF library. I can use it to calculate the VMAF scores of a distorted video against a reference video using commands like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing regular ffmpeg encoding. How can I do that at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
Alright, scratch what I said earlier...
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -f tee "out.mp4 | [f=mp4]-" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(on Windows, join this into a single line and remove the `\`)
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: |
ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output. Also, raw h264 does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
Testing with the testsrc example that @Rotem suggested in the comment below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see the filter getting initialized.
You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that frames are piped in faster than the second ffmpeg can process them. FFmpeg blocks on both ends, so no info should be lost.
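If the warning bothers you, the message itself points at the remedy: raise -thread_queue_size on the piped input of the second ffmpeg. A sketch (1024 is an arbitrary illustrative value):
ffmpeg -i in.mp4 -f nut -thread_queue_size 1024 -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4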
For full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't do Python.
I've got ffmpeg to read some RTSP stream and output image2 format to stdout like so:
ffmpeg -rtsp_transport tcp -i "rtsp:xxxxx" -f image2 -update 1 -
But stdout is not good enough for me. I am trying to "push" it to some other process that I cannot pipe to ffmpeg due to some architecture constraints. I am running on Linux, so I was hoping to simulate a tcp/udp socket via the file system, e.g. /dev/something or similar. Alternatively, maybe it's possible to get ffmpeg to send the image directly to a given tcp/udp address? This didn't work, though (ffmpeg expects a file output):
ffmpeg -rtsp_transport tcp -i "rtsp:xxxxx" -f image2 -update 1 "udp://localhost:3333"
Any ideas?
Thanks
The normal image2 muxer expects to write to one or more image files. Use the image2pipe muxer.
ffmpeg -rtsp_transport tcp -i "rtsp:xxxxx" -f image2pipe "udp://localhost:3333"
(-update has no relevance when piping).
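If UDP turns out to be awkward, the question's "/dev/something" idea also works: ffmpeg will happily write the image2pipe stream to a named pipe (FIFO), which the other process can open like a regular file. A minimal sketch (paths and the consumer name are placeholders):
mkfifo /tmp/frames.pipe
# note: ffmpeg blocks until a reader opens the FIFO
ffmpeg -rtsp_transport tcp -i "rtsp:xxxxx" -f image2pipe /tmp/frames.pipe &
# the consumer just reads the path; no shell pipe to ffmpeg is needed
your_consumer < /tmp/frames.pipe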
Using FFmpeg, I'm creating a poster image from a video and adding a watermark/overlay to the poster. The following works great with small video files, but destroys my CPU with 1080p files.
ffmpeg -ss 15 -i preview.mp4 -i play-button.png \
-filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2,scale='min(640,iw)':-1" \
-vframes 1 poster.jpg
Is there any way to speed this up? Or should I look to another solution for the overlay?
My solution is similar to yours, but I use -s to set the output resolution of the image and -f image2 for rendering. This command works fine for me:
ffmpeg -ss 15 -i preview.mp4 -i play-button.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -vframes 1 -s 640x360 -f image2 -y poster.jpg
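If you'd rather keep the source aspect ratio than force 640x360, chaining a scale filter after the overlay (as in the question) should also work. A sketch, where -2 rounds the height to an even value for encoder friendliness:
ffmpeg -ss 15 -i preview.mp4 -i play-button.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2,scale='min(640,iw)':-2" -vframes 1 -f image2 -y poster.jpg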
I am trying to extract an image from an RTSP stream URL every second (could be every minute as well) and overwrite that image file.
My code below works, but it outputs multiple image files: img1.jpg, img2.jpg, img3.jpg...
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -r 1 img%01d.jpg
How can I use ffmpeg, or perhaps a bash script on Linux, to overwrite the same image file while continuously extracting images at a low frequency, say every 1 min or 10 sec?
To elaborate a bit on the already accepted answer from pragnesh:
FFmpeg
As stated in the ffmpeg documentation:
ffmpeg command line options are specified as
ffmpeg [global_options] {[input_options] -i input_file} ... {[output_options] output_file} ...
So
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 -update 1 img.jpg
Uses the output option -f image2 (force the output format to image2) as part of the muxer stage.
Note that in ffmpeg, if the output file name specifies an image format, the image2 muxer will be used by default, so the command could be shortened to:
ffmpeg -i rtsp://<rtsp_source_addr> -update 1 img.jpg
The image2 format muxer expects a filename pattern, such as img%01d.jpg to produce a sequentially numbered series of files. If the update option is set to 1, the filename will be interpreted as just a filename, not a pattern, thereby overwriting the same file.
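To make the contrast concrete (using the same placeholder address):
# pattern form: writes a numbered series img1.jpg, img2.jpg, img3.jpg, ...
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 img%01d.jpg
# update form: treats the name literally and overwrites img.jpg with each new frame
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 -update 1 img.jpg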
Using the -r (set frame rate) video option works, but it generated a whole lot of frame-dropping messages, which was bugging me.
Thanks to another answer on the same topic, I found the fps Video Filter to do a better job.
So my version of the working command is
ffmpeg -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
For some reason still unknown to me, the minimum frame rate I can achieve from my feed is 1/20, or 0.05.
There is also the thumbnail video filter, which selects an image from a series of frames, but it is more processing-intensive and therefore I would not recommend it.
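For reference, a sketch of what that would look like (n=100, i.e. pick the most representative frame out of each batch of 100, is an illustrative value):
ffmpeg -i rtsp://<rtsp_source_addr> -vf thumbnail=100 -frames:v 1 img.jpg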
Most of this, and more, I found in the FFmpeg online documentation.
AVconv
For those of you who use avconv, it is very similar; they are, after all, forks of what was once a common codebase. The avconv image2 documentation is found here.
avconv -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
As Xianlin pointed out, there may be a couple of other interesting options to use:
-an : disables audio recording (found in the Audio Options section)
-r <fps> : sets the frame rate (found in the Video Options section); used as an output option, it is actually a substitute for the fps filter,
leading to an alternate version:
avconv -i rtsp://<rtsp_source_addr> -r 1/20 -an -update 1 img.jpg
Hope this helps with understanding and possible further tweaking ;)
The following command line should work for you (on current ffmpeg builds the option is -update, as described in the answer above):
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -updatefirst 1 img.jpg
I couldn't get the -update option to overwrite the .jpg. Some experimenting led to a working solution (at least for me) with the option -y at the end (upper case does not work). I also needed http:// instead of rtsp:// for this camera.
ffmpeg -i http://xx:yy@192.168.1.xx:yyy/snapshot.cgi /tmp/Capture2.jpg -y
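If neither -update nor -y cooperates on your build, a plain shell loop gives the same overwrite behavior. A sketch, reusing the placeholders above, with the interval as an example value:
#!/bin/bash
# grab one frame per minute, overwriting the same file each time
while true; do
    ffmpeg -y -loglevel error -i 'http://xx:yy@192.168.1.xx:yyy/snapshot.cgi' -vframes 1 /tmp/Capture2.jpg
    sleep 60
done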
Grab a snapshot from an RTSP video stream every 10 seconds.
#!/bin/bash
#fetch-snapshots.sh
url='rtsp://IP_ADDRESS/live.sdp'
avconv -i $url -r 0.1 -vsync 1 -qscale 1 -f image2 images%09d.jpg
-r rate : set the frame rate to 0.1 frames per second (this equals 1 frame every 10 seconds).
Thanks to westonruter, see https://gist.github.com/westonruter/4508842
Furthermore have a look at FFMPEG: Extracting 20 images from a video of variable length
ffmpeg -i rtsp://root:password@192.168.1.1/mpeg4 -ss 00:00:01 -f image2 -vframes 1 thumb.jpg
Replace the URL with your RTSP URL, and make sure to keep -ss at 00:00:01; if you put in other numbers, the image comes out corrupted.
I'm using this shell command to make a thumbnail from VIDEO_FILE at second 123 and save it to THUMBNAIL_FILE.
ffmpeg -i VIDEO_FILE -r 1 -ss 123 -f image2 THUMBNAIL_FILE
It works, but it is really slow for big movies. Is there any way to make it a little faster?
This has happened to me also; changing the argument order fixes the problem.
Tested on a 1.4 GB, 90-minute mp4 video: it took about 1-2 seconds. Before that it took MINUTES...
Try this:
ffmpeg -ss 123 -i "VIDEO_FILE" -r 1 -vframes 1 -an -vcodec mjpeg "THUMBNAIL_FILE"
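The reason for the speedup: with -ss placed after -i it acts as an output option, so ffmpeg decodes and discards every frame up to second 123; placed before -i it acts as an input option and seeks the demuxer to the nearest keyframe first. You can verify the difference yourself (illustrative output names):
# slow: output seeking decodes everything up to the timestamp
time ffmpeg -y -i "VIDEO_FILE" -ss 123 -vframes 1 slow.jpg
# fast: input seeking jumps straight to the nearest keyframe
time ffmpeg -y -ss 123 -i "VIDEO_FILE" -vframes 1 fast.jpg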
From what I've investigated, ffmpeg is not really good at creating thumbnails. People recommend using mplayer (which builds on FFmpeg's libraries):
mplayer VIDEO_FILE -ss 00:10:11 -frames 1 -vo jpeg:outdir=THUMBNAILS_DIRECTORY
A small enhancement to Kirzilla's code: if you want to create PNG files (with compression), you can use the following:
mplayer VIDEO_FILE -ss 00:10:11 -frames 1 -vo png:z=9:outdir=THUMBNAILS_DIRECTORY
This will probably create better thumbnails but of course with a larger size than JPEG.