Generate a movie with ffmpeg from a changing still image url? - ffmpeg

I need to create a movie/stream with ffmpeg from an HTTP URL that points to an image. This image gets updated once per second.
I already know how to convert from MPEG-4 to flv, for example, using the ffmpeg command line, but now I need to start from this still image that gets updated. I would like ffmpeg to 'GET' the URL once per second, for example.
regards,
Wim

The command line option needed is -loop_input. I am currently using this command line to do it:
ffmpeg -loop_input -analyzeduration 0 -r 3 -i http://ipaddress/current.jpg -an -re -copyts -f flv output.flv
The -loop_input option instructs ffmpeg to read the jpg at the given URL at the input frame rate (3 fps in this example). The -analyzeduration 0 gives a quicker startup and shows the first frame of your movie faster. The output can be anything ffmpeg supports; in this example it is a Flash movie.
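Note for newer ffmpeg builds: -loop_input has since been removed, and the equivalent is the image2 demuxer's -loop input option. A rough, untested modern equivalent of the command above (keeping the question's placeholder URL):
ffmpeg -loop 1 -analyzeduration 0 -framerate 3 -re -i http://ipaddress/current.jpg -an -f flv output.flv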

Related

Using ffmpeg, jpg to mp4 to mpegts, play with HLS M3U8, only first TS file plays - why?

Before posting I searched and found similar questions on Stack Overflow (I list some below) - none helped me towards a solution, hence this post. The duration that each image is shown within the movie file differs from many posts that I have seen so far.
A camera captures 1 image every 30 seconds. I need to stream them, preferably via HLS, so I wrap 2 images in an MP4. I then convert the MP4 to mpegts. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files that I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 with AMI2 Linux.
I have four jpgs named image11.jpg, image12.jpg, image21.jpg, image22.jpg - the images look near identical, as only the timestamp in the top left changes.
The following command creates 1.mp4, using image11.jpg and image12.jpg, each image displayed for 30 seconds; the total duration of the mp4 is 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps for image21.jpg and image22.jpg, creating 2.mp4 and 2.ts (the input pattern and file names change accordingly):
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus now I have 1.mp4, 1.ts, 2.mp4, 2.ts and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time values in the ffprobe -show_streams output; my guess is that they all start at zero/near zero, which will cause playback to fail after your first segment.
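For example, you can compare the two segments with something like this (a sketch; file names are from the question):
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 1.ts
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 2.ts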
You can still produce them independently, but you will want to use something like -output_ts_offset to set correct timestamps for subsequent segments.
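Applied to the commands from the question, that could look like the following untested sketch (assuming each segment is exactly 60 seconds, so segment N starts at N*60):
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -output_ts_offset 60 -f mpegts 2.ts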
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg will loop continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image on top of the existing test.jpg, which in effect changes what is streamed out.
Note: the command below is all one line; I have added line breaks to assist reading. The order of the parameters is important - -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e. play in real time
-loop 1 (1 turns the loop on, 0 off)
-fflags +genpts is something I only half understand. PTS is the presentation timestamp of each frame, and without this flag the PTS resets to zero with every new loop of the image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, it overwrites this image (see the sketch after this list). When combined with -loop it means the ffmpeg output will pick up the new image.
-c:v libx264 selects H.264 encoding for the video output
-vf fps=25 - removing this, or using a different value, resulted in my output stream not keeping the 30-second-per-image timing.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe JPEGs can use different pixel formats and colour ranges, and this switch ensures I can process a wider choice.
-crf 30 sets the x264 constant rate factor (the scale runs 0-51; lower values mean higher quality and less compression - important for my client)
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic to go with -loop. I believe it keeps the connection open with my stream server and reduces the risk of DISCONTINUITY in the playlist.
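For completeness, the 'separate script' that swaps images could be as simple as the sketch below. This is my own assumption, not the poster's actual script, and the /incoming path is hypothetical. Writing to a temporary file and renaming it keeps the replacement atomic, so ffmpeg never reads a half-written test.jpg:
#!/bin/bash
# hypothetical updater: copy the newest camera image over the placeholder
while true; do
  cp /incoming/latest.jpg test.jpg.tmp   # /incoming/latest.jpg is an assumed source path
  mv test.jpg.tmp test.jpg               # rename is atomic on the same filesystem
  sleep 30                               # camera posts a new image every 30 seconds
done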
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide

How to force ffmpeg to refresh overlay image more often?

I am trying to do sports live-streaming using ffmpeg. The score of a streaming match is fetched from a server and converted to a PNG. This PNG must appear on top of the video.
ffmpeg allows you to put an overlay over a video stream using the image2 demuxer. If I use -loop 1, this overlay updates approximately every 5 seconds. How can I force ffmpeg to read it from disk more often?
My current attempt, with the overlay updating once every 5 seconds (mp4 video for testing purposes):
nice -n -19 ffmpeg \
-re -y \
-i s.mp4 \
-f image2 -loop 1 -i http://127.0.0.1:3000/img \
-filter_complex "[0:v][1:v]overlay" \
-threads 4 \
-v 0 -f mpegts -preset ultrafast udp://127.0.0.1:23000 \
&
P.S.
I know that I can make a YouTube streaming widget on the website and put the score on top of it just using HTML/CSS/JS. But unfortunately it must be done directly in the video stream.
P.P.S.
I know that I can use ffmpeg drawtext. But it is not what I want. I have a specially designed PNG, which must be updated as frequently as possible (once every 1-2 seconds would be just great).
Three things:
1) -re is applied per input, so ffmpeg is currently reading your image at a rate asynchronous with respect to the video. Since the video is being read in real time, the image reader queues the packets of the looped image until the filtergraph can consume them. So the updated image will be consumed much later, and with a greater timestamp assigned than when it was actually updated. Add -re before the image -i to correct this.
2) Skip -loop 1 and use -stream_loop -1 since the image2 demuxer can abort if the input is blocked or empty (due to update) when it's trying to read it. Although, since the input is read via a network protocol, this may not be an issue for you.
3) You've specified no encoder in the output options. Since the format is MPEG-TS, ffmpeg will choose mpeg2video with a default bitrate of 200 kbps. The ultrafast preset does not apply to this encoder. You probably want to add -c:v libx264.
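Putting the three fixes together, the adjusted command might look like this untested sketch (addresses and file names are from the question):
ffmpeg \
-re -y -i s.mp4 \
-re -f image2 -stream_loop -1 -i http://127.0.0.1:3000/img \
-filter_complex "[0:v][1:v]overlay" \
-c:v libx264 -preset ultrafast \
-threads 4 \
-v 0 -f mpegts udp://127.0.0.1:23000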
I have found that increasing the framerate value of image2 to 90-100 makes the file-reading process faster, but the audio becomes throttled

How to capture multiple screenshot from online video stream using ffmpeg with specific seek time

I'm using ffmpeg to take screenshots from an online video stream. I want to seek to multiple points on the timeline. I've used the following command to capture 1 screenshot with a seek:
ffmpeg -ss 00:02:10 -i "stream-url" -frames:v 1 out1.jpg
How can I take multiple screenshots via multiple seek times? I've searched for a solution but without success.
I've used the following command to take multiple screenshots:
ffmpeg -noaccurate_seek -ss 00:01:10 -i "stream-url" -map 0:v:0 -vframes 1 -f mpeg "thumb/output_01.jpg" -ss 00:02:10 -i "stream-url" -map 1:v:0 -vframes 1 -f mpeg "thumb/output_02.jpg"
Is there any way to generate screenshots from the same input via seek commands? How can I make it faster? How can I avoid multiple inputs (the -i params)? I've also tried other commands, but those are slower. Can anyone help me?
There's no easy way I know of to specify a number of arbitrary seek points from which to extract frames (similar question here).
However, seeking is very fast the way you specified it. Instead of constructing a complex command, you could just download the YouTube video using youtube-dl (if you haven't done that already) and generate the commands like this:
ffmpeg -ss 00:01:10 -i input -frames:v 1 out1.jpg
ffmpeg -ss 00:02:05 -i input -frames:v 1 out2.jpg
ffmpeg -ss 00:03:20 -i input -frames:v 1 out3.jpg
Note that exporting JPG might lead to low quality. Using PNG is preferred; you will get lossless frames that you can handle with another program later (e.g. to resize or compress).
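If the seek points live in a script, the same pattern can be generated with a loop; here is a minimal bash sketch (the timestamps are just the ones from the example above):
for ts in 00:01:10 00:02:05 00:03:20; do
  ffmpeg -ss "$ts" -i input -frames:v 1 "out_${ts//:/-}.png"
done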
If you want to get frames from regular intervals, use the fps filter to drop the framerate:
ffmpeg -i input -filter:v fps=1/60 out%02d.jpg
This will output a frame every minute (1/60 frames per second = 1 frame per minute), with two zero-padded digits as output numbers. You could additionally offset the start by providing a -ss option before the input file.
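For instance, to start a minute in and then grab one frame per minute (a sketch using the same input placeholder):
ffmpeg -ss 00:01:00 -i input -filter:v fps=1/60 out%02d.jpg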

ffmpeg capture current frame and overwrite the image output file

I am trying to extract an image file from an RTSP stream URL every second (could also be every 1 min) and overwrite this image file.
My code below works, but it outputs multiple image jpg files: img1.jpg, img2.jpg, img3.jpg...
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -r 1 img%01d.jpg
How can I use ffmpeg, or perhaps bash scripts on Linux, to overwrite the same image file while continuously extracting the image at a NOT high frequency, say every 1 min or 10 sec?
To elaborate a bit on the already accepted answer from pragnesh,
FFmpeg
As stated in the ffmpeg documentation:
ffmpeg command line options are specified as
ffmpeg [global_options] {[input_options] -i input_file} ... {[output_options] output_file} ...
So
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 -update 1 img.jpg
This uses the output option -f image2 (force output format to the image2 format) as part of the muxer stage.
Note that in ffmpeg, if the output file name specifies an image format the image2 muxer will be used by default, so the command could be shortened to:
ffmpeg -i rtsp://<rtsp_source_addr> -update 1 img.jpg
The image2 muxer expects a filename pattern, such as img%01d.jpg, to produce a sequentially numbered series of files. If the update option is set to 1, the filename is interpreted as just a filename, not a pattern, thereby overwriting the same file.
Using the -r (set frame rate) video option works, but it generated a whole lot of frame-dropping messages, which was bugging me.
Thanks to another answer on the same topic, I found the fps Video Filter to do a better job.
So my version of the working command is
ffmpeg -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
For some reason still unknown to me, the minimum frame rate I can achieve from my feed is 1/20, or 0.05.
There also exists the video filter thumbnail, which selects an image from a series of frames, but this is more processing-intensive and therefore I would not recommend it.
Most of this and more I found on the FFMpeg Online Documentation
AVconv
For those of you who use avconv it is very similar. They are after all forks of what was once a common library. The AVconv image2 documentation is found here.
avconv -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
As Xianlin pointed out, there are a couple of other interesting options to use:
-an : disables audio recording (found in the Audio Options section)
-r <fps> : sets the frame rate (found in the Video Options section); used as an output option it is actually a substitute for the fps filter,
leading to an alternate version:
avconv -i rtsp://<rtsp_source_addr> -r 1/20 -an -update 1 img.jpg
Hope this helps with understanding and possible further tweaking ;)
The following command line should work for you.
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -update 1 img.jpg
I couldn't get the -update option working to overwrite the .jpg. Some experimenting resulted in a working solution (at least for me) with the option -y at the end (upper-case does not work). I also needed http:// instead of rtsp:// for this camera.
ffmpeg -i http://xx:yy@192.168.1.xx:yyy/snapshot.cgi /tmp/Capture2.jpg -y
Grab a snapshot from an RTSP video stream every 10 seconds.
#!/bin/bash
#fetch-snapshots.sh
url='rtsp://IP_ADDRESS/live.sdp'
avconv -i "$url" -r 0.1 -vsync 1 -qscale 1 -f image2 images%09d.jpg
-r sets the frame rate to 0.1 frames a second (this equals 1 frame every 10 seconds).
Thanks to westonruter, see https://gist.github.com/westonruter/4508842
Furthermore have a look at FFMPEG: Extracting 20 images from a video of variable length
ffmpeg -i rtsp://root:password@192.168.1.1/mpeg4 -ss 00:00:01 -f image2 -vframes 1 thumb.jpg
Replace the rtsp URL with your own. Make sure to use -ss 00:00:01; with other numbers the image may come out corrupted.

How to extract the 1st frame and restore as an image with ffmpeg?

Anyone know the trick?
And how do I install ffmpeg? yum install mpeg only returns this:
======================================================================================== Matched: mpeg ========================================================================================
libiec61883.i386 : Streaming library for IEEE1394
libiec61883.x86_64 : Streaming library for IEEE1394
qffmpeg-devel.i386 : Development package for qffmpeg
qffmpeg-devel.x86_64 : Development package for qffmpeg
qffmpeg-libs.i386 : Libraries for qffmpeg
qffmpeg-libs.x86_64 : Libraries for qffmpeg
I've cobbled together this command line from various answers, and it works great for me to get the very first frame out of a video. I use this to save a thumbnail screenshot for the video.
ffmpeg -i inputfile.mkv -vf "select=eq(n\,0)" -q:v 3 output_image.jpg
Explanation:
The select filter -vf "select=eq(n\,0)" selects only frame #0.
-q:v lets you set the quality of the output jpeg, between 1 and 31. The lower the number, the higher the quality. 2-5 works well; I use 3.
Note: This will get you an image with the same size as the video. To get a thumbnail, you can add the scale filter to fit whatever width you need. The two filters must be combined into a single -vf option (a second -vf would override the first), like so:
ffmpeg -i inputfile.mkv -vf "select=eq(n\,0),scale=320:-2" -q:v 3 output_image.jpg
The above command will give you a thumbnail jpeg scaled to a width of 320, with the height calculated to match the aspect ratio.
It's on the manpage:
* You can extract images from a video, or create a video from many
images:
For extracting images from a video:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will
output them in files named foo-001.jpeg, foo-002.jpeg, etc. Images
will be rescaled to fit the new WxH values.
If you want to extract just a limited number of frames, you can use
the above command in combination with the -vframes or -t option, or in
combination with -ss to start extracting from a certain point in time.
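For example, a sketch that starts one minute in and grabs five frames, building on the manpage command above:
ffmpeg -ss 00:01:00 -i foo.avi -r 1 -vframes 5 -f image2 foo-%03d.jpeg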
But of course you have to install it first. I'm on Debian and don't use yum.
[update for the other question]
i=1
for avi in *.avi; do
    ffmpeg -i "$avi" -vframes 1 -f image2 "/tmp/$i.jpg"; i=$((i+1))
done
Tested and works.
[update for yet another question...]
for flv in *.flv; do
    ffmpeg -i "$flv" -vframes 1 -f image2 "${flv%%.flv}.jpg"
done
An easy-to-grok solution that works for me is
ffmpeg -i <input> -vframes 1 <output>.jpeg
Note that I do get an error "[swscaler @ 0x111652000] deprecated pixel format used, make sure you did set range correctly", but according to a little reading (see for example https://stackoverflow.com/a/43038480/1241736) it can safely be ignored.
It works for me:
ffmpeg -i sample-mp4-file.mp4 -ss 1 -vframes 1 output.jpg
