I'd like to grab the last frame in a video (.mpg, .avi, whatever) and dump it into an image file (.jpg, .png, whatever). Toolchain is a modern Linux command-line, so things like mencoder, transcode, ffmpeg &c.
Cheers,
Bob.
This isn't a complete solution, but it'll point you along the right path.
Use ffprobe -show_streams IN.AVI to get the number of frames in the video input. Then
ffmpeg -i IN.AVI -vf "select='eq(n,LAST_FRAME_INDEX)'" -vframes 1 LAST_FRAME.PNG
where LAST_FRAME_INDEX is the number of frames less one (frames are zero-indexed), will output the last frame.
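For example, a minimal sketch of the two steps combined (assuming the stream header actually reports nb_frames; the file names follow the example above):
# Read the frame count, then request frame count minus one (frames are zero-indexed).
frames=$(ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 IN.AVI)
ffmpeg -i IN.AVI -vf "select='eq(n,$((frames-1)))'" -vframes 1 LAST_FRAME.PNG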
I have an mp4 / h264 Matroska input video file, and none of the above solutions worked for me (although I'm sure they work for other file formats).
Combining samrad's answer above with this great answer, I came up with this working code:
input_fn='output.mp4'
image_fn='output.png'
rm -f "$image_fn"
frame_count=$(ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 "$input_fn")
ffmpeg -i "$input_fn" -vf "select='eq(n,$frame_count-1)'" -vframes 1 "$image_fn" 2> /dev/null
I couldn't get Nelson's solution to work. This worked for me.
https://gist.github.com/samelie/32ecbdd99e07b9d8806f
EDIT (just in case the link disappears, here is the shellscript—bobbogo):
#!/bin/bash
fn="$1"
of=$(echo "$fn" | sed 's/mp4$/jpg/')
lf=$(ffprobe -show_streams "$fn" 2> /dev/null | grep nb_frames | head -1 | cut -d = -f 2)
rm -f "$of"
let "lf = $lf - 1"
ffmpeg -i "$fn" -vf "select='eq(n,$lf)'" -vframes 1 "$of"
One thing I have not seen mentioned is that the expected frame count can be off if the file contains dupes. If your method of counting frames is causing your image extraction command to come back empty, this might be what is mucking it up.
I have developed a short script to work around this problem. It is posted here.
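The linked script isn't reproduced here, but as a hedged illustration of one possible workaround along the same lines (not necessarily what that script does; file names are placeholders), you can retry with a smaller index until a frame actually comes out:
#!/bin/bash
# Retry last-frame extraction with a decreasing index, in case the reported
# frame count overshoots (for example because of duplicate frames).
in=input.mp4; out=last.png
n=$(ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 "$in")
i=$((n - 1))
while [ "$i" -ge 0 ]; do
  rm -f "$out"
  ffmpeg -v error -i "$in" -vf "select='eq(n,$i)'" -vframes 1 "$out"
  [ -s "$out" ] && break   # stop as soon as a non-empty image is written
  i=$((i - 1))
done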
I found that I had to add -vsync 2 to get it to work reliably. Here's the full command I use:
ffmpeg -y -sseof -3 -i "$file" -vsync 2 -update 1 -vf scale=640:480 -q:v 1 /tmp/test.jpg
If you write to an NFS folder, the results will be very inconsistent, so I write to /tmp and then copy the file once it is done. If it is not done, I do a kill -9 on the process ID.
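For reference, a hedged sketch of that write-to-/tmp-then-copy wrapper (the input, the NFS destination and the 10-second limit are assumptions, not part of the original answer):
#!/bin/bash
file="input.mp4"                 # placeholder input
tmp=/tmp/test.jpg
dest=/mnt/nfs/last_frame.jpg     # placeholder NFS destination
# Kill ffmpeg hard if it doesn't finish in time; otherwise copy the result over.
if timeout -s KILL 10 ffmpeg -y -sseof -3 -i "$file" -vsync 2 -update 1 -vf scale=640:480 -q:v 1 "$tmp"; then
    cp "$tmp" "$dest"
fi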
I have been getting to grips with FFMPEG for the last few days...so please excuse my lack of knowledge. It's very much early days.
I need to join 3 video elements together with one of the videos becoming an overlay at a specific time.
intro.mp4
mainvideo.mp4
endboard.mp4
I need the intro.mp4 to bolt on to the front of the mainvideo.mp4 and then ideally with 20 seconds to go before the end of the mainvideo.mp4, I need the endboard.mp4 video to be bolted on to the sequence and take over the frame. When this happens, I then need the mainvideo.mp4 to be overlayed in the top left corner and continue playing seamlessly through the transition.
I also need the audio from the main video to play until the end of the video.
I currently achieve this by putting all of the video elements into Premiere and exporting them, but I know this process can be much quicker with FFMPEG. For reference, here is an example of how it looks. If you skip to near the end of the video below (just after 45 mins in), as the credits are rolling you will see the main video transition to the picture-in-picture overlay and the endboard video take over the main frame.
https://www.youtube.com/watch?v=RtgIvWxZUwM&t=2723s
There will be lots of mainvideo.mp4 files that this will be applied to individually, and the lengths of these videos will always be different. I am hoping that there is a way to have the transition to the endboard.mp4 happen relative to 20secs before the end of the files. If not I guess I would have to manually input the time I want this change over transition to happen.
I roughly understand in theory what needs to be done, but being so new to this world I am really unsure of how something this complicated would be pieced together.
If there is anyone out there that can help me , it would be greatly appreciated!
I have got my head around the process of merging videos together with a simple concat command, and I can see that overlaying a video in the top left corner of the frame is also possible... but my brain cannot figure out the sequence of events needed to bolt the intro video onto the main video, then have the main video transition into the picture-in-picture overlay at a specific time, while also bolting on the endboard video for the main video to overlay onto.
Any help for a complete newb would be so unbelievably appreciated!
Safe way (one ffmpeg run, full video reencoding)
There are different ways to do it, but I think the most straightforward one is to split mainvideo into two parts, resize the second part and overlay it onto endboard. Then concat intro, the first part of mainvideo and the PIP endboard together, and pack that with the concatenated audio from intro and mainvideo. Since the duration of the mainvideo may vary, your script should detect it to define the trim point. ffmpeg can trim from the end with a special seek option, but in that case you get an intermediate file. This approach does the whole job without intermediate files:
#!/bin/bash
mainvideo=mainvideo.mp4
tailtime=20
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 $mainvideo)
ffmpeg -hide_banner -y \
-i intro.mp4 \
-i $mainvideo \
-i endboard.mp4 \
-filter_complex \
"[0:v]setpts=PTS-STARTPTS[v_intro]; \
[0:a]asetpts=PTS-STARTPTS[a_intro]; \
[1:v]setpts=PTS-STARTPTS,split[v_main1][v_main2]; \
[1:a]asetpts=PTS-STARTPTS[a_main]; \
[2:v]setpts=PTS-STARTPTS[v_endboard]; \
[v_main1]select='gt(t,$duration-$tailtime)',scale=w=iw/2:h=ih/2,setpts=PTS-STARTPTS[v_tail]; \
[v_endboard][v_tail]overlay[v_pip]; \
[v_main2]select='lte(t,$duration-$tailtime)',setpts=PTS-STARTPTS[v_mid]; \
[v_intro][v_mid][v_pip]concat=n=3:v=1:a=0[v_out]; \
[a_intro][a_main]concat=n=2:v=0:a=1[a_out]" \
-map "[v_out]" \
-map "[a_out]" \
-r 25 \
output.mp4
Semi-safe way (a few ffmpeg runs, tail encoding only, zsh)
To avoid full video re-encoding you may use something like this:
setopt interactivecomments
# Input parameters
mainvideo=mainvideo.mp4
endboard=endboard.mp4
intro=intro.mp4
tailtime=20
# Time calculations to define cut point
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 $mainvideo)
midtime=$(echo "scale=2;$duration-$tailtime" | bc)
# Safety check
tbn_main=$(ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 $mainvideo)
tbn_main=${tbn_main#*/}
tbn_intro=$(ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 $intro)
tbn_intro=${tbn_intro#*/}
tbn_end=$(ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 $endboard)
tbn_end=${tbn_end#*/}
if [[ $(( ($tbn_intro+$tbn_main+$tbn_end)/3 )) -ne $tbn_main ]]; then
echo "WARNING: source video files have the different timebase."
echo "The use of the concat demuxer will produce incorrect output."
echo "Re-encoding is highly recommended."
read -s -k $'?Press any key to exit.\n'
exit 1
fi
# Trim the main part of mainvideo
ffmpeg -hide_banner -y -i $mainvideo -to $midtime -c copy mid.mp4
# Trim the tail of mainvideo and overlay it onto endboard
ffmpeg -hide_banner -y \
-i $mainvideo \
-i $endboard \
-filter_complex \
"[0:v]select='gt(t,$duration-$tailtime)',scale=w=iw/2:h=ih/2,setpts=PTS-STARTPTS[v_tail]; \
[0:a]aselect='gt(t,$duration-$tailtime)',asetpts=PTS-STARTPTS[a_out]; \
[1:v][v_tail]overlay=format=auto[v_out]" \
-map "[v_out]" \
-map "[a_out]" \
-video_track_timescale $tbn_main \
pip.mp4
# Pass all parts through the concat demuxer
[ -f filelist.txt ] && rm filelist.txt
for f in $intro mid.mp4 pip.mp4; do echo "file '$PWD/$f'" >> filelist.txt; done
ffmpeg -hide_banner -y -f concat -safe 0 -i filelist.txt -c copy output.mp4
# Sweep the table
rm mid.mp4 pip.mp4 filelist.txt
I've included a timebase check of the source video streams to warn about the unsuitability of the concat demuxer method. If you ignore this warning, you'll most likely get an incorrect concatenation result and a lot of ffmpeg "Non-monotonous DTS in output stream..." warnings. For the same reason I've added the video_track_timescale option to the command that generates pip.mp4.
You can use both methods (full re-encoding and partial) if you wish, using the if-then-else from the second script as a wrapper that chooses between them.
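A hedged sketch of such a wrapper, assuming the two approaches are saved as concat.sh and reencode.sh (both names are placeholders):
#!/bin/bash
# Pick the fast stream-copy path only when all three inputs share the same timebase.
tb() { ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 "$1"; }
if [ "$(tb intro.mp4)" = "$(tb mainvideo.mp4)" ] && [ "$(tb mainvideo.mp4)" = "$(tb endboard.mp4)" ]; then
    ./concat.sh      # placeholder for the semi-safe method (concat demuxer, tail-only encoding)
else
    ./reencode.sh    # placeholder for the safe method (single run, full re-encode)
fi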
I have an ffmpeg version built with the VMAF library. I can use it to calculate the VMAF scores of a distorted video against a reference video, using commands like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing regular ffmpeg encoding. How can I do that at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
[edited]
Alright, scratch what I said earlier...
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -f tee "out.mp4 | [f=mp4]-" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(on Windows, put it all on one line and remove the \)
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: | ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output. Also, the raw h264 format does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
Testing with the testsrc example that @Rotem suggested in the comment below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see that the filter is getting initialized.
You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that frames are piped in faster than the second ffmpeg can process them. FFmpeg blocks on both ends, so no info should be lost.
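If that message bothers you, one hedged tweak is to raise the queue on the piped input of the second ffmpeg (the value below is arbitrary):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: | ffmpeg -i in.mp4 -f nut -thread_queue_size 1024 -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4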
For full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't do Python.
I am trying to extract an image file from an RTSP stream URL every second (it could also be every 1 min) and overwrite that image file.
My code below works, but it outputs multiple jpg files: img1.jpg, img2.jpg, img3.jpg...
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -r 1 img%01d.jpg
How can I use ffmpeg (or perhaps a bash script in Linux) to overwrite the same image file while continuously extracting images at a NOT high frequency, say every 1 min or 10 sec?
To elaborate a bit on the already accepted answer from pragnesh,
FFmpeg
As stated in the ffmpeg documentation:
ffmpeg command line options are specified as
ffmpeg [global_options] {[input_options] -i input_file} ... {[output_options] output_file} ...
So
ffmpeg -i rtsp://<rtsp_source_addr> -f image2 -update 1 img.jpg
This uses the output option -f image2, which forces the output format to image2, as part of the muxer stage.
Note that in ffmpeg, if the output file name specifies an image format the image2 muxer will be used by default, so the command could be shortened to:
ffmpeg -i rtsp://<rtsp_source_addr> -update 1 img.jpg
The image2 format muxer expects a filename pattern, such as img%01d.jpg to produce a sequentially numbered series of files. If the update option is set to 1, the filename will be interpreted as just a filename, not a pattern, thereby overwriting the same file.
Using the -r (set frame rate) video option works, but it generated a whole lot of frame-dropping messages, which was bugging me.
Thanks to another answer on the same topic, I found the fps Video Filter to do a better job.
So my version of the working command is
ffmpeg -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
For some reason still unknown to me, the minimum frame rate I can achieve from my feed is 1/20, or 0.05.
There is also the thumbnail video filter, which selects a representative image from a series of frames, but it is more processing-intensive, so I would not recommend it.
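For completeness, a hedged example of that thumbnail approach (here picking one representative frame out of every 100 decoded frames and overwriting the same file):
ffmpeg -i rtsp://<rtsp_source_addr> -vf thumbnail=100 -update 1 img.jpg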
Most of this and more I found on the FFMpeg Online Documentation
AVconv
For those of you who use avconv it is very similar. They are after all forks of what was once a common library. The AVconv image2 documentation is found here.
avconv -i rtsp://<rtsp_source_addr> -vf fps=fps=1/20 -update 1 img.jpg
As Xianlin pointed out, there are a couple of other interesting options to use:
-an : Disables audio recording.
Found in the Audio Options Section
-r < fps > : sets frame rate
Found in the Video Options Section
Used as an output option, it is actually a substitute for the fps filter,
leading to an alternate version:
avconv -i rtsp://<rtsp_source_addr> -r 1/20 -an -update 1 img.jpg
Hope this helps with understanding and possible further tweaking ;)
The following command line should work for you.
ffmpeg -i rtsp://IP_ADDRESS/live.sdp -f image2 -updatefirst 1 img.jpg
I couldn't get the -update option working to overwrite the .jpg. Some experimenting resulted in a working solution (at least for me) with the option -y at the end (upper-case does not work). I also needed http:// instead of rtsp:// for this camera.
ffmpeg -i http://xx:yy#192.168.1.xx:yyy/snapshot.cgi /tmp/Capture2.jpg -y
Grab a snapshot from an RTSP video stream every 10 seconds.
#!/bin/bash
#fetch-snapshots.sh
url='rtsp://IP_ADDRESS/live.sdp'
avconv -i $url -r 0.1 -vsync 1 -qscale 1 -f image2 images%09d.jpg
-r rate sets the frame rate to 0.1 frames per second (this equals 1 frame every 10 seconds).
Thanks to westonruter, see https://gist.github.com/westonruter/4508842
Furthermore have a look at FFMPEG: Extracting 20 images from a video of variable length
ffmpeg -i rtsp://root:password#192.168.1.1/mpeg4 -ss 00:00:01 -f image2 -vframes 1 thumb.jpg
Replace the URL with your own RTSP URL, and make sure to keep -ss 00:00:01; if you put in other numbers, the image may come out corrupted.
I used to calculate the duration of MP3 files server-side using ffmpeg, which seemed to work fine. Today I discovered that some of the calculations were wrong. Somehow, for some reason, ffmpeg will miscalculate the duration, and it seems to happen only with variable bit rate MP3 files.
When testing this locally, I noticed that ffmpeg printed two extra lines in green.
Command used:
ffmpeg -i song_9747c077aef8.mp3
ffmpeg says:
[mp3 @ 0x102052600] max_analyze_duration 5000000 reached at 5015510
[mp3 @ 0x102052600] Estimating duration from bitrate, this may be inaccurate
After a nice, warm Google session, I discovered some posts on this, but no solution was found.
I then tried to increase the maximum duration:
ffmpeg -analyzeduration 999999999 -i song_9747c077aef8.mp3
After this, ffmpeg returned only the second line:
[mp3 @ 0x102052600] Estimating duration from bitrate, this may be inaccurate
But in either case, the calculated duration was just plain wrong. Comparing it to VLC, I noticed that the duration there was correct.
After more research I stumbled upon mp3info, which I installed and used.
mp3info -p "%S" song_9747c077aef8.mp3
mp3info then returned the CORRECT duration, but only as an integer, which I cannot use as I need a more accurate number here. The reason for this was explained in a comment below by user blahdiblah: mp3info is simply pulling ID3 info from the file and not actually performing any calculations.
I also tried using mplayer to retrieve the duration, but just like ffmpeg, mplayer returned the wrong value.
I finally found a proper solution to this problem using sox - which returns the correct information.
sox file.mp3 -n stat
Samples read: 19321344
Length (seconds): 219.062857
Scaled by: 2147483647.0
Maximum amplitude: 1.000000
Minimum amplitude: -1.000000
Midline amplitude: -0.000000
Mean norm: 0.141787
Mean amplitude: 0.000060
RMS amplitude: 0.191376
Maximum delta: 0.947598
Minimum delta: 0.000000
Mean delta: 0.086211
RMS delta: 0.115971
Rough frequency: 4253
Volume adjustment: 1.000
The relevant line here is: Length (seconds): 219.062857
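If you only need the number, sox's companion tool soxi can print the duration in seconds directly (assuming mp3 support is installed):
soxi -D file.mp3
# prints e.g. 219.062857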
You can decode the file completely to get the actual duration:
ffmpeg -i input.mp3 -f null -
The second-to-last line of the console output will show something like:
size=N/A time=00:03:49.12 bitrate=N/A
Where time is the actual duration. In this example the whole process took about 0.5 seconds.
Simpler is to use ffmpeg to copy the file that has the faulty duration; writing the copy causes ffmpeg to record the correct information.
ffmpeg -i "audio.mp3" -acodec copy "audio_fixed.mp3"
Because it uses copy, it takes a fraction of the time the original encoding took. This is hardly noticeable with a song, but you really appreciate it with a 7-hour audiobook. After the remux, the duration information is correct.
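If you have a whole folder of files to fix, a minimal hedged loop along the same lines (the output naming is just an assumption):
# Remux every mp3 in the current directory so the duration metadata gets rewritten.
for f in *.mp3; do
    ffmpeg -i "$f" -acodec copy "fixed_$f"
done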
Extending the solution from llogan (LordNeckbeard): to get only the stats, you can add the flags -v quiet -stats.
ffmpeg -v quiet -stats -i input.mp3 -f null -
ffmpeg will print all file information if no other arguments are provided.
Use grep or awk to only return the "Duration":
ffmpeg -i file.mp3 2>&1 | grep Duration
ffmpeg -i file.mp3 2>&1 | awk '/Duration/ { print substr($2,0,length($2)-1) }'
# Show the last two lines of ffmpeg's log; one of them contains a time=HH:MM:SS.xx field.
AV_LOG_FORCE_NOCOLOR=y ffmpeg -nostdin -hide_banner -nostats -loglevel info -i audio.mp3 -f null -vn -c:a copy - 2>&1 | tail -n 2
# Capture that line and convert its time= field into whole seconds.
declare out="$(AV_LOG_FORCE_NOCOLOR=y ffmpeg -nostdin -hide_banner -nostats -loglevel info -i video.mp4 -f null -vn -c:a copy - 2>&1 | tail -n 2 | head -n 1)"
if [[ "$out" =~ \ time=([0-9]+):([0-9]{2}):([0-9]{2})\.([0-9]+) ]]; then
    declare duration=0 us="${BASH_REMATCH[4]}" t
    for t in "${BASH_REMATCH[@]:1:3}"; do    # hours, minutes, seconds
        ((duration *= 60))
        ((duration += ${t#0} ))
    done
    while [ ${#us} -lt 6 ]; do us+=0; done   # pad the fractional part to microseconds
    ((us >= 500000)) && ((duration++))       # round to the nearest whole second
    ((duration)) || ((duration++))           # never report zero
fi
echo -E Duration: "$duration"
sudo apt install sox
sudo apt-get install libsox-fmt-mp3
and then:
sox yourfile.mp3 -n stat
Does anyone know if it is possible to encode a video using ffmpeg in reverse? (So the resulting video plays in reverse?)
I think I can do it by generating images for each frame (so a folder of images labelled 1.jpg, 2.jpg etc.), then writing a script to change the image names, and then re-encoding the video from these files.
Does anyone know of a quicker way?
This is an FLV video.
Thank you
No, it isn't possible using ffmpeg to encode a video in reverse without dumping it to images and then back again. There are a number of guides available online to show you how to do it, notably:
http://ubuntuforums.org/showthread.php?t=1353893
and
https://sites.google.com/site/linuxencoding/ffmpeg-tips
The latter of which follows:
Dump all video frames
$ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg
Dump audio
$ ffmpeg -i input.mkv -vn -ac 2 audio.wav
Reverse audio
$ sox -V audio.wav backwards.wav reverse
Cat video frames in reverse order to FFmpeg as input
$ cat $(ls -r *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv
Use mencoder to deinterlace PAL dv and double the frame rate from 25 to 50, then pipe to FFmpeg.
$ mencoder input.dv -of rawvideo -ofps 50 -ovc raw -vf yadif=3,format=i420 -nosound -really-quiet -o - | ffmpeg -vsync 0 -f rawvideo -s 720x576 -r 50 -pix_fmt yuv420p -i - -vcodec libx264 -vpre slow -crf 20 -threads 0 video.mkv
I've created a script for this based on Andrew Stubbs' answer
https://gist.github.com/hfossli/6003302
Can be used like so
./ffmpeg_sox_reverse.sh -i Desktop/input.dv -o test.mp4
New Solution
A much simpler method exists now, simply use the command (adjusting input.mkv and reversed.mkv accordingly):
ffmpeg -i input.mkv -af areverse -vf reverse reversed.mkv
The -af areverse will reverse audio, and -vf reverse will reverse video. The video and audio will be in sync automatically in the output file reversed.mkv, no need to worry about the input frame rate or anything else.
On one video, when I specified only -vf reverse to reverse the video (but not the audio), the output file didn't play correctly in mkv format but did work when I changed the output format to mp4 (I don't think this use case of reversing only the video is common, but if you run into this issue you can try changing the output format). On large input videos that exceed the RAM available in your computer, this method may not work and you may need to chop up the input file or use the old solution below.
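For files too large for the available RAM, a hedged sketch of that chop-up idea (the segment length and file names are assumptions; each piece is re-encoded, and the reversed pieces are then concatenated in reverse order):
#!/bin/bash
# Split into ~60-second chunks (re-encoded so the cut points are exact).
ffmpeg -i input.mkv -f segment -segment_time 60 -reset_timestamps 1 chunk_%04d.mkv
# Reverse each chunk on its own, which keeps memory usage bounded.
for f in chunk_*.mkv; do
    ffmpeg -i "$f" -af areverse -vf reverse "rev_$f"
done
# List the reversed chunks in reverse order and stitch them back together.
for f in $(ls rev_chunk_*.mkv | sort -r); do printf "file '%s'\n" "$f"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy reversed.mkv
rm chunk_*.mkv rev_chunk_*.mkv list.txt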
Old Solution
One issue is that the frame rate can vary depending on the video; many answers depend on a specific frame rate (like "-r 25" for 25 frames per second). If the frame rate in the video is different, this will cause the reversed audio and video to go out of sync.
You can of course manually adjust the frame rate each time (you can get the frame rate by running ffmpeg -i video.mkv and looking for the number in front of "fps"; this is sometimes a decimal number like 23.98). But with some bash code you can easily extract the fps, store it in a variable, and automatically pass it to the programs.
Based on this, I've created the following bash script to do that. Simply chmod +x it and run it: ./make-reversed-video.sh input.mkv output.mkv. The code is as follows:
#!/bin/bash
#Partially based on https://nhs.io/reverse/, but with some modifications, including automatic extraction of the frame rate.
#Get parameters.
VIDEO_FILE=$1
OUTPUT_FILE=$2
TEMP_FOLDER=$3
echo Using input file: $VIDEO_FILE
echo Using output file: $OUTPUT_FILE
mkdir /tmp/create_reversed_video
#Get frame rate.
FRAME_RATE=$(ffmpeg -i "$VIDEO_FILE" 2>&1 | grep -o -P '[0-9\\. ]+fps' | grep -o -P '[0-9\\.]+')
echo The frame rate is: $FRAME_RATE
#Extract audio from video.
ffmpeg -i "$VIDEO_FILE" -vn -ac 2 /tmp/create_reversed_video/audio.wav
#Reverse the audio.
sox -V /tmp/create_reversed_video/audio.wav /tmp/create_reversed_video/backwards.wav reverse
#Extract each video frame as an image.
ffmpeg -i "$VIDEO_FILE" -an -qscale 1 /tmp/create_reversed_video/%06d.jpg
#Recombine into reversed video.
ls -1 /tmp/create_reversed_video/*.jpg | sort -r | xargs cat | ffmpeg -framerate $FRAME_RATE -f image2pipe -i - -i /tmp/create_reversed_video/backwards.wav "$OUTPUT_FILE"
#Delete temporary files.
rm -rf /tmp/create_reversed_video
I've tested it and it works well on my Ubuntu 18.04 machine on lots of videos (after installing the dependencies like sox). Please let me know if it works on other Linux distributions and versions.