I want to convert video to images, do some image processing and convert images back to video.
Here are my commands:
./ffmpeg -r 30 -i $VIDEO_NAME "image%d.png"
./ffmpeg -r 30 -y -i "image%d.png" output.mpg
But the output.mpg video has compression artifacts, like a low-quality JPEG.
Also, I don't know how to determine the fps; I just set it to 30 (-r 30).
When I use the first command above without -r, it produces a huge number of images (more than a million), but when I use the -r 30 option it produces the same number of images as this command, which counts the frames:
FRAME_COUNT=`./ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME`
So my questions are:
How do I determine the frame rate?
How do I convert the images back to video without reducing the original quality?
UPDATE:
This seems to have helped, after I removed the -r option:
Image sequence to video quality
so the resulting command is:
./ffmpeg -y -i "image%d.png" -vcodec mpeg4 -b $BITRATE output_$BITRATE.avi
but I'm still not sure how to select the bitrate.
How can I see the bitrate of the original .mp4 file?
You can use the qscale parameter instead of a bitrate, e.g.
ffmpeg -y -i "image%d.png" -vcodec mpeg4 -q:v 1 output_1.avi
q:v is short for qscale:v. A value of 1 may produce files that are too large; 4-6 is a decent range to use.
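For the frame-rate and bitrate questions: ffprobe can report both, in the same style as the frame-count command above (a sketch, assuming a reasonably recent ffprobe; adjust $VIDEO_NAME as needed). This prints the average frame rate of the first video stream (e.g. 30000/1001):
./ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME
and this prints the overall bitrate of the original file in bits per second:
./ffprobe -v error -show_entries format=bit_rate -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME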
Related
I want to take the input, blend N frames, decimate the other frames and use those for the output with the fps of my choice.
I used this line:
ffmpeg -y -i input.mp4 -vf tmix=frames=15:weights="1",select='not(mod(n\,15))' -vsync vfr frames/output-%05d.tif
That generated images, which I combined into the video. So far, so good.
But I'd like to skip the image output and go straight to video, so I tried this:
ffmpeg -y -i input.mp4 -vf tmix=frames=15:weights="1",select='not(mod(n\,15))' -vsync vfr -r 30 -c:v prores_ks -profile:v 3 -vendor apl0 -bits_per_mb 8000 -pix_fmt yuv422p10le output.mov
That produces 1.62 fps video, instead of 30 fps.
I'm at a loss on how to get it to output 30fps without the intermediate step of outputting images.
Thanks
I think the simplest way to achieve this is to feed the input at 15 times the desired rate and drop the intermediate frames with -r 30:
ffmpeg -y -r 450 -i input.mp4 \
-vf tmix=frames=15:weights="1" \
-r 30 sandbox/out.mp4
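The 450 here is just the desired output rate times the blend window (30 × 15). If you want to vary either number, the same command can be parameterized in a small sketch like this (the variable names are mine, not ffmpeg options):
OUT_FPS=30      # desired output frame rate
WINDOW=15       # source frames blended into each output frame
ffmpeg -y -r $((OUT_FPS*WINDOW)) -i input.mp4 \
  -vf tmix=frames=$WINDOW:weights="1" \
  -r $OUT_FPS sandbox/out.mp4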
However, a tmix solution is somewhat inefficient for your use case because it mixes at every frame, including the ones that are then dropped. If you don't mind a longer expression, you can try:
ffmpeg -i in.mp4 \
-vf "setpts='floor(N/15)/(30*TB)',select='mod(n,15)+1':n=15[v0][v1][v2][v3][v4][v5][v6][v7][v8][v9][v10][v11][v12][v13][v14];\
[v0][v1][v2][v3][v4][v5][v6][v7][v8][v9][v10][v11][v12][v13][v14]mix=inputs=15:weights=1" \
-r 30 sandbox/out.mp4
[edit] setpts expression should be floor(N/15)/(30*TB) not mod(n,15)+1 for 15 successive frames to have the same pts.
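To confirm the result actually comes out at 30 fps rather than 1.62, an ffprobe check along these lines (run against the sandbox/out.mp4 produced above) should report the frame rate and frame count:
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate,nb_frames -of default=noprint_wrappers=1 sandbox/out.mp4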
I have a folder of exactly 300 images in png format (labelled 1.png, 2.png, ..., 300.png), which I'm trying to convert to a video. I would like the video to be in the webm format, but there seems to be an issue:
using the following command:
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.webm
does generate an out.webm file. According to ffprobe -select_streams v -count_frames -show_entries stream=nb_read_frames,r_frame_rate out.webm (which is presumably quite an inefficient way to get that information, but that's beside the point), it does contain 300 frames and has a frame rate of exactly 30/1. However, instead of the expected 10 seconds (300 frames played at 30 fps), the video lasts slightly longer, about 12 seconds.
This discrepancy seems to scale with video length: 900 frames converted the same way, with the same frame rate, yield a 36-second (instead of 30-second) video.
For testing, I also tried generating an mp4 file instead of a webm one with the following command (exact same as above, but out.mp4 instead of out.webm), and that worked exactly as expected: out.mp4 was a 10-second video.
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 100 -r 30 out.mp4
How do I fix this? Is my ffmpeg command off, or is this a bug in the tool?
The documentation (https://www.ffmpeg.org/ffmpeg.html) has an example:
For creating a video from many images:
ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
and
To force the frame rate of the input file (valid for raw formats only) to 1 fps and the frame rate of the output file to 24 fps:
ffmpeg -r 1 -i input.m2v -r 24 output.avi
and also
As an input option, ignore any timestamps stored in the file and
instead generate timestamps assuming constant frame rate fps. This is
not the same as the -framerate option used for some input formats like
image2 or v4l2 (it used to be the same in older versions of FFmpeg).
If in doubt use -framerate instead of the input option -r.
For your case, this gives:
ffmpeg -framerate 30 -i ./frames/%d.png output.webm
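If you want to keep the -start_number and -frames:v options from your original command and pick the encoder explicitly, something along these lines should also work (choosing libvpx-vp9 here is my assumption, not part of the original answer; the essential change is that -framerate replaces the input -r):
ffmpeg -framerate 30 -start_number 1 -i ./frames/%d.png -frames:v 300 -c:v libvpx-vp9 out.webm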
I'm trying to get my head around ffmpeg to create a slideshow where each image is displayed for ~5 seconds with some audio. I created a bat file to run the following so far:
ffmpeg -f image2 -i image-%%03d.jpg -i music.mp3 output.mpg
It gets the images and displays them all very fast in the first second of the video, then plays out the rest of the audio while showing the last image.
I want to make the images stay up longer (about 5 seconds) and stop the video after the last frame (not playing the rest of the song). Are either of these things possible? I could hack it, I guess, by having hundreds of copies of the same image in order to keep each one up longer, but this is far from ideal!
Thanks
The default encoder for mpg output, mpeg1video, is strict about the allowed frame rates, so an input and an output -r are required:
ffmpeg -r 1/5 -i image-%03d.jpg -i music.mp3 -r 25 -qscale:v 2 -shortest -codec:a copy output.mpg
The input images will have a frame rate of 1 frame every 5 seconds and the output will duplicate frames to reach 25 frames per second.
-f image2 is generally not required.
-qscale:v can control output quality. A sane range is 2-5.
-shortest will make the output duration the same as the shortest input duration.
-codec:a copy copies your MP3 audio instead of re-encoding it.
MPEG-1 video has more modern alternatives. See the FFmpeg and x264 Encoding Guide for more info.
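For example, a rough modern equivalent of the command above using x264 instead of MPEG-1 might look like this (the codec, container, and AAC audio choices are mine, not from the question):
ffmpeg -framerate 1/5 -i image-%03d.jpg -i music.mp3 -c:v libx264 -r 25 -pix_fmt yuv420p -c:a aac -shortest output.mp4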
Also see:
* FFmpeg FAQ: How do I encode single pictures into movies?
* FFmpeg Wiki: Create a video slideshow from images
You could use the fps filter instead of setting an output frame rate:
ffmpeg -r 1/5 -i img%03d.png -i musicfile -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4
This, however, strangely skips the last image for me.
I currently have a jpeg file which I converted to an flv using the following command:
ffmpeg -r 10 -b 180000 -i test.jpg test.mp4
Now, I want to increase the duration of this .mp4 clip, so the picture stays on the screen for more than a split second. Eventually, I hope to merge a stream of these files to create a slide show out of jpeg files.
Does anyone know how to increase the duration of a clip in ffmpeg?
Looping the input and setting a duration should achieve the effect you want:
ffmpeg -loop_input -i test.jpg -t 10 test.mp4
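Note that -loop_input has since been removed from ffmpeg; on current builds the equivalent is the input option -loop 1, so the same idea becomes something like this (a sketch, with libx264 and yuv420p added for player compatibility):
ffmpeg -loop 1 -i test.jpg -t 10 -c:v libx264 -pix_fmt yuv420p test.mp4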
Doing something like this should work (at least for a single image):
ffmpeg -loop_input -i picture.jpg -r 1 -vcodec flv -b 192k -i Music.mp3 -acodec copy -shortest output.flv
I bet you could get it working with multiple images by adding more inputs, though I haven't tested it.
(http://forum.videohelp.com/threads/280695-FFMPEG-Loop-input-video)
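For the multiple-image case, one approach that should work is to build one short clip per picture with -loop 1 and -t 5, then join the clips with the concat demuxer (a sketch, assuming all pictures have the same dimensions; the file names here are made up):
rm -f list.txt
for f in pic1.jpg pic2.jpg pic3.jpg; do
  # turn each picture into a 5-second clip
  ffmpeg -y -loop 1 -i "$f" -t 5 -c:v libx264 -pix_fmt yuv420p "${f%.jpg}.mp4"
  echo "file '${f%.jpg}.mp4'" >> list.txt
done
# join the clips and add the audio, stopping at whichever input ends first
ffmpeg -f concat -i list.txt -i Music.mp3 -c:v copy -c:a aac -shortest slideshow.mp4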
Does anyone know if it is possible to encode a video using ffmpeg in reverse? (So the resulting video plays in reverse?)
I think I can do it by generating images for each frame (so a folder of images labelled 1.jpg, 2.jpg, etc.), then writing a script to rename the images in reverse order, and then re-encoding the video from those files.
Does anyone know of a quicker way?
This is an FLV video.
Thank you
No, it isn't possible using ffmpeg to encode a video in reverse without dumping it to images and then back again. There are a number of guides available online to show you how to do it, notably:
http://ubuntuforums.org/showthread.php?t=1353893
and
https://sites.google.com/site/linuxencoding/ffmpeg-tips
The latter of which follows:
Dump all video frames
$ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg
Dump audio
$ ffmpeg -i input.mkv -vn -ac 2 audio.wav
Reverse audio
$ sox -V audio.wav backwards.wav reverse
Cat video frames in reverse order to FFmpeg as input
$ cat $(ls -r *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv
Use mencoder to deinterlace PAL dv and double the frame rate from 25 to 50, then pipe to FFmpeg.
$ mencoder input.dv -of rawvideo -ofps 50 -ovc raw -vf yadif=3,format=i420 -nosound -really-quiet -o - | ffmpeg -vsync 0 -f rawvideo -s 720x576 -r 50 -pix_fmt yuv420p -i - -vcodec libx264 -vpre slow -crf 20 -threads 0 video.mkv
I've created a script for this based on Andrew Stubbs' answer
https://gist.github.com/hfossli/6003302
It can be used like so:
./ffmpeg_sox_reverse.sh -i Desktop/input.dv -o test.mp4
New Solution
A much simpler method exists now; simply use this command (adjusting input.mkv and reversed.mkv accordingly):
ffmpeg -i input.mkv -af areverse -vf reverse reversed.mkv
The -af areverse will reverse audio, and -vf reverse will reverse video. The video and audio will be in sync automatically in the output file reversed.mkv, no need to worry about the input frame rate or anything else.
On one video, when I only specified -vf reverse to reverse the video (but not the audio), the output file didn't play correctly in mkv format but did work when I changed the output format to mp4 (I don't think reversing only the video without the audio is a common use case, but if you do run into this issue you can try changing the output format). On large input videos that exceed the RAM available in your computer, this method may not work, and you may need to chop up the input file or use the old solution below.
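If you do hit that memory limit, one way to chop the input up is with the segment muxer: split the file, reverse each piece, and then concatenate the reversed pieces in reverse order (a rough sketch, assuming keyframe-aligned cut points are acceptable; the file names are mine):
# split into roughly 60-second pieces without re-encoding
ffmpeg -i input.mkv -c copy -f segment -segment_time 60 -reset_timestamps 1 seg%03d.mkv
# reverse each piece (this re-encodes)
for f in seg*.mkv; do ffmpeg -i "$f" -vf reverse -af areverse "rev_$f"; done
# list the reversed pieces in reverse order and join them
for f in $(ls -r rev_seg*.mkv); do echo "file '$f'"; done > list.txt
ffmpeg -f concat -i list.txt -c copy reversed.mkv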
Old Solution
One issue is that the frame rate can vary depending on the video, and many answers assume a specific frame rate (like "-r 25" for 25 frames per second). If the frame rate of your video is different, this will cause the reversed audio and video to go out of sync.
You can of course adjust the frame rate manually each time (you can get it by running ffmpeg -i video.mkv and looking for the number in front of "fps"; it is sometimes a decimal like 23.98). But with some bash code you can easily extract the fps, store it in a variable, and pass it to the programs automatically.
Based on this, I've created the following bash script to do that. Simply chmod +x it and run it as ./make-reversed-video.sh input.mkv output.mkv. The code is as follows:
#!/bin/bash
#Partially based on https://nhs.io/reverse/, but with some modifications, including automatic extraction of the frame rate.
#Get parameters.
VIDEO_FILE=$1
OUTPUT_FILE=$2
TEMP_FOLDER=$3
echo Using input file: $VIDEO_FILE
echo Using output file: $OUTPUT_FILE
mkdir -p /tmp/create_reversed_video
#Get frame rate.
FRAME_RATE=$(ffmpeg -i "$VIDEO_FILE" 2>&1 | grep -o -P '[0-9\. ]+fps' | grep -o -P '[0-9\.]+')
echo The frame rate is: $FRAME_RATE
#Extract audio from video.
ffmpeg -i "$VIDEO_FILE" -vn -ac 2 /tmp/create_reversed_video/audio.wav
#Reverse the audio.
sox -V /tmp/create_reversed_video/audio.wav /tmp/create_reversed_video/backwards.wav reverse
#Extract each video frame as an image.
ffmpeg -i "$VIDEO_FILE" -an -qscale 1 /tmp/create_reversed_video/%06d.jpg
#Recombine into reversed video.
ls -1 /tmp/create_reversed_video/*.jpg | sort -r | xargs cat | ffmpeg -framerate $FRAME_RATE -f image2pipe -i - -i /tmp/create_reversed_video/backwards.wav "$OUTPUT_FILE"
#Delete temporary files.
rm -rf /tmp/create_reversed_video
I've tested it and it works well on my Ubuntu 18.04 machine on lots of videos (after installing the dependencies like sox). Please let me know if it works on other Linux distributions and versions.