ffmpeg: concatenate images into one image

I use this to extract frames from a video and concatenate them into one image:
ffmpeg -i output.mp4 -vf 'fps=2,tile=1000x1' out.jpg
But there is a problem: I do not know in advance how many frames will be fetched. Here I hardcoded the tile size 1000x1, but if there are more than 1000 frames, there will be an error. Before starting ffmpeg I do not know the actual tile size.
So I want to use a command like:
ffmpeg -i output.mp4 -vf 'fps=2,tile=*x1' out.jpg
meaning: concatenate ALL fetched frames into one row. But I cannot use * as an argument for tile.
Is there some way to solve my problem?

I got an idea: get the frame count with ffprobe first, then expand it into the tile size:
$ FRAMES=$(ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 xxx.mp4)
$ ffmpeg -i xxx.mp4 -vf "fps=2,tile=${FRAMES}x1" out.jpg
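One wrinkle with this: nb_read_frames counts every source frame, while fps=2 resamples the stream to two frames per second, so the tile may end up far wider than the number of frames actually produced. A minimal sketch of an alternative, assuming the container reports a usable duration (format=duration is a standard ffprobe entry; the +1 guards against rounding):
# Compute the tile width from duration * fps instead of the raw frame count
DUR=$(ffprobe -v error -show_entries format=duration -of default=nokey=1:noprint_wrappers=1 xxx.mp4)
COLS=$(awk -v d="$DUR" 'BEGIN { print int(d * 2) + 1 }')
ffmpeg -i xxx.mp4 -vf "fps=2,tile=${COLS}x1" out.jpg
If COLS overshoots, tile pads the unused cells of the final grid with black rather than erroring out.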

Related

FFMPEG Convert video to images

When I use ffmpeg to convert a video to images, there's one problem: the total number of pictures is not equal to the number of frames.
First, I used an ffprobe command to get the total number of frames:
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 in.mp4
and got the number 278.
Then I used an ffmpeg command to convert the video to images:
ffmpeg -i 'in.mp4' -f image2 -qscale:v 2 'out_%05d.png'
but I got 281 pictures.
I checked ffmpeg's documentation but found nothing about this.
How can I solve this problem?
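No answer is recorded for this one, but a likely culprit (an assumption worth verifying, not a confirmed diagnosis) is ffmpeg's default constant-frame-rate output sync, which duplicates frames when source timestamps are uneven. Passing the decoded frames through unchanged usually makes the counts agree:
ffmpeg -i 'in.mp4' -vsync 0 -f image2 -qscale:v 2 'out_%05d.png'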

Non-consistent results between ffprobe and ffmpeg for keyframes identification

Trying to identify both thumbnails and timestamps of keyframes on a set of videos, I'm getting different results from ffmpeg and ffprobe.
Taking a 1 min. long video as an example:
youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4' 'https://www.youtube.com/watch?v=BHlAlN3z4ss' --output "test.mp4"
1/ I extract thumbnails and stamp each one with the timestamp at which it was extracted:
ffmpeg -i test.mp4 -q:v 2 -vf select="eq(pict_type\,PICT_TYPE_I)","drawtext=fontfile=/path/to/Arial.ttf:fontsize=45:fontcolor=yellow:box=1:boxcolor=black:x=(W-tw)/2:y=H-th-10:text='Time\: %{pts\:hms}'" -vsync 0 thumbs/preview%05d.jpg
2/ I extract and save the timestamps of all keyframes:
ffprobe -v error -skip_frame nokey -show_entries frame=pkt_pts_time -select_streams v -of csv=p=0 test.mp4 | sort -n > keyframes_timestamps.txt
3/ Comparing results, I see that ffprobe found only 29 keyframes, while ffmpeg found 32. Comparing manually, we can see that specific keyframes are not detected by ffprobe, while most are very similar.
ffprobe_ts ffmpeg_ts
0.000000 00:00:00.00
5.366667 00:00:05.367
7.200000 00:00:07.200
8.666667 00:00:08.667
10.100000 00:00:10.100
11.500000 00:00:11.500
14.233333 00:00:14.233
15.333333 00:00:15.333
17.366667 00:00:17.367
NO_TS 00:00:18.833
20.800000 00:00:20.800
24.533333 00:00:24.533
25.700000 00:00:25.700
26.033333 00:00:26.033
On larger videos, this happens for less than about 5% of the keyframes.
I can't find an explanation for this. Does anyone have a clue, or advice on where/what I should inquire further?
Thanks for your help!
Not all I-frames are keyframes. -skip_frame nokey will skip non-KF I-frames.
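A quick way to check this is to dump both flags for every frame; key_frame, pkt_pts_time, and pict_type are all standard ffprobe frame entries, so any line pairing key_frame=0 with pict_type I marks exactly the frames the two commands disagree on:
ffprobe -v error -select_streams v:0 -show_entries frame=key_frame,pkt_pts_time,pict_type -of csv=p=0 test.mp4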

How to bash-script or simplify ffmpeg commands for resizing the watermark, adding it to the video and adding subtitle.ass to the video?

I would like to create a hard-subbed video with a watermark using ffmpeg, and I'd like to know how to combine and simplify the multiple commands, or how to create a bash script for this purpose.
I've tried searching on Stack Overflow and tested some commands, but they didn't work. Here are the commands I'm using.
To detect video width & height:
ffprobe -v quiet -show_entries stream=width,height -of default=noprint_wrappers=1 video_in.mp4
To resize the watermark image: (video width = 1280)
ffmpeg -i watermark.png -y -v quiet -vf scale=1280*0.15:-1 watermark_scaled.png
To add watermark to the video:
ffmpeg -i video_in.mp4 -i watermark_scaled.png -filter_complex "overlay=W-w-5:5" video_marked.mp4
To add an .ass subtitle to the video (it needs to be .ass):
ffmpeg -i video_marked.mp4 -vf ass=subtitle.ass video_final.mp4
You don't need to detect the video dimensions. The scale2ref filter can resize one input using another as a reference.
Here are all the steps in one command:
ffmpeg -i video_in.mp4 -i watermark.png
-filter_complex "[1][0]scale2ref=iw*0.15:ow/mdar[wm][v];
[v][wm]overlay=W-w-5:5,ass=subtitle.ass"
-c:a copy video_final.mp4
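Wrapped in a script, the one-liner becomes reusable. A minimal sketch (the script name and argument order are invented for illustration, and it assumes the subtitle path contains no characters that are special inside filter graphs, since it is spliced into the filter string):
#!/usr/bin/env bash
# hardsub.sh: usage ./hardsub.sh video_in.mp4 watermark.png subtitle.ass video_final.mp4
set -euo pipefail
in=$1; wm=$2; sub=$3; out=$4
# Scale the watermark to 15% of the video width, overlay it top-right, then burn in the subtitles
ffmpeg -i "$in" -i "$wm" \
  -filter_complex "[1][0]scale2ref=iw*0.15:ow/mdar[wm][v];[v][wm]overlay=W-w-5:5,ass=$sub" \
  -c:a copy "$out"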

Trying to limit output of ffmpeg

I have the following command line:
ffmpeg -hide_banner -ss 5 -i test.mp4 -y -vf
"select='eq(pict_type\,PICT_TYPE_I)',
mpdecimate,showinfo,scale=320:240,tile=12x25" -vsync 2 out%%03d.png
As you can see, I make a mosaic of 12x25 (=300) tiles per output image. But I'd like to cap the output to a single image.
Is there a way to have ffmpeg stop processing the video once it has found 300 frames?
Additionally, when grabbing the I-frames, is there a way to keep just 1 in x, for example?
After playing with different options, I couldn't find any way to do this.
Use
ffmpeg -hide_banner -ss 5 -skip_frame nokey -i test.mp4 -y -vf "framestep=7,mpdecimate,showinfo,scale=320:240,tile=12x25" -vsync 0 -vframes 1 out.png
The framestep value sets the x in 1/x. You probably don't need mpdecimate if you're skipping x-1 keyframes anyway. I've added -skip_frame nokey to avoid the select filter; since only keyframes are decoded, this method is much faster. -vframes 1 caps the output at a single image.

ffmpeg convertation image<->video causes artefacts

I want to convert a video to images, do some image processing, and convert the images back to a video.
Here are my commands:
./ffmpeg -r 30 -i $VIDEO_NAME "image%d.png"
./ffmpeg -r 30 -y -i "image%d.png" output.mpg
But the output.mpg video has JPEG-like artefacts.
Also, I don't know how to determine the fps; I just set it to 30 (-r 30).
When I use the first command without -r it produces a huge number of images (more than a million), but when I use the -r 30 option it produces the same number of images as this command, which counts the frames:
FRAME_COUNT=`./ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME`
So my questions are:
How do I determine the frame rate?
How do I convert images to video without reducing the initial quality?
UPDATE:
This seems to have helped, once I removed the -r option:
Image sequence to video quality
so the resulting command is:
./ffmpeg -y -i "image%d.png" -vcodec mpeg4 -b $BITRATE output_$BITRATE.avi
but I'm still not sure how to choose the bitrate.
How can I see the bitrate of the original .mp4 file?
You can use the qscale parameter instead of a bitrate, e.g.
ffmpeg -y -i "image%d.png" -vcodec mpeg4 -q:v 1 output_1.avi
q:v is short for qscale:v. 1 may produce files that are too large; 4-6 is a decent range to use.
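For the two open questions, ffprobe can report both values directly. A sketch (r_frame_rate, avg_frame_rate, and bit_rate are standard stream/format entries, though some containers leave the stream-level bit_rate unset):
# Frame rate of the first video stream (printed as a fraction, e.g. 30000/1001)
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=noprint_wrappers=1 input.mp4
# Bitrate of the video stream, then the overall container bitrate
ffprobe -v error -select_streams v:0 -show_entries stream=bit_rate -of default=nokey=1:noprint_wrappers=1 input.mp4
ffprobe -v error -show_entries format=bit_rate -of default=nokey=1:noprint_wrappers=1 input.mp4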
