Using FFmpeg concat to produce a short output video from images

I need to concatenate 306 images using FFmpeg, so I use the following shell script:
touch input.txt
for i in `seq 0 305`; do
    echo "file $i.png" >> input.txt
    echo "duration 1" >> input.txt
done
echo "file 305.png" >> input.txt
ffmpeg -f concat -i input.txt -vf fps=10 -vsync vfr -pix_fmt yuv420p video.mp4
I can't find the right parameters for concat. Searching about duration, I can't find whether it's possible to specify less than 1 second.
Every time I change fps, the duration of the output video becomes very long; for example, with fps=10 the output video duration is 3060 seconds.
I also tried:
1. ffmpeg -f concat -i input.txt -vf fps=10 -vsync vfr -pix_fmt yuv420p video.mp4
2. ffmpeg -f concat -i input.txt -y -vf fps=1 -crf 22 -threads 2 -preset veryfast video.mp4
I also tried duration 1 for all images, but the final video shows only one image.
I need to concatenate those images and produce a short output video.
Any ideas?
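For reference, the concat demuxer accepts fractional durations (e.g. duration 0.1), which is one way to get a short output. A minimal sketch of the list generation, keeping the question's 0..305 naming (ffmpeg itself is not run here):

```shell
#!/bin/sh
# Sketch: 306 images at 0.1 s each -> roughly a 30 s video.
: > input.txt
for i in $(seq 0 305); do
    echo "file $i.png" >> input.txt
    echo "duration 0.1" >> input.txt
done
# Known concat-demuxer quirk: the duration of the last entry is not
# honored, so list the final file a second time:
echo "file 305.png" >> input.txt
# Then (not run in this sketch):
#   ffmpeg -f concat -i input.txt -vsync vfr -pix_fmt yuv420p video.mp4
```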

Related

I want to pipe these two ffmpeg commands to convert a video to grayscale frames

Please, I want to pipe these two commands.
ffmpeg -i input.flv -vf fps=1 out%d.png | ffmpeg -i input -vf format=gray output
If you just need frames, try this:
ffmpeg -i input.flv -r 1 -pix_fmt gray out%d.png
There is no need to call ffmpeg twice.
-r sets the output frame rate (1 frame/sec), dropping excess frames.
-pix_fmt sets the output pixel format.
[edit]
Try this to output both grayscale video and images:
ffmpeg -i input.flv \
    -filter_complex "format=gray,split[v0][v1]" \
    -map "[v0]" -r 1 out%d.png \
    -map "[v1]" output.mp4

How to add hard-coded subs to this filter_complex

ffmpeg -ss 00:11:47.970 -t 3.090 -i "file.mkv" \
    -ss 00:11:46.470 -t 1.500 -i "file" \
    -ss 00:11:51.060 -t 0.960 -i "file.mkv" \
    -an -c:v libvpx -crf 31 -b:v 10000k -y \
    -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa];[outv]scale='min(960,iw)':-1[outv];[outv]subtitles='file.srt'[outv]" \
    -map [outv] file_out.webm -map [outa] file.mp3
I have a filter that takes three different points in a file, concatenates them together, and scales them down; this part works.
I'm looking to see how to add a subtitle burn-in step to the filter_complex, rendering the subs from the exact timings using a file that I specify. When I use the above code, it doesn't work.
The subtitles filter is receiving a concatenated stream. It does not contain the timestamps from the original segments. So the subtitles filter starts from the beginning. I'm assuming this is the problem when you said, "it doesn't work".
The simple method to solve this is to make temporary files then concatenate them.
Output segments
ffmpeg -ss 00:11:47.970 -t 3.090 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp1.webm
ffmpeg -ss 00:11:46.470 -t 1.500 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp2.webm
ffmpeg -ss 00:11:51.060 -t 0.960 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp3.webm
The timestamps are reset when fast seek is used (-ss before -i). -copyts will preserve the timestamps so the subtitles filter knows where to start the subtitles.
Make input.txt:
file 'temp1.webm'
file 'temp2.webm'
file 'temp3.webm'
Concatenate with the concat demuxer:
ffmpeg -f concat -i input.txt -c copy output.webm
-c copy enables stream copy mode so it avoids re-encoding to concatenate.
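Since the three per-segment commands differ only in their -ss and -t values, they can be generated from a list. A minimal sketch (a dry run: it echoes each ffmpeg command instead of executing it, and writes input.txt; file.mkv and file.srt are the names from the question):

```shell
#!/bin/sh
# Sketch: print the per-segment encode commands (dry run via echo)
# and build input.txt for the final concat step.
: > input.txt
n=1
while read -r start dur; do
    echo ffmpeg -ss "$start" -t "$dur" -copyts -i file.mkv \
        -filter_complex "scale='min(960,iw)':-1,subtitles=file.srt,setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" \
        -crf 31 -b:v 10000k "temp$n.webm"
    echo "file 'temp$n.webm'" >> input.txt
    n=$((n + 1))
done <<EOF
00:11:47.970 3.090
00:11:46.470 1.500
00:11:51.060 0.960
EOF
```

Drop the leading echo to actually run the encodes, then concatenate with the concat demuxer as above.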

FFmpeg slideshow concat outputs only the last image

I'm trying to produce an image slideshow with ffmpeg concat.
The problem is that the output video plays only the last image from my input file of images.
The input:
file '/var/www/html/docroot/types/video/images/img0.jpg'
duration 10
file '/var/www/html/docroot/types/video/images/img1.jpg'
duration 10
file '/var/www/html/docroot/types/video/images/img2.jpg'
duration 10
The command:
ffmpeg -y -r 1/10 -f concat -safe 0 -i /var/www/html/docroot/types/video/info.txt -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2,fps=30,format=yuv420p" /var/www/html/docroot/types/video/output.mp4
And in the output I have something like this: [animated GIF showing only the last image]
Remove -r 1/10 and ,fps=30:
ffmpeg -y -f concat -safe 0 -i /var/www/html/docroot/types/video/info.txt -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2,format=yuv420p" /var/www/html/docroot/types/video/output.mp4
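For completeness, info.txt itself can be generated with a short script. A sketch using the paths from the question; note the last file is listed a second time, since a known concat-demuxer quirk means the duration directive of the final entry is not honored:

```shell
#!/bin/sh
# Sketch: generate info.txt for a 3-image slideshow, 10 s per image.
base=/var/www/html/docroot/types/video/images
: > info.txt
for n in 0 1 2; do
    echo "file '$base/img$n.jpg'" >> info.txt
    echo "duration 10" >> info.txt
done
# Repeat the last file so its duration is honored:
echo "file '$base/img2.jpg'" >> info.txt
```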

ffmpeg - set variable video duration

I use this command to batch-convert files, plus crop and re-scale:
for i in $( ls *.mp4 ); do
ffmpeg -y -i "$i" -acodec libfaac -ab 128k -ar 44100 -vcodec libx264 -b 850k -threads 0 -vf [in]crop=in_w-112:in_h-63:56:0,scale=1280:720[out] "../../archive/${i/.mp4/}.mp4"
done
This command starts at second 15 and makes the video 30 seconds long:
for i in $( ls *.mp4 ); do
ffmpeg -ss 00:00:15 -t 30 -y -i "$i" -acodec libfaac -ab 128k -ar 44100 -vcodec libx264 -b 850k -threads 0 -vf [in]crop=in_w-112:in_h-63:56:0,scale=1280:720[out] "${i/.mp4/}_test.mp4"
done
What I would like is a command that cuts off 15 s from the beginning and 15 s from the end of EACH video in the BATCH. The trick is that each video has a different duration, so "how long it is" must be a variable (duration minus 15 s, or minus 30 s if I also count the 15 s from the start).
video duration examples:
video 1 - 00:25:19
video 2 - 00:15:34
video 3 - 00:19:21
video 4 - 00:22:49
etc.
Couldn't you do this with a simple bash script?
As stated here, this command will retrieve the video duration in seconds:
ffprobe -i some_video -show_entries format=duration -v quiet -of csv="p=0"
So you can read that into a variable, then simply subtract 30 seconds, and you will know what duration to set in your ffmpeg command.
So, step 1 will be the first for-loop you have already posted in your question.
Step 2 will be to subtract 30 seconds from the movie's duration and save that in a variable.
Then in step 3, rewrite your second for-loop like this:
for i in $( ls *.mp4 ); do
ffmpeg -ss 00:00:15 -t $your_new_duration_variable -y -i "$i" -acodec libfaac -ab 128k -ar 44100 -vcodec libx264 -b 850k -threads 0 -vf [in]crop=in_w-112:in_h-63:56:0,scale=1280:720[out] "${i/.mp4/}_test.mp4"
done
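Step 2 is plain shell arithmetic once ffprobe has reported the duration. A minimal sketch, with the duration hard-coded to 1519 s (video 1 at 00:25:19) instead of actually calling ffprobe:

```shell
#!/bin/sh
# Sketch: compute the -t value that drops 15 s from each end.
# In practice $duration would come from:
#   ffprobe -i "$i" -show_entries format=duration -v quiet -of csv="p=0"
duration=1519.000
new_duration=$(awk "BEGIN{printf \"%.3f\", $duration - 30}")
echo "$new_duration"
```

awk is used here because ffprobe reports a fractional number of seconds, which plain shell integer arithmetic can't handle.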

create video from images but repeat the first and the last frame 100 times

I have images in a specific order in a directory. The order of images is as follows:
frame2_0000.jpeg
frame2_0001.jpeg
frame2_0002.jpeg
frame2_0003.jpeg
...
etc.
I generate the video with the following command:
ffmpeg -y -r 23 -i location_of_image_folder/frame2_%04d.jpeg -c:v libx264 -s 1280x1024 -movflags faststart location_of_output_location.mp4
Now I want to create a video such that the first frame is repeated 100 times to create the video and the last frame is repeated 100 times.
What strategy should I employ here?
Create a text file list containing all your image file names, but duplicating the first and last 100 times:
1.jpeg
1.jpeg
1.jpeg
......
2.jpeg
3.jpeg
....
Then:
cat $(cat list.txt) | ffmpeg -y -f image2pipe -framerate 25/1 -i - -c:v libx264 -pix_fmt yuv420p out.mp4
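Generating that list is easy to script. A sketch assuming the frames run from frame2_0000.jpeg to frame2_0305.jpeg (a hypothetical range; adjust the last frame number to your files):

```shell
#!/bin/sh
# Sketch: build list.txt with the first frame repeated 100 times,
# the middle frames once each, and the last frame repeated 100 times.
first=frame2_0000.jpeg
last=frame2_0305.jpeg   # assumed last frame; change to match your directory
: > list.txt
for _ in $(seq 100); do echo "$first"; done >> list.txt
for n in $(seq 1 304); do printf 'frame2_%04d.jpeg\n' "$n"; done >> list.txt
for _ in $(seq 100); do echo "$last"; done >> list.txt
```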
Try
ffmpeg -framerate 23 -loop 1 -i location_of_image_folder/frame2_0000.jpeg \
    -framerate 23 -start_number 1 -i location_of_image_folder/frame2_%04d.jpeg \
    -framerate 23 -loop 1 -i location_of_image_folder/frame2_1000.jpeg \
    -filter_complex "[0]trim=start_frame=0:end_frame=100[pre];\
[2]trim=start_frame=0:end_frame=99[post];\
[pre][1][post]concat=n=3,scale=1280:1024,setsar=1" \
    -c:v libx264 -movflags faststart location_of_output_location.mp4
Note that the concat filter defaults to two inputs, so n=3 must be set here.
I've assumed the first image is frame2_0000.jpeg and the last is frame2_1000.jpeg; change accordingly. You'll also have to set start_number to the number of the file after the first image file.
