Rescaling and slowing down a movie at the same time with ffmpeg

I would like to slow down a movie I am creating with ffmpeg, using the flag:
-filter:v "setpts=2.0*PTS"
However, the height of my still images is not divisible by 2, so to avoid the error height not divisible by 2 (1238x833) I am using the flag:
-vf scale="trunc(iw/2)*2:trunc(ih/2)*2"
(I also tried -vf scale=1238:-2).
When I do this the film is generated, but it isn't slowed down, as if the -filter:v "setpts=2.0*PTS" weren't there.
Is there something particular to do in order to have both options working at the same time?
Here is the complete command I am using:
ffmpeg -an -i ./movie/cphmd1.%05d.ppm -vcodec libx264 -pix_fmt yuv420p -b:v 5000k -r 24 -crf 18 -filter:v "setpts=2.0*PTS" -vf scale="trunc(iw/2)*2:trunc(ih/2)*2" -preset slow -f mp4 cphmd1_slower.mp4
Many thanks in advance!

Multiple filters acting on the same input, in series, have to be chained together. So,
ffmpeg -an -i ./movie/cphmd1.%05d.ppm -vcodec libx264 -pix_fmt yuv420p -b:v 5000k -r 24 -crf 18 -vf "setpts=2.0*PTS,scale=trunc(iw/2)*2:trunc(ih/2)*2" -preset slow -f mp4 cphmd1_slower.mp4
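For what it's worth (not part of the original answer), the scale=1238:-2 form mentioned in the question should also work once it sits inside the same chain, since 1238 is already even and -2 picks an even height automatically:
ffmpeg -an -i ./movie/cphmd1.%05d.ppm -vcodec libx264 -pix_fmt yuv420p -b:v 5000k -r 24 -crf 18 -vf "setpts=2.0*PTS,scale=1238:-2" -preset slow -f mp4 cphmd1_slower.mp4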

Related

ffmpeg sorting order issue while creating video

I am able to successfully convert images to video using the command below, where all photos in the out directory are used.
$command1 = "ffmpeg -r 1/1 -framerate 25 -pattern_type glob -i 'out/*.jpg' -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -vf 'pad=ceil(iw/2)*2:ceil(ih/2)*2' out/1.mp4";
exec($command1);
However, the photos are being picked in random order, and I want them picked in sorted order, for example:
Pic1.jpg, Pic2.jpg, Pic3.jpg, ...
Please suggest what to do here.
You could try using a numbered sequence pattern with -start_number instead of -pattern_type glob.
As explained here, the * glob does not work as expected.
If your images are named in this order:
Pic1.jpg, Pic2.jpg, Pic3.jpg and so on...
Use -start_number instead. Here is the updated ffmpeg command:
ffmpeg -r 1/1 -framerate 25 -start_number 1 -i 'out/Pic%d.jpg' -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -vf 'pad=ceil(iw/2)*2:ceil(ih/2)*2' out/1.mp4
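If the filenames were zero-padded instead (e.g. Pic001.jpg, Pic002.jpg, ... — hypothetical names, not from the question), the pattern would need matching padding, something like:
ffmpeg -framerate 25 -start_number 1 -i 'out/Pic%03d.jpg' -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -vf 'pad=ceil(iw/2)*2:ceil(ih/2)*2' out/1.mp4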

FFMPEG: Combine "Create video from images" + scale to x + add audio + overlay logo

I'm working on a webcam project for generating timelapse videos of sunrise/sunset.
I'm using a Raspberry Pi to generate them with gphoto2 + a DSLR.
At the end of the day the images should be turned into a video, with audio and an overlay logo.
It should also be scaled to 1920 pixels wide.
I have a nice solution and it worked.
(1) Produce the timelapse video and scale it:
ffmpeg -y -framerate 25 -start_number 0000001 -i /var/www/html/webcam/2020-01-05_bilder/%7d.jpg -vf scale=1920:-1 -pix_fmt yuv420p /var/www/html/webcam/2020-01-05-tag-output-1920.mp4
(2) Take the output of (1) and add an overlay logo and audio:
ffmpeg -y -i '/var/www/html/webcam/2020-01-05-tag-output-1920.mp4'
-i '/var/www/html/webcam-scripts/graphics/logo.png'
-i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3'
-shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)'
-c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2
'/var/www/html/webcam/2020-01-05-tag-1920.mp4'
I tried to combine both actions, but I get an error:
ffmpeg -y -framerate 25 -start_number 0000001 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -vf scale=1920:-1 -pix_fmt yuv420p -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)' -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2 '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
Error: Filtergraph 'scale=720:-1' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
Isn't it possible to combine these inputs and scale them? Or... where is my misunderstanding?
Don't mix -vf and -filter_complex. Do all filtering in one filtergraph.
ffmpeg -y -framerate 25 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -filter_complex '[0]scale=1920:-2[v0];[1][v0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01),format=yuv420p' -c:v libx264 -crf 18 -preset slow -c:a aac -shortest '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
No need for -strict -2. It does nothing for modern ffmpeg.
I replaced -pix_fmt yuv420p with format=yuv420p so it is more organized.
-start_number 0000001 is not needed because 1 is the default.
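If you prefer explicit mapping (a sketch of my own, not part of the original answer, with the link labels shortened), you can label the final filtergraph output and select the audio input with -map:
ffmpeg -y -framerate 25 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -filter_complex '[0]scale=1920:-2[v0];[1][v0]scale2ref=h=ow/mdar:w=iw/6[logo][base];[logo]format=argb,colorchannelmixer=aa=0.95[logo_t];[base][logo_t]overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01),format=yuv420p[vout]' -map '[vout]' -map 2:a -c:v libx264 -crf 18 -preset slow -c:a aac -shortest '/var/www/html/webcam/2020-01-05-tag-1920.mp4'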

Crop, Resize and Cut all in one command - FFMPEG

I am trying to do three tasks with FFmpeg:
Crop a video without losing quality
Resize (upscale) the cropped video with good quality
Cut a specific part of the upscaled video without losing quality
Here are the commands I use:
to crop: video og.mp4 to video og1.mp4
ffmpeg -i og.mp4 -vf "crop=1330:615:22:120" -c:v libx264 -crf 1 -preset veryslow -c:a copy og1.mp4
to resize: video og1.mp4 (converted above) to video og2.mp4
ffmpeg -i og1.mp4 -vf scale=1920:-1 -c:v libx264 -crf 1 -preset veryslow -c:a copy og2.mp4
to cut: video og2.mp4 (converted above) to og3.mp4
ffmpeg -i og2.mp4 -ss 00:00:08.190 -t 00:00:11.680 -c:v libx264 -crf 1 -preset veryslow -c:a copy og3.mp4
I want to achieve the highest quality 1920-wide video (irrespective of height and file size).
Is there a way to do the above tasks in one command, or in less time, with the best quality?
Please also advise if there is a better command or parameters to use.
Thanks
You can combine all commands by using a single filterchain, adding the trim as well:
ffmpeg -ss 8.190 -t 11.680 -i og.mp4 -vf "crop=1330:615:22:120,scale=1920:-2" -c:v libx264 -crf 1 -c:a copy og1.mp4
With crf 1, a slow preset is unnecessary.
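A side note of my own (not from the original answer): if truly lossless H.264 is acceptable despite the much larger files, libx264 also has a lossless mode at -crf 0, e.g. (hypothetical output name):
ffmpeg -ss 8.190 -t 11.680 -i og.mp4 -vf "crop=1330:615:22:120,scale=1920:-2" -c:v libx264 -crf 0 -c:a copy og_lossless.mp4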

ffmpeg - Convert MP4 to WebM, poor results

I am trying to encode a video to webm for playing through a HTML5 video tag. I have these settings...
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:a 128k -b:v 1M -c:a libopus output.webm
The results aren't great; the video has lost a lot of its sharpness. Looking at the original file, I can see the bitrate is 1694 kb/s.
Are there any settings I can add or change to improve the output? Would a 2-pass encode maybe improve things?
Try with
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 30 -b:v 0 -b:a 128k -c:a libopus output.webm
Adjust the CRF value until the quality/size tradeoff is OK. Lower values produce bigger but better-looking files.
Try to run two passes:
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 1 -an -f webm -y /dev/null
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 2 -c:a libopus -b:a 128k output.webm
From https://trac.ffmpeg.org/wiki/Encode/VP9
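One more knob that may help encode speed (my addition, not from the answers above): libvpx-vp9 supports row-based multithreading via -row-mt 1 (with a recent enough libvpx), e.g.:
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 30 -b:v 0 -row-mt 1 -c:a libopus -b:a 128k output.webm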

FFMPEG image not updating

THE INPUT FILES
An overlay image that is being updated every 5 seconds by a Python script
A small MP4 file that will be looped by a concat input
An MP3 file as audio source
THE COMMAND (UPDATED)
This is the command I'm currently using to combine and stream the inputs.
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt
-r 1 -loop 1 -f image2 -i overlay.png
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100
-filter_complex "[1:v][2:v] overlay=0:0" -map 0:a -strict -2
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
Also tried using -framerate 1 instead of -r 1.
THE ISSUE
So the issue is that the image doesn't always update. Sometimes it updates every couple of seconds at the start but stops after 10-20 seconds, with no difference in the log output, and sometimes it just doesn't update at all.
I can, however, confirm that the image is being updated by the Python script; FFmpeg is just not picking this up.
I read that setting the input format of the image to image2 should allow it to update, so I am not sure what is wrong or what I can do to improve it.
I'm working on the same task, and finally, I think, I found the answer.
Because the streams differ from each other, we must reset their timestamps with setpts=PTS-STARTPTS so that they all begin at the same zero timestamp. Also, try using image2pipe instead of image2.
This is your command with the timestamp reset:
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt
-r 1 -loop 1 -f image2pipe -i overlay.png
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100
-filter_complex "[1:v]setpts=PTS-STARTPTS[out_main]; [2:v]setpts=PTS-STARTPTS[out_overlay]; [out_main][out_overlay]overlay=0:0" -map 0:a -strict -2
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
P.S. And I think there is no need for -r or -framerate anymore.
