ffmpeg glitch while converting DNG sequence into mp4 video

I have two problems with FFmpeg when I use it to join a DNG file sequence into an mp4 video file. I also need to downscale the video from 6016x3200 to 2030x1080.
First of all, I got an almost black screen in the resulting video and had to play with the gamma and brightness options. But that was not enough!
New problems:
Something strange happens with the aspect ratio in the resulting video file: in the first frame the aspect is normal, just like in the original picture, but all the remaining frames come out squeezed. I can't figure out why this happens (see the picture attached).
The colors are desaturated, despite the fact that I set the "saturation" option to its maximum value. Also, the first frame of the video differs from the rest, while the DNG files are all similar, the first one being no exception.
I tried the prores codec as well, with the same result.
The command I use is simple:
ffmpeg.exe -start_number 1 -i "K:\video\copter_R%5d.dng" -c:v libx264 -vf "fps=25,format=yuv420p, eq=gamma=3.2:brightness=0.2:contrast=1.6:saturation=3, scale=w=2030:h=1080" e:\output.mp4
I also tried different variants of the scale parameter, such as "scale=-1:1080".
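A variant worth trying (a sketch, not a verified fix): in a -vf chain the filters run left to right, so format=yuv420p placed before eq and scale forces an 8-bit 4:2:0 conversion before those filters ever see the frames. Moving it to the end, and letting scale pick an even width with -2, looks like this:
ffmpeg.exe -start_number 1 -i "K:\video\copter_R%5d.dng" -vf "fps=25,eq=gamma=3.2:brightness=0.2:contrast=1.6:saturation=3,scale=-2:1080,format=yuv420p" -c:v libx264 e:\output.mp4
The dark, desaturated frames themselves are commonly attributed to ffmpeg decoding DNG through its TIFF decoder without applying the camera's color matrix, so developing the DNGs to TIFF or PNG with a raw converter first may still be necessary.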
UPDATE: ffmpeg log report for the operation:
https://drive.google.com/file/d/1H6bdpU0Eo4WfR3h-SRtgf7WBNYVFRwz2/view?usp=sharing

Related

My filters seem to be applied in the wrong order

I'm using ffmpeg to join frames into a video with some parameters.
Here is a sample of the command I run:
"ffmpeg -y -r 24 -f image2 -i "C:\Users\Pictures\me\frame%04d.bmp" -filter_complex "[0:v]select=between(n,0,76)[selected];[selected]crop=in_w:in_h-60-60:0:60[cropped];[cropped]scale=w=2ceil(2048.0/20.5):h=2ceil(858.0 /20.5) " -c:v libx264 -q:v 1 -b:v 2M "C:\Users\me\Video\output.mp4""
When I run this command, I have already calculated the crop size to remove the black rectangles at the top and bottom of the frame (I tried cropdetect but it doesn't fit my use case, so I'm using another tool). My first thought was that ffmpeg would crop the input stream, so it would only crop away my black rectangles. But when I change my scale, it crops away part of the image.
So my understanding is that ffmpeg crops after scaling (maybe I'm wrong), and if I take the crop parameters from the input images, they are sure to be wrong when applied to the scaled video.
I tried using ";" and "," to separate my filters. I tried naming and not naming my streams between filters. Nothing seems to solve my issue.
What could I do to fix that, or am I understanding the issue incorrectly?
Thanks in advance
So actually I didn't understand the problem correctly. The filters are indeed applied in the correct order, but it seems like "scale" crops my video again, so I'm losing the bottom of my images.
I'm gonna investigate that.
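For reference, a minimal sketch of a chain where the crop demonstrably runs before the scale (filters in a single comma-separated chain run strictly left to right; -frames:v 77 stands in for the select filter, and the 1024 target width is an assumption, with -2 keeping the height divisible by two):
ffmpeg -y -r 24 -f image2 -i "C:\Users\Pictures\me\frame%04d.bmp" -frames:v 77 -vf "crop=in_w:in_h-120:0:60,scale=1024:-2" -c:v libx264 -b:v 2M "C:\Users\me\Video\output.mp4"
Because the crop expressions are evaluated against the input frame (in_w/in_h), the parameters measured on the source images stay valid.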

Multiple side-to-side video streams in one file without transcoding

I am investigating the possibility of storing video streams coming from a few sources, already encoded in h264, without transcoding, since the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into mp4/avi/mkv.
I believe the mkv container supports this kind of packaging, but I've not been able to find the appropriate options for ffmpeg or any other tool to store it this way. What I get instead is a very slow transcode into one big h264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. A simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the "side-to-side" part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a single new stream and encode that.
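If a re-encode does become acceptable, a minimal burn-in sketch with hstack (assuming both inputs share the same height and both carry audio; the file names are placeholders):
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]" -map "[v]" -map "[a]" -c:v libx264 output.mp4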

ffmpeg how to crop and scale at the same time?

I'm trying to convert a video with black bars to one without, and if the source is 4k, I want the video converted to 1080p.
Now to do this, I'm using the following command:*
ffmpeg -i input ... -filter:v "crop=..." -filter:V "scale=1920:-1" output
But running this, I found that the end product still has said black bars and is 1920x1080 as opposed to the 1920x800 I'd expect.
What gives, why does this not work?
*: Other settings have been left out for convenience.
I got it to work by putting both the crop and the scale in the same -vf option. I was cropping and then upscaling an old video game, and I just did this:
-vf crop=256:192:2:16,scale=-2:1080:flags=neighbor
I knew it worked as soon as I saw it display the output file size as 1440x1080 (4:3 ratio at 1080p).
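Applied to the letterboxed-4k case from the question, the same one-chain idea would look like this (the crop geometry is hypothetical; substitute the values measured on your source):
ffmpeg -i input.mkv -vf "crop=3840:1600:0:280,scale=1920:-2" -c:v libx264 output.mkv
The crop runs first and removes the bars from the full-size frame; scale then brings the result down to 1920x800.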

Different video players showing incorrect mp4 resolution after ffmpeg conversion

After getting help from https://stackoverflow.com/a/40601020/6318164 on how to convert webm to mp4, the result avoided distorting the video's aspect ratio by setting only the height with -vf scale=-2:720.
I then came across another problem: I found that both the width and the height had to be values the video players support, when I thought only the height had to be specified.
After browsing around I found this script, https://stackoverflow.com/a/35487394/6318164, where I can fit the video onto a canvas with a common standard width and height. It shrinks the video to fit inside the center of the specified canvas without losing the ratio, filling the empty space with black padding, if I'm correct, which is the result I want.
However, although it solved the playback problems in all the players, I found that different video players report different resolutions for the same video.
I've modified the script here for use in a Linux terminal.
X=1280; Y=720; ffmpeg -i old.webm -t 5 -vf "scale=min(iw*$Y/ih\,$X):min($Y\,ih*$X/iw),pad=$X:$Y:($X-iw)/2:($Y-ih)/2" new.mp4
This is what I found about the resolution differences for the values I set (X=1280; Y=720):
webm          -> mp4        Player
=========================================================
1280x752      -> 1280x720   X-plore (Android)
Not supported -> 1339x720   Telegram (Android)
1338x752      -> 1340x720   GNOME MPlayer (Linux)
Not supported -> ????????   Built-in Video Player (Android)
The question is: am I doing anything wrong in the ffmpeg conversion that makes different players report incorrect resolutions? I checked some other videos I have and they show the correct resolutions, except this converted one.
Edit
With the help of the accepted answer, this was my working command, if anyone needs it:
X=1280; Y=720; ffmpeg -i input.webm -vf "scale='if(gt(a*sar,16/9),${X},${Y}*iw*sar/ih)':'if(gt(a*sar,16/9),${X}*ih/iw/sar,${Y})',pad=${X}:${Y}:(ow-iw)/2:(oh-ih)/2,setsar=1" output.mp4
Add setsar=1 after pad.
Also, your scale expression doesn't account for videos with non-square pixels. Use the expression in this answer.
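To check what a file actually stores, and whether the sample aspect ratio is the culprit, ffprobe (shipped with ffmpeg) can print the relevant fields:
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio -of default=noprint_wrappers=1 new.mp4
A non-1:1 sample_aspect_ratio would explain the discrepancy: players that honor it stretch the stored 1280x720 frame (hence readings like 1339x720 or 1340x720), while players that ignore it report the storage size.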

Overlaying video with ffmpeg

I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying plays back extremely slowly.
Here's what I'm doing:
Expand the left video to the final video dimensions:
ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi
Overlay the right video on top of the left one:
ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] overlay=320:0" combined_video.avi
In the resulting video, the right video plays back at about half the speed of the left one. Any idea how I can get these files to sync up?
As the user 65Fbef05 said, both videos must have the same frame rate.
Use the -r option to set the frame rate; it must be the same in both videos.
To find the frame rate, use:
ffmpeg -i video1
ffmpeg -i video2
and look for the line that contains "Stream #0.0: Video:"; on that line you'll find the fps of the movie.
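A more script-friendly way to read the frame rate is ffprobe, which ships with ffmpeg:
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 video1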
Also, I don't know what problems you'll encounter by mixing the 2 audio tracks. For my part, I would try to use the audio from the movie that gets overlaid on top and discard the rest.
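As a sketch of both steps in one command, normalizing the frame rate inside the filter graph (the 25 fps value is an assumption; the pad and overlay geometry come from the question, and the audio of the overlaid movie is kept as suggested above):
ffmpeg -i left.avi -i right.avi -filter_complex "[0:v]fps=25,pad=640:240:0:0:black[l];[1:v]fps=25[r];[l][r]overlay=320:0[v]" -map "[v]" -map 1:a? combined_video.avi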
