How to generate frames using the ffmpeg `select` filter with a custom function

I have a video and want to generate some frames to serve as video cover candidates. The frames should satisfy the conditions below:
frames from varied scenes.
frames containing a good-looking person, with no closed eyes, etc.
I found the ffmpeg select filter, e.g.
ffmpeg -i input.mp4 -vf "select=gt(scene\,0.4)" -frames:v 5 -vsync vfr frames-diff-%02d.png
which generates frames that have more than a 40% scene change compared to the previous frame.
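While experimenting, I also noticed that the per-frame scene score can apparently be dumped alongside the selection (a sketch based on my reading of the docs: the select filter exposes the score as lavfi.scene_score, which the metadata filter can print to a file):
ffmpeg -i input.mp4 -vf "select=gt(scene\,0.4),metadata=print:file=scene_scores.txt" -frames:v 5 -vsync vfr frames-diff-%02d.png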
However, it seems that only the built-in select expressions are supported, with no customization possible.
Does ffmpeg support calling an external function/API/local command/script to evaluate whether a frame should be selected?
Are there any other solutions to accomplish my goal?
Thanks.
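Since the select expression language cannot call out to external code, one workaround (a sketch; score.py is a hypothetical stand-in for any face/eye-detection script) is a two-step pipeline: let ffmpeg extract scene-change candidates, then rank them with an external tool:
ffmpeg -i input.mp4 -vf "select=gt(scene\,0.4)" -vsync vfr cand-%03d.png
for f in cand-*.png; do python score.py "$f"; done  # keep the highest-scoring frames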

Related

ffmpeg glitch while converting DNG sequence into mp4 video

I have two problems with FFmpeg when I use it to join a DNG file sequence into an mp4 video file. I also need to downscale the video from 6016x3200 to 2030x1080.
First of all, I got an almost black screen in the resulting video and had to play with the gamma and brightness options. But it was not enough!
New problems:
Something strange happens with the aspect ratio in the resulting video file: in the first frame the aspect is normal, just like in the original picture, but all the remaining frames get squeezed. I can't figure out why this happens (see picture attached).
Colors are desaturated, despite the fact that I set the "saturation" option to the maximum value. Also, the first frame of the video is different from the rest (the DNG files are all similar; the first is no exception).
I tried prores codec as well, with the same result.
The command I use is simple:
ffmpeg.exe -start_number 1 -i "K:\video\copter_R%5d.dng" -c:v libx264 -vf "fps=25,format=yuv420p, eq=gamma=3.2:brightness=0.2:contrast=1.6:saturation=3, scale=w=2030:h=1080" e:\output.mp4
I also tried different variants of the scale parameter, such as "scale=-1:1080".
UPDATE: ffmpeg log report for operation:
https://drive.google.com/file/d/1H6bdpU0Eo4WfR3h-SRtgf7WBNYVFRwz2/view?usp=sharing
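One direction worth trying (an editorial sketch, not from the original post) is to force a square sample aspect ratio with setsar, since a sample-aspect-ratio mismatch can make players squeeze frames, and to apply format=yuv420p after scaling:
ffmpeg -start_number 1 -i "K:\video\copter_R%5d.dng" -vf "fps=25,eq=gamma=3.2:brightness=0.2:contrast=1.6:saturation=3,scale=2030:1080,setsar=1,format=yuv420p" -c:v libx264 e:\output.mp4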

Showing in-video visual progress bar with FFMPEG?

As OBS Studio lacks a visual indicator to show how far a video has progressed (and when you need to advance to the next scene), I was wondering if there is a command-line option (or solution) to get FFMPEG to re-encode the video and show a progress bar at the bottom of the video that shows how long the video has been playing so far.
Is there such a feature?
Here's a simple 3 second example using an animated overlay:
ffmpeg -i input.mp4 -filter_complex "color=c=red:s=1280x10[bar];[0][bar]overlay=-w+(w/10)*t:H-h:shortest=1" -c:a copy output.mp4
What you will have to change:
In the color filter I used 1280 as an example to match the width of input.mp4. You can use ffprobe to get the width or the scale2ref filter to resize to match input.mp4.
In the overlay filter I used 10 as an example for the total duration in seconds of input.mp4. You can use ffprobe to get the duration.
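For example (a sketch; the filename is assumed), ffprobe can report both values:
ffprobe -v error -select_streams v:0 -show_entries stream=width -of csv=p=0 input.mp4
ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4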

ffmpeg output doesn't play on multiple devices

I have read the other Stack Overflow posts regarding this topic, so I am fairly certain this is not an exact duplicate.
ffmpeg exports a video that seems to play only in select players. I want to export a video that plays on iPhone/Mac/general players. I have seen the suggestions for the -pix_fmt yuv420p flag, but this does not seem to work anymore; I read that Mac has since changed its systems in a way that makes it incompatible.
I am running:
ffmpeg -start_number 1 -framerate 4 -pix_fmt yuv420p -i screen%01d.png output.mp4
This all works fine and I can see the video by doing:
ffplay output.mp4
But I would like to be able to transfer this to mobile or general playback. Is there any way to do this, ideally using ffmpeg? I'd rather not use two tools to do one job.
Works on gmail
Doesn't work on QuickTime Player
Doesn't work on Flip Player
Doesn't work on iPhone
The order of options is important. It should be:
ffmpeg -start_number 1 -framerate 4 -i screen%01d.png -pix_fmt yuv420p output.mp4
Now pix_fmt is set as an output option. Originally, it was trying to force the input format, but since PNGs are images with metadata and not raw pixel data, the option had no effect. Additionally, for web use, it's good to also set -movflags +faststart as an output option.
Note that old versions of VLC couldn't play videos with a frame rate below 6. This could possibly be an issue with a few other players as well. Add -r 8 as an output option to avoid that.
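Putting the answer's suggestions together (a sketch combining all three points):
ffmpeg -start_number 1 -framerate 4 -i screen%01d.png -pix_fmt yuv420p -movflags +faststart -r 8 output.mp4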

Multiple side-to-side video streams in one file without transcoding

I am investigating the possibility of storing video streams coming from a few sources, already encoded in H.264, without transcoding, as the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into mp4/avi/mkv.
I believe the mkv container supports this kind of packaging, but I've not been able to find appropriate options for ffmpeg or another tool to store it this way. Everything I have tried just does very slow transcoding into one big H.264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. Simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the "side-to-side" part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a new one and encode it as a new single stream.
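A filled-in example of the template (filenames assumed):
ffmpeg -i input1.mp4 -i input2.mp4 -map 0 -map 1 -codec copy -f matroska output.mkv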

Produce waveform video from audio using FFMPEG

I am trying to create a waveform video from audio. My goal is to produce a video that looks something like the image attached to the original post.
For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why the beats go past so quickly when using showwaves, while showspectrum produces a video where I can see each individual beat.
ShowWaves
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4
ShowSpectrum
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -map 0:a output_spec.mp4
I posted the simple examples because I didn't want to confuse the issue by adding all the variations I have tried.
In practice I suppose I can get away with the output from showspectrum but I'd like to understand where/how I am thinking about this incorrectly. Thanks for any advice.
What showwaves does is show the waveform in real time, and the display window is 1/framerate seconds, i.e. if the video output is 25 fps, then each frame shows the waveform of 40 ms of audio. There's no 'history' or 'memory', so you can't (directly) get a scrolling output like the one your reference video shows.
The workaround for this is to use the showwavespic filter to produce a single frame showing the entire waveform at a high enough horizontal resolution. Then do a scrolling overlay of that picture over a desired background, at a speed such that the scroll lasts as long as the audio.
Basic command template would be:
ffmpeg -loop 1 -i bg.png -loop 1 -i wavespic.png -i audio.mp3 -filter_complex "[0][1]overlay=W-w*t/mp3dur:y=SOMEFIXEDVALUE" -shortest waves.mp4
mp3dur above should be replaced with the duration of the audio file.
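A filled-in sketch (assuming a 16-second mp3 and arbitrary sizes/positions; the waveform picture is generated first with showwavespic at a high horizontal resolution):
ffmpeg -i audio.mp3 -filter_complex "showwavespic=s=6000x100" -frames:v 1 wavespic.png
ffmpeg -loop 1 -i bg.png -loop 1 -i wavespic.png -i audio.mp3 -filter_complex "[0][1]overlay=W-w*t/16:y=100[v]" -map "[v]" -map 2:a -shortest waves.mp4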
