Select audio track to use when generating a waveform image - ffmpeg

ffmpeg can draw a waveform with showwavespic, which works fine. In my case, however, the file has multiple audio tracks, and I want to specify which track to use for drawing the waveform.
The default example is: ffmpeg -i input -filter_complex "showwavespic=s=640x120" -frames:v 1 output.png
I tried adding -map 0:a:0 in between, but that gives strange ffmpeg errors.
Does anybody know how I can set the track index to use without first extracting the desired audio track?

This can be achieved using filtergraph link labels (see https://ffmpeg.org/ffmpeg-filters.html#Filtergraph-syntax-1) to select the relevant input stream, e.g. to draw the seventh audio track (index 6):
ffmpeg -i input -filter_complex "[0:a:6]showwavespic=s=640x240" -frames:v 1 output.png
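A self-contained way to try this (assuming an ffmpeg build with lavfi support; the file names and tone frequencies below are placeholders) is to synthesize a file with two audio tracks and then draw only the second one:

```shell
# Synthesize a short file with two different sine-tone audio tracks.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=2" \
       -f lavfi -i "sine=frequency=880:duration=2" \
       -map 0:a -map 1:a -c:a pcm_s16le two_tracks.mka

# Draw the waveform of the second audio track (index 1) only.
ffmpeg -y -i two_tracks.mka \
       -filter_complex "[0:a:1]showwavespic=s=640x120" \
       -frames:v 1 wave_track2.png
```

The [0:a:1] label means "file 0, audio stream 1", so no pre-extraction of the track is needed.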

Related

How to encode video then filter to change position block frame output?

Let's say I have a 1920x1080 video, 3 minutes long. I want to split each frame into 4 blocks, so each block is 480x270. Then I want to move block 1 to block 4's position and block 4 to block 1's position. https://i.ibb.co/QkGpKN6/Naruto-Uzumaki.png
I can extract the frames of the video to images and then edit those images, but I lose quality and the files get bigger. The other disadvantage is that it takes twice as long.
ffmpeg -r 1 -i input.mp4 -r 1 output_%d.jpg
# convert the images back to a video after the blocks have been repositioned
ffmpeg -r 23.97602397602398 -i output_%d.jpg -c:v h264 -r 23.97602397602398 output.mp4
Is there a direct way to do this?
FFmpeg's -vf, -af, and -filter_complex options accept a combination of filters as a filtergraph (commas chain filters; semicolons separate chains).
For your purpose, you can try this filter combo first (before crop+overlay):
ffmpeg -i input -vf 'untile=2x2,shuffleframes=0 2 1 3,tile=2x2' output
You probably want a different shuffling order; experiment to figure it out.
Read the doc for their options: https://ffmpeg.org/ffmpeg-filters.html
EDIT
If you want to use crop-overlay instead, try
ffmpeg -i input.mp4 -filter_complex \
"[0]crop=427:240:0:0[v1];\
[0]crop=427:240:427:0[v2];\
[0]crop=427:240:0:240[v3];\
[0]crop=427:240:427:240[v4];\
[0][v1]overlay=x=0:y=0[v5];\
[v5][v2]overlay=x=427:y=0[v6];\
[v6][v3]overlay=x=0:y=240[v7];\
[v7][v4]overlay=x=427:y=240[out]"\
-c:v libx264 -crf 30 output.mp4
Personally, I like the other approach better (it's far more compact).
Another option is to use xstack filter instead of overlay filters (one xstack takes care of all the overlay ops).
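As a rough sketch of that xstack variant (the lavfi test input, link labels, and output name below are my own placeholders; swap the synthetic source for your input.mp4), split feeds the frame to four crops and a single xstack reassembles the quadrants with the top-left and bottom-right blocks exchanged:

```shell
ffmpeg -y -f lavfi -i "testsrc=size=320x240:rate=10:duration=1" \
-filter_complex \
"[0:v]split=4[a][b][c][d];\
[a]crop=iw/2:ih/2:0:0[tl];\
[b]crop=iw/2:ih/2:iw/2:0[tr];\
[c]crop=iw/2:ih/2:0:ih/2[bl];\
[d]crop=iw/2:ih/2:iw/2:ih/2[br];\
[br][tr][bl][tl]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[out]" \
-map "[out]" quad_swap.mp4
```

Changing the order of the labels feeding xstack changes which block lands in which position, so other permutations are just a relabeling away.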

How can I resize an overlay image with ffmpeg?

I am trying to add an overlay to a video using the following command
ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "[0:v] [1:v] overlay=$overlayPosition" -pix_fmt yuv420p -c:a copy "$outputPath"
However, I would like to be able to resize the overlay to some arbitrary resolution (no need to keep proportions). Although I followed a couple of similar solutions from SO (like FFMPEG - How to resize an image overlay?), I am not quite sure about the meaning of the parameters or what I need to add in my case.
I would need to add something like (?)
[1:v]scale=360:360[z] [1:v]overlay=$overlayPosition[z]
This doesn't seem to work so I'm not sure what I should be aiming for.
I would appreciate any assistance, perhaps with some explanation.
Thanks!
You have found all the parts. Let's bring them together:
ffmpeg -i "$videoPath" -i "$overlayPath" -filter_complex "[1:v]scale=360:360[z];[0:v][z]overlay[out]" -map "[out]" -map "0:a" "$outputPath"
For explanation:
We're executing two filters here within the filter_complex parameter, separated by a semicolon (;).
First, we scale the second input ([1:v]) to a new resolution and store the output under the label z (you can use any name here).
Second, we bring the first input video ([0:v]) and the scaled overlay ([z]) together and store the output under the label out.
Now it's time to tell ffmpeg what it should pack into our output file:
-map "[out]" (for the video)
-map "0:a" (for the audio of the first input file)
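For a runnable demonstration of the same scale-then-overlay pattern (assuming lavfi support; the sizes, position, and file name are arbitrary placeholders), synthetic inputs can stand in for the two files:

```shell
# A plain colour background plus a test pattern scaled to 120x120
# and overlaid near the top-left corner.
ffmpeg -y \
  -f lavfi -i "color=c=blue:size=640x360:rate=10:duration=1" \
  -f lavfi -i "testsrc=size=320x240:rate=10:duration=1" \
  -filter_complex "[1:v]scale=120:120[z];[0:v][z]overlay=10:10[out]" \
  -map "[out]" overlay_demo.mp4
```

No -map "0:a" is given here because the synthetic inputs carry no audio track.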

FFMPEG - crop and pad a video (keep 3840x1920 but with black borders)

I am trying to crop a video so that I can remove a chunk of the content from the sides of a 360-degree video file using FFmpeg.
I used the following command and it does part of the job:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:0,pad=3840:1920:384:0 output.mp4
This will remove the sides of the video, and initially that was exactly what I wanted (A). Now I'm wondering if it is possible to crop in the same way but keep the top third of the video. A is what I have; B is what I want:
I thought I could simply do this:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:640,pad=3840:1920:384:640 output.mp4
But that doesn't seem to work.
Any input would be very helpful.
Use the drawbox filter to fill the cropped portions with the default colour, black.
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=1280:x=0:y=640:t=fill,drawbox=w=384:h=1280:x=3840-384:y=640:t=fill -c:a copy output.mp4
The first filter acts on the left side, and the 2nd on the right.
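A scaled-down, self-contained sketch of the same idea (assuming lavfi support; the 960x480 test pattern and bar sizes are my own stand-ins for the 3840x1920 case, keeping the same proportions):

```shell
# Fill 96-pixel-wide black bars on each side, covering only the
# lower two thirds of the frame (y=160 down to the bottom).
ffmpeg -y -f lavfi -i "testsrc=size=960x480:rate=10:duration=1" \
  -vf "drawbox=w=96:h=320:x=0:y=160:t=fill,drawbox=w=96:h=320:x=960-96:y=160:t=fill" \
  drawbox_demo.mp4
```

Because drawbox only paints over pixels, the frame size stays unchanged, which is exactly the "keep 3840x1920 but with black borders" requirement.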

ffmpeg normalization waveform

I'm creating waveforms for my audio player with this command:
ffmpeg -i source.wav -filter_complex "aformat=channel_layouts=mono,showwavespic=s=1280x90:colors=#000000" -frames:v 1 output.png
Sometimes the waveform looks bad, like here:
For other songs it sometimes looks good, like here:
So the first waveform is tiny. How can I normalize/scale the waveform so it fills the output image's 90 px height?
There's a filter you can add called compand, which scales the waveform vertically. You could update your ffmpeg command to be:
ffmpeg -i source.wav -filter_complex "compand,aformat=channel_layouts=mono,showwavespic=s=1280x90:colors=#000000" -frames:v 1 output.png
You can check out the documentation here: https://trac.ffmpeg.org/wiki/Waveform
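To see the effect, here is a self-contained comparison (assuming lavfi support; the deliberately quiet sine tone and file names are placeholders) rendering the same signal without and with compand:

```shell
# A quiet tone (5% volume) drawn as-is: the waveform comes out tiny.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=2,volume=0.05" \
  -filter_complex "aformat=channel_layouts=mono,showwavespic=s=640x90" \
  -frames:v 1 wave_plain.png

# Same tone run through compand first: the waveform is scaled up.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=2,volume=0.05" \
  -filter_complex "compand,aformat=channel_layouts=mono,showwavespic=s=640x90" \
  -frames:v 1 wave_compand.png
```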

How to capture multiple screenshot from online video stream using ffmpeg with specific seek time

I'm using ffmpeg to take screenshots from an online video stream. I want to seek to multiple points on the timeline. I've used the following command to capture one screenshot at a given seek time:
ffmpeg -ss 00:02:10 -i "stream-url" -frames:v 1 out1.jpg
How can I take multiple screenshots at multiple seek times? I've searched for a solution, but without success.
I've used the following command to take multiple screenshot as follows:
ffmpeg -noaccurate_seek -ss 00:01:10 -i "stream-url" -map 0:v:0 -vframes 1 -f mpeg "thumb/output_01.jpg" -ss 00:02:10 -i "stream-url" -map 1:v:0 -vframes 1 -f mpeg "thumb/output_02.jpg"
Is there any way to generate the screenshots from the same input via seek commands? How can I make it faster, and how can I avoid repeating the input (-i) parameter? I've also tried other commands, but they were even slower. Can anyone help me?
There's no easy way I know to specify a number of arbitrary seek points from which to extract frames (similar question here).
However, seeking is very fast with the way you specified. Instead of constructing a complex command, you could just download the YouTube video using youtube-dl (if you haven't done that already) and generate the commands like this:
ffmpeg -ss 00:01:10 -i input -frames:v 1 out1.jpg
ffmpeg -ss 00:02:05 -i input -frames:v 1 out2.jpg
ffmpeg -ss 00:03:20 -i input -frames:v 1 out3.jpg
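Those three invocations can also be wrapped in a small shell loop; this sketch synthesizes a short stand-in input with lavfi (the seek points and file names are placeholders for your own):

```shell
# Synthesize a ~4-minute stand-in input (1 fps keeps this fast);
# replace this with your real file or stream URL.
ffmpeg -y -f lavfi -i "testsrc=size=320x240:rate=1:duration=250" input.mp4

# One fast-seek screenshot per seek point.
for t in 00:01:10 00:02:05 00:03:20; do
  ffmpeg -y -ss "$t" -i input.mp4 -frames:v 1 "shot_$(echo "$t" | tr ':' '-').png"
done
```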
Note that exporting JPG might lead to low quality. Using PNG is preferred; you will get lossless frames that you can handle with another program later (e.g. to resize or compress).
If you want to get frames from regular intervals, use the fps filter to drop the framerate:
ffmpeg -i input -filter:v fps=1/60 out%02d.jpg
This will output a frame every minute (1/60 frames per second = 1 frame per minute), with two zero-padded digits as output numbers. You could additionally offset the start by providing a -ss option before the input file.
