ffmpeg color, size and amplitude

Trying to get ffmpeg to create an audio waveform while being able to control the image size, color, and amplitude. I have tried this (and many variations), but it just returns the error Unmatched ".
ffmpeg -i input -filter_complex "aformat=channel_layouts=mono,compand=gain=-3,showwavespic=s=1000x350,color=s=1000x350:color=A072FD” -frames:v 1 output.png
Thoughts?

Use
ffmpeg -i input -filter_complex
"aformat=channel_layouts=mono,
compand=gain=-3,
showwavespic=s=1000x350:colors=A072FD"
-frames:v 1 output.png
Your closing quote is curly (”), which doesn't match the opening straight quote ("). Besides that, by putting a , after s=1000x350 you prematurely terminated the argument list for showwavespic; options within a single filter are separated by colons, not commas.
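As a minimal sketch (POSIX shell assumed), the corrected graph can be assembled piece by piece to make the two separators visible: `:` joins the options of a single filter, while `,` chains filters together.

```shell
# ':' separates options *inside* one filter; ',' separates filters in a chain.
# A curly ” is just a literal character to the shell, so only a straight
# quote can close the string wrapped around the graph.
wave='showwavespic=s=1000x350:colors=A072FD'
graph="aformat=channel_layouts=mono,compand=gain=-3,$wave"
printf '%s\n' "$graph"
```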

Related

ffmpeg watermark: scale2ref output can't be used in second overlay

I was able to add a watermark at two positions (top left & bottom right) of a video, scaling the image height to a tenth of the video height, in one command:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[1:v][flag]scale2ref=oh*mdar:ih/10[logo-out2][video-out2];[video-out2][logo-out2]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
But the above command is redundant, so I removed the second scale2ref:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[flag][logo-out]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
But sadly, an error occurs:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb195013c00] Invalid stream specifier: logo-out.
Last message repeated 1 times
Stream specifier 'logo-out' in filtergraph description [1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[flag][logo-out]overlay=W-w-10:H-h-10 matches no streams
I know the error occurs because the first overlay didn't set an output specifier for the image, but it seems we can't do that? I only know that overlay can set a video stream specifier.
How can I use the [logo-out] pad output by scale2ref in the second overlay?
An output generated inside a filtergraph can only be consumed once. To reuse it, split it first.
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[logo-out]split=2[logo-left][logo-right];[video-out][logo-left]overlay=10:10[flag];[flag][logo-right]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
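A minimal sketch of how the fixed graph fits together (labels as in the command above; POSIX shell assumed): split duplicates the scaled logo, so each overlay consumes its own copy.

```shell
# A filtergraph pad can only be read once, so split duplicates the scaled
# logo; each overlay then consumes one copy.
scale='[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out]'
copies='[logo-out]split=2[logo-left][logo-right]'
top='[video-out][logo-left]overlay=10:10[flag]'
bottom='[flag][logo-right]overlay=W-w-10:H-h-10'
graph="$scale;$copies;$top;$bottom"
printf '%s\n' "$graph"
```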

FFmpeg doesn't recognize correct input height in hstack command

Having an issue with an hstack FFmpeg command that has stumped me.
input1 and input2 are both vertical 360x640 videos. I am cropping input1 to a square, merging it vertically with input2, then cropping a vertical strip off each side of the resulting video and horizontally merging these three videos (left strip, middle vertically-stacked video, right strip).
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex [0:v]crop=360:360:0:140,fps=30[v0],[1:v]fps=30[v1],[v0][v1]vstack=inputs=2[m],[m]crop=101:ih:0:0[l],[m]crop=101:ih:259:0[r],[l][m][r]hstack=inputs=3[v];[0:a][1:a]amix[a] -map [v] -map [a] -preset ultrafast ./stackedOutput.mp4
When I run this, I get an error:
[Parsed_hstack_6 @ 0x7ff5394482c0] Input 1 height 640 does not match input 0 height 1000.
[Parsed_hstack_6 @ 0x7ff5394482c0] Failed to configure output pad on Parsed_hstack_6
(Full FFmpeg output here.)
But the height of [m] (input 1 to hstack) is not 640, it's 1000. I have verified this by running the commands independently.
Why is FFmpeg not recognizing the correct height of [m]? Any help or pointers greatly appreciated! Thanks in advance!
Use:
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex "[0:v]crop=360:360:0:140,fps=30[v0];[1:v]fps=30[v1];[v0][v1]vstack=inputs=2,split=3[lc][m][rc];[lc]crop=101:ih:0:0[l];[rc]crop=101:ih:259:0[r];[l][m][r]hstack=inputs=3[v];[0:a][1:a]amix[a]" -map "[v]" -map "[a]" -preset ultrafast ./stackedOutput.mp4
Two problems:
Your syntax is incorrect. Filters in the same linear chain are separated by commas, and distinct linear chains of filters are separated by semicolons. See filtering introduction.
You can't re-use the output from a filter multiple times. In your command [m] was already consumed by the first crop, so it is no longer available for the following crop and hstack. The split filter can be used to make multiple copies of a filter output.
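A minimal sketch (POSIX shell assumed) that assembles the corrected graph chain by chain, making the comma/semicolon rule and the three split copies easy to see:

```shell
# One linear chain uses commas between filters; distinct chains are joined
# with semicolons. split=3 yields the three copies that the two crops and
# hstack each consume exactly once.
stack='[v0][v1]vstack=inputs=2,split=3[lc][m][rc]'
left='[lc]crop=101:ih:0:0[l]'
right='[rc]crop=101:ih:259:0[r]'
row='[l][m][r]hstack=inputs=3[v]'
graph="[0:v]crop=360:360:0:140,fps=30[v0];[1:v]fps=30[v1];$stack;$left;$right;$row"
printf '%s\n' "$graph"
```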

Select audio track to use when generating a waveform image

ffmpeg can draw a waveform with showwavespic which works fine. But in my case, I have a file with multiple audio tracks and I want to specify which audio track to use to draw the waveform.
The default example is: ffmpeg -i input -filter_complex "showwavespic=s=640x120" -frames:v 1 output.png
I tried to add a -map 0:a:0 in between, but that gives strange ffmpeg errors.
Does anybody know how I can set the track index to use without first extracting the desired audio track?
This can be achieved using filtergraph link labels (see https://ffmpeg.org/ffmpeg-filters.html#Filtergraph-syntax-1) to select the relevant input, e.g.:
ffmpeg -i input -filter_complex "[0:a:6]showwavespic=s=640x240" -frames:v 1 output.png
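As a sketch: the stream specifier [0:a:N] picks the N-th audio stream (0-based) of the first input, so the track index can simply be parameterized. The index here is a hypothetical example.

```shell
# [0:a:N] = N-th audio stream (0-based) of input 0; no pre-extraction needed.
track=6                                   # hypothetical: the 7th audio track
graph="[0:a:$track]showwavespic=s=640x240"
printf '%s\n' "$graph"
```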

How can I resize an overlay image with ffmpeg?

I am trying to add an overlay to a video using the following command
ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "[0:v] [1:v] overlay=$overlayPosition" -pix_fmt yuv420p -c:a copy "$outputPath"
However, I would like to resize the overlay to some arbitrary resolution (no care for keeping proportions). Although I followed a couple of similar solutions from SO (like FFMPEG - How to resize an image overlay?), I am not quite sure about the meaning of the parameters or what I need to add in my case.
I would need to add something like (?)
[1:v]scale=360:360[z] [1:v]overlay=$overlayPosition[z]
This doesn't seem to work so I'm not sure what I should be aiming for.
I would appreciate any assistance, perhaps with some explanation.
Thanks!
You have found all the parts. Let's bring them together:
ffmpeg -i "$videoPath" -i "$overlayPath" -filter_complex "[1:v]scale=360:360[z];[0:v][z]overlay[out]" -map "[out]" -map "0:a" "$outputPath"
For explanation:
We're executing two filter chains within the filter_complex parameter, separated by a semicolon (;).
First, we scale the second video input ([1:v]) to a new resolution and store the output under the label "z" (you can use any name here).
Second, we bring the first input video ([0:v]) and the overlay ([z]) together and store the output under the label "out".
Now it's time to tell ffmpeg what it should pack into our output file:
-map "[out]" (for the video)
-map "0:a" (for the audio of the first input file)
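The two chains and the labels can be sketched like this (POSIX shell assumed; the overlay position is a hypothetical stand-in for $overlayPosition):

```shell
# Chain 1 resizes the overlay (ignoring aspect ratio on purpose);
# chain 2 composites it onto the main video. -map later selects the
# labelled [out] pad plus the first input's audio.
pos='10:10'                                # hypothetical overlay position
resize='[1:v]scale=360:360[z]'
comp="[0:v][z]overlay=$pos[out]"
graph="$resize;$comp"
printf '%s\n' "$graph"
```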

FFMPEG multiple commands using filter-complex

I've pieced together 3 commands, but my solution involves writing a number of temporary files. I would ultimately like to pipe the output of one command into the next, without the temporary files.
Although many questions discuss filter_complex (which I believe is how passing results as inputs is accomplished), I can't seem to find an example of one filter_complex command flowing into another (nested filter_complex commands?). In my example, two distinct inputs are required, resulting in one output.
/*
Brighten & increase saturation of original image
Remove white shape from black background silhouette, leaving a transparent shape
Overlay black background silhouette over brightened image. Creating a focus point
*/
ffmpeg -i OrigionalImage.png -vf eq=brightness=0.06:saturation=2 -c:a copy BrightenedImage.png
ffmpeg -i WhiteSilhouette.png -filter_complex "[0]split[m][a]; [a]geq='if(lt(lum(X,Y),16),255,0)',hue=s=0[al]; [m][al]alphamerge" -c:a copy TransparentSilhouette.png
ffmpeg -i BrightenedImage.png -i TransparentSilhouette.png -filter_complex "[0:v][1:v] overlay=(W-w)/2:(H-h)/2" -c:a copy BrightnedSilhouette.png
Two original inputs and the final output:
Original Image
White Silhouette
Brightened Silhouette
Use
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]geq='if(lt(lum(X,Y),16),255,0)',hue=s=0[al];[m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightnedSilhouette.png
You can also just invert WhiteSilhouette to generate the alpha:
ffmpeg -i OriginalImage.png -i WhiteSilhouette.png -filter_complex "[0]eq=brightness=0.06:saturation=2[img];[1]split[m][a];[a]geq='255-lum(X,Y)',hue=s=0[al]; [m][al]alphamerge[sil];[img][sil]overlay=(W-w)/2:(H-h)/2" BrightnedSilhouette.png
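As a sketch of why the single command works: each temporary file of the three-step version becomes a named pad, and the three graphs are simply concatenated with semicolons (shown here with the inverted-alpha variant):

```shell
# BrightenedImage.png becomes pad [img]; TransparentSilhouette.png becomes
# pad [sil]; the final overlay centres [sil] on [img].
bright='[0]eq=brightness=0.06:saturation=2[img]'
alpha="[1]split[m][a];[a]geq='255-lum(X,Y)',hue=s=0[al];[m][al]alphamerge[sil]"
comp='[img][sil]overlay=(W-w)/2:(H-h)/2'
graph="$bright;$alpha;$comp"
printf '%s\n' "$graph"
```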
