How to join multiple ffmpeg commands - ffmpeg

I am merging two .webm files using the ffmpeg application (3.3.3) as follows:
ffmpeg -i input1.webm -c copy temp1.webm
ffmpeg -i input2.webm -c copy temp2.webm
ffmpeg -i temp1.webm -i temp2.webm -filter_complex "[0:v]scale=640:360,setsar=1[l];[1:v]scale=640:360,setsar=1[r];[l][r]hstack;[0][1]amix" -vsync 0 -ac 2 -deadline realtime -cpu-used 8 output.webm
The above works fine for me, but I want to do this in one step/command. I am launching ffmpeg.exe from my C++ application on Windows, so I currently have to launch it three times. Using | or && doesn't work for me there, although && does work when I try it from the command prompt.
Please suggest.
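One approach (my sketch, not from the original post): && is interpreted by the cmd.exe shell, not by ffmpeg.exe itself, so launching ffmpeg.exe directly (e.g. via CreateProcess) cannot chain commands with it. You could instead write the three steps into a batch file, say merge.bat (the -y flags are my addition so reruns don't stall on the overwrite prompt):
ffmpeg -y -i input1.webm -c copy temp1.webm
ffmpeg -y -i input2.webm -c copy temp2.webm
ffmpeg -y -i temp1.webm -i temp2.webm -filter_complex "[0:v]scale=640:360,setsar=1[l];[1:v]scale=640:360,setsar=1[r];[l][r]hstack;[0][1]amix" -vsync 0 -ac 2 -deadline realtime -cpu-used 8 output.webm
and launch it once from C++, e.g. system("cmd /C merge.bat"), which runs the three commands sequentially in a single child process.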

Related

Calculate VMAF score while encoding a video with FFmpeg

I have an ffmpeg version built with the VMAF library. I can use it to calculate the VMAF score of a distorted video against a reference video using a command like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing regular ffmpeg encoding. How can I do that at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
[edited]
Alright, scratch what I said earlier...
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -f tee "out.mp4 | [f=mp4]-" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(on Windows, put this on one line and remove the \)
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: |
ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output (hence the trailing nullsink). Also, the raw H.264 format does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
Testing with the testsrc example that @Rotem suggested in the comments below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see that the filter is getting initialized.
You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that the frames are piped in faster than the second ffmpeg is processing them. FFmpeg blocks on both ends, so no data should be lost.
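If the message bothers you, thread_queue_size is a per-input option, set before the -i it applies to. A sketch of the same command with the queue raised (1024 is an arbitrary value of my choosing):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: |
ffmpeg -thread_queue_size 1024 -i in.mp4 -f nut -thread_queue_size 1024 -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4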
For full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't do Python.

Merging audio and video with ffmpeg doesn't work correctly

I have Ubuntu 20.04, and in the past I did this job (merging video and audio) just fine in the terminal with ffmpeg:
ffmpeg -i input.mp4 -i input2.mp3 -c copy output.mp4
It quickly produced output.mp4, but now I try the same thing and get an output without any sound!
I tried other ways to merge them (also with ffmpeg) but there is no difference...
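A guess from my side (not part of the original thread): without -map, ffmpeg keeps only one audio stream, chosen by its own default rules, and that may be a track from input.mp4 rather than the MP3. Mapping the streams explicitly usually fixes this:
ffmpeg -i input.mp4 -i input2.mp3 -map 0:v -map 1:a -c copy output.mp4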
ffmpeg -f concat -safe 0 -i <(for f in ./input*.mp4; do echo "file '$PWD/$f'"; done) -c copy output.mp4
Note: -f concat selects a demuxer. This alters the way the -i input files are read.
So instead of video files, concat expects a text file listing the files to concatenate.
Here, however, we skip creating that text file and use process substitution to generate and pass that list on the fly to the demuxer.
For more details go here:
https://trac.ffmpeg.org/wiki/Concatenate#demuxer
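For reference, the same command with an explicit list file instead of process substitution (file names here are placeholders):
printf "file '%s'\n" "$PWD"/input*.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4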
If you want to merge several video files, you can use these commands.
Merge two video files:
ffmpeg -f concat -i 1.mp4 -i 2.mp4 -codec copy out.mp4
Merge multiple video files:
ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -vcodec copy -acodec copy out.mp4

Cannot repeat ffmpeg command in one bash script

I have this simple bash script for ffmpeg conversion. If there is only one call to ffmpeg, the script works well. If there are 2 or more calls, only the last call succeeds. The previous calls fail with the error: unable to find suitable output format for /var/.../.../.jpg : Invalid argument. It seems to me that ffmpeg needs to be called in a different way. Any idea?
#!/bin/bash
ffmpeg -y -i "http://xx/22.m3u8" -qscale:v 2 -vframes 1 -vf "drawtext=fontfile=/usr/share/fonts/dejavu/DejaVuSans-Bold.ttf: text='%{localtime}': fontsize=20: fontcolor=white#0.9: x=20: y=20: box=1: boxcolor=red#0.2" /var/xxx/22.jpg
ffmpeg -y -i "http://xx/23.m3u8" -qscale:v 2 -vframes 1 -vf "drawtext=fontfile=/usr/share/fonts/dejavu/DejaVuSans-Bold.ttf: text='%{localtime}': fontsize=20: fontcolor=white#0.9: x=20: y=20: box=1: boxcolor=red#0.2" /var/xxx/23.jpg

From a unix ffmpeg bash pipe to the Windows "universe"

I am trying to "translate" some scripts from bash to Windows PowerShell.
I started with the simplest one:
ffmpeg -i video.mp4 -vn -sn -map 0:a:0 -f flac - | ffmpeg -i - -c:a aac oiji.m4a
The result is a failure.
Using -f wav instead: another failure.
Is there a way to make it work?
Thank you.
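My note, not from the thread: Windows PowerShell (5.x) pipelines pass text between native programs and re-encode whatever flows through them, which corrupts binary streams such as FLAC. Two workarounds, assuming the command itself is fine: run the pipe inside cmd.exe, or use PowerShell 7.4+, which, as far as I know, passes raw bytes between native commands. The cmd.exe route looks like this:
cmd /c "ffmpeg -i video.mp4 -vn -sn -map 0:a:0 -f flac - | ffmpeg -i - -c:a aac oiji.m4a"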

Run multiple ffmpeg commands

I'm using this ffmpeg command to convert mp3 to wav:
ffmpeg -i audio.mp3 -acodec libmp3lame -ab 64k -ar 16000 audio.wav
and this command to create a waveform from the WAV file:
wav2png --foreground-color=ffb400aa --background-color=2e4562ff -o example4.png audio.wav
I would love to know how to run these commands in sequence: when the conversion from .mp3 to .wav is done, then run the wav2png command.
Thank You!
You have several options here:
Option 1: Use &&
In Bash you can use an and list to concatenate commands. Each command will be executed one after the other. The and list will terminate when a command fails, or when all commands have been successfully executed.
ffmpeg -i audio.mp3 audio.wav && wav2png -o output.png audio.wav
Using -acodec libmp3lame when outputting to WAV makes no sense, so I removed that.
WAV ignores bitrate options, so I removed -ab.
Do you really need to change the audio rate (-ar)? Removed.
Option 2: Pipe from ffmpeg to wav2png
Instead of making a temporary WAV file you can pipe the output from ffmpeg directly to wav2png:
ffmpeg -i audio.mp3 -f wav - | wav2png -o output.png /dev/stdin
Option 3: Just use ffmpeg
Saving the best for last, you can try the showwavespic filter.
ffmpeg -i music.wav -filter_complex showwavespic=s=640x320 showwaves.png
If you want to make a video of the waveform, then try showwaves.
You can see a colored example at Generating a waveform using ffmpeg.
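A minimal sketch of the showwaves variant (the size, mode, and output name are arbitrary choices of mine):
ffmpeg -i music.wav -filter_complex "showwaves=s=640x320:mode=line[v]" -map "[v]" -pix_fmt yuv420p waves.mp4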
