I'm using this ffmpeg command to convert MP3 to WAV:
ffmpeg -i audio.mp3 -acodec libmp3lame -ab 64k -ar 16000 audio.wav
and this command to create a waveform from audio.wav:
wav2png --foreground-color=ffb400aa --background-color=2e4562ff -o example4.png papa2.wav
I would love to know how to run these commands in sequence: for example, when the conversion from .mp3 to .wav is done, then run the wav2png command.
Thank you!
You have several options here:
Option 1: Use &&
In Bash you can use an AND list to chain commands. Each command is executed one after the other, and the list terminates as soon as a command fails, or when all commands have executed successfully.
ffmpeg -i audio.mp3 audio.wav && wav2png -o output.png audio.wav
Using -acodec libmp3lame when outputting to WAV makes no sense, so I removed that.
WAV ignores bitrate options, so I removed -ab as well.
Do you really need to change the sample rate (-ar)? Removed.
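If you want to run the pair over many files, here is a minimal sketch with a Bash loop (the .png naming is my assumption):

# convert each MP3 to WAV, then render its waveform only if the conversion succeeded
for f in *.mp3; do
  wav="${f%.mp3}.wav"
  ffmpeg -i "$f" "$wav" && wav2png -o "${f%.mp3}.png" "$wav"
done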
Option 2: Pipe from ffmpeg to wav2png
Instead of making a temporary WAV file you can pipe the output from ffmpeg directly to wav2png:
ffmpeg -i audio.mp3 -f wav - | wav2png -o output.png /dev/stdin
Option 3: Just use ffmpeg
Saving the best for last, you can try the showwavespic filter.
ffmpeg -i music.wav -filter_complex "showwavespic=s=640x320" -frames:v 1 showwaves.png
If you want to make a video of the waveform, then try showwaves.
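A minimal sketch of that (the encoder and pixel format here are my assumptions, not from the original answer):

ffmpeg -i music.wav -filter_complex "showwaves=s=640x320:mode=line,format=yuv420p" -c:v libx264 showwaves.mp4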
You can see a colored example at Generating a waveform using ffmpeg.
I have an ffmpeg version built with VMAF library. I can use it to calculate the VMAF scores of a distorted video against a reference video using commands like this:
ffmpeg -i distorted.mp4 -i original.mp4 -filter_complex "[0:v]scale=640:480:flags=bicubic[main];[main][1:v]libvmaf=model_path=model/vmaf_v0.6.1.json:log_path=log.json" -f null -
Now, I remember there was a way to get VMAF scores while performing regular ffmpeg encoding. How can I do that at the same time?
I want to encode a video like this, while also calculating the VMAF of the output file:
ffmpeg -i original.mp4 -crf 27 -s 640x480 out.mp4
[edited]
Alright, scratch what I said earlier...
You should be able to use [the `tee` muxer](http://ffmpeg.org/ffmpeg-formats.html#tee-1) to save the file and pipe the encoded frames to another ffmpeg process. Something like this should work for you:
ffmpeg -i original.mp4 -crf 27 -s 640x480 -f tee "out.mp4|[f=mp4]-" \
| ffmpeg -i - -i original.mp4 -filter_complex ...
(on Windows, combine this into one line and remove the \)
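An untested sketch combining tee with the nut fix from further down; note that the tee muxer requires explicit -map options, so this copies only the video stream:

ffmpeg -i original.mp4 -map 0:v -c:v libx264 -crf 27 -s 640x480 -f tee "out.mp4|[f=nut]pipe:" \
| ffmpeg -f nut -i pipe: -i original.mp4 -filter_complex "[1:v]scale=640:480:flags=bicubic[ref];[0:v][ref]libvmaf=log_path=log.json" -f null -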
Here is what works on my Windows PC (thanks to @Rotem for his help):
ffmpeg -i in.mp4 -vcodec libx264 -crf 27 -f nut pipe: | ffmpeg -i in.mp4 -f nut -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4
The main issue that @Rotem and I missed is that we need to terminate libvmaf's output (hence the nullsink). Also, raw h264 does not carry header info, and using `nut` alleviates that issue.
There are a couple of caveats:
Testing with the testsrc example that @Rotem suggested in the comment below does not produce any libvmaf log, at least as far as I can see, but in debug mode you can see the filter getting initialized.
You'll likely see a [nut @ 0000026b123afb80] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8) message in the log. This just means that frames are piped in faster than the second ffmpeg can process them. FFmpeg blocks on both ends, so no information should be lost.
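If you want to get rid of that message, you can raise the queue on the piped input of the second ffmpeg; the first ffmpeg stays the same, and 512 below is just an arbitrary value I chose:

ffmpeg -i in.mp4 -f nut -thread_queue_size 512 -i pipe: -filter_complex "[0:v][1:v]libvmaf=log_fmt=json:log_path=log.json,nullsink" -map 1 -c copy out.mp4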
For full disclosure, I posted my Python test script on GitHub. It just runs the shell command, so it should be easy to follow even if you don't do Python.
I have Ubuntu 20.04, and in the past few days I did this job (merging video and audio) just fine in the terminal with ffmpeg:
ffmpeg -i input.mp4 -i input2.mp3 -c copy output.mp4
I received output.mp4 very quickly. But now I tried the same command and got output without any sound!
I tried other ways of merging them (also with ffmpeg), but there is no difference...
ffmpeg -f concat -safe 0 -i <(for f in ./input*.mp4; do echo "file '$PWD/$f'"; done) -c copy output.mp4
Note that -f concat selects the concat demuxer. This alters the way the -i input is read.
Instead of video files, the concat demuxer expects a text file listing the files to concatenate.
Here, however, we skip creating that text file and use process substitution to generate the list and pass it to the demuxer on the fly.
For more details go here:
https://trac.ffmpeg.org/wiki/Concatenate#demuxer
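For comparison, the explicit variant with a real list file would look roughly like this (the paths are placeholders):

# mylist.txt contains one line per file:
# file '/path/to/input1.mp4'
# file '/path/to/input2.mp4'
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4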
If you want to merge several video files, you can use these commands.
Merge two video files (the concat demuxer needs a list file, not direct -i inputs):
printf "file '%s'\n" 1.mp4 2.mp4 > list.txt
ffmpeg -f concat -i list.txt -c copy out.mp4
Merge multiple video files (the concat filter re-encodes, so the copy codecs must be dropped):
ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" out.mp4
I used to change the bitrate of audio files by using
ffmpeg -i input.mp3 -ab 96k output.mp3
and it works perfectly. Now I want to pass the output as a pipe to FFmpeg and perform some other task. I took this documentation as a reference and modified the above ffmpeg command into
ffmpeg -i input.mp3 -ab 96k pipe:1 | aws s3 cp - s3://mybucket/output.mp3
But this doesn't work.
It only works if I use the pipe as below:
ffmpeg -i input.mp3 -f mp3 pipe:1 | aws s3 cp - s3://mybucket/output.mp3
But this doesn't change the bitrate of the audio. Can anyone please help me achieve my goal of changing the bitrate and passing the output as a pipe?
You have to specify the output format manually. When outputting to a file, ffmpeg guesses the format from the extension, which can't be done when piping.
Use
ffmpeg -i input.mp3 -ab 96k -f mp3 pipe:1 | aws s3 cp - s3://mybucket/output.mp3
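To check that the bitrate really changed, you could pipe into a local file first and inspect it (local.mp3 is just a scratch name I picked):

ffmpeg -i input.mp3 -ab 96k -f mp3 pipe:1 > local.mp3
ffprobe local.mp3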
I'm using FFmpeg and I want to use a silent track as a template. I want to take the audio streams from two WEBM files and concatenate them, but the second audio has a delayed start, and I want audio silence between them. How would I do that?
This is what I currently have:
ffmpeg -i W1.webm -itsoffset 10 -i W2.webm -f lavfi -t 600 -i anullsrc=cl=stereo -filter_complex '[0:1][1:1][2:1] amerge=inputs=3' output.webm
Furthermore, I want to end the output at the end of the second audio stream.
No need to use amerge; the concat filter will work.
ffmpeg -i W1.webm -i W2.webm -filter_complex '[1:a]adelay=10s|10s[a1];[0:a][a1]concat=n=2:v=0:a=1' -ac 2 output.webm
Use ffmpeg 4.2 or newer.
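If you're stuck on an older build that rejects the 's' suffix, the same delay can be written in milliseconds, which should give an equivalent command (a sketch, untested):

ffmpeg -i W1.webm -i W2.webm -filter_complex '[1:a]adelay=10000|10000[a1];[0:a][a1]concat=n=2:v=0:a=1' -ac 2 output.webm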
I want to loop the same video 4 times and output it as a video using ffmpeg.
So I wrote a command like this:
ffmpeg -loop 4 -i input.mp4 -c copy output.mp4
but when I run it, it gives an error like this:
Option loop not found.
How do I do this without the error? Please help me.
In recent versions, it's
ffmpeg -stream_loop 4 -i input.mp4 -c copy output.mp4
Due to a bug, the above does not work with MP4s. But if you remux to MKV first, it works for me.
ffmpeg -i input.mp4 -c copy output.mkv
then,
ffmpeg -stream_loop 4 -i output.mkv -c copy output.mp4
I've found an equivalent workaround using input concatenation, for outdated/buggy versions of -stream_loop:
ffmpeg -f concat -safe 0 -i "video-source.txt" -f concat -safe 0 -i "audio-source.txt" -c copy -map 0:0 -map 1:0 -fflags +genpts -t 10:00:00.0 /path/to/output.ext
This will loop video and audio independently of each other and force-stop the output at the 10-hour mark.
Both text files consist of
file '/path/to/file.ext'
but you must make sure to repeat that line enough times to cover the intended output duration.
For example, if your total video time is less than your total audio time, the video will stop earlier than intended and the audio will keep playing until either the 10-hour -t limit is reached or the audio runs out.
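For example, to generate such a list with enough repetitions from the shell (the count of 100 and the path are arbitrary placeholders):

# write 100 identical entries into video-source.txt
for i in $(seq 1 100); do echo "file '/path/to/video.mp4'"; done > video-source.txt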