Xamarin Android: merge audio files with FFmpeg

I am using this binding library for FFmpeg:
https://github.com/gperozzo/XamarinAndroidFFmpeg
My goal is to mix two audio files.
String s = "-i " + "test.wav" + " -i " + test2.mp3 + " -filter_complex amix=inputs=2:duration=first " + "result.mp3";
Device.BeginInvokeOnMainThread(async () =>
{
    await FFMpeg.Xamarin.FFmpegLibrary.Run(Forms.Context, s);
});
So I have 2 input files: one is .mp3 and another one is .wav.
I've also tried the following commands:
String s= "-i "+ "test.wav" +" -i "+ "test2.mp3" + " -filter_complex [0:0][1:0]concat=n=2:v=0:a=1[out] -map [out] " + "result.mp3";
String s = "-i " + "test.wav" + " -i " + "test2.mp3" + " -filter_complex [0:a][1:a]amerge=inputs=2[aout] -map [aout] -ac 2 " + "result.mp3";
1) Can I mix two different audio formats (in my case .mp3 & .wav), or do they have to be the same format?
2) What is the correct command line for mixing?
Thanks in advance.

1) Can I mix two different audio formats (in my case .mp3 & .wav)?
Yes. The input format does not matter because it will be fully decoded to PCM audio before being fed to the filter, but you have to be aware of how the various input channels will be mixed to create the channel layout for the output. Read the documentation on the amerge and amix filters for more info.
2) What is the correct command line for mixing?
Your command using amerge should work:
ffmpeg -i test.wav -i test2.mp3 -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" -map "[aout]" -ac 2 result.mp3
Or using amix:
ffmpeg -i test.wav -i test2.mp3 -filter_complex "[0:a][1:a]amix=inputs=2:duration=shortest[aout]" -map "[aout]" result.mp3
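If you drive ffmpeg from a script rather than the binding, passing the arguments as a list sidesteps the quoting problems around the filtergraph. A minimal Python sketch of the amix command above, using only the file names from this thread:

import subprocess

# Mix the two inputs into one file; duration=shortest stops at the shorter input.
subprocess.run([
    "ffmpeg", "-i", "test.wav", "-i", "test2.mp3",
    "-filter_complex", "[0:a][1:a]amix=inputs=2:duration=shortest[aout]",
    "-map", "[aout]", "result.mp3",
], check=True)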

Related

Assistance required with filter graph construction (fast motion between times)

I am trying to re-create the following video, using ffmpeg.
https://youtu.be/eVQ9ysp0Pj0 (see 0:19 for examples).
I have the following line of code, which has most of the elements except the part that applies fast motion for 1 s at certain outputs ([vfr1][vfr2][vfr3]) / times. Currently it only applies setpts=0.5*PTS[vboom] across the entire length of the video.
exe = "-i " + file + " -i " + frame + " -i " + framestart + " -i " + frameEnd + " -i " + audioOverlay + " -filter_complex \"[0:v]pad="+mVideoWidth+":"+mVideoHeight+":576:0[vpad]; [vpad][1]overlay[vframed]; [vframed]split=3[vfr1][vfr2][vfr3]; [vfr1]reverse[vrev]; [vfr2][vrev][vfr3]concat=n=3,setpts=0.5*PTS[vboom]; [vboom][2]overlay=enable='lte(t,2)'[vpreout]; [vpreout][3]overlay=enable='gte(t,"+msec+"*3*0.5-2)' \" -map 4:a -b:v 8000k -shortest -preset ultrafast -crf 23 " + file2.getAbsolutePath();
I have tried the following code snippets in various sections of the filter graph, with no luck:
//[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];[0:v]trim=2:5,setpts=2*(PTS-STARTPTS)[v2];[0:v]trim=5,setpts=PTS-STARTPTS[v3];
//[0:v]trim=0:10,setpts=PTS-STARTPTS[vfr1];[0:v]trim=10:30,setpts=PTS-STARTPTS[vfr2];[0:v]trim=start=30,setpts=PTS-STARTPTS[vfr3];
//[0:v]trim=2:3,setpts=0.75*(PTS-STARTPTS); [0:v]trim=4:5,setpts=0.75*(PTS-STARTPTS); [0:v]trim=7:8,setpts=0.75*(PTS-STARTPTS);
//[0:v]select='between(t,1,4)+between(t,4,6)',setpts=0.87*PTS;
The trim and setpts approach is on the right track.
e.g.
[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];
[0:v]trim=2:5,setpts=2*(PTS-STARTPTS)[v2];
[0:v]trim=5,setpts=PTS-STARTPTS[v3];
[v1][v2][v3]concat=n=3[vboom]
In this snippet, v2 will be slowed to half speed by the 2* factor; to speed a segment up instead, use a factor below 1, e.g. setpts=0.5*(PTS-STARTPTS).
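Assembled into a runnable command, a sketch of that graph with the middle segment sped up (the file names and cut points are placeholders, not from the question):

import subprocess

# Play the 2s-5s segment at double speed and the rest at normal speed, video only.
filtergraph = (
    "[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];"
    "[0:v]trim=2:5,setpts=0.5*(PTS-STARTPTS)[v2];"
    "[0:v]trim=5,setpts=PTS-STARTPTS[v3];"
    "[v1][v2][v3]concat=n=3[vboom]"
)
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex", filtergraph,
    "-map", "[vboom]", "-an", "output.mp4",
], check=True)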

The audio of the concatenated file is distorted but in the clipped video it is heard well

Hello, I am joining 3 video files. In the final output the audio is distorted, but in each clipped video it is heard well.
I have tried re-encoding the audio onto the video, but I have not been able to find a solution.
Code that generates the video with the audio that sounds good
comand_audio_video = "ffmpeg -y -i " + _path_videos_principal_intro + "finaltext.mp4 -i " + _path_videos_principal_intro + "final.wav -map 0 -map 1:a -c:v copy -shortest " + _path_videos_principal_intro + "finalVideoAudio.mp4"
Code that generates the concatenated video, whose audio is not heard well
comand_all_video = "ffmpeg -y -f concat -safe 0 -i " + _path_videos_principal_intro + "videos.txt -acodec copy " + _path_videos_principal_intro + "finalVideoAudioConcat.mp4"
.txt File
file 'intro.mp4'
file 'finalVideoAudio.mp4'
file 'intro.mp4'
I will appreciate your help!
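One note from outside the thread: the concat demuxer with -acodec copy requires every listed file to carry identical audio codecs and parameters, so if intro.mp4 and finalVideoAudio.mp4 differ at all, re-encoding the audio during concatenation is the usual fix. A Python sketch under that assumption, with paths shortened to the bare file names:

import subprocess

# Keep the video as-is but normalize the audio to one codec,
# instead of copying possibly mismatched audio streams.
subprocess.run([
    "ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "videos.txt",
    "-c:v", "copy",
    "-c:a", "aac", "-b:a", "192k",
    "finalVideoAudioConcat.mp4",
], check=True)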

Read frames from video using FFmpeg GPU (Python)

import os
import subprocess

class FFMPEGFrames:
    def __init__(self, output):
        self.output = output

    def extract_frames(self, input, fps):
        output = input.split('/')[-1].split('.')[0]
        if not os.path.exists(self.output + output):
            os.makedirs(self.output + output)
        query = "ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i " + input + " -vf scale_npp=format=yuv420p,hwdownload,format=yuv420p -vf fps=" + str(fps) + " -pix_fmt yuvj420p -color_range 2 -vframes 1 -y " + self.output + output + "/output%06d.jpg"
        response = subprocess.Popen(query, shell=True, stdout=subprocess.PIPE).stdout.read()
        s = str(response).encode('utf-8')
I received this error:
Impossible to convert between the formats supported by the filter 'Parsed_fps_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
You likely missed an error message:
Only '-vf fps' read, ignoring remaining -vf options: Use ',' to separate filters
Combine both of your -vf options into one. Also consider replacing -pix_fmt yuvj420p with the format filter, so that all filtering happens within one filtergraph and you can choose exactly where that conversion occurs.
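Applied to the class above, the query assignment inside extract_frames becomes a single -vf chain (where exactly the final format conversion sits in the chain is my assumption):

# One -vf with all filters comma-separated; format=yuvj420p replaces -pix_fmt.
query = ("ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -resize 1024x70 -i " + input +
         " -vf scale_npp=format=yuv420p,hwdownload,format=yuv420p,fps=" + str(fps) +
         ",format=yuvj420p -color_range 2 -vframes 1 -y " +
         self.output + output + "/output%06d.jpg")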

Output file #0 does not contain any stream (Android)

String cmd[] = {"ffmpeg -loop 1 -y -i " + imagePath + " -i " + newAudioPath + " -shortest " + outputVideo};
I want to create a video by combining one image and one audio file.
Here I pass the absolute file paths of the image and the audio on the device, and the last argument is the path where the video should be created.
Am I doing something wrong? Please guide me.
Thanks.
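Not from this thread, but for a still image plus audio the usual stumbling blocks are the missing explicit video codec and pixel format. A Python sketch of the equivalent command, with placeholder paths standing in for imagePath, newAudioPath and outputVideo:

import subprocess

# Loop the single image, encode with libx264, and stop when the audio ends.
subprocess.run([
    "ffmpeg", "-loop", "1", "-y",
    "-i", "image.jpg",                     # placeholder for imagePath
    "-i", "audio.mp3",                     # placeholder for newAudioPath
    "-c:v", "libx264", "-tune", "stillimage",
    "-pix_fmt", "yuv420p",                 # broad player compatibility
    "-shortest", "output.mp4",             # placeholder for outputVideo
], check=True)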

ffmpeg Sound going out of sync with -concat or -ss

I have a tool that spits out video from a 3D application and then concats the individual videos to make a sequence. But the sound seems to go out of sync in the sequence (the individual files are fine), and it stutters in VLC and QuickTime. Windows Media Player seems to handle it best, to my surprise, yet it still goes out of sync. I have two scenarios, one works and one doesn't, but I need both working:
Working:
get the already-created output movs...
convert to avi:
os.system( ffmpeg + " -i C:\clip.mov -sameq -r 24 -y C:\clip.avi")
concat to avi sequence:
os.system( ffmpeg + ''' -i "concat:C:\clip.avi|C:\clip1.avi|C:\clip2.avi" -sameq -r 24 -y C:\sequence.avi''' )
convert sequence to mov:
os.system( ffmpeg + " -i C:\sequence.avi -sameq -r 24 -y C:\sequence.mov")
Not Working:
create individual avi's from 3D program...
cut down to correct length:
os.system(ffmpeg + " -i C:\clip.avi -sameq -r 24 -ss " + startTime + " -vframes " + totalFrames + " -y C:\clip.avi" )
concat to avi sequence:
os.system( ffmpeg + ''' -i "concat:C:\clip.avi|C:\clip1.avi|C:\clip2.avi" -sameq -r 24 -y C:\sequence.avi''' )
convert sequence to mov:
os.system( ffmpeg + " -i C:\sequence.avi -sameq -r 24 -y C:\sequence.mov")
convert individual avi's to mov:
os.system( ffmpeg + " -i C:\clip.avi-sameq -r 24 -y C:\clip.mov")
Please let me know where I've gone wrong.
Turns out it was the "-sameq" flag during the cutting process. It was messing up the audio, so I just changed
os.system(ffmpeg + " -i C:\clip.avi -sameq -r 24 -ss " + startTime + " -vframes " + totalFrames + " -y C:\clip.avi" )
to
os.system(ffmpeg + " -i C:\clip.avi -sameq -r 24 - acodec pcm_s16le -ss " + startTime + " -vframes " + totalFrames + " -y C:\clip.avi" )
This forces ffmpeg to use pcm_s16le as the audio codec instead of the out-of-sync one that -sameq was picking... and that fixed it!
Hope this can help someone else.
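For anyone reusing that line, a cleaned-up Python version (raw strings avoid accidental backslash escapes in the Windows paths, and the output goes to a new file because ffmpeg cannot safely write over its own input; ffmpeg, startTime and totalFrames are the same variables as above):

import os

# Cut the clip to length while forcing uncompressed PCM audio so the
# trim does not desynchronize the track; write to a new file, not the input.
os.system(ffmpeg + r" -i C:\clip.avi -sameq -r 24 -acodec pcm_s16le -ss " +
          startTime + " -vframes " + totalFrames + r" -y C:\clip_cut.avi")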
