Output file #0 does not contain any stream (Android, image + audio)

String cmd[] = {"ffmpeg -loop 1 -y -i " + imagePath + " -i " + newAudioPath + " -shortest " + outputVideo};
I want to create a video by combining one image and one audio file.
Here I pass the image and audio paths as absolute file locations on the device, and the last argument is the path where the video should be created.
Am I doing something wrong? If so, please guide me.
Thanks.
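A likely culprit (an editor's sketch, not a confirmed answer): exec-style FFmpeg wrappers generally expect one argument per array element, with no leading "ffmpeg", whereas here the whole command line sits in a single array element. A still-image input also usually needs an explicit video encoder and pixel format to yield a widely playable MP4. Assuming such a wrapper and a libx264-enabled build:
String[] cmd = {
    "-loop", "1",          // loop the single input image
    "-y",
    "-i", imagePath,
    "-i", newAudioPath,
    "-c:v", "libx264",     // assumption: the FFmpeg build includes libx264
    "-pix_fmt", "yuv420p", // broad player compatibility
    "-shortest",           // stop encoding when the audio ends
    outputVideo
};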


Assistance required with filter graph construction (fast motion between times)

I am trying to re-create the following video using ffmpeg:
https://youtu.be/eVQ9ysp0Pj0 (see 0:19 for an example).
I have the following line of code, which has most of the elements except the part that applies fast motion for 1 s at certain outputs ([vfr1][vfr2][vfr3])/times. Currently setpts=0.5*PTS[vboom] is applied for the entire length of the video.
exe = "-i " + file + " -i " + frame + " -i " + framestart + " -i " + frameEnd + " -i " + audioOverlay + " -filter_complex \"[0:v]pad="+mVideoWidth+":"+mVideoHeight+":576:0[vpad]; [vpad][1]overlay[vframed]; [vframed]split=3[vfr1][vfr2][vfr3]; [vfr1]reverse[vrev]; [vfr2][vrev][vfr3]concat=n=3,setpts=0.5*PTS[vboom]; [vboom][2]overlay=enable='lte(t,2)'[vpreout]; [vpreout][3]overlay=enable='gte(t,"+msec+"*3*0.5-2)' \" -map 4:a -b:v 8000k -shortest -preset ultrafast -crf 23 " + file2.getAbsolutePath();
I have tried the following code snippets in various sections of the filter graph, with no luck!
//[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];[0:v]trim=2:5,setpts=2*(PTS-STARTPTS)[v2];[0:v]trim=5,setpts=PTS-STARTPTS[v3];
//[0:v]trim=0:10,setpts=PTS-STARTPTS[vfr1];[0:v]trim=10:30,setpts=PTS-STARTPTS[vfr2];[0:v]trim=start=30,setpts=PTS-STARTPTS[vfr3];
//[0:v]trim=2:3,setpts=0.75*(PTS-STARTPTS); [0:v]trim=4:5,setpts=0.75*(PTS-STARTPTS); [0:v]trim=7:8,setpts=0.75*(PTS-STARTPTS);
//[0:v]select='between(t,1,4)+between(t,4,6)',setpts=0.87*PTS;
The trim and setpts approach is on the right track, e.g.
[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];
[0:v]trim=2:5,setpts=0.5*(PTS-STARTPTS)[v2];
[0:v]trim=5,setpts=PTS-STARTPTS[v3];
[v1][v2][v3]concat=n=3[vboom]
In this snippet, v2 will be sped up: multiplying (PTS-STARTPTS) by 0.5 halves the timestamps, so that segment plays at double speed.
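Put together, a complete command built around that pattern could look like the sketch below (the input name and cut points are placeholders; audio is dropped with -an because retiming only the video would put it out of sync):
ffmpeg -i input.mp4 -filter_complex "[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];[0:v]trim=2:5,setpts=0.5*(PTS-STARTPTS)[v2];[0:v]trim=5,setpts=PTS-STARTPTS[v3];[v1][v2][v3]concat=n=3[vboom]" -map "[vboom]" -an output.mp4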

The audio of the file is distorted but in the clipped video - ffmpeg

Hello, I am joining 3 video files. In the final output the audio is distorted, but in the clipped video it sounds fine.
I have tried re-encoding the audio into the video but have not been able to find a solution.
Code that generates the video whose audio sounds good:
comand_audio_video = "ffmpeg -y -i " + _path_videos_principal_intro + "finaltext.mp4 -i " + _path_videos_principal_intro + "final.wav -map 0 -map 1:a -c:v copy -shortest " + _path_videos_principal_intro + "finalVideoAudio.mp4"
Code that generates the concatenated video whose audio does not sound good:
comand_all_video = "ffmpeg -y -f concat -safe 0 -i " + _path_videos_principal_intro + "videos.txt -acodec copy " + _path_videos_principal_intro + "finalVideoAudioConcat.mp4"
The videos.txt file:
file 'intro.mp4'
file 'finalVideoAudio.mp4'
file 'intro.mp4'
I would appreciate your help!
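A point worth checking (an editor's note, not a confirmed answer): the concat demuxer requires every listed file to share the same codecs and stream parameters. If intro.mp4 and finalVideoAudio.mp4 carry differently encoded audio, -acodec copy stitches incompatible packets together, which can produce exactly this kind of distortion. A sketch that re-encodes the audio while still copying the video (the AAC codec and 44100 Hz rate are assumptions):
comand_all_video = "ffmpeg -y -f concat -safe 0 -i " + _path_videos_principal_intro + "videos.txt -c:v copy -c:a aac -ar 44100 " + _path_videos_principal_intro + "finalVideoAudioConcat.mp4"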

Xamarin Android merge audio files with FFMpeg

I am using this binding library for FFMpeg:
https://github.com/gperozzo/XamarinAndroidFFmpeg
My goal is to mix two audio files.
String s = "-i " + "test.wav" + " -i " + test2.mp3 + " -filter_complex amix=inputs=2:duration=first " + "result.mp3";
Device.BeginInvokeOnMainThread(async () =>
{
    await FFMpeg.Xamarin.FFmpegLibrary.Run(Forms.Context, s);
});
So I have 2 input files: one is .wav and the other is .mp3.
I've also tried the following commands:
String s= "-i "+ "test.wav" +" -i "+ "test2.mp3" + " -filter_complex [0:0][1:0]concat=n=2:v=0:a=1[out] -map [out] " + "result.mp3";
String s = "-i " + "test.wav" + " -i " + "test2.mp3" + " -filter_complex [0:a][1:a]amerge=inputs=2[aout] -map [aout] -ac 2 " + "result.mp3";
1) Can I mix two different audio formats (in my case .mp3 & .wav), or do they need to be the same format?
2) What is the correct command line for the mixing?
Thanks in advance.
1) Could I mix two different audio formats (in my case .mp3 & .wav)?
Yes. The input format does not matter because it will be fully decoded to PCM audio before being fed to the filter, but you have to be aware of how the various input channels will be mixed to create the channel layout for the output: amerge stacks the input channels into one multi-channel stream (hence the -ac 2 below to downmix), while amix mixes the samples together. Read the documentation on the amerge and amix filters for more info.
2) What is the correct command line for the mixing?
Your command using amerge should work:
ffmpeg -i test.wav -i test2.mp3 -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" -map "[aout]" -ac 2 result.mp3
Or using amix:
ffmpeg -i test.wav -i test2.mp3 -filter_complex "[0:a][1:a]amix=inputs=2:duration=shortest[aout]" -map "[aout]" result.mp3

How to convert a .rtp file (recorded using RTP proxy, codec G711) to a .wav file

I need to convert a .rtp file (which has been recorded using RTP proxy) to a .wav file.
If anyone knows how it can be done, please share your solution.
Thanks in advance :)
A little late to the party perhaps, but I recently had the same problem and thought I should share my solution here in case someone else has this question. I also used RTP-proxy to capture audio streams, which were saved as two .rtp files, one for each channel: the .o. file holds the stream of the party initiating the call (caller) and the .a. file the stream of the party receiving it (callee).
Solution 1.
RTP-proxy has a built-in module called "extractaudio" which does the WAV conversion for you. The documentation is lacking, to say the least, but you can use it from the command line as follows:
extractaudio -F wav -B /path/to/rtp /path/of/outfile.wav
This converts one RTP file at a time to a WAV file. By default the module encodes the created WAV files with GSM encoding. If this is undesired, you can pass -D pcm_16 as an extra argument to switch the encoding to 16-bit linear PCM, which is a much better format for retaining audio quality. I extracted WAV files this way programmatically from Python, making the command-line calls through subprocess.
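With the PCM option, for example, the call becomes:
extractaudio -F wav -D pcm_16 -B /path/to/rtp /path/of/outfile.wav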
Solution 2.
You can extract the raw RTP payload directly and convert it to a WAV file using third-party software like SoX or FFmpeg. This solution requires SoX, FFmpeg and tshark as dependencies. You could do without tshark if you opened the RTP file yourself and extracted the UDP data, but it is easily done with tshark.
Here is my code for it (Python 2.7.9):
import os
import subprocess
import shlex
import binascii

FILENAME = "my_file"
WORKING_DIR = os.path.dirname(os.path.realpath(__file__))
IN_FILE_O = "%s/%s.o.rtp" % (WORKING_DIR, FILENAME)  # caller stream
IN_FILE_A = "%s/%s.a.rtp" % (WORKING_DIR, FILENAME)  # callee stream

# SoX/FFmpeg command templates, one per payload encoding.
conversion_list = {
    "PCMU": "sox -t ul -r 8000 -c 1 %s %s",
    "GSM":  "sox -t gsm -r 8000 -c 1 %s %s",
    "PCMA": "sox -t al -r 8000 -c 1 %s %s",
    "G722": "ffmpeg -f g722 -i %s -acodec pcm_s16le -ar 16000 -ac 1 %s",
    "G729": "ffmpeg -f g729 -i %s -acodec pcm_s16le -ar 8000 -ac 1 %s",
}

if __name__ == "__main__":
    # Have tshark dump each packet's data field as a hex string, one per line.
    args_o = "tshark -n -r " + IN_FILE_O + " -T fields -e data"
    args_a = "tshark -n -r " + IN_FILE_A + " -T fields -e data"
    f_o = WORKING_DIR + "/" + "payload_o.g722"
    f_a = WORKING_DIR + "/" + "payload_a.g722"
    payload_o = subprocess.Popen(shlex.split(args_o), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]
    payload_a = subprocess.Popen(shlex.split(args_a), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]

    if os.path.exists(f_o):
        os.remove(f_o)
    if os.path.exists(f_a):
        os.remove(f_a)

    # Decode the hex dump two characters (one byte) at a time, skipping the
    # first 12 bytes of every packet (the fixed RTP header), and append the
    # raw payload to the output file.
    with open(f_o, "ab") as new_codec:
        payload = payload_o.split("\n")
        for line in payload:
            line = line.rstrip()
            for index, (op, code) in enumerate(zip(line[0::2], line[1::2])):
                if index > 11:
                    new_codec.write(binascii.unhexlify(op + code))

    with open(f_a, "ab") as new_codec:
        payload = payload_a.split("\n")
        for line in payload:
            line = line.rstrip()
            for index, (op, code) in enumerate(zip(line[0::2], line[1::2])):
                if index > 11:
                    new_codec.write(binascii.unhexlify(op + code))

    owav = WORKING_DIR + "/" + "%s.o.wav" % FILENAME
    awav = WORKING_DIR + "/" + "%s.a.wav" % FILENAME
    if os.path.exists(owav):
        os.remove(owav)
    if os.path.exists(awav):
        os.remove(awav)

    print("Creating %s with %s" % (owav, f_o))
    print("Creating %s with %s" % (awav, f_a))
    # Convert each raw payload file to WAV with the matching template.
    subprocess.Popen(shlex.split(conversion_list["G722"] % (f_o, owav)), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]
    subprocess.Popen(shlex.split(conversion_list["G722"] % (f_a, awav)), stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True).communicate()[0]
I have G722 hardcoded as the input encoding in my solution, but it should work with any input encoding, given the correct SoX/FFmpeg command for it; I've added a few different encodings in the predefined dict. The drawback of this solution is that you have to know the encoding of the call recorded in the RTP file. I tried to find an equivalent in the RTP file of the rtp.p_type field found in PCAP files, which indicates the codec used, but didn't have any luck. I'm not familiar enough with RTP files, though, so it might be present in the data somewhere. Another drawback is that the produced audio files can sometimes be shorter than the original audio. I'm assuming this is due to silence suppression, in which case it could be fixed by inserting silence yourself at the places where the timestamps indicate silence was removed (not transmitted).
A great way to view information about RTP files is with the tshark command:
tshark -n -r /path/to/file.rtp
Hope it helps someone!
EDIT:
I found another question about detecting the encoding within an RTP file.

ffmpeg Sound going out of sync with -concat or -ss

I have a tool that spits out video from a 3D application and then concats the individual videos to make a sequence. But the sound seems to go out of sync in the sequence (the individual files are fine), and it stutters in VLC and QuickTime. Windows Media Player seems to handle it best, to my surprise, yet it still goes out of sync. I have two scenarios, one works and one doesn't, but I need both working:
Working:
take the already-created output .movs...
convert to avi:
os.system( ffmpeg + " -i C:\clip.mov -sameq -r 24 -y C:\clip.avi")
concat to avi sequence:
os.system( ffmpeg + ''' -i concat: C:\clip.avi|C:\clip1.avi|C:\clip2.avi -sameq -r 24 -y C:\sequence.avi''' )
convert sequence to mov:
os.system( ffmpeg + " -i C:\sequence.avi -sameq -r 24 -y C:\sequence.mov")
Not Working:
create individual avi's from 3D program...
cut down to correct length:
os.system(ffmpeg + " -i C:\clip.avi -sameq -r 24 -ss " + startTime + " -vframes " + totalFrames + " -y C:\clip.avi" )
concat to avi sequence:
os.system( ffmpeg + ''' -i concat: C:\clip.avi|C:\clip1.avi|C:\clip2.avi -sameq -r 24 -y C:\sequence.avi''' )
convert sequence to mov:
os.system( ffmpeg + " -i C:\sequence.avi -sameq -r 24 -y C:\sequence.mov")
convert individual avi's to mov:
os.system( ffmpeg + " -i C:\clip.avi-sameq -r 24 -y C:\clip.mov")
Please let me know where I've gone wrong.
Turns out it was the "-sameq" flag during the cutting process. It was messing up the audio, so I just changed
os.system(ffmpeg + " -i C:\clip.avi -sameq -r 24 -ss " + startTime + " -vframes " + totalFrames + " -y C:\clip.avi" )
to
os.system(ffmpeg + " -i C:\clip.avi -sameq -r 24 - acodec pcm_s16le -ss " + startTime + " -vframes " + totalFrames + " -y C:\clip.avi" )
- forcing ffmpeg to use pcm_s16le as the audio codec instead of the out of sync one the -sameq was using...and that fixed it!
Hope this can help someone else.
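One more note for later readers: -sameq was removed from FFmpeg long ago, and current builds ship a concat demuxer that tends to keep A/V sync better than the concat: protocol used above. A rough modern equivalent, assuming the clips already share codecs and parameters, where list.txt contains a line like file 'clip.avi' for each clip:
ffmpeg -f concat -safe 0 -i list.txt -c copy -y sequence.avi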
