Play say function playback files in G729 format - FreeSWITCH

I am using FreeSWITCH and playing a number with NUMBER PRONOUNCED, and it plays the files in .wav format. Can we play them in .G729 format instead?
FYI: I am using the say function to play back the number.
For example: number 100
Then I can see in the FreeSWITCH console:
string://digits/1.wav!digits/hundred.wav
So is it possible to play the files as
string://digits/1.G729!digits/hundred.G729

Please check say_string:
"Using this method you can set the desired file extension as well."
<action application="playback" data="${say_string en.wav en current_date_time pronounced ${strepoch()}}" />
https://freeswitch.org/confluence/display/FREESWITCH/mod_dptools%3A+say
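Applied to the question's example (the number 100), a minimal sketch could look like the line below; this assumes G729-encoded prompt files (digits/1.G729, digits/hundred.G729, ...) actually exist in your sounds directory, since say_string only changes which file extension is requested and does not transcode the prompts:
<!-- sketch only: the docs example above with the extension switched to G729 and a number say type -->
<action application="playback" data="${say_string en.G729 en number pronounced 100}"/>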

Related

How to fix a webm file without audio?

I use MediaRecorder to record video and audio from a user's browser. We record every 15 seconds and then upload that blob to S3. Then we combine all the files to make one webm file. I believe the first file isn't right, because when I combine the files there is no audio, only video.
Is there a way to alter the headers in the first file so that the audio in all of the subsequent files is used? Or is there an FFmpeg command to force it to use the audio? I know the audio exists in the other files.
I don't believe this is important, but here is the code that I use to save and combine the webm blobs.
First I save the blobs from the MediaRecorder:
recorder = new MediaRecorder(local_media_stream.remoteStream, {
  mimeType: encoding_options,
  audioBitsPerSecond: 96000,
  videoBitsPerSecond: bits_per_second,
});
recorder.ondataavailable = function(e) {
  that.save_blob(e.data, blob_index);
};
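For context, the 15-second chunking described above is normally driven by the timeslice argument to MediaRecorder's start(); the exact call isn't shown in the original post, so this is an assumption:
// assumption: request a dataavailable event roughly every 15 seconds
recorder.start(15000);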
Then later I combine each of those blobs.
bucket = Aws::S3::Resource.new(region: 'us-east-1').bucket("files")
keys = bucket.objects(prefix: "files").collect(&:key)
temp_webm_file = Tempfile.new(['total', '.webm'])
temp_webm_file.binmode # the blobs are binary data
keys.each do |key|
  temp_webm_file.write bucket.object(key).get.body.read
end
temp_webm_file.close
One thing I know fixes the issue: if I prepend a short webm file that has audio to the very beginning, then the audio all works.
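For reference (this is not from the original post), when a byte-concatenated webm plays back incorrectly, one commonly tried first step is to let FFmpeg rewrite the container without re-encoding; whether this recovers the audio depends on whether the audio track information survives in the combined file:
ffmpeg -i total.webm -c copy remuxed.webm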

What is the difference between "location" and "location-eng" metadata of an MP4 file?

I am trying to retrieve GPS information from media files using ffprobe, for example:
$ ffprobe -v quiet -show_format sample.mp4
[FORMAT]
filename=sample.mp4
nb_streams=2
nb_programs=0
format_name=mov,mp4,m4a,3gp,3g2,mj2
format_long_name=QuickTime / MOV
start_time=0.000000
duration=4.293000
size=11888152
bit_rate=22153556
probe_score=100
TAG:major_brand=mp42
TAG:minor_version=0
TAG:compatible_brands=isommp42
TAG:creation_time=2020-09-20T11:33:49.000000Z
TAG:location=+25.0731+121.3663/
TAG:location-eng=+25.0731+121.3663/
TAG:com.android.version=10
TAG:com.android.manufacturer=Google
TAG:com.android.model=Pixel
[/FORMAT]
We can see that there are two tags that look like ISO 6709 representations: location and location-eng.
Here are my questions:
What is the difference between location and location-eng? They always seem to be the same. Why do we need two different keys with the same value?
Are location and location-eng really ISO 6709 representations? Is there any specification or standard I can refer to?
I would really appreciate your help.
location-xyz is the localized version of the location, where xyz is an ISO 639-2 language code.
The need for localized location metadata entries comes from the way this metadata is stored, as specified in 3GPP Technical Specification 26.244.
The TS defines a loci sub-box type which can be present in the user-data box udta. The sub-box contains an ISO 639-2 language code that applies to fields like the place name.
So to answer your questions:
location-eng holds the English-localized location data, which in FFmpeg's case includes the coordinates and, optionally, the place name.
Yes, they use ISO 6709. A place name may optionally follow the trailing slash after the coordinates.
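For illustration (not part of the original answer), here is a minimal Python sketch for pulling the coordinates out of such a tag; it assumes the common +DD.DDDD+DDD.DDDD[+ALT]/[place] form written by Android devices, as seen in the ffprobe output above:
import re

def parse_iso6709(value):
    # assumes the '+lat+lon[+alt]/[place]' form shown above
    m = re.match(r'([+-]\d+(?:\.\d+)?)([+-]\d+(?:\.\d+)?)([+-]\d+(?:\.\d+)?)?/(.*)', value)
    if not m:
        raise ValueError('not an ISO 6709 string: %r' % value)
    lat, lon, alt, place = m.groups()
    return float(lat), float(lon), float(alt) if alt else None, place or None

print(parse_iso6709('+25.0731+121.3663/'))  # (25.0731, 121.3663, None, None)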

appium screen recording feature

I am trying to record the screen and am referring to the following tutorial:
http://appium.io/docs/en/commands/device/recording-screen/start-recording-screen/
I tried the following piece of code, but it doesn't save anything at this path:
@driver.start_recording_screen video_type: 'h264', time_limit: '260', remote_path: '/recordings'
I am putting it in a before hook so that it records everything for all 5 tests in the particular spec file.
Am I missing something here?
To start recording, use the C# code below:
driver.StartRecordingScreen(AndroidStartScreenRecordingOptions
    .GetAndroidStartScreenRecordingOptions()
    .WithTimeLimit(TimeSpan.FromMinutes(1))
    .EnableBugReport());
Then, to stop the recording, use the following code. Since the recording is returned in Base64 format, you need to decode it in order to view it.
String video = driver.StopRecordingScreen();
byte[] decode = Convert.FromBase64String(video);
String fileName = "VideoRecording_test.mp4";
File.WriteAllBytes(fileName, decode);
In order to start the recording, we just need to call the start_recording_screen method from the respective classes.
# use before(:all) or before(:each), whichever suits your spec
before(:all) do
  @driver.start_recording_screen video_quality: 'low'
end
For iOS, please install ffmpeg (brew install ffmpeg).
We can add screen-recording configuration such as a time limit and video size when starting the recording, as sketched below.
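A sketch of passing such options when starting the recording (time_limit appears elsewhere in this thread; video_size is an Android screenrecord option and is an assumption here, so check your client's documentation):
# time_limit in seconds; video_size is assumed to map to screenrecord's --size
@driver.start_recording_screen time_limit: '180', video_size: '1280x720'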
In order to stop the recording, we need to call the stop_recording_screen method from the respective client class.
Now, coming to the most important question: where is our video?
The stop_recording_screen method returns a Base64 string. Using this string we need to build our video. There are many ways to do this; I have used the decode64 method from Ruby's Base64 module.
after(:all) do
  record = @driver.stop_recording_screen
  File.open('sample.mp4', 'wb') do |file|
    file.write(Base64.decode64(record))
  end
end
Finally, you will find the recording in sample.mp4. I recommend playing it with VLC or MPlayer if other video players cannot play it.

Converting WAV to GSM using pysox

I'm experimenting with pysox and trying to simply convert a WAV file to GSM.
I'm currently using the following approach which works just fine:
infile = pysox.CSoxStream("input_file.wav")
outfile = pysox.CSoxStream('output_file.gsm','w',infile.get_signal())
chain = pysox.CEffectsChain(infile, outfile)
chain.flow_effects()
outfile.close()
I wonder if there's a better/built-in way that doesn't use effects (as I'm not applying any effects).
Thanks in advance.
I found that I actually do need a libSoX effect, since I'm changing the sample rate:
chain.add_effect(pysox.CEffect("rate", ["8k"]))
Without this line the output plays in slow motion (since my original files can have a different rate).
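Putting the question's code and the extra effect together, the whole conversion looks roughly like this (same pysox calls as above):
import pysox

infile = pysox.CSoxStream("input_file.wav")
outfile = pysox.CSoxStream('output_file.gsm', 'w', infile.get_signal())

chain = pysox.CEffectsChain(infile, outfile)
chain.add_effect(pysox.CEffect("rate", ["8k"]))  # GSM audio is 8 kHz
chain.flow_effects()

outfile.close()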

How do I find out the duration/total play time of an MP3 file using Java?

I am developing an application in Java. Part of it includes playing an mp3 file via another application. I need to find the duration (total play time) of the mp3 file. How do I do that?
You can do this very easily using JAudioTagger:
java.util.logging.Logger.getLogger("org.jaudiotagger").setLevel(Level.OFF);
AudioFile audioFile = AudioFileIO.read(new File(filePath));
System.out.println("Track length = " + audioFile.getAudioHeader().getTrackLength());
That will print out the track length (in seconds) of the file at filePath. The logger line removes a lot of (probably) unwanted info/debug logging from JAudioTagger. Besides this, JAudioTagger supports reading all sorts of metadata tags embedded in different audio file types (MP3, MP4, WMA, FLAC, Ogg Vorbis). You can even get MusicBrainz info easily, but I haven't tried that yet. For more info:
http://www.jthink.net/jaudiotagger/examples_read.jsp
You can get the jar files for it here:
http://download.java.net/maven/2/org/jaudiotagger/
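For completeness, the imports involved (assuming the standard JAudioTagger package layout) are:
import java.io.File;
import java.util.logging.Level;

import org.jaudiotagger.audio.AudioFile;
import org.jaudiotagger.audio.AudioFileIO;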
For small .mp3 files you can use:
AudioFile audioFile = AudioFileIO.read(MyFileChooser.fileName);
int duration= audioFile.getAudioHeader().getTrackLength();
System.out.println(duration);
This gives the track length in seconds as an int (for example, 196 for a track lasting 3:16).
Note that this method may not be suitable for larger files.
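If a human-readable value is needed, the seconds count can be formatted with a small helper (not part of JAudioTagger, just an illustration):
// turn a track length in seconds into m:ss, e.g. 196 -> "3:16"
static String formatDuration(int totalSeconds) {
    return String.format("%d:%02d", totalSeconds / 60, totalSeconds % 60);
}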
