Setting the sample rate on an AUHAL - macOS

I'm using the Audio Unit framework to develop a VoIP app on Mac OS X.
In my program, I set up an input AUHAL and use the default stream format (44.1 kHz, 32 bits/channel) to capture audio from the mic. In this case, my program works fine.
Here is the code:
// The default setting in my program
CheckError(AudioUnitGetProperty(m_audCapUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, // the value is 0
                                inputBus,               // the value is 1
                                &m_audCapUnitOutputStreamFormat,
                                &propertySize),
           "Couldn't get OutputSample ASBD from input unit");
// the inOutputSampleRate is 44100.0
m_audCapUnitOutputStreamFormat.mSampleRate = inOutputSampleRate;
CheckError(AudioUnitSetProperty(m_audCapUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                inputBus,
                                &m_audCapUnitOutputStreamFormat,
                                propertySize),
           "Couldn't set OutputSample ASBD on input unit");
//
Since I'm developing a VoIP app, the default format (44.1 kHz, 32 bits/channel) isn't appropriate for my program, so I want to change the sample rate to 8 kHz.
I wrote this code to change the format in my program:
//......
inOutputFormat.mSampleRate       = 8000.;
inOutputFormat.mFormatID         = kAudioFormatLinearPCM;
inOutputFormat.mChannelsPerFrame = 2;
inOutputFormat.mBitsPerChannel   = 16;
inOutputFormat.mBytesPerFrame    = 2;
inOutputFormat.mBytesPerPacket   = 2;
inOutputFormat.mFramesPerPacket  = 1;
inOutputFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
inOutputFormat.mReserved         = 0;
CheckError(AudioUnitSetProperty(m_audCapUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                inputBus,
                                &inOutputFormat,
                                ui32PropSize),
           "Couldn't set AUHAL Unit Output Format");
//.......
In this case, the program works fine until it calls AudioUnitRender in the callback function; AudioUnitRender fails with error code -10876, which the documentation lists as kAudioUnitErr_NoConnection. The error code puzzled me, and googling it turned up nothing useful. Can someone tell me what the error actually means?
That was not the end of it; I changed the format again with this code:
//.....
inOutputFormat.mSampleRate       = 8000.;
inOutputFormat.mFormatID         = kAudioFormatLinearPCM;
inOutputFormat.mChannelsPerFrame = 2;
inOutputFormat.mBitsPerChannel   = 32;
inOutputFormat.mBytesPerFrame    = 4;
inOutputFormat.mBytesPerPacket   = 4;
inOutputFormat.mFramesPerPacket  = 1;
inOutputFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
inOutputFormat.mReserved         = 0;
CheckError(AudioUnitSetProperty(m_audCapUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                inputBus,
                                &inOutputFormat,
                                ui32PropSize),
           "Couldn't set AUHAL Unit Output Format");
//........
In this case, the program again failed in AudioUnitRender, this time returning another error code, -10863 (kAudioUnitErr_CannotDoInCurrentContext). I googled it, and this time I did find something useful. After reading it, I guessed that the sample rate or format I set on the AUHAL might not be supported by the hardware.
So I wrote some code to check the available sample rates on the default input device:
//..........
UInt32 propertySize = sizeof(AudioDeviceID);
Boolean isWritable = false;
CheckError(AudioDeviceGetPropertyInfo(inDeviceID, // the default input device
                                      0,
                                      true,
                                      kAudioDevicePropertyAvailableNominalSampleRates,
                                      &propertySize,
                                      &isWritable),
           "Get the Available Sample Rate Count Failed");
m_valueCount = propertySize / sizeof(AudioValueRange);
printf("Available %d Sample Rate\n", m_valueCount);
CheckError(AudioDeviceGetProperty(inDeviceID,
                                  0,
                                  false,
                                  kAudioDevicePropertyAvailableNominalSampleRates,
                                  &propertySize,
                                  m_valueTabe),
           "Get the Available Sample Rate Count Failed");
for (UInt32 i = 0; i < m_valueCount; ++i)
{
    printf("Available Sample Rate value : %ld\n", (long)m_valueTabe[i].mMinimum);
}
//..............
And then I found that the available sample rates are 8000, 16000, 32000, 44100, 48000, 88200, and 96000.
8000 is exactly the rate I set before, yet calling AudioUnitRender still produces an error. Why?
I have googled a lot and read many documents, but I can't find the answer. Can someone solve the problem I've encountered?
In other words: how do I change the sample rate or format on an input-only AUHAL?

I finally fixed this problem yesterday by myself.
Here is my solution:
First, I used AudioDeviceGetProperty to get the list of available formats on my default input device. The list contained 8 kHz, 16 kHz, 32 kHz, 44.1 kHz, 48 kHz, 88.2 kHz, and 96 kHz (I only list the sample-rate field here, but the entries have other fields too).
Then I selected one of the available formats obtained in the first step. Taking my program as an example, I selected 8 kHz, 32 bits/channel and used AudioDeviceSetProperty to set it on the default device itself, not on the AUHAL. This is the key: after that, setting the format on the AUHAL (output scope, input bus) works fine.
As the last step, I used AudioUnitSetProperty to set the format I wanted, and the program works fine.
From this problem and solution, I conclude that a format set on an input-only AUHAL must match, or be convertible from, the format the input device is actually using. So what I need to do is set the format on the input device first, and only then set the format on the input-only AUHAL.
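For concreteness, here is a minimal sketch of that order of operations, reusing the names from the question's code (inDeviceID, m_audCapUnit, inputBus, inOutputFormat, CheckError are assumed from above); AudioDeviceSetProperty is the deprecated HAL call this answer used:
// Step 1: set the nominal rate on the device itself (takes a Float64).
Float64 desiredRate = 8000.0;
CheckError(AudioDeviceSetProperty(inDeviceID,
                                  NULL,   // no timestamp
                                  0,      // master channel
                                  true,   // input section
                                  kAudioDevicePropertyNominalSampleRate,
                                  sizeof(desiredRate),
                                  &desiredRate),
           "Couldn't set the sample rate on the input device");
// Step 2: only now set the matching format on the AUHAL (output scope, input bus).
CheckError(AudioUnitSetProperty(m_audCapUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                inputBus,
                                &inOutputFormat,
                                sizeof(inOutputFormat)),
           "Couldn't set AUHAL Unit Output Format");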

In my experience, using settings other than 44.1 kHz, 16-bit audio results in all sorts of weird errors. Some generic suggestions which might set you on the right path:
Try Novocaine (https://github.com/alexbw/novocaine). It really takes the pain out of working with Audio Units.
Try getting your app working at 44.1 kHz and then downsampling the audio yourself (a sketch follows below).
The bit depth you are setting may not be compatible with the desired sample rate.
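A rough sketch of that downsampling suggestion, assuming mono float samples; all names here are illustrative, and a production version should low-pass filter before decimating to avoid aliasing:
#include <stddef.h>

/* Naive linear-interpolation resampler: maps out_len output samples onto
   the input at a rate ratio of in_rate/out_rate (44100/8000 = 5.5125). */
void downsample_linear(const float *in, size_t in_len,
                       float *out, size_t out_len,
                       double in_rate, double out_rate) {
    double step = in_rate / out_rate;
    for (size_t i = 0; i < out_len; i++) {
        double pos = i * step;           /* fractional read position */
        size_t j = (size_t)pos;
        double frac = pos - j;
        out[i] = (j + 1 < in_len)
               ? (float)((1.0 - frac) * in[j] + frac * in[j + 1])
               : in[in_len - 1];
    }
}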

Your answer was very helpful for me. However, AudioDeviceGetProperty is deprecated.
The following listing may help bring things up to date. As an example, the sample rate is set to 32 kHz.
// Get the available sample rates of the default input device.
AudioObjectPropertyAddress defaultDeviceProperty = {
    kAudioDevicePropertyAvailableNominalSampleRates,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
UInt32 propertySize = 0;
CheckError(AudioObjectGetPropertyDataSize(defaultDevice,
                                          &defaultDeviceProperty,
                                          0,
                                          NULL,
                                          &propertySize),
           "Couldn't get sample rate count");
int m_valueCount = propertySize / sizeof(AudioValueRange);
printf("Available %d Sample Rates\n", m_valueCount);
AudioValueRange m_valueTabe[m_valueCount];
CheckError(AudioObjectGetPropertyData(defaultDevice,
                                      &defaultDeviceProperty,
                                      0,
                                      NULL,
                                      &propertySize,
                                      m_valueTabe),
           "Couldn't get available sample rates");
for (int i = 0; i < m_valueCount; ++i)
{
    printf("Available Sample Rate value : %f\n", m_valueTabe[i].mMinimum);
}
// Set the sample rate to one of the available values.
AudioValueRange inputSampleRate;
inputSampleRate.mMinimum = 32000;
inputSampleRate.mMaximum = 32000;
defaultDeviceProperty.mSelector = kAudioDevicePropertyNominalSampleRate;
CheckError(AudioObjectSetPropertyData(defaultDevice,
                                      &defaultDeviceProperty,
                                      0,
                                      NULL,
                                      sizeof(inputSampleRate),
                                      &inputSampleRate),
           "Couldn't set the nominal sample rate");


Visual Studio 2013 SerialPort not receiving all data

I am currently working on a project that requires serial communication between a PIC 24FV16KA302 and PC software.
I have searched the Internet for the past 3 days and can't find an answer to my problem, so I decided to ask here. This is my first Visual Studio program, so I don't have any experience with the software.
The PIC has a few variables and two 8x16 tables that I need to view and modify on the PC side. The problem comes when I send the tables; all other information is received without a problem. I am using a serial connection (38400/8-N-1) via an FT232 UART-to-USB converter.
The PC sends "AT+RTPM" to the PIC:
Private Sub Button7_Click(sender As Object, e As EventArgs) Handles Button7.Click
    SerialPort1.ReceivedBytesThreshold = 128
    MachineState = MS.Receive_table
    SerialPort1.Write("AT+RTPM")
End Sub
The PIC sends back 128 bytes (the values in the table):
case read_table_pwm : // send pwm table
    for (yy = 0; yy < 8; yy++) {
        for (xx = 0; xx < 16; xx++) {
            uart_send_char(controll_by_pmw_map_lb[yy][xx]);
        }
    }
    at_command = receive_state_idle;
    break;
which the software is supposed to receive and display in a DataGrid:
Private Sub SerialPort1_DataReceived(sender As Object, e As IO.Ports.SerialDataReceivedEventArgs) Handles SerialPort1.DataReceived
    If MachineState = MS.Receive_table Then
        SerialPort1.Read(Buffer_array_received_data, 0, 128)
        cellpos = 0
        For grid_y As Int16 = 0 To 7 Step 1
            For grid_x As Int16 = 0 To 15 Step 1
                DataGridView1.Rows(grid_y).Cells(grid_x).Value = Buffer_array_received_data(cellpos)
                cellpos += 1
            Next
        Next
    End If
End Sub
The problem is that most of the time (99%) it displays only part of the dataset, with zeros to the end, and when I try again it displays the other part, starting from the beginning.
First request
Second request
If I try the same thing with another program, I always get the full dataset:
Realterm
Termite
I have tried doing it cell by cell, but that only works if I request them once every second; otherwise I get the same problem.
After that I need to use a timer (100 ms) to request live data from the PIC. This works better, but some of the time I still get random data. I haven't focused on that for the moment, because without the dataset everything else is useless.
Am I missing something, or has anyone encountered the same problem?
I managed to solve the problem by replacing
SerialPort1.Read(Buffer_array_received_data, 0, 128)
with
For byte_pos As Int16 = 0 To 127 Step 1
    Buffer_array_received_data(byte_pos) = SerialPort1.ReadByte()
Next
But aren't they supposed to be the same?

How do you find the audio latency? (Windows/OSX)

I'm writing a music application, but I have no clue how applications such as Ableton/Cubase/etc. find the audio latency of a system (do they?) so they can compensate for the time difference when recording/playing.
I mean the audio input latency (from mic to usable buffer) and the audio output latency (from a buffer to sound in the speakers).
It seems more complex than just a matter of buffer size, since an internal chain of events occurs between the analog audio and the digital data the software has access to.
Any idea how to (gu)estimate that?
CoreAudio:
// kAudioDevicePropertyLatency holds the device latency in frames (a UInt32)
property.mSelector = kAudioDevicePropertyLatency;
if ( AudioObjectHasProperty( id, &property ) == true ) {
    UInt32 latency = 0;
    UInt32 dataSize = sizeof( latency );
    result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &latency );
}
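That device property is only one component, though. A commonly used estimate, and this summing is my assumption rather than something the API promises, adds the device latency, the safety offset, and the I/O buffer size; a sketch using real HAL selectors:
#include <CoreAudio/CoreAudio.h>

// Helper: fetch one UInt32-valued HAL property (error handling omitted).
static UInt32 getUInt32Prop(AudioObjectID dev, AudioObjectPropertySelector sel,
                            AudioObjectPropertyScope scope) {
    AudioObjectPropertyAddress addr = { sel, scope, kAudioObjectPropertyElementMaster };
    UInt32 value = 0;
    UInt32 size = sizeof(value);
    AudioObjectGetPropertyData(dev, &addr, 0, NULL, &size, &value);
    return value;
}

// Rough per-direction latency estimate, in frames.
UInt32 totalInputLatencyFrames(AudioObjectID dev) {
    AudioObjectPropertyScope in = kAudioObjectPropertyScopeInput;
    return getUInt32Prop(dev, kAudioDevicePropertyLatency, in)
         + getUInt32Prop(dev, kAudioDevicePropertySafetyOffset, in)
         + getUInt32Prop(dev, kAudioDevicePropertyBufferFrameSize, in);
}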
WASAPI:
IAudioClient::GetStreamLatency
// returns a REFERENCE_TIME (a LONGLONG) in 100-nanosecond units
REFERENCE_TIME latency;
captureAudioClient->GetStreamLatency( &latency );
renderAudioClient->GetStreamLatency( &latency );
ASIO:
long inputLatency, outputLatency;
ASIOGetLatencies( &inputLatency, &outputLatency );
ALSA:
snd_pcm_sframes_t frames;
snd_pcm_delay( handle, &frames );
OpenSL:
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
Method m = am.getClass().getMethod("getOutputLatency", int.class);
latency = (Integer)m.invoke(am, AudioManager.STREAM_MUSIC);
see also:
uint32_t afLatency;
android::AudioSystem::getOutputLatency(&afLatency, ANDROID_DEFAULT_OUTPUT_STREAM_TYPE);
https://android.googlesource.com/platform/system/media/+/4f924ff768d761f53db6fa2dbfb794ba7a65e776/opensles/libopensles/android_AudioPlayer.cpp
https://android.googlesource.com/platform/frameworks/av/+/006ceacb82f62a22945c7702c4c0d78f31eb2290/media/libmedia/AudioSystem.cpp
I'm not sure, but the output should be the same as the input, just delayed.
If you capture the output while playing a known input signal, the two recordings should be very similar.
Cross-correlate the two signals (the FFT gives an efficient way to compute this) and you should find a pulse; the taller and narrower the pulse, the more similar input and output are.
The time from 0 to the pulse is the delay between the signals.
The correlation of a signal with itself is a pulse at 0; the correlation with the same signal delayed by 1 s is a pulse at 1 s, and so on.
Check this link
https://www.dsprelated.com/showarticle/26.php
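A minimal time-domain sketch of that correlation approach; names are illustrative, and a real implementation would compute the correlation via the FFT for speed:
#include <stddef.h>

/* Play a test signal while recording it back, then slide the recording
   against the original; the lag with the highest correlation is the
   round-trip latency in samples. */
size_t estimate_latency_samples(const float *played, const float *recorded,
                                size_t n, size_t max_lag) {
    size_t best_lag = 0;
    double best_score = 0.0;
    for (size_t lag = 0; lag < max_lag; lag++) {
        double score = 0.0;
        for (size_t i = 0; i + lag < n; i++)
            score += (double)played[i] * recorded[i + lag];
        if (score > best_score) { best_score = score; best_lag = lag; }
    }
    return best_lag;  /* divide by the sample rate to get seconds */
}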

PyAudio: how to check volume

I'm currently developing a VoIP tool in Python working as a client-server. My problem is that I'm currently sending the PyAudio input stream as follows even when there is no sound (when nobody talks or there is no noise, data is sent anyway):
CHUNK = 1024
p = pyaudio.PyAudio()
stream = p.open(format = pyaudio.paInt16,
                channels = 1,
                rate = 44100,
                input = True,
                frames_per_buffer = CHUNK)
while 1:
    self.conn.sendVoice(stream.read(CHUNK))
I would like to check the volume to get something like this:
data = stream.read(CHUNK)
if data.volume > 20%:
    self.conn.sendVoice(data)
This way I could avoid sending useless data and spare the connection / increase performance. (I'm also looking for some kind of compression, but I think I will have to ask about that in another topic.)
It can be done using the root mean square (RMS).
One way to build your own rms function in Python is:
import math
import struct

def rms(data):
    # interpret the raw buffer as 16-bit signed samples
    count = len(data) // 2
    shorts = struct.unpack("%dh" % count, data)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * (1.0 / 32768)  # normalize to [-1.0, 1.0)
        sum_squares += n * n
    return math.sqrt(sum_squares / count)
Another choice is to use audioop to find the RMS:
import audioop
data = stream.read(CHUNK)
rms = audioop.rms(data, 2)  # width=2 bytes for paInt16 samples
If you want, you can then convert the RMS value to the decibel scale: decibel = 20 * log10(rms)

Aparapi add sample

I'm studying Aparapi (https://code.google.com/p/aparapi/) and I see strange behaviour in one of the included samples.
The sample is the first one, "add". Building and executing it works fine. I also added the following code to test whether the GPU is really used:
if (!kernel.getExecutionMode().equals(Kernel.EXECUTION_MODE.GPU)) {
    System.out.println("Kernel did not execute on the GPU!");
}
and it works fine.
But if I change the size of the array from 512 to a number greater than 999 (for example 1000), I get the following output:
!!!!!!! clEnqueueNDRangeKernel() failed invalid work group size
after clEnqueueNDRangeKernel, globalSize[0] = 1000, localSize[0] = 128
Apr 18, 2013 1:31:01 PM com.amd.aparapi.KernelRunner executeOpenCL
WARNING: ### CL exec seems to have failed. Trying to revert to Java ###
JTP
Kernel did not execute on the GPU!
Here's my code:
final int size = 1000;
final float[] a = new float[size];
final float[] b = new float[size];
for (int i = 0; i < size; i++) {
    a[i] = (float)(Math.random()*100);
    b[i] = (float)(Math.random()*100);
}
final float[] sum = new float[size];
Kernel kernel = new Kernel(){
    @Override public void run() {
        int gid = getGlobalId();
        sum[gid] = a[gid] + b[gid];
    }
};
Range range = Range.create(size);
kernel.execute(range);
System.out.println(kernel.getExecutionMode());
if (!kernel.getExecutionMode().equals(Kernel.EXECUTION_MODE.GPU)){
    System.out.println("Kernel did not execute on the GPU!");
}
kernel.dispose();
I tried specifying the group size using
Range range = Range.create(size, 128);
as suggested in a Google group, but nothing changed.
I'm currently running Mac OS X 10.8 with Java 1.6.0_43. The Aparapi version is the latest (2012-01-23).
Am I missing something? Any ideas?
Thanks in advance.
Aparapi inherits a 'grid style' of execution from OpenCL. When you specify a range of execution (say 1024), OpenCL will break this range into groups of equal size, possibly 4 groups of 256, or 8 groups of 128.
The group size must be a factor of the range (so assert(range % groupSize == 0)).
By default, Aparapi selects the group size internally, but you are choosing to fully specify the range and group size using
Range r = Range.create(n, 128);
so you are responsible for ensuring that n % 128 == 0.
From the error, it looks like you chose Range.create(1000, 128). Sadly 1000 % 128 != 0, so this range will fail.
If you specify
Range r = Range.create(n);
Aparapi will choose a valid group size by finding the highest common factor of n.
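As an illustration of the arithmetic involved (plain C for illustration, not the Aparapi API), the two usual ways to reconcile a range of n work-items with a group size look like this:
/* Pick the largest group size that divides n exactly. */
unsigned largestDividingGroup(unsigned n, unsigned maxGroup) {
    unsigned g = maxGroup;
    while (g > 1 && n % g != 0) g--;  /* falls back to 1 in the worst case */
    return g;
}
/* Or round n up to a multiple of the group size; the kernel must then
   guard with something like if (gid < n) before touching the arrays. */
unsigned paddedGlobalSize(unsigned n, unsigned group) {
    return ((n + group - 1) / group) * group;
}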
Try dropping the 128 as the second arg.
Gary

Error reading a video file in MATLAB

Hi, I am getting a strange error while trying to read a video frame by frame in MATLAB. I am doing the following:
xyloObj = VideoReader(vid_name);
fps = xyloObj.FrameRate;
nFrames = xyloObj.NumberOfFrames;
vidHeight = xyloObj.Height;
vidWidth = xyloObj.Width;
% Preallocate movie structure.
mov(1:nFrames) = ...
    struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'),...
           'colormap', []);
index = 1;
for k = 1:nFrames
    mov(index).cdata = read(xyloObj, k);
    index = index + 1;
end
I get the following error:
Error using VideoReader/read (line 80)
The file could not be read.
I haven't found a solution to this error anywhere else.
EDIT: The file format is AVI, with a path like D:\videos\drunk.avi.
How about using mmread? I used VideoReader on Linux too, but in my case the number of frames it reported was not correct.
Moreover, since I need the timestamps of the videos, I have switched from VideoReader to mmread.
