CoreAudio: get output sample rate

I'm creating a Mac OS X CoreAudio command-line program that renders alphanumeric terminal input into a live audio signal by means of AudioUnits, trying to stay as simple as possible. Everything works fine except for matching the output sample rate.
As a starting point I'm using the Chapter 07 tutorial code of Addison-Wesley's "Learning Core Audio", CH07_AUGraphSineWave.
I initialize the AudioComponent "by the book":
void CreateAndConnectOutputUnit (MyGenerator *generator)
{
    AudioComponentDescription theoutput = {0};
    theoutput.componentType = kAudioUnitType_Output;
    theoutput.componentSubType = kAudioUnitSubType_DefaultOutput;
    theoutput.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext (NULL, &theoutput);
    if (comp == NULL) {
        printf ("can't get output unit");
        exit (-1);
    }

    CheckError (AudioComponentInstanceNew(comp, &generator->outputUnit),
                "Couldn't open component for outputUnit");

    AURenderCallbackStruct input;
    input.inputProc = MyRenderProc;
    input.inputProcRefCon = generator;

    CheckError(AudioUnitSetProperty(generator->outputUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Input,
                                    0,
                                    &input,
                                    sizeof(input)),
               "AudioUnitSetProperty failed");

    CheckError (AudioUnitInitialize(generator->outputUnit),
                "Couldn't initialize output unit");
}
My main problem is that I don't know how to retrieve the output hardware sample rate for use in the AURenderCallbackStruct render callback, since it plays a vital part in the signal-generating process. I can't afford to hard-code the sample rate into the rendering callback, even though I know that would be the easiest way, since a rate mismatch causes the signal to be played at the wrong pitch.
Is there a way of getting the default output's sample rate with such a low-level API?
Is there a way of matching it somehow, without getting overly complicated?
Have I missed something?
Thanks in advance.
Regards,
Tom

When calling AudioUnitGetProperty, the sixth parameter must be a pointer to a variable that holds the buffer size on input and receives the size of the answer on output.
Float64 sampleRate;
UInt32 sampleRateSize = sizeof(sampleRate);
CheckError(AudioUnitGetProperty(generator->outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,
                                0,
                                &sampleRate,
                                &sampleRateSize),
           "AudioUnitGetProperty failed");
However, as long as the sample rate has not been set, the function does not return a value (but there is also no error!).
You can, however, set the sample rate, for instance with:
Float64 sampleRate = 48000;
CheckError(AudioUnitSetProperty(generator->outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,
                                0,
                                &sampleRate,
                                sizeof(sampleRate)),
           "AudioUnitSetProperty failed");
From then on you can also read the value with the Get call.
This does not answer the question of what the default value is. As far as I know, that is always 44100 Hz.
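If you'd rather query the hardware than assume, one option (a minimal sketch, error checking omitted) is to ask the HAL for the default output device's nominal sample rate:
#include <CoreAudio/CoreAudio.h>

// Find the default output device.
AudioObjectPropertyAddress addr = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioDeviceID device = kAudioObjectUnknown;
UInt32 size = sizeof(device);
AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &device);

// Read the device's nominal sample rate.
Float64 nominalRate = 0;
size = sizeof(nominalRate);
addr.mSelector = kAudioDevicePropertyNominalSampleRate;
AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &nominalRate);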

The sample rate is a property of all AudioUnits (see kAudioUnitProperty_SampleRate), although ultimately it's the I/O unit (RemoteIO on iOS, the HAL unit on Mac OS X) that drives the sample rate at the audio interface. It is not available in the callback structure; you need to read this property with AudioUnitGetProperty() in your initialisation code.
In your case, the following would probably do it:
Float64 sampleRate;
UInt32 propSize = sizeof(sampleRate);
CheckError(AudioUnitGetProperty(generator->outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,
                                0,
                                &sampleRate,
                                &propSize),
           "Couldn't get sample rate");
If you're targeting iOS, you also need to interact with the Audio Session.
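A sketch of reading the hardware rate there through the C-based Audio Session Services API (since deprecated in favour of AVAudioSession), assuming the session has already been initialised and activated:
Float64 hwSampleRate = 0;
UInt32 propSize = sizeof(hwSampleRate);
// Assumes AudioSessionInitialize()/AudioSessionSetActive() were called earlier.
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                        &propSize,
                        &hwSampleRate);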

Related

Setting sampling rate of default audio output device programmatically

I'm working on an application that plays sounds through the default audio device on a Mac. I want to change the output sampling rate and bit depth of the default output device but it always gives me a kAudioUnitErr_PropertyNotWritable error code.
Here is my test code:
AudioStreamBasicDescription streamFormat;
AudioStreamBasicDescription newStreamFormat;
newStreamFormat.mSampleRate = 96000; // the sample rate of the audio stream
newStreamFormat.mFormatID = kAudioFormatLinearPCM; // the specific encoding type of audio stream
newStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger;//kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonMixable;
newStreamFormat.mFramesPerPacket = 1;
newStreamFormat.mChannelsPerFrame = 1;
newStreamFormat.mBitsPerChannel = 24;
newStreamFormat.mBytesPerPacket = 2;
newStreamFormat.mBytesPerFrame = 2;
UInt32 size = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamFormat, &size);
result = AudioOutputUnitStop(myUnit);
result = AudioUnitUninitialize(myUnit);
result = AudioUnitSetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &newStreamFormat, size);
result = AudioUnitInitialize(myUnit);
result = AudioOutputUnitStart(myUnit);
result = AudioUnitGetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamFormat, &size);
result = AudioUnitGetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamFormat, &size);
When I make the call to set the stream format on kAudioUnitScope_Input I don't get any error, but when I set it on kAudioUnitScope_Output it fails with the property-not-writable error.
It must be possible to do this programmatically (Audio MIDI Setup does it) but I have searched and searched but I haven't been able to find any solution.
I did find this post that implies that setting the input sampling rate of the device will update the output as well. I tried this but when I read back the property the output doesn't match what I set on the input.
I'm pretty sure it's not the output AudioUnit's job to configure devices. It's more of an intermediary between clients and audio devices. Which means AudioUnitSetProperty() is the wrong API for the job.
So if you want to configure the device, try setting kAudioDevicePropertyNominalSampleRate on it using the AudioObjectSetPropertyData() function.
Then, unless you want a gratuitous rate conversion, you probably want to make sure your audio unit input format matches the new device sample rate by doing what you're already doing: calling AudioUnitSetProperty() on the input (data going into the audio unit) scope.
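A minimal sketch of that two-step approach, assuming the default output device (error checking omitted):
#include <CoreAudio/CoreAudio.h>

// 1. Find the default output device and set its nominal sample rate.
AudioObjectPropertyAddress addr = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioDeviceID device = kAudioObjectUnknown;
UInt32 size = sizeof(device);
AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &device);

Float64 newRate = 96000.0;
addr.mSelector = kAudioDevicePropertyNominalSampleRate;
AudioObjectSetPropertyData(device, &addr, 0, NULL, sizeof(newRate), &newRate);

// 2. Then match the audio unit's input-scope stream format to the new
//    device rate (as in the existing AudioUnitSetProperty call on the
//    input scope) to avoid a gratuitous rate conversion.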

BAD_ACCESS on RemoteIO callback only when headphone jack plugged in

I have the following render callback for a RemoteIO audio unit. Simply accessing the 0th element of the ioData parameter results in a crash. Very simply put, this works with no headphone jack connected, but as soon as I plug a jack into my iPhone 6+, I get a bad-access error when accessing the buffer.
If I plug it in while the app is running, it crashes. If I plug it in first and then build and run the app, it still crashes. I checked whether inNumberFrames perhaps changes based on a line-out connection, but it remains consistently at 512 frames.
OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    float *output = (float *)ioData->mBuffers[0].mData;
    output[0] = 1;
    return noErr;
}
Apparently an AVAudioSession route-change callback is called even if the headphones are plugged in before app launch. One of the things I tried was to delay the RemoteIO start until that point. However, the following code produces an error:
- (void)handleRouteChange:(NSNotification *)notification {
    OSStatus err = AudioOutputUnitStart(remoteIOUnit);
}
The error:
Error: should alloc (-10849)
If ioData is NULL in an Audio Unit callback, you need to have allocated your own AudioBufferList and buffer data memory (mData), and use that memory instead. Some audio routes may have their own buffers, but this is not guaranteed, so your code needs to account for the case of a NULL ioData.
Another possible cause of a NULL ioData parameter is that your callback is an input callback, not an output callback, due to a bug in setting the callback for the correct RemoteIO bus.
The -10849 error might mean you are trying to start an audio unit that is already started and not yet stopped (it takes a while after any audio stop command).
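A minimal sketch of that kind of guard, where sMyBufferList is a hypothetical AudioBufferList you allocated yourself at setup time, sized for the maximum expected inNumberFrames:
static AudioBufferList *sMyBufferList; // hypothetical, allocated before starting the unit

OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    AudioBufferList *abl = ioData;
    // Fall back to our own memory when this route provides no buffer.
    if (abl == NULL || abl->mBuffers[0].mData == NULL) {
        abl = sMyBufferList;
    }
    float *output = (float *)abl->mBuffers[0].mData;
    output[0] = 1;
    return noErr;
}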

Format of microphone audio passed to call back in mac OS X core audio example

I need access to the audio data from the microphone on a MacBook. I have an example program for recording microphone data based on the one in "Learning Core Audio". When I run this program and break in the callback routine, I can see the inBuffer pointer and the mAudioData pointer, but I am having a heck of a time making sense of the data. I've tried casting the void* mAudioData pointer to SInt16, to SInt32 and to float, and tried a number of endian conversions, all with nonsense-looking results. What I need to know definitively is the number format of the data in the buffer. The example actually works, writing microphone data to a file which I can play, so I know that real audio is being recorded.
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
//recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerPacket = recordFormat.mBytesPerFrame = recordFormat.mChannelsPerFrame * sizeof(SInt16);
recordFormat.mFramesPerPacket = 1;
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                  0,
                                  NULL,
                                  &propSize,
                                  &recordFormat),
           "AudioFormatGetProperty failed");

// set up queue
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat,
                              MyAQInputCallback,
                              &recorder,
                              NULL,
                              kCFRunLoopCommonModes,
                              0,
                              &queue),
           "AudioQueueNewInput failed");

UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
                                 kAudioConverterCurrentOutputStreamDescription,
                                 &recordFormat,
                                 &size),
           "Couldn't get queue's format");
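If the ASBD above really resolves to 16-bit signed linear PCM (check the recordFormat read back from the queue to confirm the flags), the bytes in mAudioData should be interleaved native-endian SInt16 samples. A hedged sketch of reading them in the input callback, using the standard Audio Queue input callback signature:
static void MyAQInputCallback(void *inUserData,
                              AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    // Interleaved for 2 channels: L0, R0, L1, R1, ...
    const SInt16 *samples = (const SInt16 *)inBuffer->mAudioData;
    UInt32 numSamples = inBuffer->mAudioDataByteSize / sizeof(SInt16);
    for (UInt32 i = 0; i + 1 < numSamples; i += 2) {
        SInt16 left  = samples[i];
        SInt16 right = samples[i + 1];
        (void)left; (void)right; // process samples here
    }
}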

playback raw pcm from network using AudioQueue in CoreAudio

I need to play raw PCM data (16-bit signed) using CoreAudio on OS X. I get it from the network using a UDP socket (on the sender side the data is captured from a microphone).
The problem is that all I hear now is a short crackling noise and then only silence.
I'm trying to play the data using an AudioQueue. I set it up like this:
// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;

OSStatus err = noErr;

// create the audio queue
err = AudioQueueNewOutput(&streamFormat, MyAudioQueueOutputCallback, myData, NULL, NULL, 0, &myData->audioQueue);
if (err) {
    PRINTERROR("AudioQueueNewOutput");
    myData->failed = true;
    result = false;
}

// allocate audio queue buffers
for (unsigned int i = 0; i < kNumAQBufs; ++i) {
    err = AudioQueueAllocateBuffer(myData->audioQueue, kAQBufSize, &myData->audioQueueBuffer[i]);
    if (err) {
        PRINTERROR("AudioQueueAllocateBuffer");
        myData->failed = true;
        result = false;
        break;
    }
}

// listen for kAudioQueueProperty_IsRunning
err = AudioQueueAddPropertyListener(myData->audioQueue, kAudioQueueProperty_IsRunning, MyAudioQueueIsRunningCallback, myData);
if (err) {
    PRINTERROR("AudioQueueAddPropertyListener");
    myData->failed = true;
    result = false;
}
MyAudioQueueOutputCallback is:
void MyAudioQueueOutputCallback(void *inClientData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inBuffer)
{
    // this is called by the audio queue when it has finished decoding our data.
    // The buffer is now free to be reused.
    MyData *myData = (MyData *)inClientData;
    unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);

    // signal waiting thread that the buffer is free.
    pthread_mutex_lock(&myData->mutex);
    myData->inuse[bufIndex] = false;
    pthread_cond_signal(&myData->cond);
    pthread_mutex_unlock(&myData->mutex);
}
MyAudioQueueIsRunningCallback is:
void MyAudioQueueIsRunningCallback(void *inClientData,
                                   AudioQueueRef inAQ,
                                   AudioQueuePropertyID inID)
{
    MyData *myData = (MyData *)inClientData;
    UInt32 running;
    UInt32 size = sizeof(running); // must hold the buffer size on input
    OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &running, &size);
    if (err) {
        PRINTERROR("get kAudioQueueProperty_IsRunning");
        return;
    }
    if (!running) {
        pthread_mutex_lock(&myData->mutex);
        pthread_cond_signal(&myData->done);
        pthread_mutex_unlock(&myData->mutex);
    }
}
and MyData is:
struct MyData
{
    AudioQueueRef audioQueue;                                     // the audio queue
    AudioQueueBufferRef audioQueueBuffer[kNumAQBufs];             // audio queue buffers
    AudioStreamPacketDescription packetDescs[kAQMaxPacketDescs];  // packet descriptions for enqueuing audio
    unsigned int fillBufferIndex;                                 // the index of the audioQueueBuffer that is being filled
    size_t bytesFilled;                                           // how many bytes have been filled
    size_t packetsFilled;                                         // how many packets have been filled
    bool inuse[kNumAQBufs];                                       // flags to indicate that a buffer is still in use
    bool started;                                                 // flag to indicate that the queue has been started
    bool failed;                                                  // flag to indicate an error occurred
    bool finished;                                                // flag to indicate that termination is requested
    pthread_mutex_t mutex;                                        // a mutex to protect the inuse flags
    pthread_mutex_t mutex2;                                       // a mutex to protect the AudioQueue buffer
    pthread_cond_t cond;                                          // a condition variable for handling the inuse flags
    pthread_cond_t done;                                          // a condition variable signalled when the queue stops
};
I'm sorry if I posted too much code; I hope it helps you understand exactly what I'm doing.
My code is mostly based on this code, which is a version of the AudioFileStreamExample from the Mac Developer Library adapted to work with CBR data.
I also looked at this post and tried the AudioStreamBasicDescription described there, and tried changing my flags to little or big endian. It didn't work.
I looked at some other posts here and in other resources describing similar problems, and I checked the byte order of my PCM data, for example. I just can't post more than two links.
Please help me understand what I'm doing wrong! Maybe I should abandon this approach and use Audio Units right away? I'm just a newbie in CoreAudio and hoped that the mid-level part of CoreAudio would solve this problem.
P.S. Sorry for my English; I tried my best.
I hope you've solved this one on your own already, but for the benefit of other people who are having this problem, I'll post up an answer.
The problem is most likely that once an Audio Queue is started, time continues moving forward, even if you stop enqueueing buffers. But when you enqueue a buffer, it is enqueued with a timestamp right after the previously enqueued buffer. This means that if you don't stay ahead of where the audio queue is playing, you will end up enqueuing buffers with timestamps in the past; the audio queue will go silent, and the isRunning property will still be true.
To work around this, you have a couple of options. The simplest in theory would be to never fall behind on submitting buffers. But since you are using UDP, there is no guarantee that you will always have data to submit.
Another option is to keep track of what sample you should be playing and submit an empty buffer of silence whenever you need to fill a gap. This works well if your source data has timestamps that you can use to calculate how much silence you need. But ideally, you wouldn't need to do this.
Instead, you should calculate the timestamp for each buffer using the system time. Instead of AudioQueueEnqueueBuffer, you'll need to use AudioQueueEnqueueBufferWithParameters. You just need to make sure the timestamp is ahead of where the queue currently is. You'll also have to keep track of what the system time was when you started the queue, so you can calculate the correct timestamp for each buffer you submit. If you have timestamp values on your source data, you should be able to use them to calculate the buffer timestamps as well.
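A minimal sketch of the timestamped enqueue, assuming a hypothetical framesEnqueued counter added to MyData that tracks the frames submitted since AudioQueueStart:
AudioTimeStamp startTime = {0};
startTime.mSampleTime = myData->framesEnqueued; // hypothetical counter, not in the original MyData
startTime.mFlags = kAudioTimeStampSampleTimeValid;

OSStatus err = AudioQueueEnqueueBufferWithParameters(
    myData->audioQueue,
    buffer,
    0, NULL,     // no packet descriptions needed for CBR PCM
    0, 0,        // don't trim frames at start or end
    0, NULL,     // no parameter events
    &startTime,  // the sample time at which this buffer should play
    NULL);       // actual start time not needed here

myData->framesEnqueued += buffer->mAudioDataByteSize / streamFormat.mBytesPerFrame;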

AudioQueue callback is not called uniformly to play back audio

In my app, I receive audio data over a socket in linear PCM format, at a uniform interval of approximately 50 ms.
I am using an AudioQueue to play it. I took most of the code from the AudioQueue SpeakHere example; the only difference is that I need to run it on Mac OS.
The following is the relevant piece of code.
Set up the AudioStreamBasicDescription format:
FillOutASBDForLPCM(sRecordFormat,
                   16000,    // sample rate
                   1,        // channels per frame
                   16,       // valid bits per channel
                   16,       // total bits per channel
                   false,    // not float
                   false);   // not big-endian
Allocate buffers to hold and play the data:
for (int i = 0; i < kNumberBuffersPLyer; ++i) {
    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                  "AudioQueueAllocateBuffer failed");
}
Where bufferByteSize is 640 and the number of buffers is 3.
To start the queue:
OSStatus errorCode = AudioQueueStart(mQueue, NULL);
Now, the thing is, I was expecting it to hit the callback automatically when it had played a buffer, but that wasn't happening.
So as and when I get a buffer, I enqueue it. This is the code:
void AudioStream::startQueueIfNeeded() {
    SetLooping(true);
    // prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffersPLyer; ++i)
    {
        AQBufferCallback(this, mQueue, mBuffers[0]);
        //enQueueBuffer(this, mQueue, mBuffers[i]);
    }
    // AudioSessionSetActive( true );
    OSStatus errorCode = AudioQueueStart(mQueue, NULL);
    mIsDone = false;
    mIsStarted = true;
}
I feel the buffer is proper, but I can't hear the sound. Can anyone guide me on what I am doing wrong?
Thanks in advance.
Thanks for having a look at it. The problem was that I was running the AudioQueue inside a C++-based thread class and it was being terminated; when I took it out of the thread class, it worked fine.
