AudioQueue callback is not being called uniformly to play back audio - Cocoa

In my app, I receive audio data over a socket in linear PCM format, at roughly uniform intervals of about 50 ms.
I am using an AudioQueue to play it. I took most of the code from the AudioQueue SpeakHere example; the only difference is that I need to run it on Mac OS.
The following is the relevant piece of code.
Set up the AudioStreamBasicDescription format:
FillOutASBDForLPCM(sRecordFormat,
                   16000,   // sample rate
                   1,       // channels per frame
                   16,      // valid bits per channel
                   16,      // total bits per channel
                   false,   // is float
                   false);  // is big-endian
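For reference, that call should produce the equivalent of the following explicit ASBD (a sketch, assuming the stock FillOutASBDForLPCM helper from CoreAudioTypes.h):
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 16000;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; // integer, little-endian, packed
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = 2;  // 16 bits * 1 channel
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = 2;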
Allocate buffers to hold and play the data:
for (int i = 0; i < kNumberBuffersPLyer; ++i) {
    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                  "AudioQueueAllocateBuffer failed");
}
where bufferByteSize is 640 and the number of buffers is 3.
To start the queue:
OSStatus errorCode = AudioQueueStart(mQueue,NULL);
Now, the thing is, I was expecting the callback to be hit automatically whenever the queue finished playing a buffer, but that wasn't happening.
So as and when I receive a buffer, I enqueue it. This is the code:
void AudioStream::startQueueIfNeeded() {
    SetLooping(true);
    // prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffersPLyer; ++i)
    {
        AQBufferCallback(this, mQueue, mBuffers[i]); // prime each buffer (the original passed mBuffers[0] every time)
        //enQueueBuffer(this, mQueue, mBuffers[i]);
    }
    // AudioSessionSetActive( true );
    OSStatus errorCode = AudioQueueStart(mQueue, NULL);
    mIsDone = false;
    mIsStarted = true;
}
I feel the buffer data is proper, but I can't hear any sound. Can anyone tell me what I am doing wrong?
Thanks in advance.

Thanks for taking a look at it. The problem was that I was running the AudioQueue inside a C++-based thread class, and that thread was being terminated; when I took the queue out of the thread class, it worked fine.
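For anyone who hits the same issue: the queue (and whatever object owns it) must outlive playback, and if you create it on your own thread, that thread has to stay alive so the queue isn't torn down with it. A minimal sketch of one way to do that (AudioThreadEntry and SetupQueue are illustrative names, not from the original code):
static void* AudioThreadEntry(void* arg)
{
    AudioStream* stream = (AudioStream*)arg;
    stream->SetupQueue();   // hypothetical helper: create the AudioQueue and prime its buffers
    // Keep this thread alive for the lifetime of playback; if the thread
    // function returns here, the owning object can be destroyed and the
    // queue goes silent.
    CFRunLoopRun();
    return NULL;
}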

Related

Issues with CAPlayThrough Example

I am trying to learn Xcode Core Audio and stumbled upon this example:
https://developer.apple.com/library/mac/samplecode/CAPlayThrough/Introduction/Intro.html#//apple_ref/doc/uid/DTS10004443
My intention is to capture the raw audio. Every time I hit a breakpoint, I lose the audio, since it is using CARingBuffer.
How would you remove the time factor? I don't need real-time audio.
Since it is using a CARingBuffer, it should keep on writing to the same memory locations, so why don't I hear the audio while I am stopped at a breakpoint?
I am reading the Learning Core Audio book, but so far I cannot figure out this part of the following code:
CARingBufferError CARingBuffer::Store(const AudioBufferList *abl, UInt32 framesToWrite, SampleTime startWrite)
{
    if (framesToWrite == 0)
        return kCARingBufferError_OK;

    if (framesToWrite > mCapacityFrames)
        return kCARingBufferError_TooMuch; // too big!

    SampleTime endWrite = startWrite + framesToWrite;

    if (startWrite < EndTime()) {
        // going backwards, throw everything out
        SetTimeBounds(startWrite, startWrite);
    } else if (endWrite - StartTime() <= mCapacityFrames) {
        // the buffer has not yet wrapped and will not need to
    } else {
        // advance the start time past the region we are about to overwrite
        SampleTime newStart = endWrite - mCapacityFrames; // one buffer of time behind where we're writing
        SampleTime newEnd = std::max(newStart, EndTime());
        SetTimeBounds(newStart, newEnd);
    }

    // write the new frames
    Byte **buffers = mBuffers;
    int nchannels = mNumberChannels;
    int offset0, offset1, nbytes;
    SampleTime curEnd = EndTime();

    if (startWrite > curEnd) {
        // we are skipping some samples, so zero the range we are skipping
        offset0 = FrameOffset(curEnd);
        offset1 = FrameOffset(startWrite);
        if (offset0 < offset1)
            ZeroRange(buffers, nchannels, offset0, offset1 - offset0);
        else {
            ZeroRange(buffers, nchannels, offset0, mCapacityBytes - offset0);
            ZeroRange(buffers, nchannels, 0, offset1);
        }
        offset0 = offset1;
    } else {
        offset0 = FrameOffset(startWrite);
    }

    offset1 = FrameOffset(endWrite);
    if (offset0 < offset1)
        StoreABL(buffers, offset0, abl, 0, offset1 - offset0);
    else {
        nbytes = mCapacityBytes - offset0;
        StoreABL(buffers, offset0, abl, 0, nbytes);
        StoreABL(buffers, 0, abl, nbytes, offset1);
    }

    // now update the end time
    SetTimeBounds(StartTime(), endWrite);

    return kCARingBufferError_OK; // success
}
Thanks!
If I understood the question correctly, the signal is lost while the input unit (the producer) is halted on a breakpoint. I presume this is expected behavior: Core Audio is a pull-model engine running off a real-time thread. This means that when your producer hits a breakpoint, the ring buffer empties while the output unit (the consumer) keeps running and gets nothing from the buffer; the playthrough chain is interrupted, hence the silence.
Perhaps this code from the example is not really the simplest one: as far as I can tell, it also zeroes the audio buffers when the ring buffer gets overrun or underrun.
The term "raw audio" in the question is also not self-explanatory; I'm not sure what it means. I would suggest learning async I/O with simpler circular buffers first. There are a few of them (without the obligatory time values) on GitHub.
Please also be so kind as to format the source code for easier reading.
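To illustrate the suggestion above, here is a minimal sketch of a single-producer/single-consumer byte ring with no notion of time (names and sizes are illustrative, not from the CAPlayThrough example):
#include <atomic>
#include <cstdint>

struct ByteRing {
    enum { kCapacity = 32768 };            // must be a power of two
    uint8_t data[kCapacity];
    std::atomic<uint32_t> readPos{0};      // total bytes consumed
    std::atomic<uint32_t> writePos{0};     // total bytes produced

    // producer thread
    bool write(const uint8_t *src, uint32_t n) {
        uint32_t w = writePos.load(std::memory_order_relaxed);
        uint32_t r = readPos.load(std::memory_order_acquire);
        if (kCapacity - (w - r) < n) return false;       // not enough room
        for (uint32_t i = 0; i < n; ++i)
            data[(w + i) & (kCapacity - 1)] = src[i];
        writePos.store(w + n, std::memory_order_release);
        return true;
    }

    // consumer thread
    uint32_t read(uint8_t *dst, uint32_t n) {
        uint32_t r = readPos.load(std::memory_order_relaxed);
        uint32_t w = writePos.load(std::memory_order_acquire);
        uint32_t avail = w - r;
        if (n > avail) n = avail;                        // read what is there
        for (uint32_t i = 0; i < n; ++i)
            dst[i] = data[(r + i) & (kCapacity - 1)];
        readPos.store(r + n, std::memory_order_release);
        return n;
    }
};
With something like this, a breakpoint on the producer just means read() starts returning 0 until data arrives again; there are no time bounds to fall behind.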

Format of microphone audio passed to callback in Mac OS X Core Audio example

I need access to audio data from the microphone on a MacBook. I have an example program for recording microphone data, based on the one in "Learning Core Audio". When I run this program and break in the callback routine, I can see the inBuffer pointer and the mAudioData pointer. However, I am having a heck of a time making sense of the data. I've tried casting the void* mAudioData pointer to SInt16, to SInt32, and to float, and tried a number of endian conversions, all with nonsense-looking results. What I need to know definitively is the number format of the data in the buffer. The example actually works, writing microphone data to a file that I can play, so I know real audio is being recorded.
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
//recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerPacket = recordFormat.mBytesPerFrame = recordFormat.mChannelsPerFrame * sizeof(SInt16);
recordFormat.mFramesPerPacket = 1;
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                  0,
                                  NULL,
                                  &propSize,
                                  &recordFormat),
           "AudioFormatProperty failed");

//set up queue
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat,
                              MyAQInputCallback,
                              &recorder,
                              NULL,
                              kCFRunLoopCommonModes,
                              0,
                              &queue),
           "AudioQueueNewInput failed");

UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
                                 kAudioConverterCurrentOutputStreamDescription,
                                 &recordFormat,
                                 &size),
           "Couldn't get queue's format");

Playback of raw PCM from network using AudioQueue in Core Audio

I need to play raw PCM data (16-bit signed) using Core Audio on OS X. I get it from the network using a UDP socket (on the sender side the data is captured from a microphone).
The problem is that all I hear now is some short crackling noise and then only silence.
I'm trying to play the data using an AudioQueue. I set it up like this:
// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;
OSStatus err = noErr;

// create the audio queue
err = AudioQueueNewOutput(&streamFormat, MyAudioQueueOutputCallback, myData, NULL, NULL, 0, &myData->audioQueue);
if (err)
{ PRINTERROR("AudioQueueNewOutput"); myData->failed = true; result = false; }

// allocate audio queue buffers
for (unsigned int i = 0; i < kNumAQBufs; ++i) {
    err = AudioQueueAllocateBuffer(myData->audioQueue, kAQBufSize, &myData->audioQueueBuffer[i]);
    if (err)
    { PRINTERROR("AudioQueueAllocateBuffer"); myData->failed = true; result = false; break; }
}

// listen for kAudioQueueProperty_IsRunning
err = AudioQueueAddPropertyListener(myData->audioQueue, kAudioQueueProperty_IsRunning, MyAudioQueueIsRunningCallback, myData);
if (err)
{ PRINTERROR("AudioQueueAddPropertyListener"); myData->failed = true; result = false; }
MyAudioQueueOutputCallback is:
void MyAudioQueueOutputCallback(void* inClientData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inBuffer)
{
    // this is called by the audio queue when it has finished decoding our data.
    // The buffer is now free to be reused.
    MyData* myData = (MyData*)inClientData;
    unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);

    // signal waiting thread that the buffer is free.
    pthread_mutex_lock(&myData->mutex);
    myData->inuse[bufIndex] = false;
    pthread_cond_signal(&myData->cond);
    pthread_mutex_unlock(&myData->mutex);
}
MyAudioQueueIsRunningCallback is:
void MyAudioQueueIsRunningCallback(void* inClientData,
                                   AudioQueueRef inAQ,
                                   AudioQueuePropertyID inID)
{
    MyData* myData = (MyData*)inClientData;
    UInt32 running;
    UInt32 size = sizeof(running); // must be initialized to the size of the receiving variable
    OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &running, &size);
    if (err) { PRINTERROR("get kAudioQueueProperty_IsRunning"); return; }
    if (!running) {
        pthread_mutex_lock(&myData->mutex);
        pthread_cond_signal(&myData->done);
        pthread_mutex_unlock(&myData->mutex);
    }
}
and MyData is:
struct MyData
{
    AudioQueueRef audioQueue;                                     // the audio queue
    AudioQueueBufferRef audioQueueBuffer[kNumAQBufs];             // audio queue buffers
    AudioStreamPacketDescription packetDescs[kAQMaxPacketDescs];  // packet descriptions for enqueuing audio
    unsigned int fillBufferIndex;                                 // the index of the audioQueueBuffer being filled
    size_t bytesFilled;                                           // how many bytes have been filled
    size_t packetsFilled;                                         // how many packets have been filled
    bool inuse[kNumAQBufs];                                       // flags to indicate that a buffer is still in use
    bool started;                                                 // flag to indicate that the queue has been started
    bool failed;                                                  // flag to indicate an error occurred
    bool finished;                                                // flag to indicate that termination is requested
    pthread_mutex_t mutex;                                        // a mutex to protect the inuse flags
    pthread_mutex_t mutex2;                                       // a mutex to protect the AudioQueue buffer
    pthread_cond_t cond;                                          // a condition variable for handling the inuse flags
    pthread_cond_t done;                                          // a condition variable signaled when the queue stops
};
I'm sorry if I posted too much code; I hope it helps to understand what exactly I am doing.
My code is mostly based on this code, which is a version of the AudioFileStreamExample from the Mac Developer Library adapted to work with CBR data.
I also looked at this post, tried the AudioStreamBasicDescription described there, and tried changing my flags to little- or big-endian. It didn't work.
I looked at some other posts here and elsewhere while researching this problem, and I checked the byte order of my PCM data, for example. I just can't post more than two links.
Please, anyone, help me understand what I'm doing wrong! Maybe I should abandon this approach and use Audio Units right away? I'm just a newbie in Core Audio and hoped that the mid-level of Core Audio would help me solve this problem.
P.S. Sorry for my English; I did the best I could.
I hope you've solved this one on your own already, but for the benefit of other people who run into this problem, I'll post an answer.
The problem is most likely that once an audio queue is started, time keeps moving forward even if you stop enqueueing buffers, and each buffer you enqueue is given a timestamp right after the previously enqueued buffer. This means that if you don't stay ahead of where the audio queue is playing, you end up enqueueing buffers with timestamps in the past; the audio queue then goes silent while the isRunning property remains true.
To work around this, you have a couple of options. The simplest in theory would be to never fall behind on submitting buffers. But since you are using UDP, there is no guarantee that you will always have data to submit.
Another option is to keep track of which sample you should be playing and submit a buffer of silence whenever you need to fill a gap. This works well if your source data has timestamps that you can use to calculate how much silence you need. But ideally, you wouldn't need to do this.
Instead, you should calculate the timestamp for each buffer using the system time and use AudioQueueEnqueueBufferWithParameters rather than AudioQueueEnqueueBuffer. You just need to make sure the timestamp is ahead of where the queue currently is. You'll also have to keep track of what the system time was when you started the queue, so you can calculate the correct timestamp for each buffer you submit. If you have timestamp values on your source data, you should be able to use them to calculate the buffer timestamps as well.
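A minimal sketch of that idea (assuming the caller tracks nextSampleTime, a running frame counter starting at 0 when the queue starts, and bytesPerFrame from the stream format; both names are illustrative):
AudioTimeStamp ts = {0};
ts.mSampleTime = nextSampleTime;               // where this buffer should start playing
ts.mFlags = kAudioTimeStampSampleTimeValid;

OSStatus err = AudioQueueEnqueueBufferWithParameters(myData->audioQueue,
                                                     buffer,
                                                     0, NULL,   // no packet descriptions (CBR PCM)
                                                     0, 0,      // no frame trimming
                                                     0, NULL,   // no parameter events
                                                     &ts,       // explicit start time
                                                     NULL);
if (!err)
    nextSampleTime += buffer->mAudioDataByteSize / bytesPerFrame;
When a network gap occurs, nextSampleTime simply keeps advancing past the missing frames, so the next buffer is scheduled in the future instead of in the past.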

Cannot get OpenAL to play sound

I've searched the net, and I've searched here. I've found code that I could compile, and it works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated waveforms. I've pretty much copied and pasted the working code (only adding multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes: I was looking for something that would let me use the original methodology. The original system used paired bytes for music (the sound effects, only two of them, were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time constant so that I only have one place to adjust the timing (to get it as close as possible to the original sequence).
The music is a generated waveform (I've double-checked the math and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.
I tried SDL2 first, but its method of only playing one sound doesn't work for me. Also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments), and blending the three voices together (when they all run with different timings) is a mess. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't let me tweak it appropriately. This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
    FreeConsole();                  //Get rid of the DOS console, don't need it
    if (InitLog() < 0) return -1;   //Start logging
    UINT_PTR tim = NULL;
    SDL_Event event;
    InitVideo(false);               //Set to window for now, will put options in later
    curmusic = 5;
    InitAudio();
    SetTimer(NULL, tim, _FREQ_, TimerProc);
    SDL_PollEvent(&event);
    while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
    SDL_Quit();
    return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
    RenderOutput();
    PlayMusic();
    //UpdateTimer();
    //RotateGate();
    return;
}
void InitAudio(void)
{
    ALCdevice *dev;
    ALCcontext *cxt;
    Log("Initializing OpenAL Audio\r\n");
    dev = alcOpenDevice(NULL);
    if (!dev) {
        Log("Failed to open an audio device\r\n");
        exit(-1);
    }
    cxt = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(cxt);
    if (!cxt) {
        Log("Failed to create audio context\r\n");
        exit(-1);
    }
    alGenBuffers(4, Buffer);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during buffer creation\r\n");
        exit(-1);
    }
    alGenSources(4, Source);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during source creation\r\n");
        exit(-1);
    }
    return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = {false, false, false, false};

    if (curmusic != oldsong) {
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++)
            alSourceStop(Source[voice]);
        mtime[voice] = 0;
        return;
    }
    if (curmusic == 0) return;

    //Only 3 voices for music, but have
    for (voice = 0; voice < 3; voice++) {                //4 set aside for eventual sound effects
        if (mtime[voice] == 0) {                         //is note finished
            alSourceStop(Source[voice]);                 //It is, so stop the channel (source)
            mtime[voice] = music[ofset++];               //Get the next duration
            if (mtime[voice] == 0) { oldsong = 0; return; }  //zero marks end, so restart
            note = music[ofset++];                       //Get the next note
            if (note > 127) {                            //Old HW data could only
                if (note == 255) note = 127;             //use values 128 - 255 (255 = 127)
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note);  //freq of note
                len = (ALuint)(srate / freq);            //A single cycle of that freq.
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold;  //Multiply till 1 interrupt cycle
                while (len > _BUFFSIZE_) len -= hold;                 //Don't overload buffer
                if (len == 0) len = _BUFFSIZE_;                       //Just to be safe
                for (i = 0; i < len; i++)                //calculate sine wave and put in buffer
                    buf[voice][i] = (short)((32760 * sin((2 * M_PI * i * freq) / srate)));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(openAL.Source[i], AL_LOOPING, AL_TRUE);
                alSourcei(Source[i], AL_BUFFER, Buffer[i]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were three problems with my code. First, you have to load the generated wave data into the AL buffer before you link the buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source, AL_BUFFER, buffer);
Second, as in the example above, the size passed to alBufferData is in bytes, not samples, so I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)).
The final problem was that you need to unlink the buffer from the source before you change the buffer data:
alSourcei(source, AL_BUFFER, NULL);
The music would play, but not correctly, until I added that line to the note-change code.
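Putting those fixes together, the per-note buffer swap looks roughly like this (a sketch; buffer, source, wave_sample, sample_length, and frequency are the same illustrative names as above):
alSourceStop(source);                        //stop playback before touching the buffer
alSourcei(source, AL_BUFFER, NULL);          //detach the old buffer from the source
alBufferData(buffer, AL_FORMAT_MONO16,       //upload the new wave data...
             wave_sample,
             sample_length * sizeof(short),  //...size in bytes, not samples
             frequency);
alSourcei(source, AL_BUFFER, buffer);        //re-attach, then play
alSourcei(source, AL_LOOPING, AL_TRUE);
alSourcePlay(source);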

CoreAudio get output sample rate

I'm creating a Mac OS X Core Audio command-line program with proprietary rendering of alphanumeric terminal input into a live audio signal, by means of AudioUnits, trying to stay as simple as possible. All works fine except for matching the output sample rate.
As a starting point I'm using the Chapter 07 tutorial code of Addison-Wesley's "Learning Core Audio", CH07_AUGraphSineWave.
I initialize the AudioComponent "by the book":
void CreateAndConnectOutputUnit (MyGenerator *generator)
{
    AudioComponentDescription theoutput = {0};
    theoutput.componentType = kAudioUnitType_Output;
    theoutput.componentSubType = kAudioUnitSubType_DefaultOutput;
    theoutput.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &theoutput);
    if (comp == NULL) {
        printf("can't get output unit");
        exit(-1);
    }
    CheckError(AudioComponentInstanceNew(comp, &generator->outputUnit),
               "Couldn't open component for outputUnit");

    AURenderCallbackStruct input;
    input.inputProc = MyRenderProc;
    input.inputProcRefCon = generator;

    CheckError(AudioUnitSetProperty(generator->outputUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Input,
                                    0,
                                    &input,
                                    sizeof(input)),
               "AudioUnitSetProperty failed");

    CheckError(AudioUnitInitialize(generator->outputUnit),
               "Couldn't initialize output unit");
}
My main problem is that I don't know how to retrieve the output hardware's sample rate for use in the render callback, since it plays a vital part in the signal-generating process. I can't afford to have the sample rate hard-coded into the render callback, although I know that would be the easiest way, since a rate mismatch causes the signal to play at the wrong pitch.
Is there a way of getting the default output's sample rate on such a low-level API?
Is there a way of matching it somehow, without getting overly complicated?
Have I missed something?
Thanks in advance.
Regards,
Tom
When calling AudioUnitGetProperty, the sixth parameter must be a pointer to a variable that receives the size of the answer, and on input it must hold the size of the receiving variable:
Float64 sampleRate;
UInt32 sampleRateSize = sizeof(sampleRate); // in/out: pass the buffer size, receive the result size
CheckError(AudioUnitGetProperty(generator->outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,
                                0,
                                &sampleRate,
                                &sampleRateSize),
           "AudioUnitGetProperty failed");
However, as long as the sample rate has not been set, the function does not return a value (but there is no error either!).
You can, however, set the sample rate, for instance:
Float64 sampleRate = 48000;
CheckError(AudioUnitSetProperty(generator->outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,
                                0,
                                &sampleRate,
                                sizeof(sampleRate)),
           "AudioUnitSetProperty failed");
From then on you can also read the value with the Get call.
This does not answer the question of what the default value is; as far as I know, that is always 44100 Hz.
The sample rate is a property of all AudioUnits (see kAudioUnitProperty_SampleRate, documentation here), although ultimately it's the I/O unit (RemoteIO on iOS or the HAL unit on Mac OS X) that drives the sample rate at the audio interface. It is not available in the callback structure; you need to read this property with AudioUnitGetProperty() in your initialisation code.
In your case, the following would probably do it (note the size argument is passed by pointer):
Float64 sampleRate;
UInt32 sampleRateSize = sizeof(sampleRate);
CheckError(AudioUnitGetProperty(generator->outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,
                                0,
                                &sampleRate,
                                &sampleRateSize),
           "AudioUnitGetProperty failed");
If you're targeting iOS, you also need to interact with the Audio Session.
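If you want the rate the hardware is actually running at, another option on Mac OS X is to ask the HAL directly. A minimal sketch (error handling omitted) that queries the default output device's nominal sample rate:
#include <CoreAudio/CoreAudio.h>

AudioDeviceID device = kAudioObjectUnknown;
UInt32 size = sizeof(device);
AudioObjectPropertyAddress addr = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &device);

Float64 nominalRate = 0;
size = sizeof(nominalRate);
addr.mSelector = kAudioDevicePropertyNominalSampleRate;
AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &nominalRate);
// nominalRate now holds the device's sample rate, e.g. 44100.0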
