Reading chunks of audio with a ring buffer - core-audio

I would like to analyze chunks of audio data of one second each. For this purpose I implemented an audio unit that fills a ring buffer (TPCircularBuffer by Michael Tyson). In another file I try to read chunks of one second using an NSTimer. Unfortunately, I receive errors when consuming this data.
The buffer is filled from a kAudioOutputUnitProperty_SetInputCallback input callback, and that part works fine:
Device *THIS = (__bridge Device *)inRefCon;

// Render audio into buffer
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mData = NULL;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16) * 2;
CheckError(AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, &bufferList),
           "AudioUnitRender");

// Put audio into circular buffer
TPCircularBufferProduceBytes(&circBuffer, bufferList.mBuffers[0].mData, inNumberFrames * 2 * sizeof(SInt16));
To read one second of samples I implemented the following code:
- (void)initializeTimer {
    timer = [NSTimer scheduledTimerWithTimeInterval:1
                                             target:self
                                           selector:@selector(timerFired:)
                                           userInfo:nil
                                            repeats:YES];
}

- (void)timerFired:(NSTimer *)theTimer {
    NSLog(@"Reading %i second(s) from ring", 1);
    int32_t availableBytes;
    SInt16 *tail = TPCircularBufferTail(&circBuffer, &availableBytes);
    int availableSamples = availableBytes / sizeof(SInt16);
    NSLog(@"Available samples %i", availableSamples);
    for (int i = 0; i < availableSamples; i++) {
        printf("%i\n", tail[i]);
    }
    TPCircularBufferConsume(&circBuffer, sizeof(SInt16) * availableBytes);
}
However, when I run this code the number of samples is printed but then I receive the following error:
Assertion failed: (buffer->fillCount >= 0), function TPCircularBufferConsume, file …/TPCircularBuffer.h, line 142.
Unfortunately, I don't know what is going wrong with consuming the data. The buffer length is set to sampleRate * 2, which should be long enough.
I would be very happy if someone knows what is going wrong here.

Your circular buffer isn't long enough. You don't check that the available size is positive before emptying, and the time all your print statements take lets the buffer overfill.
Make the buffer at least twice as large as one timer interval's worth of audio, check the available size before emptying, empty the buffer before printing, and use far fewer print statements.
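As a minimal sketch of that advice (assuming the same circBuffer and a 44.1 kHz stereo SInt16 stream; the byte rate is an assumption), the drain could look like this. Note also that TPCircularBufferConsume takes a byte count, so the question's sizeof(SInt16) * availableBytes consumes twice what was actually read, which by itself can drive fillCount negative:

- (void)timerFired:(NSTimer *)theTimer {
    const int32_t bytesPerSecond = 44100 * 2 * sizeof(SInt16); // assumed stream format

    int32_t availableBytes;
    SInt16 *tail = (SInt16 *)TPCircularBufferTail(&circBuffer, &availableBytes);
    if (tail == NULL || availableBytes <= 0) return;   // nothing to read yet

    // Take at most one second per tick so a slow consumer can't fall behind forever.
    int32_t bytesToRead = MIN(availableBytes, bytesPerSecond);
    int32_t samplesToRead = bytesToRead / sizeof(SInt16);

    // ... analyze tail[0 .. samplesToRead - 1] here; avoid per-sample printf ...

    TPCircularBufferConsume(&circBuffer, bytesToRead); // consume BYTES, exactly what was read
}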

Related

How to know when MIDI file has finished with OS X MusicPlayer

I want to be able to play MIDI files that are included as resources in my app. I have a very simple function to do this, given the name of the resource (minus the .MID file extension):
MusicPlayer musicPlayer;
MusicSequence sequence;
char TmpPath[256];   // path scratch buffer (declaration missing from the original snippet)
int MusicPlaying = 0;

void PlayMusic(char *fname)
{
    OSStatus res = noErr;
    res = NewMusicPlayer(&musicPlayer);
    res = NewMusicSequence(&sequence);
    strcpy(TmpPath, "MUSIC/");
    strcat(TmpPath, fname);
    strcat(TmpPath, ".MID");
    NSString *iName = [NSString stringWithUTF8String:TmpPath];
    NSURL *url = [[NSBundle mainBundle] URLForResource:iName withExtension:nil];
    res = MusicSequenceFileLoad(sequence, (__bridge CFURLRef _Nonnull)(url), 0, kMusicSequenceLoadSMF_ChannelsToTracks);
    res = MusicPlayerSetSequence(musicPlayer, sequence);
    res = MusicPlayerStart(musicPlayer);
    if (res == noErr) MusicPlaying = 1;
}
This all works fine and dandy, takes very little code... the problem is that I can't figure out how to know when the MIDI file has finished playing. I've tried MusicPlayerIsPlaying() (it ALWAYS returns true, LONG after the file has finished). I've tried checking MusicPlayerGetTime(), but the time count keeps on going after the MIDI finishes. I can't find any way to get a notification from this or any other way to determine that the actual MIDI data has finished playing.
Any ideas?
Apple's PlaySequence example shows how to do this:
You have to determine the length of the sequence by getting the length of each track:
UInt32 ntracks = 0;
MusicTimeStamp sequenceLength = 0;
result = MusicSequenceGetTrackCount(sequence, &ntracks);
for (UInt32 i = 0; i < ntracks; ++i) {
    MusicTrack track;
    MusicTimeStamp trackLength = 0;
    UInt32 propsize = sizeof(trackLength);
    result = MusicSequenceGetIndTrack(sequence, i, &track);
    result = MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength,
                                   &trackLength, &propsize);
    if (trackLength > sequenceLength)
        sequenceLength = trackLength;
}
Then wait until you have reached that time:
MusicTimeStamp time = 0;
while (1) {
    usleep(2 * 1000 * 1000);
    result = MusicPlayerGetTime(player, &time);
    if (time >= sequenceLength)
        break;
}
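PlaySequence is a command-line tool, so it can afford to block in usleep; in an app you would poll from a timer instead. A minimal sketch, assuming the player, the sequenceLength (in beats) computed above, and the MusicPlaying flag from the question; MusicSequenceGetSecondsForBeats can convert the beat-based length to seconds if you would rather schedule a single delayed check:

// Call this from a repeating timer instead of blocking in usleep.
void CheckMusicFinished(void)
{
    MusicTimeStamp now = 0;
    if (MusicPlayerGetTime(player, &now) == noErr && now >= sequenceLength) {
        MusicPlayerStop(player);
        MusicPlaying = 0;   // the MIDI file has finished playing
    }
}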

m4a audio files not playing on iOS 9

I have an audio-related app that uses a multichannel mixer to play multiple m4a files at a time.
I'm using the AudioToolbox framework to stream audio, but on iOS 9 the framework throws an exception in the mixer rendering callback where I am streaming the audio files.
Interestingly, apps compiled with the iOS 9 SDK continue to stream the same files perfectly on iOS 7/8 devices, but not on iOS 9.
I can't figure out whether Apple broke something in iOS 9 or whether we have the files encoded wrong on our end, given that they play just fine on iOS 7/8 but not on 9.
Exception:
malloc: *** error for object 0x7fac74056e08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
All other formats work without any exceptions or memory errors; only m4a fails, which is very surprising.
Here is the file-loading code, which works for WAV, AIF, etc. formats but not for m4a:
- (void)loadFiles {
    AVAudioFormat *clientFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                                   sampleRate:kGraphSampleRate
                                                                     channels:1
                                                                  interleaved:NO];
    for (int i = 0; i < numFiles && i < maxBufs; i++) {
        ExtAudioFileRef xafref = 0;

        // open one of the source files
        OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref);
        if (result || !xafref) { break; }

        // get the file data format; this represents the file's actual data format
        AudioStreamBasicDescription fileFormat;
        UInt32 propSize = sizeof(fileFormat);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);
        if (result) { break; }

        // set the client format - this is the format we want back from ExtAudioFile
        // and corresponds to the format we will be providing to the input callback
        // of the mixer, therefore the data type must be the same
        double rateRatio = kGraphSampleRate / fileFormat.mSampleRate;
        propSize = sizeof(AudioStreamBasicDescription);
        result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, propSize, clientFormat.streamDescription);
        if (result) { break; }

        // get the file's length in sample frames
        UInt64 numFrames = 0;
        propSize = sizeof(numFrames);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);
        if (result) { break; }

        if (i == metronomeBusIndex)
            numFrames = (numFrames + 6484) * 4;
        numFrames *= rateRatio; // account for any sample rate conversion

        // set up our buffer
        mSoundBuffer[i].numFrames = (UInt32)numFrames;
        mSoundBuffer[i].asbd = *(clientFormat.streamDescription);
        UInt32 samples = (UInt32)numFrames * mSoundBuffer[i].asbd.mChannelsPerFrame;
        mSoundBuffer[i].data = (Float32 *)calloc(samples, sizeof(Float32));
        mSoundBuffer[i].sampleNum = 0;

        // set up an AudioBufferList to read data into
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1;
        bufList.mBuffers[0].mNumberChannels = 1;
        bufList.mBuffers[0].mData = mSoundBuffer[i].data;
        bufList.mBuffers[0].mDataByteSize = samples * sizeof(Float32);

        // perform a synchronous, sequential read of the audio data out of the file into the allocated buffer
        UInt32 numPackets = (UInt32)numFrames;
        result = ExtAudioFileRead(xafref, &numPackets, &bufList);
        if (result) {
            free(mSoundBuffer[i].data);
            mSoundBuffer[i].data = 0;
        }

        // close the file and dispose of the ExtAudioFileRef
        ExtAudioFileDispose(xafref);
    }
    // [clientFormat release]; // not needed under ARC
}
Could anyone point me in the right direction on how to go about debugging this issue?
Do we need to re-encode our files in some specific way?
I tried it on iOS 9.1 beta 3 yesterday and things seem to be back to normal.
Try it out, and let us know if it works for you too.
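No definitive root cause surfaced here, but the malloc checksum error is consistent with ExtAudioFileRead writing past the calloc'd buffer once sample rate conversion (or AAC priming frames) changes the decoded frame count. Purely as a defensive sketch, not a confirmed fix: cap each read at the buffer's remaining capacity, so the decoder can never write more than was allocated.

// Defensive replacement for the single ExtAudioFileRead call above
// (a sketch, not a confirmed fix). Uses the same xafref, mSoundBuffer[i],
// and numFrames as the question; the client format is mono Float32.
UInt32 framesRemaining = (UInt32)numFrames;       // frames we calloc'd room for
Float32 *writePos = mSoundBuffer[i].data;
while (framesRemaining > 0) {
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mData = writePos;
    bufList.mBuffers[0].mDataByteSize = framesRemaining * sizeof(Float32);

    UInt32 ioFrames = framesRemaining;            // cap: never exceeds capacity
    result = ExtAudioFileRead(xafref, &ioFrames, &bufList);
    if (result || ioFrames == 0) break;           // error or end of file
    writePos += ioFrames;
    framesRemaining -= ioFrames;
}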

Cannot get OpenAL to play sound

I've searched the net, I've searched here. I've found code that I could compile and it works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated wave forms. I've pretty much copied and pasted the working code (only adding multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes... I was looking for something that would let me keep the original methodology. The original system used paired bytes for music (sound effects - only 2 - were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence). The music is a generated wave form (I've double-checked the math and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.

I tried SDL2 first, and its method of only playing one sound at a time doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Blending the 3 voices together (when they all run with different timings) is a mess, too. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't let me tweak it appropriately.

This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
    FreeConsole();                   // get rid of the DOS console, don't need it
    if (InitLog() < 0) return -1;    // start logging
    UINT_PTR tim = NULL;
    SDL_Event event;
    InitVideo(false);                // set to window for now, will put options in later
    curmusic = 5;
    InitAudio();
    SetTimer(NULL, tim, _FREQ_, TimerProc);
    SDL_PollEvent(&event);
    while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
    SDL_Quit();
    return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
    RenderOutput();
    PlayMusic();
    //UpdateTimer();
    //RotateGate();
    return;
}
void InitAudio(void)
{
    ALCdevice *dev;
    ALCcontext *cxt;
    Log("Initializing OpenAL Audio\r\n");
    dev = alcOpenDevice(NULL);
    if (!dev) {
        Log("Failed to open an audio device\r\n");
        exit(-1);
    }
    cxt = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(cxt);
    if (!cxt) {
        Log("Failed to create audio context\r\n");
        exit(-1);
    }
    alGenBuffers(4, Buffer);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during buffer creation\r\n");
        exit(-1);
    }
    alGenSources(4, Source);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during source creation\r\n");
        exit(-1);
    }
    return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = { false, false, false, false };   // (unused)

    if (curmusic != oldsong) {                       // song changed: reset voices
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++) {
            alSourceStop(Source[voice]);
            mtime[voice] = 0;
        }
        return;
    }
    if (curmusic == 0) return;

    // Only 3 voices for music; the 4th is set aside for eventual sound effects.
    for (voice = 0; voice < 3; voice++) {
        if (mtime[voice] == 0) {                     // is the note finished?
            alSourceStop(Source[voice]);             // it is, so stop the channel (source)
            mtime[voice] = music[ofset++];           // get the next duration
            if (mtime[voice] == 0) { oldsong = 0; return; }  // zero marks the end, so restart
            note = music[ofset++];                   // get the next note
            if (note > 127) {                        // old HW data could only use
                if (note == 255) note = 127;         // values 128 - 255 (255 = 127)
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); // freq of note
                len = (ALuint)(srate / freq);        // a single cycle of that freq
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold; // multiply up to one interrupt cycle
                while (len > _BUFFSIZE_) len -= hold;                // don't overload the buffer
                if (len == 0) len = _BUFFSIZE_;                      // just to be safe
                for (i = 0; i < len; i++)            // calculate sine wave and fill the buffer
                    buf[voice][i] = (short)(32760 * sin((2 * M_PI * i * freq) / srate));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
                alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were 3 problems with my code. First, you have to load the built wave data into the AL-generated buffer before you link that buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source, AL_BUFFER, buffer);
Second, note that in the above example I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)), because alBufferData expects its size argument in bytes, not samples.
The final problem was that you need to unlink the buffer from the source before you change the buffer's data:
alSourcei(source,AL_BUFFER,NULL);
The music would play, but not correctly, until I added that line to the note-change code.
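Putting the three fixes together, a minimal sketch of the per-note rebuffer order (assuming source and buffer came from alGenSources/alGenBuffers, and wave holds len mono 16-bit samples at srate Hz):

alSourceStop(source);                  // can't swap data on a playing source
alSourcei(source, AL_BUFFER, 0);       // detach the old data first (0 = no buffer)
alBufferData(buffer, AL_FORMAT_MONO16,
             wave, len * sizeof(short), srate);   // size is in BYTES
alSourcei(source, AL_BUFFER, buffer);  // attach only after the data is loaded
alSourcei(source, AL_LOOPING, AL_TRUE);
alSourcePlay(source);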

CGDataProvider doesn't free up data on callback

I am creating a very big buffer (called buffer2 in the code) and handing it to a CGDataProviderRef with the following code:
- (UIImage *)glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;

    // allocate array and read pixels into it
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < 1024; y++)
    {
        for (int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }

    // make data provider with data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, &releaseBufferData);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 768;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(768, 1024, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];

    free(buffer);
    //[provider autorelease];
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(imageRef);
    return myImage;
}
I expect the CGDataProvider to call back the releaseBufferData method when it is done with buffer2 so that I can free the memory it has taken. The code for this method is:
static void releaseBufferData(void *info, const void *data, size_t size) {
    free((void *)data);   // free() takes a void *, so cast away the const
}
However, even though my callback method is called, the memory that data (buffer2) occupies is never freed, and hence it results in massive memory leaks. What am I doing wrong?
Did you ever call CGDataProviderRelease on your provider? The callback will not be invoked if you don't release the data provider.
For some peculiar reason this is not an issue anymore.
Just in case this helps someone else. I was having the same problem. It started working once I called
CGImageRelease(imageRef);
right before the
CGDataProviderRelease(provider);
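In other words, the teardown order matters. A condensed sketch of the end of glToUIImage with that ordering (same variables as in the question):

UIImage *myImage = [UIImage imageWithCGImage:imageRef];
free(buffer);                     // the unflipped copy is no longer needed
CGImageRelease(imageRef);         // release the image first, per the fix above ...
CGDataProviderRelease(provider);  // ... then the provider, so its last reference
                                  // (and the releaseBufferData callback) isn't
                                  // held up behind a live image
CGColorSpaceRelease(colorSpaceRef);
return myImage;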
A malloc'd buffer isn't freed by a "release" callback when it was allocated on one thread but the callback that deallocates it executes on another. Wrap both your allocation and your deallocation in this:
dispatch_async(dispatch_get_main_queue(), ^{
    // *malloc* and *free* go here; don't call &releaseCallBack or some such anywhere
});
A second thing to try is a completion block. Instead of returning the image in the traditional way (via a method return value), pass it back through a completion block; the UIImage will be freed as soon as the completion block closes.
For example, if you're trying to save multiple images to the Photos library but the malloc'd data isn't freed after each image is created, pass each image back via a completion block, making sure you create no new instance of the image that is passed back, and it will be gone as soon as execution hits the closing }; (see the sketch below).
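A sketch of that pattern with hypothetical names (glToUIImageWithCompletion is not an existing API, just an illustration wrapping the question's glToUIImage):

// Hypothetical wrapper illustrating the completion-block pattern above.
- (void)glToUIImageWithCompletion:(void (^)(UIImage *image))completion {
    UIImage *myImage = [self glToUIImage];  // build the image as in the question
    completion(myImage);                    // use it only inside the block;
}                                           // it can be freed once the block returns

// Usage: save straight to the Photos library without keeping a reference.
[self glToUIImageWithCompletion:^(UIImage *image) {
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, NULL);
}];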
A third thing is calloc instead of malloc:
GLubyte *buffer = (GLubyte *)calloc(myDataLength, sizeof(GLubyte));
That's what I use now where I once had malloc, and it obviates the need for the prior two suggestions. I use OpenGL to populate a collection view consisting of a single row of cells, each holding one frame from a video. To skim the video, you slide the collection view; if you see a frame you want to save as an image, you long-press it; if you want to advance to that frame in the video, you tap it. As you know, even short videos have a lot of frames; the calloc solution knocks about 256 MB off total memory usage on every call to the release callback - the level usage builds to when you scroll really fast.

How to check current time and duration in AudioQueue

How do I get the total duration of the music in an AudioQueue? I am using:
NSTimeInterval AQPlayer::getCurrentTime()
{
    NSTimeInterval timeInterval = 0.0;
    AudioQueueTimelineRef timeLine;
    OSStatus status = AudioQueueCreateTimeline(mQueue, &timeLine);
    if (status == noErr)
    {
        AudioTimeStamp timeStamp;
        AudioQueueGetCurrentTime(mQueue, timeLine, &timeStamp, NULL);
        timeInterval = timeStamp.mSampleTime;
    }
    return timeInterval;
}
I use AudioQueueGetCurrentTime(mQueue, timeLine, &timeStamp, NULL); to get the current playing time, but it gives some large value. Is that valid, and how do I get the duration of the music file?
For future reference, I am getting a correct time in seconds using a slight modification of Chandan's code:
int AQPlayer::GetCurrentTime() {
    int timeInterval = 0;
    AudioQueueTimelineRef timeLine;
    OSStatus status = AudioQueueCreateTimeline(mQueue, &timeLine);
    if (status == noErr) {
        AudioTimeStamp timeStamp;
        AudioQueueGetCurrentTime(mQueue, timeLine, &timeStamp, NULL);
        timeInterval = timeStamp.mSampleTime / mDataFormat.mSampleRate; // modified
    }
    return timeInterval;
}
"AudioQueueGetCurrentTime(mQueue, timeLine, &timeStamp, NULL); for getting current playing time, it gives some large value - is it valid?"
Probably, but it's not what you think. It's not in seconds; the docs don't really say what it is in, but from Googling around it appears to be in frames, for whatever reason. (For one example, this technote includes a snippet that treats it as frames.) Try dividing by the sample rate, or by the source's frame rate, and see which one gets you sane numbers.
"How do I get the duration of the music file?"
There isn't one. An audio queue is just that: a queue of audio samples to be played or recorded. The only length the queue has is the number of samples you can have queued in it; the queue does not know the length of anything that might be feeding into it, if those sources even have a finite length.
The audio queue calls a function you create to get the audio samples from you. Wherever your function gets the samples from (e.g., an AudioFile) is where you need to get the length from. If you're generating the samples yourself (as in a tone or noise generator), then the length, if any, is up to you.
NSTimeInterval AQPlayer::getTotalDuration()
{
    UInt64 nPackets;
    UInt32 propsize = sizeof(nPackets);
    XThrowIfError(AudioFileGetProperty(mAudioFile, kAudioFilePropertyAudioDataPacketCount,
                                       &propsize, &nPackets),
                  "kAudioFilePropertyAudioDataPacketCount");
    Float64 fileDuration = (nPackets * mDataFormat.mFramesPerPacket) / mDataFormat.mSampleRate;
    return fileDuration;
}
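One detail both versions above skip: AudioQueueCreateTimeline allocates a timeline object on every call, so a frequently polled player should dispose of it. A sketch (GetCurrentTimeSeconds is a hypothetical name; same mQueue and mDataFormat members as above):

Float64 AQPlayer::GetCurrentTimeSeconds() {
    Float64 seconds = 0.0;
    AudioQueueTimelineRef timeLine;
    if (AudioQueueCreateTimeline(mQueue, &timeLine) == noErr) {
        AudioTimeStamp timeStamp;
        if (AudioQueueGetCurrentTime(mQueue, timeLine, &timeStamp, NULL) == noErr)
            seconds = timeStamp.mSampleTime / mDataFormat.mSampleRate; // frames -> seconds
        AudioQueueDisposeTimeline(mQueue, timeLine); // balance the Create
    }
    return seconds;
}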
