How to know when a MIDI file has finished playing with the OS X MusicPlayer

I want to be able to play MIDI files that are included as resources in my app. I have a very simple function to do this, given the name of the resource (minus the .MID file extension):
MusicPlayer musicPlayer;
MusicSequence sequence;
int MusicPlaying = 0;
char TmpPath[1024]; // scratch path buffer (declared here for completeness)

void PlayMusic(char *fname)
{
    OSStatus res = noErr;
    res = NewMusicPlayer(&musicPlayer);
    res = NewMusicSequence(&sequence);
    strcpy(TmpPath, "MUSIC/");
    strcat(TmpPath, fname);
    strcat(TmpPath, ".MID");
    NSString *iName = [NSString stringWithUTF8String:TmpPath];
    NSURL *url = [[NSBundle mainBundle] URLForResource:iName withExtension:nil];
    res = MusicSequenceFileLoad(sequence, (__bridge CFURLRef _Nonnull)(url), 0, kMusicSequenceLoadSMF_ChannelsToTracks);
    res = MusicPlayerSetSequence(musicPlayer, sequence);
    res = MusicPlayerStart(musicPlayer);
    if (res == noErr) MusicPlaying = 1;
}
This all works fine and dandy, and takes very little code... the problem is that I can't figure out how to know when the MIDI file has finished playing. I've tried MusicPlayerIsPlaying() (it ALWAYS returns true, LONG after the file has finished). I've tried checking MusicPlayerGetTime(), but the time count keeps going after the MIDI finishes. I can't find any way to get a notification, or any other way to determine that the actual MIDI data has finished playing.
Any ideas?

Apple's PlaySequence example shows how to do this:
You have to determine the length of the sequence by getting the length of each track:
UInt32 ntracks;
MusicTimeStamp sequenceLength = 0;
OSStatus result = MusicSequenceGetTrackCount(sequence, &ntracks);
for (UInt32 i = 0; i < ntracks; ++i) {
    MusicTrack track;
    MusicTimeStamp trackLength;
    UInt32 propsize = sizeof(trackLength);
    result = MusicSequenceGetIndTrack(sequence, i, &track);
    result = MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength,
                                   &trackLength, &propsize);
    if (trackLength > sequenceLength)
        sequenceLength = trackLength;
}
Then wait until you have reached that time:
MusicTimeStamp time = 0;
while (1) {
    usleep(2 * 1000 * 1000); // poll every two seconds
    result = MusicPlayerGetTime(player, &time);
    if (time >= sequenceLength)
        break;
}
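If blocking in a sleep loop isn't an option (say, in a GUI app like the one in the question), the same comparison works as a non-blocking poll from a timer or your run loop. A minimal sketch, assuming sequenceLength was computed with the track-length loop above and stored in a global next to musicPlayer (the CheckMusicFinished name is hypothetical):
MusicTimeStamp sequenceLength; // filled in by the track-length loop above

// Returns 1 once playback has passed the end of the sequence,
// stopping and disposing of the player on the way out.
int CheckMusicFinished(void)
{
    MusicTimeStamp now = 0;
    if (MusicPlayerGetTime(musicPlayer, &now) != noErr) return 0;
    if (now < sequenceLength) return 0;
    MusicPlayerStop(musicPlayer);
    DisposeMusicSequence(sequence);
    DisposeMusicPlayer(musicPlayer);
    MusicPlaying = 0;
    return 1;
}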

Related

Reading chunks of audio with ringbuffer

I would like to analyze chunks of audio data of one second. For this purpose I implemented an audio unit that fills a ring buffer (TPCircularBuffer by Michael Tyson). In another file I try to read chunks of one second using an NSTimer. Unfortunately, I receive errors when consuming this data.
The buffer is filled from a kAudioOutputUnitProperty_SetInputCallback, and that part works fine:
Device *THIS = (__bridge Device *)inRefCon;

// Render audio into buffer
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mData = NULL;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16) * 2;
CheckError(AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, &bufferList), "AudioUnitRender");

// Put audio into circular buffer
TPCircularBufferProduceBytes(&circBuffer, bufferList.mBuffers[0].mData, inNumberFrames * 2 * sizeof(SInt16));
To read one second of samples I implemented the following code:
- (void)initializeTimer {
    timer = [NSTimer scheduledTimerWithTimeInterval:1
                                             target:self
                                           selector:@selector(timerFired:)
                                           userInfo:nil
                                            repeats:YES];
}

- (void)timerFired:(NSTimer *)theTimer {
    NSLog(@"Reading %i second(s) from ring", 1);
    int32_t availableBytes;
    SInt16 *tail = TPCircularBufferTail(&circBuffer, &availableBytes);
    int availableSamples = availableBytes / sizeof(SInt16);
    NSLog(@"Available samples %i", availableSamples);
    for (int i = 0; i < availableSamples; i++) {
        printf("%i\n", tail[i]);
    }
    TPCircularBufferConsume(&circBuffer, sizeof(SInt16) * availableBytes);
}
However, when I run this code the number of samples is printed but then I receive the following error:
Assertion failed: (buffer->fillCount >= 0), function TPCircularBufferConsume, file …/TPCircularBuffer.h, line 142.
Unfortunately, I don't know what is going wrong with consuming the data. The buffer length is set to sampleRate * 2, which should be long enough.
I would be very happy if someone knows what is going wrong here.
Your circular buffer isn't long enough. You don't check that the available size is positive before emptying, and the time all your print statements take lets the buffer overfill.
Make the buffer hold at least 2X more than one timer period's worth of samples, check the available size before emptying, empty the buffer before printing, and use far fewer print statements.
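A minimal sketch of that pattern, assuming the same circBuffer as in the question (note that TPCircularBufferConsume takes a byte count, so the amount consumed below is exactly the number of bytes read from the tail):
- (void)timerFired:(NSTimer *)theTimer {
    int32_t availableBytes = 0;
    SInt16 *tail = TPCircularBufferTail(&circBuffer, &availableBytes);
    if (tail == NULL || availableBytes <= 0) return; // nothing to consume yet

    // Copy the samples out and release the ring buffer space immediately,
    // so slow work (printing, analysis) can't starve the producer side.
    int availableSamples = availableBytes / sizeof(SInt16);
    SInt16 *samples = (SInt16 *)malloc(availableBytes);
    memcpy(samples, tail, availableBytes);
    TPCircularBufferConsume(&circBuffer, availableBytes);

    // ... analyze the availableSamples values in 'samples' here ...
    free(samples);
}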

m4a audio files not playing on iOS 9

I have an audio-related app that uses a multichannel mixer to play several m4a files at a time.
I'm using the AudioToolbox framework to stream audio, but on iOS 9 the framework throws an exception in the mixer rendering callback where I am streaming the audio files.
Interestingly, apps compiled with the iOS 9 SDK continue to stream the same files perfectly on iOS 7/8 devices, but not on iOS 9.
Now I can't figure out whether Apple broke something in iOS 9 or whether we have the files encoded wrong on our end; the files play just fine on iOS 7/8, but not on 9.
Exception:
malloc: *** error for object 0x7fac74056e08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
It works for all other formats, without any exception or memory errors, but it does not work for the m4a format, which is very surprising.
Here is the file-loading code, which works for WAV, AIF, etc., but not for m4a:
- (void)loadFiles {
    AVAudioFormat *clientFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                                   sampleRate:kGraphSampleRate
                                                                     channels:1
                                                                  interleaved:NO];
    for (int i = 0; i < numFiles && i < maxBufs; i++) {
        ExtAudioFileRef xafref = 0;
        // open one of the source files
        OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref);
        if (result || !xafref) { break; }
        // get the file data format; this represents the file's actual data format
        AudioStreamBasicDescription fileFormat;
        UInt32 propSize = sizeof(fileFormat);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);
        if (result) { break; }
        // set the client format - this is the format we want back from ExtAudioFile and corresponds to the format
        // we will be providing to the input callback of the mixer, therefore the data type must be the same
        double rateRatio = kGraphSampleRate / fileFormat.mSampleRate;
        propSize = sizeof(AudioStreamBasicDescription);
        result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, propSize, clientFormat.streamDescription);
        if (result) { break; }
        // get the file's length in sample frames
        UInt64 numFrames = 0;
        propSize = sizeof(numFrames);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);
        if (result) { break; }
        if (i == metronomeBusIndex)
            numFrames = (numFrames + 6484) * 4;
        numFrames *= rateRatio; // account for any sample rate conversion
        // set up our buffer
        mSoundBuffer[i].numFrames = (UInt32)numFrames;
        mSoundBuffer[i].asbd = *(clientFormat.streamDescription);
        UInt32 samples = (UInt32)numFrames * mSoundBuffer[i].asbd.mChannelsPerFrame;
        mSoundBuffer[i].data = (Float32 *)calloc(samples, sizeof(Float32));
        mSoundBuffer[i].sampleNum = 0;
        // set up an AudioBufferList to read data into
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1;
        bufList.mBuffers[0].mNumberChannels = 1;
        bufList.mBuffers[0].mData = mSoundBuffer[i].data;
        bufList.mBuffers[0].mDataByteSize = samples * sizeof(Float32);
        // perform a synchronous, sequential read of the audio data out of the file into our allocated data buffer
        UInt32 numPackets = (UInt32)numFrames;
        result = ExtAudioFileRead(xafref, &numPackets, &bufList);
        if (result) {
            free(mSoundBuffer[i].data);
            mSoundBuffer[i].data = 0;
        }
        // close the file and dispose of the ExtAudioFileRef
        ExtAudioFileDispose(xafref);
    }
    // [clientFormat release];
}
If anyone could point me in the right direction: how do I go about debugging the issue?
Do we need to re-encode our files in some specific way?
I tried it on iOS 9.1 beta 3 yesterday and things seem to be back to normal.
Try it out. Let us know if it works out for you too.
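In the meantime, one way to narrow down whether the files or the OS are at fault is to log the data format that ExtAudioFile actually reports for each file and compare the output between an iOS 8 and an iOS 9 device. A minimal sketch using the same calls as the loading code above (the LogFileFormat name is hypothetical):
// Hypothetical helper: prints the data format an audio file reports,
// so the results can be compared across iOS versions.
static void LogFileFormat(CFURLRef url)
{
    ExtAudioFileRef file = NULL;
    if (ExtAudioFileOpenURL(url, &file) != noErr || !file) return;
    AudioStreamBasicDescription fmt = {0};
    UInt32 size = sizeof(fmt);
    if (ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat, &size, &fmt) == noErr) {
        UInt32 fourcc = CFSwapInt32HostToBig(fmt.mFormatID);
        NSLog(@"format '%4.4s', rate %.0f, %u channel(s)",
              (char *)&fourcc, fmt.mSampleRate, (unsigned)fmt.mChannelsPerFrame);
    }
    ExtAudioFileDispose(file);
}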

CoreAudio AudioQueue stop issue

I'm making a Core Audio-based FLAC player, and ran into a naughty issue with AudioQueues.
I'm initializing my stuff like this (variables beginning with an underscore are instance variables):
_flacDecoder = FLAC__stream_decoder_new();

AudioStreamBasicDescription asbd = {
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mSampleRate = 44100,
    .mChannelsPerFrame = 2,
    .mBitsPerChannel = 16,
    .mBytesPerPacket = 4,
    .mFramesPerPacket = 1,
    .mBytesPerFrame = 4,
    .mReserved = 0
};

AudioQueueNewOutput(&asbd, HandleOutputBuffer, (__bridge void *)(self), CFRunLoopGetCurrent(), kCFRunLoopDefaultMode, 0, &_audioQueue);

for (int i = 0; i < kNumberBuffers; ++i) {
    AudioQueueAllocateBuffer(_audioQueue, 0x10000, &_audioQueueBuffers[i]);
}

AudioQueueSetParameter(_audioQueue, kAudioQueueParam_Volume, 1.0);
16-bit stereo PCM at 44.1 kHz, a pretty basic setup. kNumberBuffers is 3, and each buffer is 0x10000 bytes.
I populate the buffers with these callbacks:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    FLACPlayer *self = (__bridge FLACPlayer *)inUserData;
    UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample / 8;
    inBuffer->mAudioDataByteSize = 0;
    self->_buffer = inBuffer;
    while (inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes) {
        FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
        assert(result);
        if (FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM) {
            AudioQueueStop(self->_audioQueue, false);
            break;
        }
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static FLAC__StreamDecoderWriteStatus flacDecoderWriteCallback(const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 *const buffer[], void *client_data) {
    FLACPlayer *self = (__bridge FLACPlayer *)client_data;
    assert(frame->header.bits_per_sample == 16); // TODO
    int16_t *bufferWritePosition = (int16_t *)((uint8_t *)self->_buffer->mAudioData + self->_buffer->mAudioDataByteSize);
    for (int i = 0; i < frame->header.blocksize; i++) {
        for (int j = 0; j < frame->header.channels; j++) {
            *bufferWritePosition = (int16_t)buffer[j][i];
            bufferWritePosition++;
        }
    }
    int totalFramePayloadInBytes = frame->header.channels * frame->header.blocksize * frame->header.bits_per_sample / 8;
    self->_buffer->mAudioDataByteSize += totalFramePayloadInBytes;
    return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}

static void flacDecoderMetadataCallback(const FLAC__StreamDecoder *decoder, const FLAC__StreamMetadata *metadata, void *client_data) {
    FLACPlayer *self = (__bridge FLACPlayer *)client_data;
    if (metadata->type == FLAC__METADATA_TYPE_STREAMINFO) {
        self->_currentStreamInfo = metadata->data.stream_info;
    }
}
Basically, when the queue requests a new buffer, I fill the buffer from the FLAC__stream_decoder, then I enqueue it, just like everyone else would do. When libFLAC tells me that I've reached the end of my file, I tell the AudioQueue to stop asynchronously, expecting it to keep playing until it has consumed all the buffers' contents. However, instead of playing through to the end, the playback stops a tiny bit before it should. If I remove this line:
AudioQueueStop(self->_audioQueue, false);
everything works fine; the audio plays end-to-end, although my queue keeps running till the end of time. If I change that line to this:
AudioQueueStop(self->_audioQueue, true);
then the playback stops immediately/synchronously, as you'd expect from Apple's documentation:
If you pass true, stopping occurs immediately (that is, synchronously). If you pass false, the function returns immediately, but the audio queue does not stop until its queued buffers are played or recorded (that is, the stop occurs asynchronously). Audio queue callbacks are invoked as necessary until the queue actually stops.
My questions are:
- am I doing anything wrong?
- how can I play my audio until the end, and shut down the queue appropriately?
Of course, after struggling with this stuff for hours, I found the solution minutes after posting this question...
The problem was that the AudioQueue doesn't care about buffers enqueued after calling AudioQueueStop(..., false). So now I'm feeding the queue like this, and everything works like a charm:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    FLACPlayer *self = (__bridge FLACPlayer *)inUserData;
    UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample / 8;
    inBuffer->mAudioDataByteSize = 0;
    self->_buffer = inBuffer;
    bool shouldStop = false;
    while (inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes) {
        FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
        assert(result);
        if (FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM) {
            shouldStop = true;
            break;
        }
    }
    // Enqueue the final buffer BEFORE asking the queue to stop, so it is
    // still part of the queue's playback when the asynchronous stop begins.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    if (shouldStop) {
        AudioQueueStop(self->_audioQueue, false);
    }
}
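If you also need to know exactly when the queue finishes (for example, to dispose of the decoder), one option is a listener on kAudioQueueProperty_IsRunning, which fires once the asynchronous stop actually completes. A minimal sketch, assuming it is installed right after AudioQueueNewOutput:
// Fires when the queue starts, and again when it has fully played out its
// buffers after AudioQueueStop(..., false).
static void IsRunningChanged(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
    if (!isRunning) {
        // Playback has genuinely ended; safe to tear everything down here.
    }
}

// Installation, e.g. right after AudioQueueNewOutput:
// AudioQueueAddPropertyListener(_audioQueue, kAudioQueueProperty_IsRunning,
//                               IsRunningChanged, (__bridge void *)self);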

Images saved with D3DXSaveSurfaceToFile will open in Paint, not Photoshop

I'm using D3DXSaveSurfaceToFile to save windowed Direct3D 9 surfaces to PNG, BMP and JPG files. There are no errors returned from the D3DXSaveSurfaceToFile call, and all files open fine in Windows Photo Viewer and Paint. But they will not open in higher-end image-editing programs such as Paint Shop Pro or Photoshop. The error messages from these programs basically say that the file is corrupted. If I open the files in Paint and then save them in the same file format with a different file name, then they'll open fine in the other programs.
This leads me to believe that D3DXSaveSurfaceToFile is writing out non-standard versions of these file formats. Is there some way I can get this function to write out files that can be opened in programs like Photoshop without the intermediate step of resaving the files in Paint? Or is there another function I should be using that does a better job of saving a Direct3D surface to an image?
Take a look at the file in an image metadata viewer. What does it tell you?
Unfortunately D3DXSaveSurfaceToFile() isn't the most stable (it's also exceptionally slow). Personally I do something like the code below. It works even on anti-aliased displays by doing an offscreen render to take the screenshot and then reading it into a buffer. It supports only the most common pixel formats. Sorry for any errors in it; I pulled it out of an app I used to work on.
You can then, in your own code and probably on another thread, convert said 'bitmap' to anything you like using a variety of different code.
void HandleScreenshot(IDirect3DDevice9* device)
{
    DWORD tcHandleScreenshot = GetTickCount();
    LPDIRECT3DSURFACE9 pd3dsBack = NULL;
    LPDIRECT3DSURFACE9 pd3dsTemp = NULL;

    // Grab the back buffer into a surface
    if (SUCCEEDED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pd3dsBack)))
    {
        D3DSURFACE_DESC desc;
        pd3dsBack->GetDesc(&desc);

        // Resolve multisampled back buffers into a plain render target first
        LPDIRECT3DSURFACE9 pd3dsCopy = NULL;
        if (desc.MultiSampleType != D3DMULTISAMPLE_NONE)
        {
            if (SUCCEEDED(device->CreateRenderTarget(desc.Width, desc.Height, desc.Format, D3DMULTISAMPLE_NONE, 0, FALSE, &pd3dsCopy, NULL)))
            {
                if (SUCCEEDED(device->StretchRect(pd3dsBack, NULL, pd3dsCopy, NULL, D3DTEXF_NONE)))
                {
                    pd3dsBack->Release();
                    pd3dsBack = pd3dsCopy;
                }
                else
                {
                    pd3dsCopy->Release();
                }
            }
        }

        if (SUCCEEDED(device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM, &pd3dsTemp, NULL)))
        {
            DWORD tmpTimeGRTD = GetTickCount();
            if (SUCCEEDED(device->GetRenderTargetData(pd3dsBack, pd3dsTemp)))
            {
                D3DLOCKED_RECT lockedSrcRect;
                if (SUCCEEDED(pd3dsTemp->LockRect(&lockedSrcRect, NULL, D3DLOCK_READONLY | D3DLOCK_NOSYSLOCK | D3DLOCK_NO_DIRTY_UPDATE)))
                {
                    // 24-bit BGR pixel buffer; +1 because the DWORD write below
                    // overruns the final pixel by one byte
                    int nSize = desc.Width * desc.Height * 3;
                    BYTE* pixels = new BYTE[nSize + 1];
                    LPBYTE lpDest = pixels;
                    LPDWORD lpSrc;
                    switch (desc.Format)
                    {
                    case D3DFMT_A8R8G8B8:
                    case D3DFMT_X8R8G8B8:
                        // BMP rows are stored bottom-up, so walk the source rows in reverse.
                        // Note: this assumes lockedSrcRect.Pitch == Width * 4 (no row padding).
                        for (int y = desc.Height - 1; y >= 0; y--)
                        {
                            lpSrc = reinterpret_cast<LPDWORD>(lockedSrcRect.pBits) + y * desc.Width;
                            for (unsigned int x = 0; x < desc.Width; x++)
                            {
                                *reinterpret_cast<LPDWORD>(lpDest) = *lpSrc;
                                lpSrc++;     // increment source pointer by 1 DWORD
                                lpDest += 3; // increment destination pointer by 3 bytes
                            }
                        }
                        break;
                    default:
                        ZeroMemory(pixels, nSize);
                    }
                    pd3dsTemp->UnlockRect();

                    BITMAPINFOHEADER header;
                    header.biWidth = desc.Width;
                    header.biHeight = desc.Height;
                    header.biSizeImage = nSize;
                    header.biSize = sizeof(BITMAPINFOHEADER);
                    header.biPlanes = 1;
                    header.biBitCount = 3 * 8; // RGB
                    header.biCompression = 0;
                    header.biXPelsPerMeter = 0;
                    header.biYPelsPerMeter = 0;
                    header.biClrUsed = 0;
                    header.biClrImportant = 0;

                    BITMAPFILEHEADER bfh = {0};
                    bfh.bfType = 0x4d42; // 'BM'
                    bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
                    bfh.bfSize = bfh.bfOffBits + nSize;

                    unsigned int rough_size = sizeof(BITMAPINFOHEADER) + sizeof(BITMAPFILEHEADER) + nSize;
                    unsigned char* p = new unsigned char[rough_size];
                    unsigned char* pCursor = p;
                    memcpy(pCursor, &bfh, sizeof(BITMAPFILEHEADER));
                    pCursor += sizeof(BITMAPFILEHEADER);
                    memcpy(pCursor, &header, sizeof(BITMAPINFOHEADER));
                    pCursor += sizeof(BITMAPINFOHEADER);
                    memcpy(pCursor, pixels, nSize);
                    delete [] pixels;

                    /**********************************************/
                    // p now holds a full BMP file, write it out here
                }
            }
            pd3dsTemp->Release();
        }
        pd3dsBack->Release();
    }
}
Turns out that it was a combination of a bug in my code and Paint being more forgiving than Photoshop when it comes to reading files. The bug in my code caused the files to be saved with the wrong extension (i.e. Image.bmp was actually saved using D3DXIFF_JPG). When opening a file that contained a JPG image but had a BMP extension, Photoshop just rejected the file. I guess Paint worked because it ignored the file extension and just decoded the file contents.
Looking at the file in an image metadata viewer helped me to see the problem.
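For anyone hitting the same thing: deriving the D3DXIMAGE_FILEFORMAT from the destination file name, rather than passing the two independently, rules this class of bug out entirely. A small sketch (the FormatForExtension helper is hypothetical):
// Hypothetical helper: choose the D3DX image format that matches the
// file extension, so extension and encoding can never disagree.
D3DXIMAGE_FILEFORMAT FormatForExtension(const char* path)
{
    const char* ext = strrchr(path, '.');
    if (ext && _stricmp(ext, ".png") == 0) return D3DXIFF_PNG;
    if (ext && _stricmp(ext, ".jpg") == 0) return D3DXIFF_JPG;
    return D3DXIFF_BMP; // default; also covers ".bmp"
}

// Usage:
// D3DXSaveSurfaceToFileA(path, FormatForExtension(path), surface, NULL, NULL);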

Audio Unit and Writing to file

I'm creating a real-time audio sequencer app on OS X.
The real-time synth part is implemented using an AURenderCallback.
Now I'm writing a function to save the rendered result to a wave file (44100 Hz, 16-bit, stereo).
The format for the render-callback function is 44100 Hz, 32-bit float, stereo, interleaved.
I'm using ExtAudioFileWrite to write to the file.
But the ExtAudioFileWrite call returns error code 1768846202.
I searched for 1768846202 but couldn't find any information.
Would you give me some hints?
Thank you.
Here is the code.
outFileFormat.mSampleRate = 44100;
outFileFormat.mFormatID = kAudioFormatLinearPCM;
outFileFormat.mFormatFlags =
    kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
outFileFormat.mBitsPerChannel = 16;
outFileFormat.mChannelsPerFrame = 2;
outFileFormat.mFramesPerPacket = 1;
outFileFormat.mBytesPerFrame =
    outFileFormat.mBitsPerChannel / 8 * outFileFormat.mChannelsPerFrame;
outFileFormat.mBytesPerPacket =
    outFileFormat.mBytesPerFrame * outFileFormat.mFramesPerPacket;

AudioBufferList *ioList;
ioList = (AudioBufferList *)calloc(1, sizeof(AudioBufferList)
                                   + 2 * sizeof(AudioBuffer));
ioList->mNumberBuffers = 2;
ioList->mBuffers[0].mNumberChannels = 1;
ioList->mBuffers[0].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[0].mData = ioDataL;
ioList->mBuffers[1].mNumberChannels = 1;
ioList->mBuffers[1].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[1].mData = ioDataR;

...

while (1) {
    // Fill the buffer using the render callback func.
    RenderCallback(self, nil, nil, 0, frames, ioList);
    // I want to create a one-second file.
    if (renderedFrames >= 44100) break;
    err = ExtAudioFileWrite(outAudioFileRef, frames, ioList);
    if (err != noErr) {
        NSLog(@"ERROR AT WRITING TO FILE");
        goto errorExit;
    }
}
Some of the error codes are actually four-character codes. The Core Audio book provides a nice function to handle errors:
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char str[20];
    // see if it appears to be a 4-char-code
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else {
        // no, format it as an integer
        sprintf(str, "%d", (int)error);
    }

    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}
Use it like this:
CheckError(ExtAudioFileSetProperty(outputFile,
                                   kExtAudioFileProperty_CodecManufacturer,
                                   sizeof(codec),
                                   &codec), "Setting codec.");
Before you can do any sort of debugging, you probably need to figure out what that error message actually means. Have you tried passing that status code to GetMacOSStatusErrorString() or GetMacOSStatusCommentString()? They aren't documented so well, but they are declared in CoreServices/CarbonCore/Debugging.h.
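For what it's worth, decoding 1768846202 by hand gives 0x696E737A, i.e. the four-character code 'insz', which appears to be kAudioConverterErr_InvalidInputSize from AudioConverter.h; that would point at a mismatch between the frame count passed to ExtAudioFileWrite and the byte sizes in the AudioBufferList. A quick check along the lines of CheckError above:
// 1768846202 == 0x696E737A == 'insz' (likely kAudioConverterErr_InvalidInputSize)
OSStatus status = 1768846202;
UInt32 big = CFSwapInt32HostToBig((UInt32)status);
printf("'%4.4s'\n", (char *)&big); // prints 'insz'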
