Where is the callback function in Apple's PlaySequence project? - core-audio

I want to bounce a MIDI file offline, and as the PlaySequence example does exactly this, I am trying to understand it.
I keep reading everywhere that you need a callback function to do anything in Core Audio, yet I cannot see one in this project.
I've pasted the loop containing the AudioUnitRender call below. Thanks for your help!
CAStreamBasicDescription clientFormat = CAStreamBasicDescription();
size = sizeof(clientFormat);
FailIf ((result = AudioUnitGetProperty (outputUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Output, 0,
                                        &clientFormat, &size)), fail, "AudioUnitGetProperty: kAudioUnitProperty_StreamFormat");
size = sizeof(clientFormat);
FailIf ((result = ExtAudioFileSetProperty(outfile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat)), fail, "ExtAudioFileSetProperty: kExtAudioFileProperty_ClientDataFormat");

{
    MusicTimeStamp currentTime;
    AUOutputBL outputBuffer (clientFormat, numFrames);
    AudioTimeStamp tStamp;
    memset (&tStamp, 0, sizeof(AudioTimeStamp));
    tStamp.mFlags = kAudioTimeStampSampleTimeValid;
    int i = 0;
    int numTimesFor10Secs = (int)(10. / (numFrames / srate));
    do {
        outputBuffer.Prepare();
        AudioUnitRenderActionFlags actionFlags = 0;
        FailIf ((result = AudioUnitRender (outputUnit, &actionFlags, &tStamp, 0, numFrames, outputBuffer.ABL())), fail, "AudioUnitRender");
        tStamp.mSampleTime += numFrames;
        FailIf ((result = ExtAudioFileWrite(outfile, numFrames, outputBuffer.ABL())), fail, "ExtAudioFileWrite");
        FailIf ((result = MusicPlayerGetTime (player, &currentTime)), fail, "MusicPlayerGetTime");
        if (shouldPrint && (++i % numTimesFor10Secs == 0))
            printf ("current time: %6.2f beats\n", currentTime);
    } while (currentTime < sequenceLength);
}

I had a look at the project and you're right, it does not have a render callback. Render callbacks have their place within Core Audio AudioUnit effect processing. The call to set up a render callback looks like this:
inline OSStatus SetInputCallback (CAAudioUnit &inUnit, AURenderCallbackStruct &inInputCallback)
{
    return inUnit.SetProperty (kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Input,
                               0,
                               &inInputCallback,
                               sizeof(inInputCallback));
}
However, this code is just one big main() which sets up the sequence, the AUGraph, and the music player, and then the file writer (WriteOutputFile) in case there's an output file to bounce to.
I recommend you set some breakpoints at key methods, and walk through the code, watching what it does and looking at variables.
EDIT: Note that when setting up a render callback on iOS on the RemoteIO unit (which doubles as the input and output unit), getting the correct stream format on the correct scope and bus (element) numbers in your AudioUnitSetProperty calls can be tricky. Refer to this from the Apple docs.
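For reference, here is a minimal sketch of what that typically looks like (my own illustration, not code from PlaySequence; remoteIOUnit and the ASBD values are placeholders). On RemoteIO, element 0 is the output (speaker) side and element 1 is the input (microphone) side, and the client format is set on the scope that faces your code:
// Hedged sketch: setting the client stream format on a RemoteIO unit (iOS).
// remoteIOUnit is assumed to be an already-created AudioUnit instance.
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 44100;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
fmt.mChannelsPerFrame = 2;
fmt.mBitsPerChannel   = 16;
fmt.mBytesPerFrame    = 4;
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerPacket   = 4;

// Format of the audio you hand to the speaker: input scope of output element 0.
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

// Format of the audio you pull from the microphone: output scope of input element 1.
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));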

Related

Distortion from output Audio Unit

I am hearing a very loud and harsh distorted sound when I run this simple application. I am simply instantiating a default output unit, assigning a render callback, and letting the program run in the run loop. I have detected no errors from Core Audio, and everything works as usual except for this distortion.
#import <AudioToolbox/AudioToolbox.h>
OSStatus render1(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
return noErr;
}
int main(int argc, const char * argv[]) {
AudioUnit timerAU;
UInt32 propsize = 0;
AudioComponentDescription outputUnitDesc;
outputUnitDesc.componentType = kAudioUnitType_Output;
outputUnitDesc.componentSubType = kAudioUnitSubType_DefaultOutput;
outputUnitDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
outputUnitDesc.componentFlags = 0;
outputUnitDesc.componentFlagsMask = 0;
//Get RemoteIO AU from Audio Unit Component Manager
AudioComponent outputComp = AudioComponentFindNext(NULL, &outputUnitDesc);
if (outputComp == NULL) exit (-1);
CheckError(AudioComponentInstanceNew(outputComp, &timerAU), "comp");
//Set up render callback function for the RemoteIO AU.
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = render1;
renderCallbackStruct.inputProcRefCon = nil;//(__bridge void *)(self);
propsize = sizeof(renderCallbackStruct);
CheckError(AudioUnitSetProperty(timerAU,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&renderCallbackStruct,
propsize), "set render");
CheckError(AudioUnitInitialize(timerAU), "init");
// tickMethod = completion;
CheckError(AudioOutputUnitStart(timerAU), "start");
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 1000, false);
}
Your question does not seem complete. I don't know about the side effects of silencing the output noise, which is probably just undefined behavior, and I also don't know what your code is meant to do as it stands. You have an unfinished render callback on the kAudioUnitSubType_DefaultOutput unit which does nothing (it is not generating silence!). I know of two ways to silence it.
In the callback the ioData buffers have to be explicitly filled with zeroes, because there's no guarantee they will be initialized empty:
Float32 * lBuffer0;
Float32 * lBuffer1;
lBuffer0 = (Float32 *)ioData->mBuffers[0].mData;
lBuffer1 = (Float32 *)ioData->mBuffers[1].mData;
memset(lBuffer0, 0, inNumberFrames*sizeof(Float32));
memset(lBuffer1, 0, inNumberFrames*sizeof(Float32));
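If you would rather not assume two Float32 buffers, a more general variant (a sketch of mine, not the poster's code) zeroes whatever buffers the host hands in, using the byte sizes it reports, and flags the output as silent:
static OSStatus silentRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
{
    // Zero every buffer using the byte size reported by the host,
    // so this stays correct for any channel count or buffer layout.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b)
        memset(ioData->mBuffers[b].mData, 0, ioData->mBuffers[b].mDataByteSize);
    if (ioActionFlags)
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    return noErr;
}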
The other possibility is to leave the unfinished callback as it is, but declare the timerAU with outputUnitDesc.componentSubType = kAudioUnitSubType_HALOutput; instead of
outputUnitDesc.componentSubType = kAudioUnitSubType_DefaultOutput;
and explicitly disable I/O before setting the render callback, by means of the following code:
UInt32 lEnableIO = 0;
CheckError(AudioUnitSetProperty(timerAU,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0, //output element
&lEnableIO,
sizeof(lEnableIO)),
"couldn't disable output");
I would strongly encourage you to study the Core Audio API thoroughly and to understand how to set up an audio unit; this is crucial to understanding the matter. I've seen a comment in your code mentioning a RemoteIO AU. There is no such thing as a RemoteIO AU on OS X. In case you're attempting a port from iOS code, please learn the differences. They are well documented.
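As a hedged illustration of that difference (not part of the code above), the component description is typically chosen per platform like this:
#include <TargetConditionals.h>

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
#if TARGET_OS_IPHONE
desc.componentSubType      = kAudioUnitSubType_RemoteIO;      // iOS only
#else
desc.componentSubType      = kAudioUnitSubType_DefaultOutput; // OS X; or kAudioUnitSubType_HALOutput
#endif
desc.componentManufacturer = kAudioUnitManufacturer_Apple;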

Cannot get OpenAL to play sound

I've searched the net, I've searched here. I've found code that I could compile and it works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated wave forms. I've pretty much copied and pasted the working code (only adding multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes... I was looking for something that would allow me to use the original methodology. The original system used paired bytes for music (sound effects, only 2, were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until the time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence). The music is a generated wave form (I've double-checked the math, and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.

I tried SDL2 first, and its method of only playing one sound doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Also, blending the 3 voices together (when they all run with different timings) is a mess. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't allow me to tweak it appropriately. This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
FreeConsole(); //Get rid of the DOS console, don't need it
if (InitLog() < 0) return -1; //Start logging
UINT_PTR tim = NULL;
SDL_Event event;
InitVideo(false); //Set to window for now, will put options in later
curmusic = 5;
InitAudio();
SetTimer(NULL,tim,_FREQ_,TimerProc);
SDL_PollEvent(&event);
while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
SDL_Quit();
return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
RenderOutput();
PlayMusic();
//UpdateTimer();
//RotateGate();
return;
}
void InitAudio(void)
{
ALCdevice *dev;
ALCcontext *cxt;
Log("Initializing OpenAL Audio\r\n");
dev = alcOpenDevice(NULL);
if (!dev) {
Log("Failed to open an audio device\r\n");
exit(-1);
}
cxt = alcCreateContext(dev, NULL);
alcMakeContextCurrent(cxt);
if(!cxt) {
Log("Failed to create audio context\r\n");
exit(-1);
}
alGenBuffers(4,Buffer);
if (alGetError() != AL_NO_ERROR) {
Log("Error during buffer creation\r\n");
exit(-1);
}
alGenSources(4, Source);
if (alGetError() != AL_NO_ERROR) {
Log("Error during source creation\r\n");
exit(-1);
}
return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = {false, false, false, false};
    if (curmusic != oldsong) {
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++)
            alSourceStop(Source[voice]);
        mtime[voice] = 0;
        return;
    }
    if (curmusic == 0) return;
    //Only 3 voices for music, but have
    for (voice = 0; voice < 3; voice ++) {                // 4 set aside for eventual sound effects
        if (mtime[voice] == 0) {                          //is note finished
            alSourceStop(Source[voice]);                  //It is, so stop the channel (source)
            mtime[voice] = music[ofset++];                //Get the next duration
            if (mtime[voice] == 0) {oldsong = 0; return;} //zero marks end, so restart
            note = music[ofset++];                        //Get the next note
            if (note > 127) {                             //Old HW the data was designed for could
                if (note == 255) note = 127;              //only use values 128 - 255 (255 = 127)
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); //freq of note
                len = (ALuint)(srate / freq);             //A single cycle of that freq.
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold; //Multiply till 1 interrupt cycle
                while (len > _BUFFSIZE_) len -= hold;     //Don't overload buffer
                if (len == 0) len = _BUFFSIZE_;           //Just to be safe
                for (i = 0; i < len; i++)                 //calculate sine wave and put in buffer
                    buf[voice][i] = (short)((32760 * sin((2 * M_PI * i * freq) / srate)));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(openAL.Source[i], AL_LOOPING, AL_TRUE);
                alSourcei(Source[i], AL_BUFFER, Buffer[i]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were 3 problems with my code. First, you have to load the generated wave data into the AL-generated buffer "before" you link the buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source, AL_BUFFER, buffer);
Also, in the above example I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)).
The final problem was that you need to un-link the buffer from the source before you change the buffer data:
alSourcei(source, AL_BUFFER, NULL);
The music would play, but not correctly, until I added that line to the note-change code.
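Putting the three fixes together, the note-change path ends up looking roughly like this (a sketch only; the variable names follow the snippet above rather than my exact code):
alSourceStop(source);
alSourcei(source, AL_BUFFER, 0);               // un-link the old buffer first
alBufferData(buffer, AL_FORMAT_MONO16,
             wave_sample,                      // generated samples
             sample_length * sizeof(short),    // size in bytes, not samples
             frequency);
alSourcei(source, AL_BUFFER, buffer);          // re-link after the data is uploaded
alSourcei(source, AL_LOOPING, AL_TRUE);
alSourcePlay(source);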

CoreAudio AudioQueue stop issue

I'm making a Core Audio-based FLAC player, and I ran into a nasty issue with AudioQueues.
I'm initializing my stuff like this (variables beginning with an underscore are instance variables):
_flacDecoder = FLAC__stream_decoder_new();
AudioStreamBasicDescription asbd = {
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
.mSampleRate = 44100,
.mChannelsPerFrame = 2,
.mBitsPerChannel = 16,
.mBytesPerPacket = 4,
.mFramesPerPacket = 1,
.mBytesPerFrame = 4,
.mReserved = 0
};
AudioQueueNewOutput(&asbd, HandleOutputBuffer, (__bridge void *)(self), CFRunLoopGetCurrent(), kCFRunLoopDefaultMode, 0, &_audioQueue);
for (int i = 0; i < kNumberBuffers; ++i) {
AudioQueueAllocateBuffer(_audioQueue, 0x10000, &_audioQueueBuffers[i]);
}
AudioQueueSetParameter(_audioQueue, kAudioQueueParam_Volume, 1.0);
16 bit stereo PCM at 44.1 kHz, pretty basic setup. kNumberBuffers is 3, and each buffer is 0x10000 bytes.
I populate the buffers with these callbacks:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer){
    FLACPlayer * self = (__bridge FLACPlayer*)inUserData;
    UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample/8;
    inBuffer->mAudioDataByteSize = 0;
    self->_buffer = inBuffer;
    while(inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes){
        FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
        assert(result);
        if(FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM){
            AudioQueueStop(self->_audioQueue, false);
            break;
        }
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static FLAC__StreamDecoderWriteStatus flacDecoderWriteCallback(const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 * const buffer[], void *client_data){
    FLACPlayer * self = (__bridge FLACPlayer *)client_data;
    assert(frame->header.bits_per_sample == 16); // TODO
    int16_t * bufferWritePosition = (int16_t*)((uint8_t*)self->_buffer->mAudioData + self->_buffer->mAudioDataByteSize);
    for(int i = 0; i < frame->header.blocksize; i++){
        for(int j = 0; j < frame->header.channels; j++){
            *bufferWritePosition = (int16_t)buffer[j][i];
            bufferWritePosition++;
        }
    }
    int totalFramePayloadInBytes = frame->header.channels * frame->header.blocksize * frame->header.bits_per_sample/8;
    self->_buffer->mAudioDataByteSize += totalFramePayloadInBytes;
    return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}

static void flacDecoderMetadataCallback(const FLAC__StreamDecoder *decoder, const FLAC__StreamMetadata *metadata, void *client_data){
    FLACPlayer * self = (__bridge FLACPlayer*) client_data;
    if(metadata->type == FLAC__METADATA_TYPE_STREAMINFO){
        self->_currentStreamInfo = metadata->data.stream_info;
    }
}
Basically, when the queue requests a new buffer, I fill the buffer from the FLAC__stream_decoder and then enqueue it, just like everyone else would. When libFLAC tells me that I've reached the end of the file, I tell the AudioQueue to stop asynchronously, so that it keeps playing until it has consumed all the buffers' contents. However, instead of playing through to the end, the playback stops a tiny bit before it should. If I remove this line:
AudioQueueStop(self->_audioQueue, false);
everything works fine; the audio plays end-to-end, although my queue keeps running till the end of time. If I change that line to this:
AudioQueueStop(self->_audioQueue, true);
then the playback stops immediately/synchronously, as you'd expect from Apple's documentation:
If you pass true, stopping occurs immediately (that is,
synchronously). If you pass false, the function returns immediately,
but the audio queue does not stop until its queued buffers are played
or recorded (that is, the stop occurs asynchronously). Audio queue
callbacks are invoked as necessary until the queue actually stops.
My questions are:
- am I doing anything wrong?
- how can I play my audio until the end, and shut down the queue appropriately?
Of course, after struggling with this stuff for hours, I found the solution minutes after posting this question...
The problem was that the AudioQueue doesn't care about buffers enqueued after AudioQueueStop(..., false) has been called. So now I'm feeding the queue like this, and everything works like a charm:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer){
    FLACPlayer * self = (__bridge FLACPlayer*)inUserData;
    UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample/8;
    inBuffer->mAudioDataByteSize = 0;
    self->_buffer = inBuffer;
    bool shouldStop = false;
    while(inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes){
        FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
        assert(result);
        if(FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM){
            shouldStop = true;
            break;
        }
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    if(shouldStop){
        AudioQueueStop(self->_audioQueue, false);
    }
}

How to get current display mode (resolution, refresh rate) of a monitor/output in DXGI?

I am creating a multi-monitor, full-screen DXGI/D3D application. I am enumerating through the available outputs and adapters in preparation for creating their swap chains.
When creating my swap chain using DXGI's IDXGIFactory::CreateSwapChain method, I need to provide a swap chain description which includes a buffer description of type DXGI_MODE_DESC that details the width, height, refresh rate, etc. How can I find out what display mode the output is currently set to? I don't want to change the user's resolution or refresh rate when I go full screen with this swap chain.
After looking around some more I stumbled upon the EnumDisplaySettings legacy GDI function, which allows me to access the current resolution and refresh rate. Combining this with the IDXGIOutput::FindClosestMatchingMode function I can get pretty close to the current display mode:
void getClosestDisplayModeToCurrent(IDXGIOutput* output, DXGI_MODE_DESC* outCurrentDisplayMode)
{
DXGI_OUTPUT_DESC outputDesc;
output->GetDesc(&outputDesc);
HMONITOR hMonitor = outputDesc.Monitor;
MONITORINFOEX monitorInfo;
monitorInfo.cbSize = sizeof(MONITORINFOEX);
GetMonitorInfo(hMonitor, &monitorInfo);
DEVMODE devMode;
devMode.dmSize = sizeof(DEVMODE);
devMode.dmDriverExtra = 0;
EnumDisplaySettings(monitorInfo.szDevice, ENUM_CURRENT_SETTINGS, &devMode);
DXGI_MODE_DESC current;
current.Width = devMode.dmPelsWidth;
current.Height = devMode.dmPelsHeight;
bool useDefaultRefreshRate = 1 == devMode.dmDisplayFrequency || 0 == devMode.dmDisplayFrequency;
current.RefreshRate.Numerator = useDefaultRefreshRate ? 0 : devMode.dmDisplayFrequency;
current.RefreshRate.Denominator = useDefaultRefreshRate ? 0 : 1;
current.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
current.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
current.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
output->FindClosestMatchingMode(&current, outCurrentDisplayMode, NULL);
}
...But I don't think this is really the correct answer, because it relies on legacy functions. Is there any way to get the exact current display mode with DXGI itself, rather than using this method?
I saw a solution here:
http://www.rastertek.com/dx11tut03.html
In the following part:
// Now go through all the display modes and find the one that matches the screen width and height.
// When a match is found store the numerator and denominator of the refresh rate for that monitor.
for(i=0; i<numModes; i++)
{
    if(displayModeList[i].Width == (unsigned int)screenWidth)
    {
        if(displayModeList[i].Height == (unsigned int)screenHeight)
        {
            numerator = displayModeList[i].RefreshRate.Numerator;
            denominator = displayModeList[i].RefreshRate.Denominator;
        }
    }
}
Is my understanding correct that the available resolutions are in displayModeList?
This might be what you are looking for:
// Get display mode list
std::vector<DXGI_MODE_DESC*> modeList = GetDisplayModeList(*outputItor);
for(std::vector<DXGI_MODE_DESC*>::iterator modeItor = modeList.begin(); modeItor != modeList.end(); ++modeItor)
{
// PrintDisplayModeInfo(*modeItor);
}
}
std::vector<DXGI_MODE_DESC*> GetDisplayModeList(IDXGIOutput* output)
{
    UINT num = 0;
    DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_TYPELESS;
    UINT flags = DXGI_ENUM_MODES_INTERLACED | DXGI_ENUM_MODES_SCALING;
    // Get number of display modes
    output->GetDisplayModeList(format, flags, &num, 0);
    // Get display mode list
    DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
    output->GetDisplayModeList(format, flags, &num, pDescs);
    // Note: the returned vector points into pDescs; the caller owns that
    // allocation and must eventually delete[] it.
    std::vector<DXGI_MODE_DESC*> displayList;
    for(UINT i = 0; i < num; ++i)
    {
        displayList.push_back(&pDescs[i]);
    }
    return displayList;
}

Audio Unit and Writing to file

I'm creating a real-time audio sequencer app on OS X.
The real-time synth part is implemented using an AURenderCallback.
Now I'm writing a function to save the rendered result to a WAVE file (44100 Hz, 16-bit, stereo).
The format for the render-callback function is 44100 Hz, 32-bit float, stereo, interleaved.
I'm using ExtAudioFileWrite to write to the file.
But the ExtAudioFileWrite function returns error code 1768846202.
I searched for 1768846202 but couldn't find any information.
Would you give me some hints?
Thank you.
Here is the code.
outFileFormat.mSampleRate = 44100;
outFileFormat.mFormatID = kAudioFormatLinearPCM;
outFileFormat.mFormatFlags =
kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
outFileFormat.mBitsPerChannel = 16;
outFileFormat.mChannelsPerFrame = 2;
outFileFormat.mFramesPerPacket = 1;
outFileFormat.mBytesPerFrame =
outFileFormat.mBitsPerChannel / 8 * outFileFormat.mChannelsPerFrame;
outFileFormat.mBytesPerPacket =
outFileFormat.mBytesPerFrame * outFileFormat.mFramesPerPacket;
AudioBufferList *ioList;
ioList = (AudioBufferList*)calloc(1, sizeof(AudioBufferList)
+ 2 * sizeof(AudioBuffer));
ioList->mNumberBuffers = 2;
ioList->mBuffers[0].mNumberChannels = 1;
ioList->mBuffers[0].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[0].mData = ioDataL;
ioList->mBuffers[1].mNumberChannels = 1;
ioList->mBuffers[1].mDataByteSize = allocByteSize / 2;
ioList->mBuffers[1].mData = ioDataR;
...
while (1) {
    //Fill buffer by using render callback func.
    RenderCallback(self, nil, nil, 0, frames, ioList);
    //i want to create one sec file.
    if (renderedFrames >= 44100) break;
    err = ExtAudioFileWrite(outAudioFileRef, frames, ioList);
    if (err != noErr){
        NSLog(@"ERROR AT WRITING TO FILE");
        goto errorExit;
    }
}
Some of the error codes are actually four-character codes packed into the status value. The Core Audio book provides a nice function to handle errors.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    char str[20];
    // see if it appears to be a 4-char-code
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else
        // no, format it as an integer
        sprintf(str, "%d", (int)error);
    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}
Use it like this:
CheckError(ExtAudioFileSetProperty(outputFile,
kExtAudioFileProperty_CodecManufacturer,
sizeof(codec),
&codec), "Setting codec.");
Before you can do any sort of debugging, you probably need to figure out what that error message actually means. Have you tried passing that status code to GetMacOSStatusErrorString() or GetMacOSStatusCommentString()? They aren't documented so well, but they are declared in CoreServices/CarbonCore/Debugging.h.
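For example, something along these lines should print a readable description (assuming err holds the OSStatus; note that these Carbon calls are deprecated on newer OS X releases):
#include <CoreServices/CoreServices.h>

fprintf(stderr, "error %d: %s (%s)\n", (int)err,
        GetMacOSStatusErrorString(err),
        GetMacOSStatusCommentString(err));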
