Trying to set up an audio unit graph with a buffer of samples as the input - core-audio

I am trying to implement a simple audio unit graph that goes:
buffer of samples->low pass filter->generic output
Where the generic output would be copied into a new buffer that could then be processed further, saved to disk, etc.
All of the examples I can find online having to do with setting up an audio unit graph involve using a generator with kAudioUnitSubType_AudioFilePlayer as the input source... I am dealing with a buffer of samples already acquired, so those examples do not help... Based on looking around in the AudioUnitProperties.h file, it looks like what I should be using is kAudioUnitSubType_ScheduledSoundPlayer?
I can't seem to find much documentation on how to hook this up, so I am quite stuck and am hoping someone here can help me out.
To simplify things, I just started out by trying to get my buffer of samples to go straight to the system output, but am unable to make this work...
#import "EffectMachine.h"
#import <AudioToolbox/AudioToolbox.h>
#import "AudioHelpers.h"
#import "Buffer.h"
@interface EffectMachine ()
#property (nonatomic, strong) Buffer *buffer;
@end
typedef struct EffectMachineGraph {
AUGraph graph;
AudioUnit input;
AudioUnit lowpass;
AudioUnit output;
} EffectMachineGraph;
@implementation EffectMachine {
EffectMachineGraph machine;
}
-(instancetype)initWithBuffer:(Buffer *)buffer {
if (self = [super init]) {
self.buffer = buffer;
// buffer is a simple wrapper object that holds two properties:
// a pointer to the array of samples (as doubles) and the size (number of samples)
}
return self;
}
-(void)process {
struct EffectMachineGraph initialized = {0};
machine = initialized;
CheckError(NewAUGraph(&machine.graph),
"NewAUGraph failed");
AudioComponentDescription outputCD = {0};
outputCD.componentType = kAudioUnitType_Output;
outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode outputNode;
CheckError(AUGraphAddNode(machine.graph,
&outputCD,
&outputNode),
"AUGraphAddNode[kAudioUnitSubType_GenericOutput] failed");
AudioComponentDescription inputCD = {0};
inputCD.componentType = kAudioUnitType_Generator;
inputCD.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
inputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode inputNode;
CheckError(AUGraphAddNode(machine.graph,
&inputCD,
&inputNode),
"AUGraphAddNode[kAudioUnitSubType_ScheduledSoundPlayer] failed");
CheckError(AUGraphOpen(machine.graph),
"AUGraphOpen failed");
CheckError(AUGraphNodeInfo(machine.graph,
inputNode,
NULL,
&machine.input),
"AUGraphNodeInfo failed");
CheckError(AUGraphConnectNodeInput(machine.graph,
inputNode,
0,
outputNode,
0),
"AUGraphConnectNodeInput");
CheckError(AUGraphInitialize(machine.graph),
"AUGraphInitialize failed");
// prepare input
AudioBufferList ioData = {0};
ioData.mNumberBuffers = 1;
ioData.mBuffers[0].mNumberChannels = 1;
ioData.mBuffers[0].mDataByteSize = (UInt32)(2 * self.buffer.size);
ioData.mBuffers[0].mData = self.buffer.samples;
ScheduledAudioSlice slice = {0};
AudioTimeStamp timeStamp = {0};
slice.mTimeStamp = timeStamp;
slice.mNumberFrames = (UInt32)self.buffer.size;
slice.mBufferList = &ioData;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleAudioSlice,
kAudioUnitScope_Global,
0,
&slice,
sizeof(slice)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
AudioTimeStamp startTimeStamp = {0};
startTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
startTimeStamp.mSampleTime = -1;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global,
0,
&startTimeStamp,
sizeof(startTimeStamp)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
CheckError(AUGraphStart(machine.graph),
"AUGraphStart failed");
// AUGraphStop(machine.graph); <-- commented out to make sure it wasn't stopping before actually finishing playing.
// AUGraphUninitialize(machine.graph);
// AUGraphClose(machine.graph);
}
Does anyone know what I am doing wrong here?

I think this is the documentation you're looking for.
To summarize: set up your AUGraph, set up your audio units and add them to the graph, then write and attach a render callback function on the first node in your graph. Run the graph. Note that the render callback is where your app will be asked to provide buffers of samples to the AUGraph. This is where you'll need to read from your buffers and fill the buffers supplied by the render callback. I think this is what you're missing.
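For illustration, here is a minimal sketch of such a render callback pulling from an already-acquired sample buffer. The SampleSource struct, the mono Float32 format, and the variable names are assumptions for the sketch, not taken from your code; you would attach it to the first node with AUGraphSetNodeInputCallback (or kAudioUnitProperty_SetRenderCallback) instead of scheduling a slice:
// Hypothetical container for the pre-acquired samples (illustrative only).
typedef struct {
    Float32 *samples;    // the already-acquired samples (assumed mono Float32 here)
    UInt32 totalFrames;  // number of frames in that buffer
    UInt32 readHead;     // current playback position
} SampleSource;

static OSStatus RenderFromBuffer(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    SampleSource *source = (SampleSource *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        // Copy one frame from our buffer, or output silence once we run out.
        out[frame] = (source->readHead < source->totalFrames)
                     ? source->samples[source->readHead++]
                     : 0.0f;
    }
    return noErr;
}

// Hooked up to the first node in the graph, e.g.:
// AURenderCallbackStruct cb = { RenderFromBuffer, &mySource };
// AUGraphSetNodeInputCallback(machine.graph, outputNode, 0, &cb);  // or your low-pass node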
If you're on iOS 8, I recommend AVAudioEngine, which conceals some of the grungier, boilerplate-heavy details of graphs and effects.
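For example, a rough AVAudioEngine version of the buffer -> low-pass -> tap chain could look like the sketch below. Here pcmBuffer is assumed to be an AVAudioPCMBuffer you have already filled with your samples, and the 1 kHz cutoff is arbitrary:
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
AVAudioUnitEQ *eq = [[AVAudioUnitEQ alloc] initWithNumberOfBands:1];

AVAudioUnitEQFilterParameters *lowpass = eq.bands.firstObject;
lowpass.filterType = AVAudioUnitEQFilterTypeLowPass;
lowpass.frequency = 1000.0;  // assumed cutoff, for illustration
lowpass.bypass = NO;

[engine attachNode:player];
[engine attachNode:eq];
[engine connect:player to:eq format:pcmBuffer.format];
[engine connect:eq to:engine.mainMixerNode format:pcmBuffer.format];

// Tap the filtered signal so it can be copied, processed further, or written to disk.
[eq installTapOnBus:0 bufferSize:4096 format:pcmBuffer.format
              block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
                  // copy buffer.floatChannelData wherever you need it
              }];

NSError *error = nil;
[engine startAndReturnError:&error];
[player scheduleBuffer:pcmBuffer completionHandler:nil];
[player play];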
Extras:
Complete pre-iOS8 example code on github
iOS music player app that reads audio from your MP3 library into a circular buffer and then processes it via an AUGraph (using a mixer & EQ AU). You can see how a render callback is set up to read from a buffer, etc.
Amazing Audio Engine
Novocaine Audio library

Related

How to exclude input or output channels from an aggregate CoreAudio device?

I've got a CoreAudio-based MacOS/X program that allows the user to select an input-audio-device and an output-audio-device, and (if the user didn't choose the same device for both input and output) my program creates a private aggregate-audio-device and uses that to receive the audio, process it, and then send it out for playback.
That's all working great, but there is one minor problem -- if the selected input-device also has some outputs associated with its hardware, those outputs show up as part of the aggregate device's output-channels, which isn't the behavior I want. Similarly, if the selected output-device also has some inputs associated with its hardware, those inputs will show up as input channels in the aggregate device's inputs, which I also don't want.
My question is, is there any way to tell CoreAudio not to include the inputs or outputs of a sub-device in the aggregate device I'm constructing? (my fallback solution would be to modify my audio-rendering callback to ignore the unwanted audio channels, but that seems less than elegant, so I'm curious if there is a better way to handle it)
My function that creates the aggregate device is below, in case it is relevant:
// This code was adapted from the example code at : https://web.archive.org/web/20140716012404/http://daveaddey.com/?p=51
ConstCoreAudioDeviceRef CoreAudioDevice :: CreateAggregateDevice(const ConstCoreAudioDeviceInfoRef & inputCadi, const ConstCoreAudioDeviceInfoRef & outputCadi, bool require96kHz, int32 optRequiredBufferSizeFrames)
{
OSStatus osErr = noErr;
UInt32 outSize;
Boolean outWritable;
//-----------------------
// Start to create a new aggregate by getting the base audio hardware plugin
//-----------------------
osErr = AudioHardwareGetPropertyInfo(kAudioHardwarePropertyPlugInForBundleID, &outSize, &outWritable);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
AudioValueTranslation pluginAVT;
CFStringRef inBundleRef = CFSTR("com.apple.audio.CoreAudio");
AudioObjectID pluginID;
pluginAVT.mInputData = &inBundleRef;
pluginAVT.mInputDataSize = sizeof(inBundleRef);
pluginAVT.mOutputData = &pluginID;
pluginAVT.mOutputDataSize = sizeof(pluginID);
osErr = AudioHardwareGetProperty(kAudioHardwarePropertyPlugInForBundleID, &outSize, &pluginAVT);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Create a CFDictionary for our aggregate device
//-----------------------
CFMutableDictionaryRef aggDeviceDict = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFStringRef aggregateDeviceNameRef = CFSTR("My Aggregate Device");
CFStringRef aggregateDeviceUIDRef = CFSTR("com.mycomapany.myaggregatedevice");
// add the name of the device to the dictionary
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceNameKey), aggregateDeviceNameRef);
// add our choice of UID for the aggregate device to the dictionary
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceUIDKey), aggregateDeviceUIDRef);
if (IsDebugFlagEnabled("public_cad_device") == false)
{
// make it private so that we don't have the user messing with it
int value = 1;
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceIsPrivateKey), CFNumberCreate(NULL, kCFNumberIntType, &value));
}
//-----------------------
// Create a CFMutableArray for our sub-device list
//-----------------------
// we need to append the UID for each device to a CFMutableArray, so create one here
CFMutableArrayRef subDevicesArray = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
// add the sub-devices to our aggregate device
const CFStringRef inputDeviceUID = inputCadi()->GetPersistentUID().ToCFStringRef();
const CFStringRef outputDeviceUID = outputCadi()->GetPersistentUID().ToCFStringRef();
CFArrayAppendValue(subDevicesArray, inputDeviceUID);
CFArrayAppendValue(subDevicesArray, outputDeviceUID);
//-----------------------
// Feed the dictionary to the plugin, to create a blank aggregate device
//-----------------------
AudioObjectPropertyAddress pluginAOPA;
pluginAOPA.mSelector = kAudioPlugInCreateAggregateDevice;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
UInt32 outDataSize;
osErr = AudioObjectGetPropertyDataSize(pluginID, &pluginAOPA, 0, NULL, &outDataSize);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
AudioDeviceID outAggregateDevice;
osErr = AudioObjectGetPropertyData(pluginID, &pluginAOPA, sizeof(aggDeviceDict), &aggDeviceDict, &outDataSize, &outAggregateDevice);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Set the sub-device list
//-----------------------
pluginAOPA.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
outDataSize = sizeof(CFMutableArrayRef);
osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &subDevicesArray);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Set the master device
//-----------------------
// set the master device manually (this is the device which will act as the master clock for the aggregate device)
// pass in the UID of the device you want to use
pluginAOPA.mSelector = kAudioAggregateDevicePropertyMasterSubDevice;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
outDataSize = sizeof(outputDeviceUID);
osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &outputDeviceUID);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Clean up
//-----------------------
// release the CF objects we have created - we don't need them any more
CFRelease(aggDeviceDict);
CFRelease(subDevicesArray);
// release the device UID CFStringRefs
CFRelease(inputDeviceUID);
CFRelease(outputDeviceUID);
ConstCoreAudioDeviceInfoRef infoRef = CoreAudioDeviceInfo::GetAudioDeviceInfo(outAggregateDevice);
if (infoRef())
{
ConstCoreAudioDeviceRef ret(new CoreAudioDevice(infoRef, true));
return ((ret())&&(SetupSimpleCoreAudioDeviceAux(ret()->GetDeviceInfo(), require96kHz, optRequiredBufferSizeFrames, false).IsOK())) ? ret : ConstCoreAudioDeviceRef();
}
else return ConstCoreAudioDeviceRef();
}
There are ways to handle the channel mapping (which you're basically describing), but I doubt if it is a "better" way in your case.
Such functionality is covered in the AudioToolbox framework using Audio Units. In particular, the kAudioUnitSubType_HALOutput AudioUnit (AUComponent.h) is interesting in this case.
Using this type of AudioUnit you can send and receive audio to and from a specific audio device in a specified channel format. When the desired channel layout doesn't match the channel layout of the device you can do channel mapping.
To get some technical details have a look at:
https://developer.apple.com/library/archive/technotes/tn2091/_index.html
Please note that a lot of the AudioToolbox is in the process of being replaced by AVAudioEngine.
So, in your case I think it would be easier to do manual channel mapping by just ignoring the samples you don't need.
Also, I'm not sure whether CoreAudio provides silenced output buffers. To be sure, consider silencing them yourself.
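As a very rough sketch of the "ignore the samples you don't need" route inside an AudioDeviceIOProc (how many buffers you keep, and which buffers belong to which sub-device, are assumptions you would have to determine for your particular aggregate):
#include <CoreAudio/CoreAudio.h>
#include <string.h>

// Hypothetical: the number of leading output buffers that belong to the device
// we actually want to drive; everything after that is ignored and silenced.
static UInt32 gWantedOutputBuffers = 1;

static OSStatus MyIOProc(AudioObjectID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    // Read only the input buffers that belong to the real input device and skip the
    // rest (their indices depend on the sub-device stream layout of the aggregate).

    // Silence every output buffer we don't intend to use, so the unwanted
    // outputs of the input sub-device don't play garbage.
    for (UInt32 i = gWantedOutputBuffers; i < outOutputData->mNumberBuffers; i++) {
        memset(outOutputData->mBuffers[i].mData, 0, outOutputData->mBuffers[i].mDataByteSize);
    }
    return noErr;
}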
EDIT
Looking at the docs in AudioHardware.h, there seems to be a way of enabling and disabling the streams of a particular IOProc.
When OS X creates an aggregate, it puts all the channels of the different sub-devices in different streams, so in your case you should be able to disable the stream which contains the inputs of the output device and, vice versa, disable the stream which contains the outputs of the input device.
For this, have a look at AudioHardwareIOProcStreamUsage and kAudioDevicePropertyIOProcStreamUsage, both in AudioHardware.h.
I found the HALLab utility from Apple very useful in finding out about the actual streams.
(https://developer.apple.com/download/more/ and search for "Audio Tools for Xcode")
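For what it's worth, here is a sketch (untested, based only on the comments in AudioHardware.h) of turning off unwanted streams for one IOProc. aggDeviceID is assumed to be the aggregate device, ioProcID the value returned by AudioDeviceCreateIOProcID, and StreamBelongsToWantedSubDevice a hypothetical helper standing in for however you decide which stream indices to keep (HALLab is handy for figuring that out):
AudioObjectPropertyAddress addr = {
    kAudioDevicePropertyIOProcStreamUsage,
    kAudioDevicePropertyScopeInput,   // repeat with kAudioDevicePropertyScopeOutput for the output side
    kAudioObjectPropertyElementMaster
};

UInt32 size = 0;
OSStatus err = AudioObjectGetPropertyDataSize(aggDeviceID, &addr, 0, NULL, &size);
if (err == noErr)
{
    AudioHardwareIOProcStreamUsage *usage = (AudioHardwareIOProcStreamUsage *)malloc(size);
    usage->mIOProc = (void *)ioProcID;   // must be filled in before the Get, per AudioHardware.h
    err = AudioObjectGetPropertyData(aggDeviceID, &addr, 0, NULL, &size, usage);
    if (err == noErr)
    {
        for (UInt32 i = 0; i < usage->mNumberStreams; i++)
            usage->mStreamIsOn[i] = StreamBelongsToWantedSubDevice(i) ? 1 : 0;
        err = AudioObjectSetPropertyData(aggDeviceID, &addr, 0, NULL, size, usage);
    }
    free(usage);
}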

Distortion from output Audio Unit

I am hearing a very loud and harsh distortion sound when I run this simple application. I am simply instantiating a default output unit, assigning a render callback, and letting the program run in the run loop. I have detected no errors from Core Audio, and everything works as usual except for this distortion.
#import <AudioToolbox/AudioToolbox.h>
OSStatus render1(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
return noErr;
}
int main(int argc, const char * argv[]) {
AudioUnit timerAU;
UInt32 propsize = 0;
AudioComponentDescription outputUnitDesc;
outputUnitDesc.componentType = kAudioUnitType_Output;
outputUnitDesc.componentSubType = kAudioUnitSubType_DefaultOutput;
outputUnitDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
outputUnitDesc.componentFlags = 0;
outputUnitDesc.componentFlagsMask = 0;
//Get RemoteIO AU from Audio Unit Component Manager
AudioComponent outputComp = AudioComponentFindNext(NULL, &outputUnitDesc);
if (outputComp == NULL) exit (-1);
CheckError(AudioComponentInstanceNew(outputComp, &timerAU), "comp");
//Set up render callback function for the RemoteIO AU.
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = render1;
renderCallbackStruct.inputProcRefCon = nil;//(__bridge void *)(self);
propsize = sizeof(renderCallbackStruct);
CheckError(AudioUnitSetProperty(timerAU,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&renderCallbackStruct,
propsize), "set render");
CheckError(AudioUnitInitialize(timerAU), "init");
// tickMethod = completion;
CheckError(AudioOutputUnitStart(timerAU), "start");
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 1000, false);
}
Your question does not seem complete. I don't know about the side effects of silencing the output noise, which is probably just undefined behavior, and I also don't know what purpose your code is meant to serve as such. There is an unfinished render callback on the kAudioUnitSubType_DefaultOutput which does nothing (it is not generating silence!). I know of two ways of silencing it.
In the callback the ioData buffers have to be explicitly filled with zeroes, because there's no guarantee they will be initialized empty:
Float32 * lBuffer0;
Float32 * lBuffer1;
lBuffer0 = (Float32 *)ioData->mBuffers[0].mData;
lBuffer1 = (Float32 *)ioData->mBuffers[1].mData;
memset(lBuffer0, 0, inNumberFrames*sizeof(Float32));
memset(lBuffer1, 0, inNumberFrames*sizeof(Float32));
The other possibility is to leave the unfinished callback as it is, but declare the timerAU with outputUnitDesc.componentSubType = kAudioUnitSubType_HALOutput; instead of
outputUnitDesc.componentSubType = kAudioUnitSubType_DefaultOutput;
and explicitly disable I/O before setting the render callback by means of the following code:
UInt32 lEnableIO = 0;
CheckError(AudioUnitSetProperty(timerAU,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0, //output element
&lEnableIO,
sizeof(lEnableIO)),
"couldn't disable output");
I would strongly encourage you to study the Core Audio API thoroughly and to understand how to set up an audio unit. This is crucial to understanding the matter. I've seen a comment in your code mentioning a RemoteIO AU. There is no such thing as a RemoteIO AU on OS X. In case you're attempting a port from iOS code, please try learning the differences; they are well documented.

EXC_BAD_ACCESS in Core Audio - writing mic data to file w/ Extended Audio File Services

I am attempting to write incoming mic audio to a file. Because the audio samples are delivered 4096 frames at a time (the buffer size set for my project) in a time-critical callback, I cannot simply write the bytes to a file with AudioFileWriteBytes. I also did not wish to go through the effort and complexity of setting up my own ring buffer to store samples to write elsewhere. So I am using the Extended Audio File API for its ExtAudioFileWriteAsync function.
As instructed by the documentation, I create the ExtAudioFileRef with a CFURL and then run it once with a null buffer and 0 frames in main. Then I initialize my AUHAL unit and the input callback begins to be called.
ExtAudioFileWriteAsync(player.recordFile, 0, NULL);
There I have my code to write to this file asynchronously. I have the call nested in a dispatch queue so that it runs after the callback function exits scope (though I'm not sure that is necessary; I get this error with and without the enclosing dispatch block). This is the callback as it is right now.
OSStatus InputRenderProc(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
MyAUGraphPlayer *player = (MyAUGraphPlayer*) inRefCon;
// rendering incoming mic samples to player->inputBuffer
OSStatus inputProcErr = noErr;
inputProcErr = AudioUnitRender(player->inputUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
player->inputBuffer);
printf("%i", inNumberFrames);
dispatch_async(player->fileWritingQueue, ^{
ExtAudioFileWriteAsync(player->recordFile, 4096, player->inputBuffer);
});
return inputProcErr;
}
It immediately bails out with the bad access exception on the first callback invocation. For clarity these are the settings I have for creating the file to begin with.
// describe a PCM format for audio file
AudioStreamBasicDescription format = { 0 };
format.mBytesPerFrame = 4;
format.mBytesPerPacket = 4;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
format.mFormatID = kAudioFormatLinearPCM;
CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("./test2.wav"), kCFURLPOSIXPathStyle, false);
ExtAudioFileCreateWithURL(myFileURL,
kAudioFileWAVEType,
&format,
NULL,
kAudioFileFlags_EraseFile,
&player.recordFile);
player.fileWritingQueue = dispatch_queue_create("myQueue", NULL);
ExtAudioFileWriteAsync(player.recordFile, 0, NULL);

m4a audio files not playing on iOS 9

I have an audio-related app that uses a multichannel mixer to play multiple m4a files at a time.
I'm using the AudioToolbox framework to stream audio, but on iOS 9 the framework throws an exception in the mixer render callback where I am streaming the audio files.
Interestingly, apps compiled with the iOS 9 SDK continue to stream the same files perfectly on iOS 7/8 devices, but not on iOS 9.
Now I can't figure out whether Apple broke something in iOS 9 or we have the files encoded wrong on our end, given that they play just fine on iOS 7/8 but not on 9.
Exception:
malloc: *** error for object 0x7fac74056e08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
It works for all other formats and does not give any exception or any kind of memory error, but it does not work for the m4a format, which is very surprising.
Here is the code to load the files, which works for wav, aif, etc. but not for m4a:
- (void)loadFiles{
AVAudioFormat *clientFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
sampleRate:kGraphSampleRate
channels:1
interleaved:NO];
for (int i = 0; i < numFiles && i < maxBufs; i++) {
ExtAudioFileRef xafref = 0;
// open one of the two source files
OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref);
if (result || !xafref) {break; }
// get the file data format, this represents the file's actual data format
AudioStreamBasicDescription fileFormat;
UInt32 propSize = sizeof(fileFormat);
result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);
if (result) { break; }
// set the client format - this is the format we want back from ExtAudioFile and corresponds to the format
// we will be providing to the input callback of the mixer, therefore the data type must be the same
double rateRatio = kGraphSampleRate / fileFormat.mSampleRate;
propSize = sizeof(AudioStreamBasicDescription);
result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, propSize, clientFormat.streamDescription);
if (result) { break; }
// get the file's length in sample frames
UInt64 numFrames = 0;
propSize = sizeof(numFrames);
result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);
if (result) { break; }
if(i==metronomeBusIndex)
numFrames = (numFrames+6484)*4;
//numFrames = (numFrames * rateRatio); // account for any sample rate conversion
numFrames *= rateRatio;
// set up our buffer
mSoundBuffer[i].numFrames = (UInt32)numFrames;
mSoundBuffer[i].asbd = *(clientFormat.streamDescription);
UInt32 samples = (UInt32)numFrames * mSoundBuffer[i].asbd.mChannelsPerFrame;
mSoundBuffer[i].data = (Float32 *)calloc(samples, sizeof(Float32));
mSoundBuffer[i].sampleNum = 0;
// set up a AudioBufferList to read data into
AudioBufferList bufList;
bufList.mNumberBuffers = 1;
bufList.mBuffers[0].mNumberChannels = 1;
bufList.mBuffers[0].mData = mSoundBuffer[i].data;
bufList.mBuffers[0].mDataByteSize = samples * sizeof(Float32);
// perform a synchronous sequential read of the audio data out of the file into our allocated data buffer
UInt32 numPackets = (UInt32)numFrames;
result = ExtAudioFileRead(xafref, &numPackets, &bufList);
if (result) {
free(mSoundBuffer[i].data);
mSoundBuffer[i].data = 0;
}
// close the file and dispose the ExtAudioFileRef
ExtAudioFileDispose(xafref);
}
// [clientFormat release];
}
If anyone could point me in the right direction: how do I go about debugging the issue?
Do we need to re-encode our files in some specific way?
I tried it on iOS 9.1 beta 3 yesterday and things seem to be back to normal.
Try it out and let us know if it works for you too.

from xcode5 to xcode6-beta6, sounds are now distorted on iOS7

I have a RemoteIO application that loads and plays samples on iOS. It works fine when built with Xcode 5. I use iOS 7 as the deployment target.
My application was originally built using the AudioUnitSampleType audio format and the kAudioFormatFlagsCanonical format flags. My sample files are 16-bit/44100 Hz/mono CAF files.
Now I want to run it on iOS8.
Building my app with its original code on xcode6, the app runs fine on an iOS7 device but it produces no sounds on an iOS8 device.
As AudioUnitSampleType and kAudioFormatFlagsCanonical are deprecated in iOS 8, I replaced them, after some research, with float and kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved.
Now my app runs fine on iOS 8, but the sounds are saturated on iOS 7.
Has anyone experienced this? Any help? Thanks, I am stuck here.
Pascal
PS: here is my sample loading method
#define AUDIO_UNIT_SAMPLE_TYPE float
#define AUDIO_FORMAT_FLAGS kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved
-(void) load:(NSURL*)fileNameURL{
if (frameCount>0){
if (leftChannel!= NULL){
free (leftChannel);
leftChannel = 0;
}
if (rightChannel != NULL){
free (rightChannel);
rightChannel = 0;
}
}
soundFileURLRef=(CFURLRef)fileNameURL;
//----------------------------------------------
// 1.[OPEN AUDIO FILE] and associate it with the extended audio file object.
//----------------------------------------------
ExtAudioFileRef audioFileExtendedObject = 0;
log_if_err(ExtAudioFileOpenURL((CFURLRef)soundFileURLRef,
&audioFileExtendedObject),
#"ExtAudioFileOpenURL failed");
//----------------------------------------------
// 2.[AUDIO FILE LENGTH] Get the audio file's length in frames.
//----------------------------------------------
UInt64 totalFramesInFile = 0;
UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);
log_if_err(ExtAudioFileGetProperty(audioFileExtendedObject,
kExtAudioFileProperty_FileLengthFrames,
&frameLengthPropertySize,
&totalFramesInFile),
#"ExtAudioFileGetProperty (audio file length in frames) failed");
frameCount = totalFramesInFile;
//----------------------------------------------
// 3.[AUDIO FILE FORMAT] Get the audio file's number of channels. Normally CAF.
//----------------------------------------------
AudioStreamBasicDescription fileAudioFormat = {0};
UInt32 formatPropertySize = sizeof (fileAudioFormat);
log_if_err(ExtAudioFileGetProperty(audioFileExtendedObject,
kExtAudioFileProperty_FileDataFormat,
&formatPropertySize,
&fileAudioFormat),
#"ExtAudioFileGetProperty (file audio format) failed");
//----------------------------------------------
// 4.[ALLOCATE AUDIO FILE MEMORY] Allocate memory in the soundFiles instance
// variable to hold the left channel, or mono, audio data
//----------------------------------------------
UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;
// DLog(#"fileNameURL=%# | channelCount=%d",fileNameURL,(int)channelCount);
if (leftChannel != NULL){
free (leftChannel);
leftChannel = 0;
}
leftChannel =(AUDIO_UNIT_SAMPLE_TYPE *) calloc (totalFramesInFile, sizeof(AUDIO_UNIT_SAMPLE_TYPE));
AudioStreamBasicDescription importFormat = {0};
if (2==channelCount) {
isStereo = YES;
if (rightChannel != NULL){
free (rightChannel);
rightChannel = 0;
}
rightChannel = (AUDIO_UNIT_SAMPLE_TYPE *) calloc (totalFramesInFile, sizeof (AUDIO_UNIT_SAMPLE_TYPE));
importFormat = stereoStreamFormat;
} else if (1==channelCount) {
isStereo = NO;
importFormat = monoStreamFormat;
} else {
ExtAudioFileDispose (audioFileExtendedObject);
return;
}
//----------------------------------------------
// 5.[ASSIGN THE MIXER INPUT BUS STREAM DATA FORMAT TO THE AUDIO FILE]
// Assign the appropriate mixer input bus stream data format to the extended audio
// file object. This is the format used for the audio data placed into the audio
// buffer in the SoundStruct data structure, which is in turn used in the
// inputRenderCallback callback function.
//----------------------------------------------
UInt32 importFormatPropertySize = (UInt32) sizeof (importFormat);
log_if_err(ExtAudioFileSetProperty(audioFileExtendedObject,
kExtAudioFileProperty_ClientDataFormat,
importFormatPropertySize,
&importFormat),
#"ExtAudioFileSetProperty (client data format) failed");
//----------------------------------------------
// 6.[SET THE AUDIBUFFER LIST STRUCT] which has two roles:
//
// 1. It gives the ExtAudioFileRead function the configuration it
// needs to correctly provide the data to the buffer.
//
// 2. It points to the soundFiles[soundFile].leftChannel buffer, so
// that audio data obtained from disk using the ExtAudioFileRead function
// goes to that buffer
//
// Allocate memory for the buffer list struct according to the number of
// channels it represents.
//----------------------------------------------
AudioBufferList *bufferList;
bufferList = (AudioBufferList *) malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
if (NULL==bufferList){
NSLog(#"*** malloc failure for allocating bufferList memory");
return;
}
//----------------------------------------------
// 7.initialize the mNumberBuffers member
//----------------------------------------------
bufferList->mNumberBuffers = channelCount;
//----------------------------------------------
// 8.initialize the mBuffers member to 0
//----------------------------------------------
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
bufferList->mBuffers[arrayIndex] = emptyBuffer;
}
//----------------------------------------------
// 9.set up the AudioBuffer structs in the buffer list
//----------------------------------------------
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof (AUDIO_UNIT_SAMPLE_TYPE);
bufferList->mBuffers[0].mData = leftChannel;
if (channelCount==2){
bufferList->mBuffers[1].mNumberChannels = 1;
bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof (AUDIO_UNIT_SAMPLE_TYPE);
bufferList->mBuffers[1].mData = rightChannel;
}
//----------------------------------------------
// 10.Perform a synchronous, sequential read of the audio data out of the file and
// into the "soundFiles[soundFile].leftChannel" and (if stereo) ".rightChannel" members.
//----------------------------------------------
UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;
OSStatus result = ExtAudioFileRead (audioFileExtendedObject,
&numberOfPacketsToRead,
bufferList);
free (bufferList);
if (noErr != result) {
log_if_err(result,#"ExtAudioFileRead failure");
//
// If reading from the file failed, then free the memory for the sound buffer.
//
free (leftChannel);
leftChannel = 0;
if (2==channelCount) {
free (rightChannel);
rightChannel = 0;
}
frameCount = 0;
}
//----------------------------------------------
// Dispose of the extended audio file object, which also
// closes the associated file.
//----------------------------------------------
ExtAudioFileDispose (audioFileExtendedObject);
return;
}
