iOS 10.2 callkit error -> [aurioc] 892 - swift2

I use CallKit with iOS 10.0.1 and it works perfectly (outbound and inbound calls).
After updating my iPhone 7 to iOS 10.2, I hear nothing when I receive an inbound call.
For the AudioController:
try {
// Configure the audio session
AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];
// we are going to play and record so we pick that category
NSError *error = nil;
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio category");
// set the mode to voice chat
[sessionInstance setMode:AVAudioSessionModeVoiceChat error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio mode");
// set the buffer duration to 5 ms
NSTimeInterval bufferDuration = .005;
[sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's I/O buffer duration");
// set the session's sample rate
[sessionInstance setPreferredSampleRate:44100 error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's preferred sample rate");
// add interruption handler
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(handleInterruption:)
name:AVAudioSessionInterruptionNotification
object:sessionInstance];
// we don't do anything special in the route change notification
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(handleRouteChange:)
name:AVAudioSessionRouteChangeNotification
object:sessionInstance];
// if media services are reset, we need to rebuild our audio chain
[[NSNotificationCenter defaultCenter] addObserver: self
selector: @selector(handleMediaServerReset:)
name: AVAudioSessionMediaServicesWereResetNotification
object: sessionInstance];
}
catch (CAXException &e) {
NSLog(@"Error returned from setupAudioSession: %d: %s", (int)e.mError, e.mOperation);
}
catch (...) {
NSLog(@"Unknown error returned from setupAudioSession");
}
and
try {
// Create a new instance of Apple Voice Processing IO
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of Apple Voice Processing IO");
// Enable input and output on Apple Voice Processing IO
// Input is enabled on the input scope of the input element
// Output is enabled on the output scope of the output element
UInt32 one = 1;
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on Apple Voice Processing IO");
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on Apple Voice Processing IO");
// Explicitly set the input and output client formats
// sample rate = 44100, num channels = 1, format = 32 bit floating point
CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");
// Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
// of samples it will be asked to produce on any single given call to AudioUnitRender
UInt32 maxFramesPerSlice = 4096;
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");
// Get the property value back from Apple Voice Processing IO. We are going to use this value to allocate buffers accordingly
UInt32 propSize = sizeof(UInt32);
XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on Apple Voice Processing IO");
// We need references to certain data in the render callback
// This simple struct is used to hold that information
cd.rioUnit = _rioUnit;
cd.muteAudio = &_muteAudio;
cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;
// Set the render callback on Apple Voice Processing IO
AURenderCallbackStruct renderCallback;
renderCallback.inputProc = performRender;
renderCallback.inputProcRefCon = NULL;
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on Apple Voice Processing IO");
// Initialize the Apple Voice Processing IO instance
XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize Apple Voice Processing IO instance");
}
catch (CAXException &e) {
NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
}
catch (...) {
NSLog(@"Unknown error returned from setupIOUnit");
}
and I have this in my log:
[aurioc] 892: failed: '!pri' (enable 3, outf< 1 ch, 44100 Hz, Float32> inf< 1 ch, 44100 Hz, Float32>)
Error returned from setupIOUnit: 561017449: couldn't initialize Apple Voice Processing IO instance
Do you have any idea?

Enable your sound devices after the audio session has been activated. Put the call that starts your audio I/O in the audio session activation callback (the CXProviderDelegate method that fires when iOS activates the session for the call). You can also use blocks invoked from that callback for answer/unhold/etc.
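For reference, a minimal sketch of that approach, assuming a CXProviderDelegate and an audioController object whose startIOUnit/stopIOUnit methods (hypothetical names) wrap AudioOutputUnitStart/AudioOutputUnitStop on the Voice Processing IO unit:
- (void)provider:(CXProvider *)provider didActivateAudioSession:(AVAudioSession *)audioSession {
    // CallKit has activated the session on the app's behalf; only now is it
    // safe to initialize and start the I/O unit. Doing it earlier is what
    // produces the '!pri' (no priority) failure in the log above.
    [self.audioController startIOUnit];
}
- (void)provider:(CXProvider *)provider didDeactivateAudioSession:(AVAudioSession *)audioSession {
    // The system deactivated the session (call ended, interruption, ...).
    [self.audioController stopIOUnit];
}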

Related

VoiceProcessingIO Audio Unit adds an unexpected input stream to Built-in output device (macOS)

I work on a VoIP app on macOS and use the VoiceProcessingIO Audio Unit for audio processing such as echo cancellation and automatic gain control.
The problem is, when I initialise the audio unit, the list of Core Audio devices changes: not just by adding the new aggregate device which the VP audio unit uses for its own needs, but also because the built-in output device (i.e. "Built-in MacBook Pro Speakers") now appears as an input device too, i.e. it gains an unexpected input stream in addition to its output ones.
This is a list of INPUT devices (aka "microphones") I get from Core Audio before initialising my VP AU:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
This is the same list when my VP AU is initialised:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
DEVICE: INPUT 86 BuiltInSpeakerDevice /// WHY?
DEVICE: INPUT 98 VPAUAggregateAudioDevice-0x101046040
This is very frustrating because I need to display a list of devices in the app, and even though I can simply filter aggregate devices out of the list (they are not usable with the VP AU anyway), I cannot exclude the built-in MacBook speaker device.
Maybe someone has already been through this and has a clue what's going on and whether it can be fixed: some kAudioObjectPropertyXX I need to watch in order to exclude the device from the inputs list. Of course this might be a bug/feature on Apple's side and I simply have to hack my way around it.
The VP AU itself works well, and the problem reproduces regardless of the devices used (I tried built-in and external/USB/Bluetooth alike). It reproduces on every macOS version I could test, from 10.13 up to and including 11.0, on different Macs and with different sets of audio devices connected. I am curious that there is next to zero information on this problem available, which brings me to the thought that I did something wrong.
One more strange thing: while the VP AU is working, the HALLab app indicates something different, namely the built-in input having two more input streams (OK, I could survive that if it were just that!). But it does not show input streams added to the built-in output, as in my app.
Here is an extract from the C++ code showing how I set up the VP Audio Unit:
#define MAX_FRAMES_PER_CALLBACK 1024
AudioComponentInstance AvHwVoIP::getComponentInstance(OSType type, OSType subType) {
AudioComponentDescription desc = {0};
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentSubType = subType;
desc.componentType = type;
AudioComponent ioComponent = AudioComponentFindNext(NULL, &desc);
AudioComponentInstance unit;
OSStatus status = AudioComponentInstanceNew(ioComponent, &unit);
if (status != noErr) {
printf("Error: %d\n", status);
}
return unit;
}
void AvHwVoIP::enableIO(uint32_t enableIO, AudioUnit auDev) {
UInt32 no = 0;
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
1,
&enableIO,
sizeof(enableIO));
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0,
&enableIO,
sizeof(enableIO));
}
void AvHwVoIP::setDeviceAsCurrent(AudioUnit auDev, AudioUnitElement element, AudioObjectID devId) {
//Set the Current Device to the AUHAL.
//this should be done only after IO has been enabled on the AUHAL.
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_CurrentDevice,
element == 0 ? kAudioUnitScope_Output : kAudioUnitScope_Input,
element,
&devId,
sizeof(AudioDeviceID));
}
void AvHwVoIP::setAudioUnitProperty(AudioUnit auDev,
AudioUnitPropertyID inID,
AudioUnitScope inScope,
AudioUnitElement inElement,
const void* __nullable inData,
uint32_t inDataSize) {
OSStatus status = AudioUnitSetProperty(auDev, inID, inScope, inElement, inData, inDataSize);
if (noErr != status) {
std::cout << "****** ::setAudioUnitProperty failed" << std::endl;
}
}
void AvHwVoIP::start() {
m_auVoiceProcesing = getComponentInstance(kAudioUnitType_Output, kAudioUnitSubType_VoiceProcessingIO);
enableIO(1, m_auVoiceProcesing);
m_format_description = SetAudioUnitStreamFormatFloat(m_auVoiceProcesing);
SetAudioUnitCallbacks(m_auVoiceProcesing);
setDeviceAsCurrent(m_auVoiceProcesing, 0, m_renderDeviceID);//output device AudioDeviceID here
setDeviceAsCurrent(m_auVoiceProcesing, 1, m_capDeviceID);//input device AudioDeviceID here
setInputLevelListener();
setVPEnabled(true);
setAGCEnabled(true);
UInt32 maximumFramesPerSlice = 0;
UInt32 size = sizeof(maximumFramesPerSlice);
OSStatus s1 = AudioUnitGetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, &size);
printf("max frames per callback: %d\n", maximumFramesPerSlice);
maximumFramesPerSlice = MAX_FRAMES_PER_CALLBACK;
s1 = AudioUnitSetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, size);
OSStatus status = AudioUnitInitialize(m_auVoiceProcesing);
if (noErr != status) {
printf("*** error AU initialize: %d", status);
}
status = AudioOutputUnitStart(m_auVoiceProcesing);
if (noErr != status) {
printf("*** AU start error: %d", status);
}
}
And here is how I get my list of devices:
//does this device have input/output streams?
bool hasStreamsForCategory(AudioObjectID devId, bool input)
{
const AudioObjectPropertyScope scope = (input == true ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
AudioObjectPropertyAddress propertyAddress{kAudioDevicePropertyStreams, scope, kAudioObjectPropertyElementWildcard};
uint32_t dataSize = 0;
OSStatus status = AudioObjectGetPropertyDataSize(devId,
&propertyAddress,
0,
NULL,
&dataSize);
if (noErr != status)
printf("%s: Error in AudioObjectGetPropertyDataSize: %d \n", __FUNCTION__, status);
return (dataSize / sizeof(AudioStreamID)) > 0;
}
std::set<AudioDeviceID> scanCoreAudioDeviceUIDs(bool isInput)
{
std::set<AudioDeviceID> deviceIDs{};
// find out how many audio devices there are
AudioObjectPropertyAddress propertyAddress = {kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster};
uint32_t dataSize{0};
OSStatus err = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
if ( err != noErr )
{
printf("%s: AudioObjectGetPropertyDataSize: %d\n", __FUNCTION__, dataSize);
return deviceIDs;//empty
}
// calculate the number of devices available
uint32_t devicesAvailable = dataSize / sizeof(AudioObjectID);
if ( devicesAvailable < 1 )
{
printf("%s: Core audio available devices were not found\n", __FUNCTION__);
return deviceIDs;//empty
}
AudioObjectID devices[devicesAvailable];//devices to get
err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, devices);
if ( err != noErr )
{
printf("%s: Core audio available devices were not found\n", __FUNCTION__);
return deviceIDs;//empty
}
const AudioObjectPropertyScope scope = (isInput == true ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
for (uint32_t i = 0; i < devicesAvailable; ++i)
{
const bool hasCorrespondingStreams = hasStreamsForCategory(devices[i], isInput);
if (!hasCorrespondingStreams) {
continue;
}
printf("DEVICE: \t %s \t %d \t %s\n", isInput ? "INPUT" : "OUTPUT", devices[i], deviceUIDFromAudioDeviceID(devices[i]).c_str());
deviceIDs.insert(devices[i]);
}//end for
return deviceIDs;
}
Well, answering my own question four months later, since Apple Feedback Assistant responded to my request:
"There are two things you were noticing, both of which are expected and considered implementation details of AUVP:
The speaker device has an input stream - this is the reference tap stream for echo cancellation.
There is an additional input stream under the built-in mic device - this is the raw mic stream enabled by AUVP.
For #1, we'd advise you to treat the built-in speaker and (on certain Macs) headphone with special caution when determining whether it's an input/output device based on its input/output streams.
For #2, we'd advise you to ignore the extra streams on the device."
So they suggest doing exactly what I did: determine the built-in output device before starting the AU and memorise it, then ignore any extra streams that appear on built-in devices while the VP AU is operating.
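A minimal sketch of that memorisation workaround, reusing the scanCoreAudioDeviceUIDs() helper from above (the snapshot function and global are my own naming):
// Snapshot taken once, before the VP AU is created: these are the genuine inputs.
static std::set<AudioDeviceID> g_inputsBeforeVPAU;

void snapshotInputDevicesBeforeVPAU()
{
    g_inputsBeforeVPAU = scanCoreAudioDeviceUIDs(true);
}

// While the VP AU is live, intersect the current scan with the snapshot so that
// the speaker's echo-reference stream never shows up as a "microphone".
// Caveat: a device hot-plugged after the snapshot needs a rescan taken while
// the VP AU is torn down (or transport-type based filtering instead).
std::set<AudioDeviceID> currentRealInputDevices()
{
    std::set<AudioDeviceID> result;
    for (AudioDeviceID devId : scanCoreAudioDeviceUIDs(true)) {
        if (g_inputsBeforeVPAU.count(devId) != 0)
            result.insert(devId);
    }
    return result;
}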

How to exclude input or output channels from an aggregate CoreAudio device?

I've got a CoreAudio-based macOS program that allows the user to select an input audio device and an output audio device, and (if the user didn't choose the same device for both input and output) my program creates a private aggregate audio device and uses that to receive the audio, process it, and then send it out for playback.
That's all working great, but there is one minor problem: if the selected input device also has some outputs associated with its hardware, those outputs show up as part of the aggregate device's output channels, which isn't the behavior I want. Similarly, if the selected output device also has some inputs associated with its hardware, those inputs show up as input channels in the aggregate device's inputs, which I also don't want.
My question is, is there any way to tell CoreAudio not to include the inputs or outputs of a sub-device in the aggregate device I'm constructing? (My fallback solution would be to modify my audio-rendering callback to ignore the unwanted audio channels, but that seems less than elegant, so I'm curious whether there is a better way to handle it.)
My function that creates the aggregate device is below, in case it is relevant:
// This code was adapted from the example code at : https://web.archive.org/web/20140716012404/http://daveaddey.com/?p=51
ConstCoreAudioDeviceRef CoreAudioDevice :: CreateAggregateDevice(const ConstCoreAudioDeviceInfoRef & inputCadi, const ConstCoreAudioDeviceInfoRef & outputCadi, bool require96kHz, int32 optRequiredBufferSizeFrames)
{
OSStatus osErr = noErr;
UInt32 outSize;
Boolean outWritable;
//-----------------------
// Start to create a new aggregate by getting the base audio hardware plugin
//-----------------------
osErr = AudioHardwareGetPropertyInfo(kAudioHardwarePropertyPlugInForBundleID, &outSize, &outWritable);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
AudioValueTranslation pluginAVT;
CFStringRef inBundleRef = CFSTR("com.apple.audio.CoreAudio");
AudioObjectID pluginID;
pluginAVT.mInputData = &inBundleRef;
pluginAVT.mInputDataSize = sizeof(inBundleRef);
pluginAVT.mOutputData = &pluginID;
pluginAVT.mOutputDataSize = sizeof(pluginID);
osErr = AudioHardwareGetProperty(kAudioHardwarePropertyPlugInForBundleID, &outSize, &pluginAVT);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Create a CFDictionary for our aggregate device
//-----------------------
CFMutableDictionaryRef aggDeviceDict = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFStringRef aggregateDeviceNameRef = CFSTR("My Aggregate Device");
CFStringRef aggregateDeviceUIDRef = CFSTR("com.mycompany.myaggregatedevice");
// add the name of the device to the dictionary
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceNameKey), aggregateDeviceNameRef);
// add our choice of UID for the aggregate device to the dictionary
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceUIDKey), aggregateDeviceUIDRef);
if (IsDebugFlagEnabled("public_cad_device") == false)
{
// make it private so that we don't have the user messing with it
int value = 1;
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceIsPrivateKey), CFNumberCreate(NULL, kCFNumberIntType, &value));
}
//-----------------------
// Create a CFMutableArray for our sub-device list
//-----------------------
// we need to append the UID for each device to a CFMutableArray, so create one here
CFMutableArrayRef subDevicesArray = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
// add the sub-devices to our aggregate device
const CFStringRef inputDeviceUID = inputCadi()->GetPersistentUID().ToCFStringRef();
const CFStringRef outputDeviceUID = outputCadi()->GetPersistentUID().ToCFStringRef();
CFArrayAppendValue(subDevicesArray, inputDeviceUID);
CFArrayAppendValue(subDevicesArray, outputDeviceUID);
//-----------------------
// Feed the dictionary to the plugin, to create a blank aggregate device
//-----------------------
AudioObjectPropertyAddress pluginAOPA;
pluginAOPA.mSelector = kAudioPlugInCreateAggregateDevice;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
UInt32 outDataSize;
osErr = AudioObjectGetPropertyDataSize(pluginID, &pluginAOPA, 0, NULL, &outDataSize);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
AudioDeviceID outAggregateDevice;
osErr = AudioObjectGetPropertyData(pluginID, &pluginAOPA, sizeof(aggDeviceDict), &aggDeviceDict, &outDataSize, &outAggregateDevice);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Set the sub-device list
//-----------------------
pluginAOPA.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
outDataSize = sizeof(CFMutableArrayRef);
osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &subDevicesArray);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Set the master device
//-----------------------
// set the master device manually (this is the device which will act as the master clock for the aggregate device)
// pass in the UID of the device you want to use
pluginAOPA.mSelector = kAudioAggregateDevicePropertyMasterSubDevice;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
outDataSize = sizeof(outputDeviceUID);
osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &outputDeviceUID);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Clean up
//-----------------------
// release the CF objects we have created - we don't need them any more
CFRelease(aggDeviceDict);
CFRelease(subDevicesArray);
// release the device UID CFStringRefs
CFRelease(inputDeviceUID);
CFRelease(outputDeviceUID);
ConstCoreAudioDeviceInfoRef infoRef = CoreAudioDeviceInfo::GetAudioDeviceInfo(outAggregateDevice);
if (infoRef())
{
ConstCoreAudioDeviceRef ret(new CoreAudioDevice(infoRef, true));
return ((ret())&&(SetupSimpleCoreAudioDeviceAux(ret()->GetDeviceInfo(), require96kHz, optRequiredBufferSizeFrames, false).IsOK())) ? ret : ConstCoreAudioDeviceRef();
}
else return ConstCoreAudioDeviceRef();
}
There are ways to handle the channel mapping (which is basically what you're describing), but I doubt it would be a "better" way in your case.
Such functionality is covered in the AudioToolbox framework using Audio Units. In particular the kAudioUnitSubType_HALOutput AudioUnit (AUComponent.h) is interesting here.
Using this type of AudioUnit you can send and receive audio to and from a specific audio device in a specified channel format. When the desired channel layout doesn't match the channel layout of the device, you can do channel mapping.
For the technical details have a look at:
https://developer.apple.com/library/archive/technotes/tn2091/_index.html
Please note that a lot of AudioToolbox is in the process of being replaced by AVAudioEngine.
So, in your case I think it would be easier to do manual channel mapping by just ignoring the samples you don't need.
Also, I'm not sure whether CoreAudio provides pre-silenced output buffers. To be sure, consider silencing them yourself.
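As a rough illustration of the channel-mapping idea (a sketch after TN2091, not tested here): each entry of the map corresponds to a channel the AUHAL delivers to the client and holds the index of the device channel to take it from, or -1 for silence. auhal is assumed to be an already-configured kAudioUnitSubType_HALOutput instance with input enabled on element 1.
// Deliver only the device's first two input channels to the client,
// regardless of how many channels the (aggregate) device exposes.
SInt32 channelMap[2] = { 0, 1 };
OSStatus err = AudioUnitSetProperty(auhal,
                                    kAudioOutputUnitProperty_ChannelMap,
                                    kAudioUnitScope_Output, // the side delivered to the client
                                    1,                      // input element of the AUHAL
                                    channelMap,
                                    sizeof(channelMap));
if (err != noErr) printf("couldn't set AUHAL input channel map: %d\n", (int)err);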
EDIT
Looking at the docs in AudioHardware.h there seems to be a way of enabling and disabling the streams of a particular IOProc.
When macOS creates an aggregate, it puts the channels of the different subdevices into different streams, so in your case you should be able to disable the stream which contains the inputs of the output device and, vice versa, disable the stream which contains the outputs of the input device.
For this, have a look at AudioHardwareIOProcStreamUsage and kAudioDevicePropertyIOProcStreamUsage, both in AudioHardware.h.
I found Apple's HALLab utility very useful for finding out about the actual streams.
(https://developer.apple.com/download/more/ and search for "Audio Tools for Xcode")
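For what it's worth, a hedged sketch of what that could look like (untested; AudioHardwareIOProcStreamUsage is a variable-length struct sized at runtime, and mIOProc must identify your IOProc before the Get call):
void disableIOProcStream(AudioObjectID device, AudioDeviceIOProcID ioProcID,
                         AudioObjectPropertyScope scope, UInt32 streamIndex)
{
    AudioObjectPropertyAddress addr = { kAudioDevicePropertyIOProcStreamUsage,
                                        scope, kAudioObjectPropertyElementMaster };
    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(device, &addr, 0, NULL, &size) != noErr) return;
    AudioHardwareIOProcStreamUsage *usage = (AudioHardwareIOProcStreamUsage *)malloc(size);
    usage->mIOProc = (void *)ioProcID;       // which IOProc this usage applies to
    if (AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, usage) == noErr
        && streamIndex < usage->mNumberStreams) {
        usage->mStreamIsOn[streamIndex] = 0; // stop exchanging data with this stream
        AudioObjectSetPropertyData(device, &addr, 0, NULL, size, usage);
    }
    free(usage);
}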

CoreAudio Output Sample Rate Discrepancy

I am using ffmpeg to acquire audio from .mov files. Looking over my settings, I am not sample-rate converting the audio buffers I am generating, so that is unlikely to account for the issue I am having. Regardless of the sample rate I set on my Built-in Output, audio files at 44.1 kHz play back at the correct rate. If I play back a 48 kHz file, it plays slower (at 91% of the normal rate), which indicates that the true output rate is 44.1 kHz. I can change my Built-in Output to 44.1, 48, or 96 kHz and the same phenomenon persists. I change my default output rate using the Audio MIDI Setup app, then verify the sample rate using AudioUnitGetProperty on my outputAudioUnit; it matches the rate shown in Audio MIDI Setup.
Thoughts? I am including my audio graph code:
CheckError(NewAUGraph(&fp.graph), "Couldn't create a new AUGraph");
//varispeed node has an input callback
//the varispeed node feeds an output node which is running
//at the frequency of the system default output
AUNode outputNode;
AudioComponentDescription outputcd = [self defaultOutputComponent];
CheckError(AUGraphAddNode(fp.graph, &outputcd, &outputNode),
"AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
AUNode varispeedNode;
AudioComponentDescription varispeedcd = [self variSpeedComponent];
CheckError(AUGraphAddNode(fp.graph, &varispeedcd, &varispeedNode),
"AUGraphAddNode[kAudioUnitSubType_Varispeed] failed");
CheckError(AUGraphOpen(fp.graph),
"Couldn't Open AudioGraph");
CheckError(AUGraphNodeInfo(fp.graph, outputNode, NULL, &fp.outputAudioUnit),
"Couldn't Retrieve output node");
CheckError(AUGraphNodeInfo(fp.graph, varispeedNode, NULL, &fp.variSpeedAudioUnit),
"Couldn't Retrieve Varispeed Audio Unit");
AURenderCallbackStruct input;
input.inputProc = CBufferProviderCallback;
input.inputProcRefCon = &playerStruct;
CheckError(AudioUnitSetProperty(fp.variSpeedAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input)),
"AudioUnitSetProperty failed");
CheckError(AUGraphConnectNodeInput(fp.graph, varispeedNode, 0, outputNode, 0),
"Couldn't Connect varispeed to output");
CheckError(AUGraphInitialize(fp.graph),
"Couldn't Initialize AUGraph");
// check output sample rate
Float64 outputSampleRate = 0.0;
UInt32 sizeOfFloat64 = sizeof(Float64);
CheckError(AudioUnitGetProperty(fp.outputAudioUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Global,
0,
&outputSampleRate,
&sizeOfFloat64),
"Couldn't get output sampleRate");
I solved the issue. When building the audio graph, you need to set the input sample rate of the Varispeed audio unit before you connect it to the output node inside the AUGraph. See the example code at
https://developer.apple.com/library/content/samplecode/CAPlayThrough/Listings/ReadMe_txt.html
CheckError(NewAUGraph(&fp.graph), "BuildGraphError");
AUNode outputNode;
AudioComponentDescription outputcd = [self defaultOutputComponent];
CheckError(AUGraphAddNode(fp.graph, &outputcd, &outputNode),
"AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
AUNode varispeedNode;
AudioComponentDescription varispeedcd = [self variSpeedComponent];
CheckError(AUGraphAddNode(fp.graph, &varispeedcd, &varispeedNode),
"AUGraphAddNode[kAudioUnitSubType_Varispeed] failed");
CheckError(AUGraphOpen(fp.graph),
"Couldn't Open AudioGraph");
CheckError(AUGraphNodeInfo(fp.graph, outputNode, NULL, &fp.outputAudioUnit),
"Couldn't Retrieve File Audio Unit");
CheckError(AUGraphNodeInfo(fp.graph, varispeedNode, NULL, &fp.variSpeedAudioUnit),
"Couldn't Retrieve Varispeed Audio Unit");
AURenderCallbackStruct input;
input.inputProc = CBufferProviderCallback;
input.inputProcRefCon = &playerStruct;
CheckError(AudioUnitSetProperty(fp.variSpeedAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input)),
"AudioUnitSetProperty failed");
//you have to set the varispeed rate before you connect it
//see CAPlayThrough
AudioStreamBasicDescription asbd = {0};
UInt32 size;
Boolean outWritable;
//Gets the size of the Stream Format Property and if it is writable
OSStatus result = AudioUnitGetPropertyInfo(fp.variSpeedAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&size,
&outWritable);
//Get the current stream format of the output
result = AudioUnitGetProperty (fp.variSpeedAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&asbd,
&size);
asbd.mSampleRate = targetSampleRate;
//Set the stream format of the output to match the input
result = AudioUnitSetProperty (fp.variSpeedAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&asbd,
size);
printf("AudioUnitSetProperty result %d %d\n", result, noErr);
CheckError(AUGraphConnectNodeInput(fp.graph, varispeedNode, 0, outputNode, 0),
"Couldn't Connect varispeed to output");
CheckError(AUGraphInitialize(fp.graph),
"Couldn't Initialize AUGraph");
Float64 outputSampleRate = 0.0;
UInt32 sizeOfFloat64 = sizeof(Float64);
CheckError(AudioUnitGetProperty(fp.outputAudioUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Global,
0,
&outputSampleRate,
&sizeOfFloat64),
"Couldn't get output sampleRate");
NSLog(#"Output Sample Rate of the ->%f", outputSampleRate);

HALOutput in AUGraph select and configure specific output device

I successfully managed to build a complex AUGraph that I'm able to reconfigure on the fly, and all is working well.
I'm now facing a wall with what seems a very simple task: selecting a specific output device.
I'm able to get the device UID and ID thanks to this post: AudioObjectGetPropertyData to get a list of input devices (which I modified to get output devices) and to the code below (I can't remember where I found it, unfortunately).
- (AudioDeviceID) deviceIDWithUID:(NSString *)uid
{
AudioDeviceID myDevice;
AudioValueTranslation trans;
CFStringRef myKnownUID = (__bridge CFStringRef)uid;
trans.mInputData = &myKnownUID;
trans.mInputDataSize = sizeof (CFStringRef);
trans.mOutputData = &myDevice;
trans.mOutputDataSize = sizeof(AudioDeviceID);
UInt32 size = sizeof (AudioValueTranslation);
AudioHardwareGetProperty (kAudioHardwarePropertyDeviceForUID,
&size,
&trans);
return myDevice;
}
I get the AudioDeviceID from this method and store it in an NSDictionary. I can NSLog it, and when I convert it to hexadecimal it gives me the right ID, as found in HALLab.
But when I configure my unit (see code below), the graph only plays on the default device (the one selected in Sound Preferences).
AudioComponent comp = AudioComponentFindNext(NULL, &_componentDescription);
if (comp == NULL) {
printf ("Can't get output unit");
exit (-1);
}
CheckError(AudioComponentInstanceNew(comp, &_auUnit),
"Couldn't open component for output Unit");
UInt32 disableFlag = 0;
UInt32 enableFlag = 1;
AudioUnitScope outputBus = 0;
AudioUnitScope inputBus = 1;
CheckError (AudioUnitSetProperty(_auUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
outputBus,
&enableFlag,
sizeof(enableFlag)), "AudioUnitSetProperty[kAudioOutputUnitProperty_EnableIO] failed - enable Output");
CheckError (AudioUnitSetProperty(_auUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
inputBus,
&disableFlag,
sizeof(disableFlag)), "AudioUnitSetProperty[kAudioOutputUnitProperty_EnableIO] failed - disable Input");
AudioDeviceID devID = (AudioDeviceID)[[[_devices objectAtIndex:0] objectForKey:@"deviceID"] unsignedIntValue];
CheckError(AudioUnitSetProperty(_auUnit,
kAudioOutputUnitProperty_CurrentDevice,
kAudioUnitScope_Output,
0,
&devID,
sizeof(AudioDeviceID)), "AudioUnitSetProperty[kAudioOutputUnitProperty_CurrentDevice] failed");
The AUGraph is already configured with all its units, the nodes are connected, and the graph is open. What am I doing wrong?
I would be very grateful for any clue to resolving this problem.

Core Audio and the Phantom Device ID

So here's what is going on.
I am attempting to work with Core Audio, specifically input devices. I want to mute them, change their volume, etc. I've encountered something absolutely bizarre that I cannot figure out, and thus far Google has been of no help.
When I query the system and ask for a list of all audio devices, I am returned an array of device IDs. In this case, 261, 259, 263, 257.
Using kAudioDevicePropertyDeviceName, I get the following:
261: Built-in Microphone
259: Built-in Input
263: Built-in Output
257: iPhoneSimulatorAudioDevice
This is all well and good.
// This method returns an NSArray of all the audio devices on the system, both input and output
// On my system, it returns 261, 259, 263, 257
- (NSArray*)getAudioDevices
{
AudioObjectPropertyAddress propertyAddress = {
kAudioHardwarePropertyDevices,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
UInt32 dataSize = 0;
OSStatus status = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
if(kAudioHardwareNoError != status)
{
MZLog(#"Unable to get number of audio devices. Error: %d",status);
return NULL;
}
UInt32 deviceCount = dataSize / sizeof(AudioDeviceID);
AudioDeviceID *audioDevices = malloc(dataSize);
status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, audioDevices);
if(kAudioHardwareNoError != status)
{
MZLog(#"AudioObjectGetPropertyData failed when getting device IDs. Error: %d",status);
free(audioDevices), audioDevices = NULL;
return NULL;
}
NSMutableArray* devices = [NSMutableArray array];
for(UInt32 i = 0; i < deviceCount; i++)
{
MZLog(#"device found: %d",audioDevices[i]);
[devices addObject:[NSNumber numberWithInt:audioDevices[i]]];
}
free(audioDevices);
return [NSArray arrayWithArray:devices];
}
The problem crops up when I then query the system for the ID of the default input device. This method returns an ID of 269, which is not listed in the array of all devices.
If I attempt to use kAudioDevicePropertyDeviceName to get the name of that device, I am returned an empty string. Yet although it doesn't appear to have a name, if I mute this device ID, my built-in microphone mutes. Conversely, if I mute ID 261, the one named "Built-In Microphone", my microphone does not mute.
// Gets the current default audio input device
// On my system, it returns 269, which is NOT LISTED in the array of ALL audio devices
- (AudioDeviceID)defaultInputDevice
{
AudioDeviceID defaultAudioDevice;
UInt32 propertySize = 0;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
propertyAOPA.mSelector = kAudioHardwarePropertyDefaultInputDevice;
propertySize = sizeof(AudioDeviceID);
status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, &propertySize, &defaultAudioDevice);
if(status)
{ //Error
NSLog(#"Error %d retreiving default input device",status);
return 0;
}
return defaultAudioDevice;
}
To further confuse things, if I manually switch my input to "Line In" and re-run the program, I get an ID of 259 when querying for the default input device, which is listed in the array of all devices.
So, to summarize:
I am attempting to interact with the input devices in my system. If I try to interact with device ID 261 which is my "Built-In Microphone", nothing happens. If I try to interact with device ID 269 which is, apparently, a phantom ID, my built-in microphone is affected. The 269 ID is returned when I query the system for the default input device, but it is not listed when I query the system for a list of all devices.
Does anyone know what is happening? Am I simply going insane?
Thanks in advance!
Fixed it.
First off, the phantom device ID was simply a virtual device the system was using.
Secondly, the reason I couldn't mute or otherwise control the actual devices was that I was using AudioHardwareServiceSetPropertyData instead of AudioObjectSetPropertyData.
It all works now.
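For anyone landing here later, a minimal sketch of that second point (muting an input device through the AudioObject API rather than the AudioHardwareService one):
// Mute/unmute an input device. Per the fix above, AudioObjectSetPropertyData
// is the call that actually affects the hardware device object.
- (BOOL)setInputDevice:(AudioDeviceID)deviceID muted:(BOOL)muted
{
    AudioObjectPropertyAddress propertyAOPA = {
        kAudioDevicePropertyMute,
        kAudioDevicePropertyScopeInput,
        kAudioObjectPropertyElementMaster
    };
    UInt32 value = muted ? 1 : 0;
    OSStatus status = AudioObjectSetPropertyData(deviceID, &propertyAOPA,
                                                 0, NULL, sizeof(value), &value);
    return status == noErr;
}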
