How to get audio from the device audio input using an Audio Unit on macOS

I have spent quite some time trying to figure out how to capture the user's voice from the microphone using an Audio Unit, so that I can use it in the Audio Unit's recording callback, but I am still stuck.
- (OSStatus) setupMicInput {
    AudioObjectPropertyAddress addr;
    UInt32 size = sizeof(AudioDeviceID);
    AudioDeviceID deviceID = 0;
    addr.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    addr.mScope = kAudioObjectPropertyScopeGlobal;
    addr.mElement = kAudioObjectPropertyElementMaster;
    OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &deviceID);
    checkStatus(err);
    if (err == noErr) {
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &deviceID, size);
    }
    checkStatus(err);
    return err;
}
I get these errors:
2018-05-08 10:07:29.454485+0300 OsxSocketSound[1414:20839] [AudioHAL_Client] AudioHardware.cpp:578:AudioObjectGetPropertyDataSize: AudioObjectGetPropertyDataSize: no object with given ID 0
2018-05-08 10:07:29.454517+0300 OsxSocketSound[1414:20839] [AudioHAL_Client] AudioHardware.cpp:666:AudioObjectGetPropertyData: AudioObjectGetPropertyData: no object with given ID 0
2018-05-08 10:07:29.454715+0300 OsxSocketSound[1414:20839] [AudioHAL_Client] AudioHardware.cpp:3446:AudioDeviceSetProperty: AudioDeviceSetProperty: no device with given ID
2018-05-08 10:07:29.454738+0300 OsxSocketSound[1414:20839] 1610: ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702].
I would appreciate it if someone could help me with an example of an Audio Unit capturing microphone input. Thanks.

It seems your app's entitlements are not set properly. Under the Capabilities tab, enable Audio Input under App Sandbox (the com.apple.security.device.audio-input entitlement); on macOS 10.14 and later you also need an NSMicrophoneUsageDescription entry in your Info.plist. Please check.
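Since the question also asks for a capture example, here is a minimal AUHAL input sketch. It is untested and error handling is omitted; the render callback name (inputCallback) is an assumption you would replace with your own:

// Minimal AUHAL microphone-capture sketch (assumes an inputCallback you define).
AudioComponentDescription desc = {0};
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_HALOutput;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit audioUnit;
AudioComponentInstanceNew(comp, &audioUnit);

// AUHAL convention: input is element 1, output is element 0.
UInt32 enable = 1, disable = 0;
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &enable, sizeof(enable));
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &disable, sizeof(disable));

// Bind the default input device, as in setupMicInput above.
AudioObjectPropertyAddress addr = {
    kAudioHardwarePropertyDefaultInputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioDeviceID deviceID = 0;
UInt32 size = sizeof(deviceID);
AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &deviceID);
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global, 0, &deviceID, sizeof(deviceID));

// Register the input callback, then initialize and start the unit.
// Inside the callback you call AudioUnitRender() to pull the captured samples.
AURenderCallbackStruct cb = { inputCallback, NULL };
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global, 0, &cb, sizeof(cb));
AudioUnitInitialize(audioUnit);
AudioOutputUnitStart(audioUnit);

The key differences from setupMicInput above are enabling input on element 1 and disabling output on element 0 before binding the device.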

Related

VoiceProcessingIO Audio Unit adds an unexpected input stream to Built-in output device (macOS)

I work on a VoIP app on macOS and use the VoiceProcessingIO Audio Unit for audio processing such as echo cancellation and automatic gain control.
The problem is, when I initialize the audio unit, the list of Core Audio devices changes - not just by adding the new aggregate device that the VP audio unit uses for its needs, but also because the built-in output device (i.e. "Built-in MacBook Pro Speakers") now also appears as an input device, i.e. it has an unexpected input stream in addition to its output ones.
This is a list of INPUT devices (aka "microphones") I get from Core Audio before initialising my VP AU:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
This is the same list when my VP AU is initialised:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
DEVICE: INPUT 86 BuiltInSpeakerDevice /// WHY?
DEVICE: INPUT 98 VPAUAggregateAudioDevice-0x101046040
This is very frustrating because I need to display a list of devices in the app, and even though I can simply filter aggregate devices out of the device list (they are not usable with the VP AU anyway), I cannot exclude the built-in MacBook speaker device.
Maybe someone has already been through this and has a clue what's going on and whether it can be fixed - perhaps there is some kAudioObjectPropertyXX I need to check to exclude the device from the input list. Of course this might be a bug/feature on Apple's side, and I simply have to hack my way around it.
The VP AU works well, and the problem reproduces regardless of the devices used (I tried built-in and external/USB/Bluetooth alike). It reproduces on every macOS version I could test, from 10.13 up to and including 11.0, on different Macs, and with different audio device sets connected. I am curious that there is next to zero information on this problem available, which makes me think I did something wrong.
One more strange thing: while the VP AU is working, the HALLab app indicates something different: the built-in input has two more input streams (OK, I could live with it if that were all!). But it does not indicate that the built-in output has input streams added, as happens in my app.
Here is an extract from the C++ code showing how I set up the VP Audio Unit:
#define MAX_FRAMES_PER_CALLBACK 1024
AudioComponentInstance AvHwVoIP::getComponentInstance(OSType type, OSType subType) {
    AudioComponentDescription desc = {0};
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentSubType = subType;
    desc.componentType = type;

    AudioComponent ioComponent = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstance unit;
    OSStatus status = AudioComponentInstanceNew(ioComponent, &unit);
    if (status != noErr) {
        printf("Error: %d\n", status);
    }
    return unit;
}

void AvHwVoIP::enableIO(uint32_t enableIO, AudioUnit auDev) {
    setAudioUnitProperty(auDev,
                         kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input,
                         1,
                         &enableIO,
                         sizeof(enableIO));
    setAudioUnitProperty(auDev,
                         kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output,
                         0,
                         &enableIO,
                         sizeof(enableIO));
}

void AvHwVoIP::setDeviceAsCurrent(AudioUnit auDev, AudioUnitElement element, AudioObjectID devId) {
    // Set the current device of the AUHAL.
    // This should be done only after IO has been enabled on the AUHAL.
    setAudioUnitProperty(auDev,
                         kAudioOutputUnitProperty_CurrentDevice,
                         element == 0 ? kAudioUnitScope_Output : kAudioUnitScope_Input,
                         element,
                         &devId,
                         sizeof(AudioDeviceID));
}

void AvHwVoIP::setAudioUnitProperty(AudioUnit auDev,
                                    AudioUnitPropertyID inID,
                                    AudioUnitScope inScope,
                                    AudioUnitElement inElement,
                                    const void* __nullable inData,
                                    uint32_t inDataSize) {
    OSStatus status = AudioUnitSetProperty(auDev, inID, inScope, inElement, inData, inDataSize);
    if (noErr != status) {
        std::cout << "****** ::setAudioUnitProperty failed" << std::endl;
    }
}

void AvHwVoIP::start() {
    m_auVoiceProcesing = getComponentInstance(kAudioUnitType_Output, kAudioUnitSubType_VoiceProcessingIO);
    enableIO(1, m_auVoiceProcesing);
    m_format_description = SetAudioUnitStreamFormatFloat(m_auVoiceProcesing);
    SetAudioUnitCallbacks(m_auVoiceProcesing);
    setDeviceAsCurrent(m_auVoiceProcesing, 0, m_renderDeviceID); // output device AudioDeviceID here
    setDeviceAsCurrent(m_auVoiceProcesing, 1, m_capDeviceID);    // input device AudioDeviceID here
    setInputLevelListener();
    setVPEnabled(true);
    setAGCEnabled(true);

    UInt32 maximumFramesPerSlice = 0;
    UInt32 size = sizeof(maximumFramesPerSlice);
    OSStatus s1 = AudioUnitGetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, &size);
    printf("max frames per callback: %d\n", maximumFramesPerSlice);

    maximumFramesPerSlice = MAX_FRAMES_PER_CALLBACK;
    s1 = AudioUnitSetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, size);

    OSStatus status = AudioUnitInitialize(m_auVoiceProcesing);
    if (noErr != status) {
        printf("*** error AU initialize: %d", status);
    }
    status = AudioOutputUnitStart(m_auVoiceProcesing);
    if (noErr != status) {
        printf("*** AU start error: %d", status);
    }
}
And here is how I get my list of devices:
// Does this device have input/output streams?
bool hasStreamsForCategory(AudioObjectID devId, bool input)
{
    const AudioObjectPropertyScope scope = (input ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
    AudioObjectPropertyAddress propertyAddress{kAudioDevicePropertyStreams, scope, kAudioObjectPropertyElementWildcard};
    uint32_t dataSize = 0;
    OSStatus status = AudioObjectGetPropertyDataSize(devId,
                                                     &propertyAddress,
                                                     0,
                                                     NULL,
                                                     &dataSize);
    if (noErr != status)
        printf("%s: Error in AudioObjectGetPropertyDataSize: %d \n", __FUNCTION__, status);
    return (dataSize / sizeof(AudioStreamID)) > 0;
}

std::set<AudioDeviceID> scanCoreAudioDeviceUIDs(bool isInput)
{
    std::set<AudioDeviceID> deviceIDs{};

    // Find out how many audio devices there are.
    AudioObjectPropertyAddress propertyAddress = {kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster};
    uint32_t dataSize{0};
    OSStatus err = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
    if (err != noErr)
    {
        printf("%s: AudioObjectGetPropertyDataSize failed: %d\n", __FUNCTION__, err);
        return deviceIDs; // empty
    }

    // Calculate the number of devices available.
    uint32_t devicesAvailable = dataSize / sizeof(AudioObjectID);
    if (devicesAvailable < 1)
    {
        printf("%s: no Core Audio devices were found\n", __FUNCTION__);
        return deviceIDs; // empty
    }

    AudioObjectID devices[devicesAvailable]; // devices to get
    err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, devices);
    if (err != noErr)
    {
        printf("%s: AudioObjectGetPropertyData failed: %d\n", __FUNCTION__, err);
        return deviceIDs; // empty
    }

    for (uint32_t i = 0; i < devicesAvailable; ++i)
    {
        const bool hasCorrespondingStreams = hasStreamsForCategory(devices[i], isInput);
        if (!hasCorrespondingStreams) {
            continue;
        }
        printf("DEVICE: \t %s \t %d \t %s\n", isInput ? "INPUT" : "OUTPUT", devices[i], deviceUIDFromAudioDeviceID(devices[i]).c_str());
        deviceIDs.insert(devices[i]);
    }
    return deviceIDs;
}
Well, answering my own question four months later, now that Apple Feedback Assistant has responded to my request:
"There are two things you were noticing, both of which are expected and considered as implementation details of AUVP:
The speaker device has input stream - this is the reference tap stream for echo cancellation.
There is additional input stream under the built-in mic device - this is the raw mic streams enabled by AUVP.
For #1, We'd advise you to treat built-in speaker and (on certain Macs) headphone with special caution when determining whether it’s input/output device based on its input/output streams.
For #2, We'd advise you to ignore the extra streams on the device."
So they suggest doing exactly what I had done: determine the built-in output device before starting the AU and simply memorise it, ignoring any extra streams that appear on built-in devices while the VP AU is operating.
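For reference, one way to implement that special-casing is to classify devices by transport type rather than by their stream topology, so the built-in speaker stays an output device even while the VP AU adds its reference tap stream. A minimal sketch (untested, same Core Audio C API as above):

// Sketch: treat built-in devices specially by checking the transport type,
// rather than relying on input/output stream counts while the VP AU runs.
bool isBuiltInDevice(AudioObjectID devId)
{
    AudioObjectPropertyAddress addr{kAudioDevicePropertyTransportType,
                                    kAudioObjectPropertyScopeGlobal,
                                    kAudioObjectPropertyElementMaster};
    UInt32 transport = 0;
    UInt32 size = sizeof(transport);
    OSStatus status = AudioObjectGetPropertyData(devId, &addr, 0, NULL, &size, &transport);
    return (status == noErr) && (transport == kAudioDeviceTransportTypeBuiltIn);
}

Combined with memorising the built-in output's AudioDeviceID before the AU starts, this lets scanCoreAudioDeviceUIDs() skip the phantom input streams.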

How to exclude input or output channels from an aggregate CoreAudio device?

I've got a CoreAudio-based macOS program that allows the user to select an input audio device and an output audio device, and (if the user didn't choose the same device for both input and output) my program creates a private aggregate audio device and uses it to receive audio, process it, and then send it out for playback.
That's all working great, but there is one minor problem: if the selected input device also has some outputs associated with its hardware, those outputs show up among the aggregate device's output channels, which isn't the behavior I want. Similarly, if the selected output device also has some inputs associated with its hardware, those inputs show up among the aggregate device's input channels, which I also don't want.
My question is: is there any way to tell CoreAudio not to include the inputs or outputs of a sub-device in the aggregate device I'm constructing? (My fallback solution would be to modify my audio-rendering callback to ignore the unwanted audio channels, but that seems less than elegant, so I'm curious whether there is a better way to handle it.)
My function that creates the aggregate device is below, in case it is relevant:
// This code was adapted from the example code at : https://web.archive.org/web/20140716012404/http://daveaddey.com/?p=51
ConstCoreAudioDeviceRef CoreAudioDevice::CreateAggregateDevice(const ConstCoreAudioDeviceInfoRef & inputCadi, const ConstCoreAudioDeviceInfoRef & outputCadi, bool require96kHz, int32 optRequiredBufferSizeFrames)
{
    OSStatus osErr = noErr;
    UInt32 outSize;
    Boolean outWritable;

    //-----------------------
    // Start to create a new aggregate by getting the base audio hardware plugin
    //-----------------------
    osErr = AudioHardwareGetPropertyInfo(kAudioHardwarePropertyPlugInForBundleID, &outSize, &outWritable);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    AudioValueTranslation pluginAVT;
    CFStringRef inBundleRef = CFSTR("com.apple.audio.CoreAudio");
    AudioObjectID pluginID;

    pluginAVT.mInputData = &inBundleRef;
    pluginAVT.mInputDataSize = sizeof(inBundleRef);
    pluginAVT.mOutputData = &pluginID;
    pluginAVT.mOutputDataSize = sizeof(pluginID);

    osErr = AudioHardwareGetProperty(kAudioHardwarePropertyPlugInForBundleID, &outSize, &pluginAVT);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Create a CFDictionary for our aggregate device
    //-----------------------
    CFMutableDictionaryRef aggDeviceDict = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFStringRef aggregateDeviceNameRef = CFSTR("My Aggregate Device");
    CFStringRef aggregateDeviceUIDRef = CFSTR("com.mycompany.myaggregatedevice");

    // add the name of the device to the dictionary
    CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceNameKey), aggregateDeviceNameRef);

    // add our choice of UID for the aggregate device to the dictionary
    CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceUIDKey), aggregateDeviceUIDRef);

    if (IsDebugFlagEnabled("public_cad_device") == false)
    {
        // make it private so that we don't have the user messing with it
        int value = 1;
        CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceIsPrivateKey), CFNumberCreate(NULL, kCFNumberIntType, &value));
    }

    //-----------------------
    // Create a CFMutableArray for our sub-device list
    //-----------------------
    // we need to append the UID for each device to a CFMutableArray, so create one here
    CFMutableArrayRef subDevicesArray = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);

    // add the sub-devices to our aggregate device
    const CFStringRef inputDeviceUID = inputCadi()->GetPersistentUID().ToCFStringRef();
    const CFStringRef outputDeviceUID = outputCadi()->GetPersistentUID().ToCFStringRef();
    CFArrayAppendValue(subDevicesArray, inputDeviceUID);
    CFArrayAppendValue(subDevicesArray, outputDeviceUID);

    //-----------------------
    // Feed the dictionary to the plugin, to create a blank aggregate device
    //-----------------------
    AudioObjectPropertyAddress pluginAOPA;
    pluginAOPA.mSelector = kAudioPlugInCreateAggregateDevice;
    pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
    UInt32 outDataSize;

    osErr = AudioObjectGetPropertyDataSize(pluginID, &pluginAOPA, 0, NULL, &outDataSize);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    AudioDeviceID outAggregateDevice;
    osErr = AudioObjectGetPropertyData(pluginID, &pluginAOPA, sizeof(aggDeviceDict), &aggDeviceDict, &outDataSize, &outAggregateDevice);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Set the sub-device list
    //-----------------------
    pluginAOPA.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
    pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
    outDataSize = sizeof(CFMutableArrayRef);
    osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &subDevicesArray);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Set the master device
    //-----------------------
    // set the master device manually (this is the device which will act as the master clock for the aggregate device)
    // pass in the UID of the device you want to use
    pluginAOPA.mSelector = kAudioAggregateDevicePropertyMasterSubDevice;
    pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
    outDataSize = sizeof(outputDeviceUID);
    osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &outputDeviceUID);
    if (osErr != noErr) return ConstCoreAudioDeviceRef();

    //-----------------------
    // Clean up
    //-----------------------
    // release the CF objects we have created - we don't need them any more
    CFRelease(aggDeviceDict);
    CFRelease(subDevicesArray);

    // release the device UID CFStringRefs
    CFRelease(inputDeviceUID);
    CFRelease(outputDeviceUID);

    ConstCoreAudioDeviceInfoRef infoRef = CoreAudioDeviceInfo::GetAudioDeviceInfo(outAggregateDevice);
    if (infoRef())
    {
        ConstCoreAudioDeviceRef ret(new CoreAudioDevice(infoRef, true));
        return ((ret())&&(SetupSimpleCoreAudioDeviceAux(ret()->GetDeviceInfo(), require96kHz, optRequiredBufferSizeFrames, false).IsOK())) ? ret : ConstCoreAudioDeviceRef();
    }
    else return ConstCoreAudioDeviceRef();
}
There are ways to handle the channel mapping (which is essentially what you're describing), but I doubt it is a "better" way in your case.
Such functionality is covered in the AudioToolbox framework using Audio Units. Especially the kAudioUnitSubType_HALOutput AudioUnit (AUComponent.h) is interesting in this case.
Using this type of AudioUnit you can send and receive audio to and from a specific audio device in a specified channel format. When the desired channel layout doesn't match the channel layout of the device you can do channel mapping.
To get some technical details have a look at:
https://developer.apple.com/library/archive/technotes/tn2091/_index.html
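As a concrete illustration, mapping device channels onto the AUHAL's input side might look roughly like this (a sketch based on TN2091; auhalUnit and the two-channel layout are assumptions, and the scope/element differ for the output direction):

// Sketch: AUHAL channel map. One SInt32 per unit channel, holding the index
// of the device channel to pull from, or -1 to leave that unit channel silent.
SInt32 channelMap[2] = { 0, 1 }; // feed unit channels 0/1 from device channels 0/1
OSStatus status = AudioUnitSetProperty(auhalUnit, // assumed existing AUHAL instance
                                       kAudioOutputUnitProperty_ChannelMap,
                                       kAudioUnitScope_Output,
                                       1, // element 1 = the AUHAL's input side
                                       channelMap,
                                       sizeof(channelMap));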
Please note that a lot of the AudioToolbox framework is in the process of being replaced by AVAudioEngine.
So, in your case I think it would be easier to do manual channel mapping by just ignoring the samples you don't need.
Also, I'm not sure whether CoreAudio provides silenced output buffers. To be sure, consider silencing them yourself.
EDIT
Looking at the docs in AudioHardware.h there seems to be a way of enabling and disabling streams of a particular IOProc.
When OS X creates an aggregate, it puts all the channels of the different sub-devices in different streams, so in your case you should be able to disable the stream which contains the inputs of the output device and, vice versa, disable the stream which contains the outputs of the input device.
For this have a look at AudioHardwareIOProcStreamUsage and kAudioDevicePropertyIOProcStreamUsage both in AudioHardware.h
I found the HALLab utility from Apple very useful in finding out about the actual streams.
(https://developer.apple.com/download/more/ and search for "Audio Tools for Xcode")
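A sketch of what that could look like, based on the comments in AudioHardware.h (untested; aggregateDeviceID, myIOProc and the streamBelongsToOutputSubDevice() helper are placeholders):

// Query the current stream usage for our IOProc on the aggregate's input side.
AudioObjectPropertyAddress addr{kAudioDevicePropertyIOProcStreamUsage,
                                kAudioDevicePropertyScopeInput,
                                kAudioObjectPropertyElementMaster};
UInt32 size = 0;
AudioObjectGetPropertyDataSize(aggregateDeviceID, &addr, 0, NULL, &size);
AudioHardwareIOProcStreamUsage *usage = (AudioHardwareIOProcStreamUsage *)malloc(size);
usage->mIOProc = (void *)myIOProc; // per the header, identify the IOProc before the Get
AudioObjectGetPropertyData(aggregateDeviceID, &addr, 0, NULL, &size, usage);

// Turn off the input streams that actually belong to the output sub-device.
for (UInt32 i = 0; i < usage->mNumberStreams; ++i) {
    if (streamBelongsToOutputSubDevice(i)) // placeholder for your own stream inspection
        usage->mStreamIsOn[i] = 0;
}
AudioObjectSetPropertyData(aggregateDeviceID, &addr, 0, NULL, size, usage);
free(usage);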

HALOutput in AUGraph: select and configure a specific output device

I successfully managed to build a complex AUGraph that I'm able to reconfigure on the fly, and all is working well.
I'm now facing a wall with what seems a very simple task: selecting a specific output device.
I'm able to get the deviceUID and ID thanks to this post: AudioObjectGetPropertyData to get a list of input devices (which I've modified to get output devices) and to the code below (I can't remember where I found it, unfortunately):
- (AudioDeviceID) deviceIDWithUID:(NSString *)uid
{
    AudioDeviceID myDevice;
    AudioValueTranslation trans;
    CFStringRef myKnownUID = (__bridge CFStringRef)uid;
    trans.mInputData = &myKnownUID;
    trans.mInputDataSize = sizeof(CFStringRef);
    trans.mOutputData = &myDevice;
    trans.mOutputDataSize = sizeof(AudioDeviceID);
    UInt32 size = sizeof(AudioValueTranslation);
    AudioHardwareGetProperty(kAudioHardwarePropertyDeviceForUID,
                             &size,
                             &trans);
    return myDevice;
}
I get the AudioDeviceID from this method and store it in an NSDictionary. I can NSLog it, and when I convert it to hexadecimal it matches the ID shown in HALLab.
But when I configure my unit (see code below) the graph only plays on the default device (the one selected in Sound Preferences).
AudioComponent comp = AudioComponentFindNext(NULL, &_componentDescription);
if (comp == NULL) {
    printf("Can't get output unit");
    exit(-1);
}
CheckError(AudioComponentInstanceNew(comp, &_auUnit),
           "Couldn't open component for output Unit");

UInt32 disableFlag = 0;
UInt32 enableFlag = 1;
AudioUnitScope outputBus = 0;
AudioUnitScope inputBus = 1;

CheckError(AudioUnitSetProperty(_auUnit,
                                kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Output,
                                outputBus,
                                &enableFlag,
                                sizeof(enableFlag)), "AudioUnitSetProperty[kAudioOutputUnitProperty_EnableIO] failed - enable Output");

CheckError(AudioUnitSetProperty(_auUnit,
                                kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Input,
                                inputBus,
                                &disableFlag,
                                sizeof(disableFlag)), "AudioUnitSetProperty[kAudioOutputUnitProperty_EnableIO] failed - disable Input");

AudioDeviceID devID = (AudioDeviceID)[[[_devices objectAtIndex:0] objectForKey:@"deviceID"] unsignedIntValue];

CheckError(AudioUnitSetProperty(_auUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Output,
                                0,
                                &devID,
                                sizeof(AudioDeviceID)), "AudioUnitSetProperty[kAudioOutputUnitProperty_CurrentDevice] failed");
The AUGraph is already configured with all its units, the nodes are connected, and it's open. What am I doing wrong?
I would be very grateful for any clue to resolving this problem.

How do I set/get the volume level with Core Audio?

I want to be able to get and set the system volume level with Core Audio. I've followed the code on this other thread:
objective c audio meter
However, my call to AudioHardwareServiceHasProperty to find the kAudioHardwareServiceDeviceProperty_VirtualMasterVolume property returns false. Why is this happening, and how do I get around it? What approach should I take to getting and setting the system volume level with Core Audio?
Have you tried kAudioDevicePropertyVolumeScalar:
UInt32 channel = 1; // Channel 0 is master, if available
AudioObjectPropertyAddress prop = {
    kAudioDevicePropertyVolumeScalar,
    kAudioDevicePropertyScopeOutput,
    channel
};

if (!AudioObjectHasProperty(deviceID, &prop)) {
    // error
}

Float32 volume;
UInt32 dataSize = sizeof(volume);
OSStatus result = AudioObjectGetPropertyData(deviceID, &prop, 0, NULL, &dataSize, &volume);
if (kAudioHardwareNoError != result) {
    // error
}
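Setting the volume is symmetrical; a sketch reusing the same prop address (untested):

// Sketch: write a Float32 in [0.0, 1.0] back to the same property.
Float32 newVolume = 0.5f;
Boolean settable = false;
OSStatus err = AudioObjectIsPropertySettable(deviceID, &prop, &settable);
if (err == kAudioHardwareNoError && settable) {
    err = AudioObjectSetPropertyData(deviceID, &prop, 0, NULL,
                                     sizeof(newVolume), &newVolume);
}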

Core Audio and the Phantom Device ID

So here's what is going on.
I am attempting to work with Core Audio, specifically input devices. I want to mute them, change their volume, and so on. I've encountered something absolutely bizarre that I cannot figure out, and thus far Google has been of no help.
When I query the system and ask for a list of all audio devices, I am returned an array of device IDs. In this case, 261, 259, 263, 257.
Using kAudioDevicePropertyDeviceName, I get the following:
261: Built-in Microphone
259: Built-in Input
263: Built-in Output
257: iPhoneSimulatorAudioDevice
This is all well and good.
// This method returns an NSArray of all the audio devices on the system, both input and output.
// On my system, it returns 261, 259, 263, 257.
- (NSArray*)getAudioDevices
{
    AudioObjectPropertyAddress propertyAddress = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    UInt32 dataSize = 0;
    OSStatus status = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
    if (kAudioHardwareNoError != status)
    {
        MZLog(@"Unable to get number of audio devices. Error: %d", status);
        return NULL;
    }

    UInt32 deviceCount = dataSize / sizeof(AudioDeviceID);
    AudioDeviceID *audioDevices = malloc(dataSize);

    status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, audioDevices);
    if (kAudioHardwareNoError != status)
    {
        MZLog(@"AudioObjectGetPropertyData failed when getting device IDs. Error: %d", status);
        free(audioDevices), audioDevices = NULL;
        return NULL;
    }

    NSMutableArray* devices = [NSMutableArray array];
    for (UInt32 i = 0; i < deviceCount; i++)
    {
        MZLog(@"device found: %d", audioDevices[i]);
        [devices addObject:[NSNumber numberWithInt:audioDevices[i]]];
    }

    free(audioDevices);
    return [NSArray arrayWithArray:devices];
}
The problem crops up when I then query the system and ask it for the ID of the default input device. This method returns an ID of 269, which is not listed in the array of all devices.
If I attempt to use kAudioDevicePropertyDeviceName to get the name of the device, I am returned an empty string. Although it doesn't appear to have a name, if I mute this device ID, my built-in microphone will mute. Conversely, if I mute the 261 ID, which is named "Built-In Microphone", my microphone does not mute.
// Gets the current default audio input device.
// On my system, it returns 269, which is NOT LISTED in the array of ALL audio devices.
- (AudioDeviceID)defaultInputDevice
{
    AudioDeviceID defaultAudioDevice;
    UInt32 propertySize = 0;
    OSStatus status = noErr;

    AudioObjectPropertyAddress propertyAOPA;
    propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
    propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAOPA.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    propertySize = sizeof(AudioDeviceID);

    status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, &propertySize, &defaultAudioDevice);
    if (status)
    {
        // Error
        NSLog(@"Error %d retrieving default input device", status);
        return 0;
    }
    return defaultAudioDevice;
}
To further confuse things, if I manually switch my input to "Line In" and re-run the program, I get an ID of 259 when querying for the default input device, which is listed in the array of all devices.
So, to summarize:
I am attempting to interact with the input devices in my system. If I try to interact with device ID 261 which is my "Built-In Microphone", nothing happens. If I try to interact with device ID 269 which is, apparently, a phantom ID, my built-in microphone is affected. The 269 ID is returned when I query the system for the default input device, but it is not listed when I query the system for a list of all devices.
Does anyone know what is happening? Am I simply going insane?
Thanks in advance!
Fixed it.
First off, the phantom device ID was simply a virtual device the system was using.
Secondly, the reason I couldn't mute or do anything with the actual devices was because I was using AudioHardwareServiceSetPropertyData instead of AudioObjectSetPropertyData.
It all works now.
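For anyone landing here, muting via AudioObjectSetPropertyData might look like this (a sketch; deviceID is the input device you resolved earlier):

// Sketch: mute an input device with the AudioObject API rather than
// the AudioHardwareService calls.
AudioObjectPropertyAddress muteAddr = {
    kAudioDevicePropertyMute,
    kAudioDevicePropertyScopeInput,
    kAudioObjectPropertyElementMaster
};
UInt32 mute = 1; // 1 = muted, 0 = unmuted
OSStatus status = AudioObjectSetPropertyData(deviceID, &muteAddr, 0, NULL,
                                             sizeof(mute), &mute);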
