Does anyone know how to stream audio to multiple AirPlay destinations? Apparently this was possible through Core Audio at some point in the past, but on 10.9 and 10.10 it no longer seems to work. iTunes does it, so what's the secret? Here is some code I tried to see if I could get it working:
OSStatus err = 0;
SSAudioSource * targetSource = airplayDevice.airplaySources[0];
AudioDeviceID airPlayDeviceID = targetSource.deviceID;
SSAudioSource * source1 = airplayDevice.airplaySources[0];
SSAudioSource * source2 = airplayDevice.airplaySources[1];
SSAudioSource * source3 = airplayDevice.airplaySources[2];
// data source IDs are plain UInt32s
UInt32 alldevices[] = {source3.sourceID, source2.sourceID, source1.sourceID};
AudioObjectPropertyAddress addr;
addr.mSelector = kAudioDevicePropertyDataSource;
addr.mScope = kAudioDevicePropertyScopeOutput;
addr.mElement = kAudioObjectPropertyElementMaster;
// Set the 'AirPlay' device to point to all of its data sources...
err = AudioObjectSetPropertyData(airPlayDeviceID, &addr, 0, NULL, sizeof(alldevices), alldevices);
AudioObjectPropertyAddress audioDevicesAddress = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
// ...now set the system output to point at the 'AirPlay' device
err = AudioObjectSetPropertyData(kAudioObjectSystemObject, &audioDevicesAddress, 0, NULL, sizeof(airPlayDeviceID), &airPlayDeviceID);
No matter how I arrange the sources in the array, sound only comes out of the first one (index 0). So what's the secret?
Thanks
I raised a bug report with Apple for this back in July and got a reply in October:
"Engineering has determined that there are no plans to address this issue."
I've gone back to Apple asking why the functionality was removed, but I'm not hopeful for a (timely) response.
For what it's worth, I think your approach is correct; it's similar to the way I had it working in an app in the past. I suspect iTunes uses Audio Units or something similar to drive multiple speakers.
I've got a CoreAudio-based MacOS/X program that allows the user to select an input audio device and an output audio device, and (if the user didn't choose the same device for both input and output) my program creates a private aggregate audio device and uses it to receive the audio, process it, and then send it out for playback.
That's all working great, but there is one minor problem -- if the selected input-device also has some outputs associated with its hardware, those outputs show up as part of the aggregate device's output-channels, which isn't the behavior I want. Similarly, if the selected output-device also has some inputs associated with its hardware, those inputs will show up as input channels in the aggregate device's inputs, which I also don't want.
My question is, is there any way to tell CoreAudio not to include the inputs or outputs of a sub-device in the aggregate device I'm constructing? (my fallback solution would be to modify my audio-rendering callback to ignore the unwanted audio channels, but that seems less than elegant, so I'm curious if there is a better way to handle it)
My function that creates the aggregate device is below, in case it is relevant:
// This code was adapted from the example code at : https://web.archive.org/web/20140716012404/http://daveaddey.com/?p=51
ConstCoreAudioDeviceRef CoreAudioDevice :: CreateAggregateDevice(const ConstCoreAudioDeviceInfoRef & inputCadi, const ConstCoreAudioDeviceInfoRef & outputCadi, bool require96kHz, int32 optRequiredBufferSizeFrames)
{
OSStatus osErr = noErr;
UInt32 outSize;
Boolean outWritable;
//-----------------------
// Start to create a new aggregate by getting the base audio hardware plugin
//-----------------------
osErr = AudioHardwareGetPropertyInfo(kAudioHardwarePropertyPlugInForBundleID, &outSize, &outWritable);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
AudioValueTranslation pluginAVT;
CFStringRef inBundleRef = CFSTR("com.apple.audio.CoreAudio");
AudioObjectID pluginID;
pluginAVT.mInputData = &inBundleRef;
pluginAVT.mInputDataSize = sizeof(inBundleRef);
pluginAVT.mOutputData = &pluginID;
pluginAVT.mOutputDataSize = sizeof(pluginID);
osErr = AudioHardwareGetProperty(kAudioHardwarePropertyPlugInForBundleID, &outSize, &pluginAVT);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Create a CFDictionary for our aggregate device
//-----------------------
CFMutableDictionaryRef aggDeviceDict = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFStringRef aggregateDeviceNameRef = CFSTR("My Aggregate Device");
CFStringRef aggregateDeviceUIDRef = CFSTR("com.mycompany.myaggregatedevice");
// add the name of the device to the dictionary
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceNameKey), aggregateDeviceNameRef);
// add our choice of UID for the aggregate device to the dictionary
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceUIDKey), aggregateDeviceUIDRef);
if (IsDebugFlagEnabled("public_cad_device") == false)
{
// make it private so that we don't have the user messing with it
int value = 1;
CFNumberRef isPrivateNum = CFNumberCreate(NULL, kCFNumberIntType, &value);
CFDictionaryAddValue(aggDeviceDict, CFSTR(kAudioAggregateDeviceIsPrivateKey), isPrivateNum);
CFRelease(isPrivateNum);  // the dictionary retains its values, so release our reference
}
//-----------------------
// Create a CFMutableArray for our sub-device list
//-----------------------
// we need to append the UID for each device to a CFMutableArray, so create one here
CFMutableArrayRef subDevicesArray = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
// add the sub-devices to our aggregate device
const CFStringRef inputDeviceUID = inputCadi()->GetPersistentUID().ToCFStringRef();
const CFStringRef outputDeviceUID = outputCadi()->GetPersistentUID().ToCFStringRef();
CFArrayAppendValue(subDevicesArray, inputDeviceUID);
CFArrayAppendValue(subDevicesArray, outputDeviceUID);
//-----------------------
// Feed the dictionary to the plugin, to create a blank aggregate device
//-----------------------
AudioObjectPropertyAddress pluginAOPA;
pluginAOPA.mSelector = kAudioPlugInCreateAggregateDevice;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
UInt32 outDataSize;
osErr = AudioObjectGetPropertyDataSize(pluginID, &pluginAOPA, 0, NULL, &outDataSize);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
AudioDeviceID outAggregateDevice;
osErr = AudioObjectGetPropertyData(pluginID, &pluginAOPA, sizeof(aggDeviceDict), &aggDeviceDict, &outDataSize, &outAggregateDevice);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Set the sub-device list
//-----------------------
pluginAOPA.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
outDataSize = sizeof(CFMutableArrayRef);
osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &subDevicesArray);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Set the master device
//-----------------------
// set the master device manually (this is the device which will act as the master clock for the aggregate device)
// pass in the UID of the device you want to use
pluginAOPA.mSelector = kAudioAggregateDevicePropertyMasterSubDevice;
pluginAOPA.mScope = kAudioObjectPropertyScopeGlobal;
pluginAOPA.mElement = kAudioObjectPropertyElementMaster;
outDataSize = sizeof(outputDeviceUID);
osErr = AudioObjectSetPropertyData(outAggregateDevice, &pluginAOPA, 0, NULL, outDataSize, &outputDeviceUID);
if (osErr != noErr) return ConstCoreAudioDeviceRef();
//-----------------------
// Clean up
//-----------------------
// release the CF objects we have created - we don't need them any more
CFRelease(aggDeviceDict);
CFRelease(subDevicesArray);
// release the device UID CFStringRefs
CFRelease(inputDeviceUID);
CFRelease(outputDeviceUID);
ConstCoreAudioDeviceInfoRef infoRef = CoreAudioDeviceInfo::GetAudioDeviceInfo(outAggregateDevice);
if (infoRef())
{
ConstCoreAudioDeviceRef ret(new CoreAudioDevice(infoRef, true));
return ((ret())&&(SetupSimpleCoreAudioDeviceAux(ret()->GetDeviceInfo(), require96kHz, optRequiredBufferSizeFrames, false).IsOK())) ? ret : ConstCoreAudioDeviceRef();
}
else return ConstCoreAudioDeviceRef();
}
There are ways to handle the channel mapping (which is basically what you're describing), but I doubt they count as a "better" way in your case.
Such functionality is covered in the AudioToolbox framework using Audio Units; the kAudioUnitSubType_HALOutput AudioUnit (AUComponent.h) is especially interesting here.
Using this type of AudioUnit you can send and receive audio to and from a specific audio device in a specified channel format, and when the desired channel layout doesn't match the channel layout of the device you can do channel mapping.
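To illustrate, here is a minimal sketch (not from your code; it follows the general AUHAL pattern described in the tech note linked below, and OpenInputUnit is just a name I made up) of binding a HALOutput unit to one specific device for input; channel remapping would then be added to the same unit with kAudioOutputUnitProperty_ChannelMap:
// Hypothetical sketch: open an AUHAL (kAudioUnitSubType_HALOutput) unit,
// enable its input side, disable its output side, and bind it to one device.
#include <AudioUnit/AudioUnit.h>

static OSStatus OpenInputUnit(AudioDeviceID inputDevice, AudioUnit *outUnit)
{
    AudioComponentDescription desc = {0};
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_HALOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    OSStatus err = AudioComponentInstanceNew(comp, outUnit);
    if (err != noErr) return err;

    UInt32 enable = 1;   // element 1 is the input side of an AUHAL
    err = AudioUnitSetProperty(*outUnit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Input, 1, &enable, sizeof(enable));
    if (err != noErr) return err;

    UInt32 disable = 0;  // element 0 is the output side; turn it off for a pure input unit
    err = AudioUnitSetProperty(*outUnit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Output, 0, &disable, sizeof(disable));
    if (err != noErr) return err;

    // Attach the unit to the chosen hardware device.
    err = AudioUnitSetProperty(*outUnit, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0,
                               &inputDevice, sizeof(inputDevice));
    if (err != noErr) return err;

    return AudioUnitInitialize(*outUnit);
}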
To get some technical details have a look at:
https://developer.apple.com/library/archive/technotes/tn2091/_index.html
Please note that a lot of AudioToolbox is in the process of being replaced by AVAudioEngine.
So, in your case I think it would be easier to do manual channel mapping by just ignoring the samples you don't need.
Also, I'm not sure whether Core Audio hands you pre-silenced output buffers, so to be safe consider silencing them yourself.
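To make the "just ignore them" option concrete, here is a rough sketch of an aggregate-device IOProc that only fills the output buffers you actually want and zeroes the rest. BufferIsWantedOutput() is a placeholder for however you decide which stream belongs to which sub-device:
// Hypothetical sketch: zero the output buffers that belong to the unwanted
// sub-device and only fill the ones belonging to the real output device.
#include <CoreAudio/CoreAudio.h>
#include <string.h>

static Boolean BufferIsWantedOutput(UInt32 bufferIndex); // placeholder helper

static OSStatus MyIOProc(AudioObjectID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; i++)
    {
        AudioBuffer *buf = &outOutputData->mBuffers[i];
        if (BufferIsWantedOutput(i))
        {
            // ...fill buf->mData with your processed audio here...
        }
        else
        {
            memset(buf->mData, 0, buf->mDataByteSize); // keep unwanted streams silent
        }
    }
    return noErr;
}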
EDIT
Looking at the docs in AudioHardware.h there seems to be a way of enabling and disabling streams of a particular IOProc.
When OS X creates an aggregate, it puts all the channels of the different sub-devices in different streams, so in your case you should be able to disable the stream which contains the inputs of the output device and, vice versa, disable the stream which contains the outputs of the input device.
For this have a look at AudioHardwareIOProcStreamUsage and kAudioDevicePropertyIOProcStreamUsage both in AudioHardware.h
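I haven't verified this end to end, but going by the comments in AudioHardware.h, a sketch of turning one stream off for a given IOProc might look roughly like this (DisableStream is my name; ioProcID is whatever AudioDeviceCreateIOProcID gave you; error handling omitted):
// Hypothetical sketch: disable one stream of an IOProc on the aggregate device.
#include <CoreAudio/CoreAudio.h>
#include <stdlib.h>

static void DisableStream(AudioDeviceID aggregate, AudioDeviceIOProcID ioProcID,
                          AudioObjectPropertyScope scope, UInt32 streamToDisable)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyIOProcStreamUsage, scope, kAudioObjectPropertyElementMaster
    };

    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(aggregate, &addr, 0, NULL, &size);
    AudioHardwareIOProcStreamUsage *usage = (AudioHardwareIOProcStreamUsage *)malloc(size);

    // The struct tells the HAL which IOProc we are asking about.
    usage->mIOProc = (void *)ioProcID;
    AudioObjectGetPropertyData(aggregate, &addr, 0, NULL, &size, usage);

    if (streamToDisable < usage->mNumberStreams)
    {
        usage->mStreamIsOn[streamToDisable] = 0; // 0 = don't deliver this stream to the IOProc
        AudioObjectSetPropertyData(aggregate, &addr, 0, NULL, size, usage);
    }
    free(usage);
}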
I found the HALLab utility from Apple very useful in finding out about the actual streams.
(https://developer.apple.com/download/more/ and search for "Audio Tools for Xcode")
I need access to audio data from the microphone on a MacBook. I have an example program for recording microphone data, based on the one in "Learning Core Audio". When I run this program and break in the callback routine I can see the inBuffer pointer and the mAudioData pointer. However, I am having a heck of a time making sense of the data. I've tried casting the void* mAudioData pointer to SInt16, to SInt32 and to float, and tried a number of endian conversions, all with nonsense-looking results. What I need to know definitively is the number format for the data in the buffer. The example actually works, writing microphone data to a file which I can play, so I know that real audio is being recorded.
AudioStreamBasicDescription recordFormat;
memset(&recordFormat,0,sizeof(recordFormat));
//recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerPacket = recordFormat.mBytesPerFrame = recordFormat.mChannelsPerFrame * sizeof(SInt16);
recordFormat.mFramesPerPacket = 1;
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
0,
NULL,
&propSize,
&recordFormat),
"AudioFormatProperty failed");
//set up queue
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat,
MyAQInputCallback,
&recorder,
NULL,
kCFRunLoopCommonModes,
0,
&queue),
"AudioQueueNewInput failed");
UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
kAudioConverterCurrentOutputStreamDescription,
&recordFormat,
&size), "Couldn't get queue's format");
I'm learning how to build OS X applications, and I was wondering if there is a way to check whether any application on the system is currently outputting audio? Thanks
I think this can be checked with the kAudioDevicePropertyDeviceIsRunningSomewhere property.
From the header doc:
A UInt32 where 1 means that the AudioDevice is running in at least one process on the system and 0 means that it isn't running at all.
Pseudo-y code:
bool isRunningSomewhere(AudioDeviceID deviceId) {
    UInt32 val = 0;
    UInt32 size = sizeof(val);
    AudioObjectPropertyAddress pa = { kAudioDevicePropertyDeviceIsRunningSomewhere,
                                      kAudioObjectPropertyScopeGlobal,
                                      kAudioObjectPropertyElementMaster };
    OSStatus err = AudioObjectGetPropertyData(deviceId, &pa, 0, NULL, &size, &val);
    return err == noErr && val == 1;
}
This should tell you if the device is being used (i.e. has an active IOProc), but it won't tell you whether that IOProc is just sending silence.
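As a rough usage sketch (reusing the helper above; error handling omitted), you could walk every device on the system and see whether any of them is running:
// Hypothetical sketch: returns true if any audio device on the system is in use.
#include <CoreAudio/CoreAudio.h>
#include <stdbool.h>
#include <stdlib.h>

bool isAnyDeviceRunning(void)
{
    AudioObjectPropertyAddress pa = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &pa, 0, NULL, &size);

    UInt32 count = size / sizeof(AudioDeviceID);
    AudioDeviceID *devices = (AudioDeviceID *)malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &pa, 0, NULL, &size, devices);

    bool running = false;
    for (UInt32 i = 0; i < count && !running; i++)
        running = isRunningSomewhere(devices[i]);

    free(devices);
    return running;
}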
This can't be done at the user application level. It might be possible by installing an OS X kext (kernel extension) or a custom audio device driver, which requires sudo privileges and possibly a reboot.
I need to be notified, when a new audio device appears on OS X. I'm not sure where to start. Can Core Audio do this for me, or do I need to get down to a lower level with for instance IO Kit?
You can do this by observing kAudioHardwarePropertyDevices. The code looks roughly like:
AudioObjectPropertyAddress propertyAddress = {
.mSelector = kAudioHardwarePropertyDevices,
.mScope = kAudioObjectPropertyScopeGlobal,
.mElement = kAudioObjectPropertyElementMaster
};
OSStatus result = AudioObjectAddPropertyListener(kAudioObjectSystemObject, &propertyAddress, myAudioObjectPropertyListenerProc, NULL);
In myAudioObjectPropertyListenerProc you can determine what devices are currently available.
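For illustration, a minimal listener proc could simply re-query the device list when it fires; something roughly like this (the logging is just an example, and note the proc may be invoked on an arbitrary thread):
// Hypothetical sketch: react to the device list changing.
#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

static OSStatus myAudioObjectPropertyListenerProc(AudioObjectID inObjectID,
                                                  UInt32 inNumberAddresses,
                                                  const AudioObjectPropertyAddress *inAddresses,
                                                  void *inClientData)
{
    AudioObjectPropertyAddress propertyAddress = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 dataSize = 0;
    OSStatus result = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject,
                                                     &propertyAddress, 0, NULL, &dataSize);
    if (result == noErr)
    {
        UInt32 deviceCount = dataSize / sizeof(AudioDeviceID);
        // Compare against a previously saved list to work out what was added
        // or removed; here we just report the new count.
        fprintf(stderr, "Audio device list changed, now %u devices\n", (unsigned)deviceCount);
    }
    return noErr;
}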
So here's what is going on.
I am attempting to work with Core Audio, specifically input devices. I want to mute, change volume, etc, etc. I've encountered something absolutely bizarre that I cannot figure out. Thus far, google has been of no help.
When I query the system and ask for a list of all audio devices, I am returned an array of device IDs. In this case, 261, 259, 263, 257.
Using kAudioDevicePropertyDeviceName, I get the following:
261: Built-in Microphone
259: Built-in Input
263: Built-in Output
257: iPhoneSimulatorAudioDevice
This is all well and good.
// This method returns an NSArray of all the audio devices on the system, both input and output
// On my system, it returns 261, 259, 263, 257
- (NSArray*)getAudioDevices
{
AudioObjectPropertyAddress propertyAddress = {
kAudioHardwarePropertyDevices,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
UInt32 dataSize = 0;
OSStatus status = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
if(kAudioHardwareNoError != status)
{
MZLog(#"Unable to get number of audio devices. Error: %d",status);
return NULL;
}
UInt32 deviceCount = dataSize / sizeof(AudioDeviceID);
AudioDeviceID *audioDevices = malloc(dataSize);
status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, audioDevices);
if(kAudioHardwareNoError != status)
{
MZLog(#"AudioObjectGetPropertyData failed when getting device IDs. Error: %d",status);
free(audioDevices), audioDevices = NULL;
return NULL;
}
NSMutableArray* devices = [NSMutableArray array];
for(UInt32 i = 0; i < deviceCount; i++)
{
MZLog(#"device found: %d",audioDevices[i]);
[devices addObject:[NSNumber numberWithInt:audioDevices[i]]];
}
free(audioDevices);
return [NSArray arrayWithArray:devices];
}
The problem crops up when I then query the system and ask it for the ID of the default input device. This method returns an ID of 269, which is not listed in the array of all devices.
If I attempt to use kAudioDevicePropertyDeviceName to get the name of the device, I am returned an empty string. Although it doesn't appear to have a name, if I mute this device ID, my built-in microphone will mute. Conversely, if I mute the 261 ID, which is named "Built-In Microphone", my microphone does not mute.
// Gets the current default audio input device
// On my system, it returns 269, which is NOT LISTED in the array of ALL audio devices
- (AudioDeviceID)defaultInputDevice
{
AudioDeviceID defaultAudioDevice;
UInt32 propertySize = 0;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
propertyAOPA.mSelector = kAudioHardwarePropertyDefaultInputDevice;
propertySize = sizeof(AudioDeviceID);
status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, &propertySize, &defaultAudioDevice);
if(status)
{ //Error
NSLog(#"Error %d retreiving default input device",status);
return 0;
}
return defaultAudioDevice;
}
To further confuse things, if I manually switch my input to "Line In" and re-run the program, I get an ID of 259 when querying for the default input device, which is listed in the array of all devices.
So, to summarize:
I am attempting to interact with the input devices in my system. If I try to interact with device ID 261 which is my "Built-In Microphone", nothing happens. If I try to interact with device ID 269 which is, apparently, a phantom ID, my built-in microphone is affected. The 269 ID is returned when I query the system for the default input device, but it is not listed when I query the system for a list of all devices.
Does anyone know what is happening? Am I simply going insane?
Thanks in advance!
Fixed it.
First off, the phantom device ID was simply a virtual device the system was using.
Secondly, the reason I couldn't mute or do anything with the actual devices was that I was using AudioHardwareServiceSetPropertyData instead of AudioObjectSetPropertyData.
It all works now.
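For reference, a minimal sketch of muting an input device through AudioObjectSetPropertyData (the helper name is mine; error handling omitted):
// Hypothetical sketch: mute or unmute the input side of a device.
#include <CoreAudio/CoreAudio.h>

static OSStatus SetInputMute(AudioDeviceID deviceID, Boolean mute)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyMute,
        kAudioDevicePropertyScopeInput,    // we're changing the input side
        kAudioObjectPropertyElementMaster
    };
    UInt32 value = mute ? 1 : 0;
    return AudioObjectSetPropertyData(deviceID, &addr, 0, NULL, sizeof(value), &value);
}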