I am currently using the Mac OS X Audio Queue Services API for audio recording and sound analysis. It works fine using the default mic input.
If there is more than one microphone plugged into the Mac (USB, headset jack, etc.), is there a way to programmatically enumerate and select which mic is to be used for audio input within an application? (E.g. not have to send the user to the System Preferences panel, which may affect a user's other audio applications.) If so, which APIs should be used to select the mic input?
To enumerate the available input devices, please see my answer to "AudioObjectGetPropertyData to get a list of input devices".
Once you've determined the input device you'd like to use, you can set the kAudioQueueProperty_CurrentDevice property to the device's UID.
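For illustration, here is a minimal sketch of that step, assuming you already have a recording AudioQueueRef and the AudioDeviceID you picked while enumerating (the function name and the trimmed error handling are just placeholders). The property takes the device's UID string and should be set before the queue is started:

    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreAudio/CoreAudio.h>

    // Point an existing recording queue at a specific input device.
    static OSStatus SetQueueInputDevice(AudioQueueRef queue, AudioDeviceID deviceID)
    {
        CFStringRef uid = NULL;
        UInt32 size = sizeof(uid);
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyDeviceUID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };

        // Ask the chosen device for its persistent UID string.
        OSStatus err = AudioObjectGetPropertyData(deviceID, &addr, 0, NULL, &size, &uid);
        if (err != noErr) return err;

        // Hand the UID to the queue; it will record from this device
        // instead of the system default input.
        err = AudioQueueSetProperty(queue, kAudioQueueProperty_CurrentDevice,
                                    &uid, sizeof(uid));
        CFRelease(uid);
        return err;
    }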
I fear not, because AQ is hard-coded to use the default input (to the best of my knowledge). AQ is fairly limited, and only iOS gives more control via Audio Sessions. However, you can use AUHAL to record from an arbitrary device:
http://developer.apple.com/library/mac/#technotes/tn2091/_index.html
You won't need Listing 4 from the technote above because you'll use the AudioDeviceID for the device you have chosen (presumably by getting the list of devices using AudioObjectGetPropertyDataSize/AudioObjectGetPropertyData and picking the one you want).
FWIW: if you decide that's too much, you can presumably still use AudioHardwareSetProperty to set kAudioHardwarePropertyDefaultInputDevice from your code - not what you wanted, but certainly less work...
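If you go that fallback route, a sketch of setting the default input device might look like the following. AudioHardwareSetProperty has since been deprecated, so this uses its AudioObjectSetPropertyData equivalent; note this changes the default for every application on the machine:

    #include <CoreAudio/CoreAudio.h>

    // Make the chosen device the system-wide default input.
    static OSStatus SetDefaultInputDevice(AudioDeviceID deviceID)
    {
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultInputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        return AudioObjectSetPropertyData(kAudioObjectSystemObject, &addr,
                                          0, NULL, sizeof(deviceID), &deviceID);
    }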
If you set up the Audio Queue to read from the default input device, it will read from the mic that is selected as the default in the System Preferences -> Sound -> Input tab.
I need to do some system-wide audio processing in my app.
I have installed Soundflower and selected it as my default output device in order to get the system audio. I know that Soundflower merely copies the mix buffer to a ThruBuffer and passes it to the apps so they can get it in their AudioDeviceIOProc callback.
What I don't understand is how to route the audio back to the Built-in Output device after I've done the audio processing. I have the Soundflower device as the default, and it produces silence when I try to route the audio to the default output unit. Maybe what I need is to create a Multi-Output device in my program, but I'm not sure how to do that.
You can create a multi-output device on OS X - they're called "aggregate devices". You can do it manually in the Audio MIDI Setup app and use that device in your app, or do it programmatically in your app.
If you do it in-app, example code seems to be rare. I cribbed the info I needed from this blog post.
NB: the post is very old; I had to go to the Internet Archive Wayback Machine to find it.
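For what it's worth, here is a rough sketch of the programmatic route using AudioHardwareCreateAggregateDevice (declared in CoreAudio's AudioHardware.h in newer versions of OS X). The device name, aggregate UID string, and sub-device UIDs are placeholders; fetch the real sub-device UIDs via kAudioDevicePropertyDeviceUID for the devices you want to combine:

    #include <CoreAudio/CoreAudio.h>

    // Programmatic equivalent of creating an aggregate device in Audio MIDI Setup.
    static OSStatus CreateAggregate(CFStringRef subUID1, CFStringRef subUID2,
                                    AudioObjectID *outAggregate)
    {
        // Each sub-device is described by a small dictionary keyed on its UID.
        const void *subKey[] = { CFSTR(kAudioSubDeviceUIDKey) };
        const void *v1[] = { subUID1 };
        const void *v2[] = { subUID2 };
        CFDictionaryRef sub1 = CFDictionaryCreate(NULL, subKey, v1, 1,
                                                  &kCFTypeDictionaryKeyCallBacks,
                                                  &kCFTypeDictionaryValueCallBacks);
        CFDictionaryRef sub2 = CFDictionaryCreate(NULL, subKey, v2, 1,
                                                  &kCFTypeDictionaryKeyCallBacks,
                                                  &kCFTypeDictionaryValueCallBacks);
        const void *subs[] = { sub1, sub2 };
        CFArrayRef subList = CFArrayCreate(NULL, subs, 2, &kCFTypeArrayCallBacks);

        // Top-level description: a display name, a unique UID of your choosing,
        // and the list of sub-devices to combine.
        const void *keys[] = {
            CFSTR(kAudioAggregateDeviceNameKey),
            CFSTR(kAudioAggregateDeviceUIDKey),
            CFSTR(kAudioAggregateDeviceSubDeviceListKey)
        };
        const void *vals[] = {
            CFSTR("My Aggregate"),              // placeholder name
            CFSTR("com.example.my-aggregate"),  // placeholder UID, any unique string
            subList
        };
        CFDictionaryRef desc = CFDictionaryCreate(NULL, keys, vals, 3,
                                                  &kCFTypeDictionaryKeyCallBacks,
                                                  &kCFTypeDictionaryValueCallBacks);

        OSStatus err = AudioHardwareCreateAggregateDevice(desc, outAggregate);

        CFRelease(desc);
        CFRelease(subList);
        CFRelease(sub1);
        CFRelease(sub2);
        return err;
    }

AudioHardwareDestroyAggregateDevice can be used to tear the device down again when your app quits.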
I am developing an application which does custom audio processing and sends the processed audio to the USB headset. My requirement is that the USB headset should not be visible to the user in the list of audio output devices in System Preferences. Using Apple's "SampleUSBAudioOverrideDriver" code-less kext sample code, I'm able to change the interface name, but I really need to hide it.
Is subclassing AppleUSBAudioDevice an option?
The recommended way to do pre-processing of a USB audio device's input and output streams in kernel space is to use the AppleUSBAudioPlugin API. This kext does not appear in the list of devices because it isn't an instance of IOAudioEngine, so there is no "hiding" involved.
I'm working on an application that plays audio on OS X. I'm able to list the available output devices with CoreAudio, but I have issues with a Bluetooth headset: even though the device is powered off and not connected, it's still listed in the OS X Sound preference pane, and therefore picked up by CoreAudio as a valid output.
I'd like not to display Bluetooth outputs if the corresponding device isn't already connected.
I've tried to check CoreAudio properties like these:
kAudioDevicePropertyDeviceIsAlive
kAudioDevicePropertyDeviceIsRunning
kAudioDevicePropertyDeviceIsRunningSomewhere
but there's no difference between the default output and the Bluetooth output.
Is this kind of detection something doable with CoreAudio?
For the benefit of future searchers, the way I've done it in the past is to:
Enumerate the detected devices
Query the kAudioDevicePropertyTransportType property for each AudioDeviceID, which will return a transport type ID constant
Match for the kAudioDeviceTransportTypeBluetooth or kAudioDeviceTransportTypeBluetoothLE type
That way you can determine the type of connection the device is using (USB, FireWire, etc.). You can find the full list of transport types in AudioHardwareBase.h
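A small helper along those lines, assuming you already have the AudioDeviceIDs from your enumeration (the function name is just for illustration):

    #include <CoreAudio/CoreAudio.h>
    #include <stdbool.h>

    // Returns true if the device reports a Bluetooth (or Bluetooth LE) transport.
    static bool DeviceIsBluetooth(AudioDeviceID deviceID)
    {
        UInt32 transport = 0;
        UInt32 size = sizeof(transport);
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyTransportType,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        if (AudioObjectGetPropertyData(deviceID, &addr, 0, NULL,
                                       &size, &transport) != noErr)
            return false;
        return transport == kAudioDeviceTransportTypeBluetooth ||
               transport == kAudioDeviceTransportTypeBluetoothLE;
    }

Run each enumerated AudioDeviceID through a check like this and skip the ones that come back true.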
There are two apps for OS X that allow you to pre-amplify audio before it gets played by the hardware: Audio Hijack (pre-amplifies output from particular applications) and Boom (pre-amplifies all system audio). These apps work by applying equalization to pre-existing audio streams - with a high pre-amp setting - before they are sent to the sound card.
My question is: how do you hijack the system audio stream and then send it along to the sound card? Is this somewhere in an API, or would it require altering a system library?
1) Create a standard sound device that shows up in the audio system preferences. This has to be in the form of a kernel extension (kext). It's difficult to create by just reading the Apple docs - try looking at an app called Soundflower.
2) Once you've loaded the kext and have the new audio device available, select it in preferences as the default output device for system audio.
3) Now you need to alter the audio and pipe it to the real system output. This can be done in an accompanying application that adds callback "IOProc" functions to a) the new device and b) your computer's built-in output device. You can then copy audio buffers from one device to the other to pipe the audio to your speakers. To increase the volume, multiply the samples in the buffer by a gain factor. See Soundflower's accompanying app.
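To make step 3 concrete, here is a sketch of the IOProc you might attach to the built-in output device, under the assumptions that the streams are Float32 and that some FIFO (MyRingBufferRead here is purely hypothetical) is being filled by a matching IOProc on the Soundflower device:

    #include <CoreAudio/CoreAudio.h>

    // Hypothetical FIFO filled by the IOProc attached to the Soundflower device.
    extern UInt32 MyRingBufferRead(void *dest, UInt32 maxBytes);

    // IOProc for the built-in output device: pull the captured system audio,
    // apply a gain factor, and write it into the buffers CoreAudio hands us.
    static OSStatus OutputIOProc(AudioObjectID          inDevice,
                                 const AudioTimeStamp  *inNow,
                                 const AudioBufferList *inInputData,
                                 const AudioTimeStamp  *inInputTime,
                                 AudioBufferList       *outOutputData,
                                 const AudioTimeStamp  *inOutputTime,
                                 void                  *inClientData)
    {
        const Float32 gain = 2.0f;   // the pre-amp factor

        for (UInt32 b = 0; b < outOutputData->mNumberBuffers; b++) {
            Float32 *out = (Float32 *)outOutputData->mBuffers[b].mData;
            UInt32 bytes = MyRingBufferRead(out, outOutputData->mBuffers[b].mDataByteSize);
            for (UInt32 i = 0; i < bytes / sizeof(Float32); i++)
                out[i] *= gain;      // "multiply the samples by a gain factor"
        }
        return noErr;
    }

    // Registration looks roughly like this (repeat the pattern on the
    // Soundflower device with an IOProc that writes into the ring buffer):
    //   AudioDeviceIOProcID procID;
    //   AudioDeviceCreateIOProcID(builtInOutputID, OutputIOProc, NULL, &procID);
    //   AudioDeviceStart(builtInOutputID, procID);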
I have two USB mice connected to my Mac, one of which I'm using as a scanner. I need access to the Generic X and Y data, but I don't want that data to move the cursor. How, under either the Carbon or Cocoa environments, do I tell the system to ignore the mouse as a pointing device?
Edit: after some digging I've found that I can turn off mouse position updating with the CGAssociateMouseAndMouseCursorPosition() function, but this does not allow me to specify a single mouse. Can anyone explain the OS X relationship between HID mouse devices and the cursor? There has to be a binding between the hardware and software on a device-by-device basis, but I can't find it.
I would look into writing a basic user-space driver for the mouse.
This will allow you direct access to the mouse as a USB device. You can also take control of the device from the system for your exclusive use.
There is some documentation here:
Working With USB Device Interfaces
To get you started, the setup steps to connect to a USB device go like this (I think; my IOKit is rusty):
include <IOKit/IOKitLib.h> and <IOKit/usb/IOUSBLib.h>
find the device you are interested in using IOServiceMatching(). This lets you find the correct USB device based on its properties, including things like vendor ID, etc. (the IORegistryExplorer tool is handy for inspecting these properties)
get a USB plugin instance (let's call it plugin) with IOCreatePlugInInterfaceForService()
use plugin from the previous step to get a device interface (let's call it device) using (*plugin)->QueryInterface()
device represents a connection handle to your USB device - open it first using either (*device)->USBDeviceOpen() or (*device)->USBDeviceOpenSeize(). From there you should be able to send/receive data.
Sounds like a lot, I know, and there might be an easier way, but this is what comes to mind. There may be some benefits to having this level of control of the device; not sure. Good luck.
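Putting those steps together, a rough sketch might look like the following; the vendor/product IDs you pass in are placeholders for your mouse's real ones (look them up in IORegistryExplorer), and error handling is minimal:

    #include <IOKit/IOKitLib.h>
    #include <IOKit/IOCFPlugIn.h>
    #include <IOKit/usb/IOUSBLib.h>
    #include <CoreFoundation/CoreFoundation.h>

    // Find a USB device by vendor/product ID and open it for exclusive access.
    static IOUSBDeviceInterface **OpenUSBDevice(SInt32 vendorID, SInt32 productID)
    {
        // Build a matching dictionary for the device we care about.
        CFMutableDictionaryRef matching = IOServiceMatching(kIOUSBDeviceClassName);
        CFNumberRef vid = CFNumberCreate(NULL, kCFNumberSInt32Type, &vendorID);
        CFNumberRef pid = CFNumberCreate(NULL, kCFNumberSInt32Type, &productID);
        CFDictionarySetValue(matching, CFSTR(kUSBVendorID), vid);
        CFDictionarySetValue(matching, CFSTR(kUSBProductID), pid);
        CFRelease(vid);
        CFRelease(pid);

        io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
        if (!service) return NULL;

        // Get the plug-in interface for the service...
        IOCFPlugInInterface **plugin = NULL;
        SInt32 score = 0;
        kern_return_t kr = IOCreatePlugInInterfaceForService(
            service, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID,
            &plugin, &score);
        IOObjectRelease(service);
        if (kr != KERN_SUCCESS || plugin == NULL) return NULL;

        // ...then ask it for the actual device interface.
        IOUSBDeviceInterface **device = NULL;
        (*plugin)->QueryInterface(plugin,
                                  CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                  (LPVOID *)&device);
        (*plugin)->Release(plugin);
        if (device == NULL) return NULL;

        // Open it. USBDeviceOpenSeize requests exclusive access, which is the
        // "take control of the device from the system" part.
        if ((*device)->USBDeviceOpenSeize(device) != kIOReturnSuccess) {
            (*device)->Release(device);
            return NULL;
        }
        return device;
    }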