How can I dynamically select audio devices? - Windows

A friend of mine and I got into a conversation and realized Windows 7 is missing a key component to its per-application audio settings. You can set volume but you can't stipulate which device each application should use.
Some applications, such as Ventrilo or Skype, allow you to select which device to use; however, most applications simply rely on the current 'Default Audio Device.'
Is there a way to access this? What language would be best used to expose these kinds of functions? Thanks!

Ventrilo and Skype are able to choose which audio device to use because they are coded to directly specify audio output devices instead of just getting the default from the OS. For applications which are coded to use the default Windows device, you can of course change which device is the default device using the sound settings, but this will change the default for the whole system.
Setting different audio devices for separate applications that all use the default audio device isn't something Windows supports directly, and many applications use the DirectSound API, which complicates the situation further. However, some applications check which device is the default when they initialize and then output exclusively to that device. In this case, you could change the default sound device to one audio device, start a program, then change the default to another audio device, and the first program would continue to use the device that was the default when it started up.
That said, this is a pretty weak workaround and will only work for the specific applications that happen to be coded in the way described above.
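For what it's worth, applications that offer their own device picker (like the Ventrilo/Skype examples above) typically do it by enumerating the audio endpoints themselves, e.g. with the MMDevice/WASAPI APIs on Vista and later or with DirectSoundEnumerate, and then opening the chosen endpoint instead of asking for the default. Here is a minimal C++ sketch of the enumeration step, with error handling omitted for brevity:
```cpp
// Sketch: enumerate active render (output) endpoints with the MMDevice API,
// the mechanism an application can use to offer its own device picker
// instead of always opening the default endpoint.
#include <windows.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>
#include <cstdio>

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    IMMDeviceCollection* devices = nullptr;
    enumerator->EnumAudioEndpoints(eRender, DEVICE_STATE_ACTIVE, &devices);

    UINT count = 0;
    devices->GetCount(&count);
    for (UINT i = 0; i < count; ++i) {
        IMMDevice* device = nullptr;
        devices->Item(i, &device);

        IPropertyStore* props = nullptr;
        device->OpenPropertyStore(STGM_READ, &props);

        PROPVARIANT name;
        PropVariantInit(&name);
        props->GetValue(PKEY_Device_FriendlyName, &name);
        wprintf(L"%u: %s\n", i, name.pwszVal);

        // An application would keep the IMMDevice it wants and call
        // IMMDevice::Activate(__uuidof(IAudioClient), ...) on it instead of
        // using IMMDeviceEnumerator::GetDefaultAudioEndpoint.
        PropVariantClear(&name);
        props->Release();
        device->Release();
    }

    devices->Release();
    enumerator->Release();
    CoUninitialize();
    return 0;
}
```
This only helps for applications you write or can modify; it doesn't let you redirect a third-party application that always opens the default endpoint, which is exactly the gap described above.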

Related

How to implement and publish virtual audio driver to Apple App Store?

At 3:38-4:00 in the session video, Baek San Chang seems to say that AudioDriverKit will not be allowed to be used for virtual audio devices.
Video: https://developer.apple.com/videos/play/wwdc2021/10190/
Here is what he says:
Keep in mind that the sample code presented is purely for
demonstrative purposes and creates a virtual audio driver that is not
associated with a hardware device, and so entitlements will not be
granted for that kind of use case.
For virtual audio driver, where device is all that is needed, the
audio server plugin driver model should continue to be used.
The mention of sample code is a little confusing. Does he mean the entitlements for hardware access won't be granted for a virtual device? That would seem obvious.
But if he means the entitlements for driver kit extensions (com.apple.developer.driverkit and com.apple.developer.driverkit.allow-any-userclient-access) won't be granted for virtual audio devices, and this is why AudioServerPlugins should still be used, then that's another story.
Are we allowed to use AudioDriverKit Extension for Virtual Devices?
The benefit of having the extension bundled with the app rather than requiring an installer is a significant reason to use an extension if allowed.
I need to create a virtual audio driver that presents a virtual microphone and a virtual speaker to the user. The user can then select these virtual endpoints in 3rd party audio communication apps like Skype, Zoom etc. The virtual audio driver implementation then routes audio between physical devices (selected by the user in the virtual driver userspace control app) and the virtual devices.
It is a requirement that the virtual audio driver and its control app can be published to the Apple app store for users to download and install on their machine without any problems.
How should I go about this?
Apply for the entitlements straight away (don't lie on your request form obviously), wait until AudioDriverKit is out of beta, then file a Developer Tech Support TSI and explain what you're trying to do and ask what the policy is. I haven't seen any written policy on this, and the information in the video may or may not be accurate.
Don't forget that you don't just need the entitlements; your virtual audio drivers will also need to pass App Store review, so I'd make sure to get something in writing before you spend all that effort implementing your driver.
One more comment: com.apple.developer.driverkit.allow-any-userclient-access is not generally needed, and whether or not you need to apply for it depends on the architecture you are planning for your driver.

Would DriverKit work for custom USB device to control mic volume (no stream)?

I would like to ask for guidance on the best way to communicate with a custom USB HID device on macOS.
Use case
Modify a microphone volume via an external USB HID device.
Question
Can I use DriverKit (HIDDriverKit) for that, or do I need to use IOKit? I have read something here about an audio limitation, but I'm not sure exactly what is not supported.
DriverKit doesn’t support USB devices that manipulate audio or that
communicate wirelessly over Bluetooth or Wi-Fi. For those types of
devices, create a kernel extension using IOKit.
— Source
Would DriverKit still work in my case as I am not sending audio streams but controlling volume only?
Many thanks!
Cheers,
Tom
If I understand you correctly, you wouldn't even need to use DriverKit. (from experience: avoid it if you can!)
You can communicate with HID-compliant devices directly from user space processes. User space processes can generally also control the volume on audio devices.
So by far the easiest option would be to have a launch agent which uses IOKit matching as its launch condition so it starts up when your device is connected. Your agent can communicate with the device using the IOHIDManager API to receive events when your buttons are pressed, and then use the regular Core Audio APIs to control volume.
It doesn't have to be a launch agent, incidentally: a regular Cocoa app with a UI can do all of this as well. (And indeed, you may want to show some form of UI as feedback to the user pressing the buttons.)
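To make the Core Audio half concrete, here is a rough C++ sketch that sets the volume of the default input device, the kind of call an IOHIDManager input-value callback in your agent or app would make when a button press arrives. The property and element choices are assumptions; some devices only expose a settable volume on individual input channels rather than the main element:
```cpp
// Sketch (user space, no DriverKit): adjust the default input device's volume
// with Core Audio. A launch agent or app would call setInputVolume() from an
// IOHIDManager input-value callback when the USB HID device reports a button.
#include <CoreAudio/CoreAudio.h>
#include <cstdio>

static AudioObjectID defaultInputDevice() {
    AudioObjectID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain  // kAudioObjectPropertyElementMaster on older SDKs
    };
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, nullptr,
                               &size, &device);
    return device;
}

static bool setInputVolume(AudioObjectID device, Float32 volume /* 0.0 .. 1.0 */) {
    // Try the main element first, then fall back to channel 1; which one is
    // settable varies by device.
    const UInt32 elements[] = { kAudioObjectPropertyElementMain, 1 };
    for (UInt32 element : elements) {
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyVolumeScalar,
            kAudioDevicePropertyScopeInput,
            element
        };
        Boolean settable = false;
        if (AudioObjectHasProperty(device, &addr) &&
            AudioObjectIsPropertySettable(device, &addr, &settable) == noErr &&
            settable) {
            return AudioObjectSetPropertyData(device, &addr, 0, nullptr,
                                              sizeof(volume), &volume) == noErr;
        }
    }
    return false;
}

int main() {
    AudioObjectID mic = defaultInputDevice();
    if (mic == kAudioObjectUnknown) { std::puts("no default input device"); return 1; }
    if (!setInputVolume(mic, 0.5f)) { std::puts("volume not settable on this device"); return 1; }
    std::puts("input volume set to 50%");
    return 0;
}
```
The HID side (IOHIDManagerCreate, IOHIDManagerSetDeviceMatching with your vendor/product IDs, IOHIDManagerRegisterInputValueCallback) is standard IOHIDManager usage and is omitted here for brevity.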

How do I change the default playout device on Mac Native WebRTC?

I'm using WebRTC (Native Mac & Native Windows -- not JS) and am trying to change the default playout and recording devices and am having a lot of trouble. This is starting to drive me nuts since it should be very simple.
Question: What's the recommended way to change audio playout and recording devices while a call is ongoing on Mac & Windows, natively?
Here's what I've tried:
Method 1
Mac
I noticed the audio device module listens to Core Audio API notifications and adjusts playout and recording devices properly. This works, but I'm not sure if this is the recommended way to change devices.
Windows
I was not able to find a system-wide way of setting the default audio playout/recording device. The only way I could tell MIGHT work is by getting a reference to the audio device module and calling SetPlayoutDevice / SetRecordingDevice on it manually...which leads to Method 2 below:
Method 2
Mac
If possible, I'd rather use SetPlayoutDevice (link) / SetRecordingDevice (link) to change the audio input/output (so Mac & Windows work the same way).
The unit tests for real audio I/O devices show we should be able to call StartPlayout and StopPlayout after a call to SetPlayoutDevice -- but this makes my app freeze. I've tried it without the calls to StopPlayout and StartPlayout; however, that doesn't seem to do anything. This makes sense, since it looks like only internal state is modified and nothing is actually reinitialized.
Q: How can I change the default audio playout device and recording device on Mac?
Windows
I haven't had a chance to try this out on Windows yet, but Mac not working makes me think there's something I'm missing here.
Answering this myself.
VoEHardwareImpl (https://chromium.googlesource.com/external/webrtc/stable/webrtc/+/refs/heads/master/voice_engine/voe_hardware_impl.cc) seems to have some relevant code.
For playout:
StopPlayout
...set device index...
StereoPlayoutIsAvailable
SetStereoPlayout
InitPlayout
StartPlayout
For recording:
StopRecording
...set device index...
StereoRecordingIsAvailable
SetStereoRecording
InitRecording
StartRecording
Depending on the commit you're on, you might need to ensure you won't get into a deadlock situation. Some of these methods acquire locks, so make sure the lock isn't already held when you call a method that needs it. Better yet -- do this a level above, or wrap the ADM if possible.
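For reference, here is what that sequence looks like when applied directly to webrtc::AudioDeviceModule in C++. The method names come from the ADM interface, but the include path, threading requirements, and exact behaviour depend on the WebRTC revision you're on, so treat this as an outline rather than drop-in code:
```cpp
// Sketch of the playout-device switch sequence described above, applied to
// the audio device module directly. The adm pointer is assumed to come from
// your PeerConnectionFactory setup; call this from the thread that owns the
// ADM to avoid the locking issues mentioned above.
#include <cstdint>
#include "modules/audio_device/include/audio_device.h"  // path varies by checkout

bool SwitchPlayoutDevice(webrtc::AudioDeviceModule* adm, uint16_t device_index) {
  // Tear down the current playout path first.
  if (adm->Playing() && adm->StopPlayout() != 0) return false;

  // Point the ADM at the new output device.
  if (adm->SetPlayoutDevice(device_index) != 0) return false;

  // Re-query stereo support for the new device before re-initializing.
  bool stereo_available = false;
  adm->StereoPlayoutIsAvailable(&stereo_available);
  adm->SetStereoPlayout(stereo_available);

  // Re-initialize and restart playout on the newly selected device.
  if (adm->InitPlayout() != 0) return false;
  return adm->StartPlayout() == 0;
}
```
The recording-device switch follows the same pattern with the Recording/InitRecording/StartRecording counterparts listed above.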

Hog mode / Exclusive access to audio output device with SoX

I would like to know whether SoX/LibSoX offers the possibility to access a sound device in exclusive/hog mode. The idea is to prevent other applications from accessing the sound card / DAC that is being used by the focal app.
My main target is OSX CoreAudio output, but I am also eager to know about Linux (OSS/Alsa).
I know this is possible in CoreAudio, because I have seen it implemented in several apps, including this open source one.
On Mac OS X at least, the answer appears to be no. In http://sourceforge.net/p/sox/code/ci/master/tree/src/coreaudio.c SoX uses the default input or output device but there is no provision for hog mode.
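For completeness: on OS X, hog mode is just a Core Audio device property (kAudioDevicePropertyHogMode) that the process doing the playback writes its own pid to, so supporting it would mean patching SoX's coreaudio.c or doing the playback from your own CoreAudio code. A minimal sketch of taking exclusive access, assuming the default output device is the one you care about:
```cpp
// Sketch: take exclusive ("hog mode") access to the default output device via
// the Core Audio HAL. Only the process that will do the playback can usefully
// hog the device; writing -1 to the property releases it again.
#include <CoreAudio/CoreAudio.h>
#include <unistd.h>
#include <cstdio>

int main() {
    AudioObjectID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress defaultOut = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &defaultOut, 0, nullptr,
                               &size, &device);

    // Writing our pid to kAudioDevicePropertyHogMode requests exclusive access.
    pid_t hogPid = getpid();
    AudioObjectPropertyAddress hogMode = {
        kAudioDevicePropertyHogMode,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    OSStatus err = AudioObjectSetPropertyData(device, &hogMode, 0, nullptr,
                                              sizeof(hogPid), &hogPid);
    if (err != noErr) {
        std::printf("could not take hog mode (error %d)\n", (int)err);
        return 1;
    }
    std::puts("device is now in exclusive (hog) mode for this process");
    // ... do playback here; write -1 to the same property to release it.
    return 0;
}
```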

Mac OS X virtual audio driver

I want to create a virtual audio device that gets audio data from the default output (which is an output IOAudioStream) and converts it to an input IOAudioStream.
I went through most of the examples I could find; however, at most they copy the output IOAudioStream to the input one. That means they only convert the audio to an input stream if the virtual audio device itself is selected as the output.
This should be possible, since ScreenFlow allows recording of computer audio by installing a kext that creates a virtual driver.
How can I access the audio data from the default output and send it to my virtual driver?
Take a look at the open-source WavTap, which is a simplified fork of the open-source SoundFlower virtual sound card driver. It is a .kext that I believe does substantially what you want.
For reference, here is how some popular commercial closed-source options work:
Rogue Amoeba's Audio Hijack Pro
- Captures system audio via code based off of the open-source SoundFlower .kext
- Captures an application's audio by substituting a "patch" framework for the normal CoreAudio.framework when launching the application
- Captures an already-running application's audio with the help of the haxie "Application Enhancer" (APE) from Unsanity
These features are branded as their "Instant On" feature (InstantOn.kext).
Ambrosia Software's WireTap Studio
- Captures system audio and application audio via an in-house developed .kext
Telestream's ScreenFlow
- Captures system audio via an in-house developed .kext (version 2.x uses varaudio.kext; version 3.x uses TelestreamAudio.kext)
Macsome's Audio Recorder
- Unknown method
Araelium Group's Screenflick
- Captures system audio using the SoundFlower .kext
UPDATE #1
After reading the author's comments, it appears the underlying goal is to capture the system sound without publishing the virtual audio driver as a device (one that would appear in the System Preferences list) and without changing the current default output device (or at least without the appearance that the device has changed).
SoundFlower: Adds a sound device to the list upon installation
WavTap: Adds a sound device to the list upon installation; auto-selects the device when the WavTap application is started; auto-deselects the device when the application is shut down and reselects the previous device
Audio Hijack Pro: Adds a sound device only when audio capture of the default system sound is selected; removes the sound device when audio capture is no longer selected and reselects the previous device
WireTap Studio: Unknown
ScreenFlow: Captures the system sound without changing the current default output device and without publishing the virtual audio driver as a device
UPDATE #2
A quote from Jeff Moore, a CoreAudio Apple engineer, in reference to applications such as WireTap and Audio Hijack Pro:
"There are no APIs on the system that will give you the output of any specific app or the whole mix going to the hardware...[Capturing System Sound] isn't supported by the System and those folks had to be clever. There's nothing stopping you from doing the same thing except how willing you are to get your hands dirty.
The fact is, Mac OS X's audio system was designed first and foremost for performance. This lead us to a design where it is not easy to support the functionality you want without imposing performance penalties. So, we have opted for better performance at the cost of not being able to provide this feature."
If you want to read more on the subject, check out these threads on the CoreAudio API mailing list:
"WireTap, CoreAudio's API, and system capture, and kexts..."
"Another question on capturing audio played back by a software"
"Capturing currently played audio using CoreAudio on Mac"
"'audio hijack'"
"monitoring system audio output like wire tap"
"Capturing audio output to a file"
"Mirroring Audio Output"
"Recording system audio"
Relevant SO Questions:
Hide Audio device using codeless kext
So, long story short, you're not likely to find examples from Apple that accomplish this, and you're not likely to find open-source code that accomplishes this either, unless someone is feeling very generous. It appears to be too valuable a piece of information.
After additional research, here are some theoretical techniques I came up with that might allow you to accomplish your goal:
Similar to Prosoft Engineering's Hear product, you could create a HAL plugin (user-mode virtual driver) rather than a .kext (kernel-mode virtual driver). Apple has a sample HAL plugin called "SampleHardwarePlugIn", and PulseAudio has one as well. However, with this method I don't think you get access to a pre-mixed system sound stream. You would have to gather up all the streams from the various applications (which must use CoreAudio to play sound) and mix them together for pseudo system sound capture.
Create a virtual audio device that is hidden [1][2] from user interaction. When the user wishes to capture the default sound, programmatically create an aggregate device that includes your hidden virtual device and the current default sound device, and temporarily set this aggregate device as the default output (a sketch of creating such an aggregate device follows below). In this manner, you are able to both capture the default sound and hear it.
Side Note: If Mac OS X allows a hidden device to also be set as the default output device, what would System Preferences show as the selected device? If it instead shows the secondary output device as selected, then you have the added illusion that nothing has changed.
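To make the aggregate-device part of the second technique concrete, here is a rough C++ sketch using AudioHardwareCreateAggregateDevice. The sub-device UIDs and the aggregate's own UID are placeholders (real UIDs come from kAudioDevicePropertyDeviceUID), and whether a hidden sub-device behaves as hoped here is untested:
```cpp
// Sketch: programmatically create a (private) aggregate device combining the
// current default output with a capture/virtual device. UIDs are placeholders.
#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>

static AudioObjectID createCaptureAggregate(CFStringRef defaultOutputUID,
                                            CFStringRef virtualDeviceUID) {
    // Describe each sub-device by its UID.
    const void* subKey[]   = { CFSTR(kAudioSubDeviceUIDKey) };
    const void* sub1Vals[] = { defaultOutputUID };
    const void* sub2Vals[] = { virtualDeviceUID };
    CFDictionaryRef sub1 = CFDictionaryCreate(nullptr, subKey, sub1Vals, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionaryRef sub2 = CFDictionaryCreate(nullptr, subKey, sub2Vals, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    const void* subList[] = { sub1, sub2 };
    CFArrayRef subDevices = CFArrayCreate(nullptr, subList, 2, &kCFTypeArrayCallBacks);

    // Describe the aggregate itself; "private" keeps it out of most device lists.
    CFMutableDictionaryRef desc = CFDictionaryCreateMutable(nullptr, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("Capture Aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("com.example.capture-aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceSubDeviceListKey), subDevices);
    int isPrivate = 1;
    CFNumberRef priv = CFNumberCreate(nullptr, kCFNumberIntType, &isPrivate);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceIsPrivateKey), priv);

    AudioObjectID aggregate = kAudioObjectUnknown;
    AudioHardwareCreateAggregateDevice(desc, &aggregate);

    CFRelease(priv); CFRelease(desc); CFRelease(subDevices);
    CFRelease(sub2); CFRelease(sub1);
    return aggregate;
}
```
Destroy the aggregate with AudioHardwareDestroyAggregateDevice when capture stops, and restore the previous default output device.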
A newer open-source virtual audio device that works with the latest versions of macOS is BlackHole; it supports multiple audio channels and sampling rates.
It can be used as an audio sink and/or source. It's also handy as part of an aggregate audio device so audio can be heard and re-routed, e.g. using the macOS Audio MIDI Setup app.
