TokBox: share system sound during screen sharing - opentok

I am trying to create a screen-sharing app with TokBox, but I am not able to share the system audio; instead, it uses the microphone to share the audio. Can you please suggest whether it is possible to share system sound (like music, etc.) rather than the microphone?

It's not possible to access the system audio from JavaScript on Linux and Mac. As Adam mentions in the comments, you can get the system audio along with the screen on Windows.
For non-Windows platforms you would need to use a native application to pipe the system audio output into a virtual microphone, then use TokBox to access that microphone as normal.
See Access/process system audio with Javascript/Web Audio API
If you want to combine a screenshare with a microphone you will need to use the new custom audio source option (available from v2.13.0) combined with the screenshare video source:
Promise.all([
  OT.getUserMedia({ videoSource: null }),
  OT.getUserMedia({ videoSource: 'screen' }),
]).then(([audioStream, videoStream]) => {
  const audioSource = audioStream.getAudioTracks()[0];
  const videoSource = videoStream.getVideoTracks()[0];
  const publisher = OT.initPublisher({ audioSource, videoSource });
  session.publish(publisher);
});
If you want to stream two audio tracks then you will need to use multiple publishers. One publisher would have the screenshare video combined with the audio from the normal microphone. The other publisher would have only the virtual microphone (which has the system audio piped into it) and no video. You should use OT.getDevices() to determine which microphone belongs to each publisher.
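For example, a rough sketch of that two-publisher setup (the /virtual/i label match is only an illustration - check what your virtual device actually reports - and session is assumed to be an already-connected Session):

OT.getDevices((error, devices) => {
  if (error) {
    console.error(error);
    return;
  }
  const mics = devices.filter((device) => device.kind === 'audioInput');
  // Hypothetical label match: adjust to however your virtual device names itself.
  const virtualMic = mics.find((device) => /virtual/i.test(device.label));
  const realMic = mics.find((device) => !/virtual/i.test(device.label));

  // Publisher 1: screenshare video plus the real microphone.
  const screenPublisher = OT.initPublisher({
    videoSource: 'screen',
    audioSource: realMic.deviceId,
  });

  // Publisher 2: audio only, carrying the system audio via the virtual microphone.
  const systemAudioPublisher = OT.initPublisher({
    videoSource: null,
    audioSource: virtualMic.deviceId,
  });

  session.publish(screenPublisher);
  session.publish(systemAudioPublisher);
});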
You may be able to use the WebAudio API to combine the audio from both real and virtual microphones into a single track so you only need a single publisher, but that requires more effort.
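A sketch of that mixing approach, assuming realMicStream and virtualMicStream were obtained earlier (e.g. via OT.getUserMedia() with the respective device IDs) and screenStream came from OT.getUserMedia({ videoSource: 'screen' }):

const audioContext = new AudioContext();
const destination = audioContext.createMediaStreamDestination();

// Feed both microphones into a single destination stream.
audioContext.createMediaStreamSource(realMicStream).connect(destination);
audioContext.createMediaStreamSource(virtualMicStream).connect(destination);

// Publish the mixed audio track together with the screenshare video.
const publisher = OT.initPublisher({
  audioSource: destination.stream.getAudioTracks()[0],
  videoSource: screenStream.getVideoTracks()[0],
});
session.publish(publisher);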
Make sure enableStereo is turned on and audioBitrate is set to 64000 (or more) for the publisher that's streaming music.
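Both are ordinary OT.initPublisher() properties, so for the publisher carrying the system audio that might look something like this (virtualMic as found in the earlier sketch):

const musicPublisher = OT.initPublisher({
  videoSource: null,
  audioSource: virtualMic.deviceId, // the virtual mic carrying the system audio
  enableStereo: true,               // keep music in stereo
  audioBitrate: 64000,              // 64 kbps or more for music
});
session.publish(musicPublisher);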
If you need to support all operating systems then you could detect the OS based on the user agent and execute the appropriate strategy accordingly.
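For instance, a crude user-agent check along these lines, where shareScreenWithSystemAudio() and shareScreenWithVirtualMic() are hypothetical helpers standing in for the two strategies described above:

const isWindows = /Windows/i.test(navigator.userAgent);

if (isWindows) {
  // Windows: the browser can capture system audio along with the screen.
  shareScreenWithSystemAudio();
} else {
  // macOS / Linux: fall back to the virtual-microphone approach.
  shareScreenWithVirtualMic();
}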

Related

How can I capture microphone data and route it to a virtual microphone device?

Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured that I have to create a virtual microphone in order to send the processed (with the applied effects) data to communication apps. For example, the user will need to select this new (virtual) microphone device as the Input Device in a Zoom call so that the other users in the call can hear her with her voice processed.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila, and in Chapter 8 the author explains how to write an app that a) uses an AUHAL to capture data from the system's default input device and b) then sends the data to the system's default output using an AUGraph. So, following this example, I figured that I also need to create an app that captures the microphone data only while it's running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
For both of the above "modules" I'm certain that they work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic: I need to connect the output of the physical microphone to the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this, where the author states that the routing is done by
sending this audio data to the driver via a socket connection. So other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it should be active)
but I'm not quite sure how to even start implementing something like that.
The whole process I followed for capturing data from the microphone seems quite long, and I was wondering whether there's a more optimal way to do it. The book seems to be from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this process can now be achieved more easily with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson / Avila book has an ideal play-through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal doco on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which, last time I checked, did not handle non-default input/output devices well.
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the detail, and much of the work is deciding on what properties you want and their consequences. Some common and conflicting example properties:
minimal lag
minimal dropouts
no time distortion
In my case, if output is lagging too much behind input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check this out.
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.

HAL virtual device: how to "proxy" microphone

I'm trying to create a "virtual microphone" that works "in front of" the default input device/microphone. So when the user selects the "virtual microphone" as the input (in Audacity, for example) and starts recording sound, Audacity will receive samples from the virtual device driver, which the driver has taken from the real/default microphone. So the "virtual microphone" is a kind of proxy device for the real (default/built-in/etc.) one. This is needed for later on-the-fly processing of the microphone input.
So far I've created a virtual HAL device (based on the NullAudio.c driver example from Apple), and I can generate procedural sound for Audacity, but I still cannot figure out a way to read data from the real microphone (using its deviceID) from inside the driver.
Is it OK to use normal recording as in a usual app (via AudioUnits/AURemoteIO/AUHAL), etc.? Or should something like IOServices be used?
Documentation states that
An AudioServerPlugIn operates in a limited environment. First and foremost, an AudioServerPlugIn may not make any calls to the client HAL API in the CoreAudio.framework. This will result in undefined (but generally bad) behavior.
but it is not clear which API is the "client" API and which is not, with regard to reading microphone data.
What kind of API can/should be used from virtual device driver for accessing real microphone data in realtime?

How to change the result of IAudioClient->GetMixFormat() method?

In the normal case, calling the IAudioClient->GetMixFormat() method on a device that has only a stereo playback endpoint returns the default shared-mode audio format, which is a 2-channel format.
But for some reason, I need all applications on this device to get a 6- or 8-channel format when they call IAudioClient->GetMixFormat().
Here is a section of the description of the IAudioClient->GetMixFormat() method on the MSDN website.
The mix format is the format that the audio engine uses internally for digital processing of shared-mode streams. This format is not necessarily a format that the audio endpoint device supports.
It is the mix format for shared usage mode, used when applications play audio [in compatible formats] and the system mixes everything together into the resulting playback stream. The format may be altered via the Control Panel; see the screenshot in this answer.
Not every device will offer 5.1 and 7.1 options though.

WP7 BackgroundAudioAgent: get Meta data from Icecast

I have a Windows Phone project that streams radio stations from an Icecast server.
I am using Background Audio Agent to play the streams.
The Icecast stream provides Track Title - Artist name as the metadata.
Is there any way I can fetch the metadata from the Audio Player?
Right now I am fetching the metadata from a PHP script every 10 seconds; it would be better if I could get it directly from Icecast.
In my iPhone application I am able to see the metadata; I am using the video player in the iPhone app.
Please tell me whether this can be done or not.
If not, please tell me whether I can read the stream byte by byte and send it to the Audio agent.
Thanks.

WP7 stream audio with dynamic URI

I used BackgroundAudioPlayer to play an MP3 from the internet, and I know that the background audio agent automatically streams the MP3.
Does the audio agent start playing only after it has completely streamed the MP3?
If the MP3 URI is dynamic, how do I play it?
If it is a continuous audio stream, then the background audio agent will continue playing it. If it is a single file, the playback will stop on completion.
Also, there is no way you can specify a dynamic URL. The core location that you specify for the stream has to be adjusted in-app.
