How do I route audio output to a selected audio endpoint/device in Windows?

TL;DR: When playing audio using Windows UWP MediaPlayer, how do I route audio to a specific audio device/endpoint?
Full Context
I'm working on an app to place calls. Some requirements are:
Play audio sounds at different points (e.g. when the call hangs up)
Allow users to change in-call audio output to different endpoints (not an issue)
Ensure that when in-call audio has been routed to a different "default" endpoint, any other sounds that are played are routed to the same endpoint (this is what I need help with)
Currently, when I route audio to a different endpoint, other sounds that are played with Windows UWP MediaPlayer do not get routed to the same "new" endpoint. This makes sense since we aren't changing application-wide settings.
My question is: How do I route audio to the same endpoint that the call audio is going through, given that I'm using Windows UWP MediaPlayer and given that I can get device information?

When playing audio using Windows UWP MediaPlayer, how do I route audio to a specific audio device/endpoint?
Please check the Output to a specific audio endpoint documentation. By default, the audio output from a MediaPlayer is routed to the default audio endpoint for the system, but you can specify a specific audio endpoint that the MediaPlayer should use for output.
You could use MediaDevice.GetAudioRenderSelector to get the render selector, then use DeviceInformation.FindAllAsync to enumerate the render devices, and then assign the chosen device to the MediaPlayer's AudioDevice property.
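As a minimal sketch of those steps (assuming a UWP JavaScript app, where the Windows.* WinRT namespaces are projected into JavaScript with camelCase method names; pickDeviceByName is a hypothetical helper, not part of the API):

```javascript
// Hypothetical helper: pick a render device from an enumerated list
// by a case-insensitive partial name match. Pure, so easy to test.
function pickDeviceByName(devices, name) {
  const needle = name.toLowerCase();
  return devices.find(d => d.name.toLowerCase().includes(needle)) || null;
}

// UWP-only sketch: enumerate audio render endpoints and route a
// Windows.Media.Playback.MediaPlayer instance to the chosen one.
async function routeMediaPlayerTo(player, deviceName) {
  const selector = Windows.Media.Devices.MediaDevice.getAudioRenderSelector();
  const devices =
    await Windows.Devices.Enumeration.DeviceInformation.findAllAsync(selector);
  const device = pickDeviceByName(Array.from(devices), deviceName);
  if (device !== null) {
    player.audioDevice = device; // subsequent playback goes to this endpoint
  }
  return device;
}
```

If you already know which endpoint the call audio was routed to, you could match on that device's name or id instead of a user-supplied string.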

Related

OpenTok real-time audio transcription

I am trying to transcribe the audio in an OpenTok session in real-time. The OpenTok API does not seem to have that feature. Is there any way I can capture the data in some form and push it to another script/tool that does the transcription?
The issue is not with transcribing; the issue is in accessing the live audio stream data and using it in real-time.
You can get access to the video/audio stream (MediaStream) with https://tokbox.com/developer/sdks/js/reference/OT.html#getUserMedia in the client SDK.
You can manipulate audio using the available APIs from the Web Audio spec.
Publish audio from an audio MediaStreamTrack object. For example, you can use the AudioContext object and the Web Audio API to dynamically generate audio. You can then call createMediaStreamDestination().stream.getAudioTracks()[0] on the AudioContext object to get the audio MediaStreamTrack object to use as the audioSource property of the options object you pass into the OT.initPublisher() method. For a basic example, see the Stereo-Audio sample in the OpenTok-web-samples repo on GitHub.
The GitHub example above is about injecting your audio stream. However, you can also extract/capture your audio before injecting it. See details here:
https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API.
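As a rough sketch of the capture side (browser-only; the stream could come from OT.getUserMedia(). floatTo16BitPCM is a hypothetical helper converting Web Audio's float samples to the 16-bit PCM many transcription services expect, and ScriptProcessorNode is used here for brevity even though it is deprecated in favour of AudioWorklet):

```javascript
// Hypothetical helper: convert Web Audio float samples ([-1, 1]) to 16-bit PCM.
function floatTo16BitPCM(float32Samples) {
  const out = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Browser-only sketch: tap raw audio out of a MediaStream and hand each
// PCM chunk to a callback (which could forward it to a transcription service).
function captureAudio(mediaStream, onPcmChunk) {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(mediaStream);
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    // e.inputBuffer holds the latest 4096 float samples per channel
    onPcmChunk(floatTo16BitPCM(e.inputBuffer.getChannelData(0)));
  };
  source.connect(processor);
  processor.connect(ctx.destination); // ScriptProcessorNode only fires when connected
}
```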

Is it possible to decrypt a DRM HLS content in windows?

I have to develop a function that decrypts DRM HLS content using the video.js lib.
I've researched this issue and found this (https://learn.microsoft.com/ko-kr/azure/media-services/previous/media-services-protect-hls-with-fairplay), so I've used PlayReady as below:
const setPlayerForDashHLS = (src, type, key, licenseURL) => {
  const customData = DRM_TYPE + SITE_ID + key;
  player.src({
    src: src,
    type: type,
    keySystems: {
      'com.microsoft.playready': {
        url: licenseURL,
        licenseHeaders: {
          'pallycon-customdata': customData
        }
      }
    }
  });
}
but it does not play the DRM HLS video and does not print any error log.
I want to know how to play it.
Since you are using Azure Media Services, you already have dynamic packaging - i.e. you can have both HLS and MPEG-DASH output from the same underlying video assets. I would strongly encourage you to use MPEG-DASH if you are trying to play back content in the browser. HLS + PlayReady is only supported on very few devices, whereas MPEG-DASH + PlayReady is supported on pretty much all the places that support PlayReady.
Also, I presume you already work with a vendor for PlayReady DRM; if not, the PlayReady and/or Widevine sites list a number of vendors.
Also, you probably want to support Widevine as well, as that is what is supported for playback in Google Chrome and Firefox.
There is an official plugin for VideoJS that supports DRM playback, leveraging the EME standard.
EME (Encrypted Media Extensions) is an HTML5 extension to support playing encrypted media in a standard way - https://www.w3.org/TR/2017/REC-encrypted-media-20170918/
You can see the VideoJS plugin here: https://github.com/videojs/videojs-contrib-eme
It includes instructions for configuring and playing back with PlayReady DRM.
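With that plugin, a source configuration covering both PlayReady and Widevine might look roughly like this (a sketch, not a verified setup: DASH_SRC, the license URL constants, customData, and the pallycon-customdata header are placeholders carried over from the question):

```javascript
// Assumes videojs-contrib-eme has been registered with Video.js.
player.eme(); // initialize the plugin before setting the source
player.src({
  src: DASH_SRC,                 // prefer MPEG-DASH for in-browser playback
  type: 'application/dash+xml',
  keySystems: {
    'com.microsoft.playready': {
      url: PLAYREADY_LICENSE_URL,
      licenseHeaders: { 'pallycon-customdata': customData }
    },
    'com.widevine.alpha': WIDEVINE_LICENSE_URL
  }
});
```

Listing both key systems lets the browser pick whichever DRM it actually supports.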
Looking at the error message you are receiving:
This is generated by the platform or browser and indicates:
NotSupportedError
Either the specified keySystem isn't supported by the platform or the browser, or none of the configurations specified by supportedConfigurations can be satisfied (if, for example, none of the codecs specified in contentType are available).
(https://developer.mozilla.org/en-US/docs/Web/API/Navigator/requestMediaKeySystemAccess)
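One way to check this in practice is to probe navigator.requestMediaKeySystemAccess for each key system (a browser-only sketch; emeConfigFor is a hypothetical helper building a minimal supportedConfigurations array, and the codec string is just an example):

```javascript
// Hypothetical helper: a minimal supportedConfigurations array for a given codec.
function emeConfigFor(codec) {
  return [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: `video/mp4; codecs="${codec}"` }]
  }];
}

// Browser-only: resolve to a map of key system -> supported or not.
async function probeKeySystems() {
  const config = emeConfigFor('avc1.42E01E');
  const results = {};
  for (const ks of ['com.microsoft.playready',
                    'com.widevine.alpha',
                    'com.apple.fps.1_0']) {
    try {
      await navigator.requestMediaKeySystemAccess(ks, config);
      results[ks] = true;
    } catch (e) {
      results[ks] = false; // NotSupportedError lands here
    }
  }
  return results;
}
```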
Looking at your configuration above, one reason why you might get this could be playing back the video in a browser which does not support PlayReady. As a general (not absolute) rule, the DRMs supported 'natively' by browsers and devices are, at this time:
Android devices - Widevine
Chrome browser - Widevine
FireFox - Widevine
iOS device - FairPlay
Safari browser - FairPlay
Internet Explorer browser - PlayReady
To try to minimise the overhead for video stream providers, CENC (https://en.wikipedia.org/wiki/MPEG_Common_Encryption) allows you to have a single stream protected by both PlayReady and Widevine.
Added to this, CMAF and the announced support for AES-CBC mode encryption by all major devices and browsers promise the ability to have a single media stream for encrypted HLS and DASH, but it will likely be some time before that support rolls out to all devices, meaning that for now both HLS and DASH streams are typically required for maximum reach for streamed video.
Is it possible to decrypt a DRM HLS content in windows?
I have to develop a function that decrypt [...]
Your question shows a misunderstanding of how DRM-enabled playback works.
You cannot decrypt DRM-protected content in Windows in any event. You would need a decryption key, and when it comes to decryption, this key is applied in a known, well-defined way. However, the key is exactly what you are never given. In Windows in particular, such as in the case of Microsoft PlayReady DRM, compatible browsers provide the decryption service via an implementation of the EME (Encrypted Media Extensions) specification. Browsers decrypt content on the condition that the decrypted content is played back subject to additional constraints (pretty restrictive ones - think of enforced content protection on the physical cable to the monitor when you play such content), and in particular you never get the plain decrypted data back.
You can play DRM-protected content with browsers by working collaboratively with them and playing the protected data. But you never decrypt content on your own, such as by implementing a decryption function.

Is there an option to create a Snapchat-like features addon to interact with video calls?

Is there a way to create an add-on “snapchat-like” video feature for the video calls?
I want to build an app that will get and manipulate the user camera stream source, then the user will decide if and when to share it on his ms-teams calls.
Currently we only have Calls and online meetings support for bots. There is no support for processing the video call stream yet.
As of now, we don't have more details to share on this.

How to cast a specific song programatically with Cast Audio

I'd like to hook up a bttn such that when that button is pressed, a specific song is played through my speakers using the new Chromecast Audio. I couldn't find documentation for a REST API that would allow me to accomplish this.
Is there any direct hookup that would be possible such that I can call some REST API to play audio through Cast Audio?
There is no single REST API to do so; the process of casting media, using the Cast SDK, amounts to starting discovery, selecting (connecting to) a device, setting up the so-called RemoteMediaPlayer, and then loading the media. There is plenty of documentation on our Cast documentation site that helps you follow and implement the above steps, along with a good number of sample apps.
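The steps above could be sketched with the Cast Web Sender framework roughly as follows (browser-only, and it needs the Cast sender library loaded on the page; contentTypeFor is a hypothetical helper, and the media URL would be supplied by your button handler):

```javascript
// Hypothetical helper: guess a content type from the media URL's extension.
function contentTypeFor(url) {
  return url.endsWith('.mp3') ? 'audio/mp3' : 'audio/mp4';
}

// Browser-only sketch: discovery -> connect -> load media on the receiver.
async function castSong(mediaUrl) {
  const ctx = cast.framework.CastContext.getInstance();
  ctx.setOptions({
    receiverApplicationId: chrome.cast.media.DEFAULT_MEDIA_RECEIVER_APP_ID
  });
  await ctx.requestSession();              // discovery + device picker dialog
  const session = ctx.getCurrentSession(); // the connected cast session
  const mediaInfo =
    new chrome.cast.media.MediaInfo(mediaUrl, contentTypeFor(mediaUrl));
  await session.loadMedia(new chrome.cast.media.LoadRequest(mediaInfo));
}
```

Note the session request must be triggered from a user gesture in the browser, so a pure server-side REST trigger would still need some intermediary (e.g. a sender app the button talks to).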

Audio chatting and phone calls through browser

If I want to create an online phone for a small office with a web client (through Asterisk/Adhearsion), how can I stream IN/OUT audio right in my browser (like GTalk does)? Preferably without Java applets and without Flash.
I need to get the voice stream from an Asterisk call, stream it in the browser, receive audio from the mic, and send it back to Asterisk.
And what should I choose for streaming audio IN/OUT as the backend? XMPP?
You can't. There isn't a standard way (currently) to do what you are looking to do. You need something to help you out. This is often done in the form of a Flash application.
Google actually uses a browser plugin from Vidyo to make this happen effectively.
