How do I stream an mp3 file instead of microphone data in an OpenTok session?

We need to stream an audio file the user selects instead of the default microphone. Is there any option in the SDK to do so?

If you are using a native SDK (iOS, Android, or Windows), you should build your own audio driver.
See our samples:
iOS: https://github.com/opentok/opentok-ios-sdk-samples-swift/tree/master/Custom-Audio-Driver
Android: https://github.com/opentok/opentok-android-sdk-samples/tree/master/Custom-Audio-Driver
That audio driver will open the mp3 file and send it over the OpenTok session.

If you are using the opentok.js SDK, you can use the Fetch API to fetch the mp3 file and the Web Audio API to decode the audio data and create a media stream destination from it. Then take the audio track from that stream and pass it as the audioSource in OT.initPublisher.
Here is a sample that loads an mp3 file into a session.
https://github.com/opentok/opentok-web-samples/tree/master/Stereo-Audio
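A minimal sketch of that approach, assuming opentok.js (`OT`) is loaded in a browser and `session` is already connected; `publishMp3` and the `'publisher'` element id are illustrative names, not part of the OpenTok API:

```javascript
// Sketch: publish an mp3 file instead of the microphone with opentok.js.
// Assumes opentok.js (OT) is loaded in a browser and `session` is connected.
async function publishMp3(session, mp3Url) {
  // Fetch the mp3 and decode it into an AudioBuffer with the Web Audio API.
  const response = await fetch(mp3Url);
  const arrayBuffer = await response.arrayBuffer();
  const audioContext = new AudioContext();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

  // Play the decoded buffer into a MediaStream destination node.
  const sourceNode = audioContext.createBufferSource();
  sourceNode.buffer = audioBuffer;
  const destination = audioContext.createMediaStreamDestination();
  sourceNode.connect(destination);
  sourceNode.start();

  // Use the destination's audio track as the publisher's audioSource.
  const publisher = OT.initPublisher('publisher', {
    audioSource: destination.stream.getAudioTracks()[0],
    videoSource: null, // audio-only publisher
  });
  session.publish(publisher);
  return publisher;
}
```

The linked Stereo-Audio sample follows the same pattern; the key step is that `audioSource` accepts a `MediaStreamTrack`, so anything the Web Audio API can route into a `MediaStreamAudioDestinationNode` can be published.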

Related

How to decrypt and record mpd (dash) Live Streaming videos in ffmpeg?

I want to decrypt and download/record an MPD (DASH) live stream and store the video for later viewing. I have made a demo that plays the MPD with a license URL, but I want to download the video data for later viewing. The stream uses Widevine DRM.
One of the main purposes of a DRM system like Widevine is to prevent viewing (other than on the screen) and sharing of decrypted video, so if this is your goal then the simple answer is that you can't without breaking Widevine.
If you simply want to be able to download / copy the content to a device and allow an authorised viewer to watch it offline, then you can do this using persistent licenses.
When a DRM license is delivered it contains entitlement data along with the decryption key, and this entitlement data can include the ability to persist the license for a period of time; this is typically how 'download and go' video services work.

How do I route audio output to a selected audio endpoint/device in Windows?

TL;DR: When playing audio using Windows UWP MediaPlayer, how do I route audio to a specific audio device/endpoint?
Full Context
I'm working on an app to place calls. Some requirements are:
Play audio sounds at different points (e.g. when the call hangs up)
Allow users to change in-call audio output to different endpoints (not an issue)
Ensure that when in-call audio has routed to a different "default" endpoint, that any other sounds that are played are routed to the same endpoint (this is what I need help with)
Currently, when I route audio to a different endpoint, other sounds that are played with Windows UWP MediaPlayer do not get routed to the same "new" endpoint. This makes sense since we aren't changing application-wide settings.
My question is: How do I route audio to the same endpoint that the call audio is going through, given that I'm using Windows UWP MediaPlayer and given that I can get device information?
When playing audio using Windows UWP MediaPlayer, how do I route audio to a specific audio device/endpoint?
Please check the Output to a specific audio endpoint document. By default, the audio output from a MediaPlayer is routed to the default audio endpoint for the system, but you can specify a specific audio endpoint that the MediaPlayer should use for output.
You could use GetAudioRenderSelector to get the render selector, then use FindAllAsync to enumerate the render devices, and then assign the specific device to the MediaPlayer's AudioDevice property.

OpenTok real-time audio transcription

I am trying to transcribe the audio in an OpenTok session in real time. The OpenTok API does not seem to have that feature. Is there any way I can capture the data in some form and push it to another script/tool that does the transcription?
The issue is not with transcribing; the issue is accessing the live audio stream data and using it in real time.
You can get access to the video/audio stream (MediaStream) with https://tokbox.com/developer/sdks/js/reference/OT.html#getUserMedia in the client SDK.
You can manipulate the audio using the APIs available in the Web Audio spec.
Publish audio from an audio MediaStreamTrack object. For example, you can use the AudioContext object and the Web Audio API to dynamically generate audio. You can then call createMediaStreamDestination().stream.getAudioTracks()[0] on the AudioContext object to get the audio MediaStreamTrack object to use as the audioSource property of the options object you pass into the OT.initPublisher() method. For a basic example, see the Stereo-Audio sample in the opentok-web-samples repo on GitHub.
The GitHub example above injects an audio stream, but you can also extract/capture the audio before injecting it. For details, see:
https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API.
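A sketch of that capture step, assuming opentok.js (`OT`) in a browser; `sendChunk` is a hypothetical callback (e.g. posting samples to your transcription backend over a WebSocket), and `captureAudioForTranscription` is an illustrative name:

```javascript
// Sketch: tap the local audio as raw PCM so it can be pushed to a
// transcription service. Assumes opentok.js (OT) is loaded in a browser.
async function captureAudioForTranscription(sendChunk) {
  // Get the local MediaStream that OpenTok would publish.
  const mediaStream = await OT.getUserMedia({ videoSource: null });

  const audioContext = new AudioContext();
  const sourceNode = audioContext.createMediaStreamSource(mediaStream);

  // ScriptProcessorNode is deprecated in favour of AudioWorklet, but it is
  // the simplest way to sketch raw-sample access here.
  const processor = audioContext.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    const samples = event.inputBuffer.getChannelData(0); // Float32Array of PCM
    sendChunk(samples.slice()); // copy: the underlying buffer is reused
  };
  sourceNode.connect(processor);
  processor.connect(audioContext.destination);

  // The same stream can still be published to the session.
  return mediaStream;
}
```

Each `onaudioprocess` callback delivers a buffer of raw samples, which is the "live audio stream data" the question asks for; how you chunk and transport it to the transcription tool is up to your backend.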

ffplay unable to play a Facebook Live stream URL (RTMPS)

I am trying to play a Facebook Live streaming URL using ffplay, using the RTMPS stream URL and stream key. I have enabled OpenSSL and RTMPS support in FFmpeg. But when running the ffplay command with the stream URL and key, I get a "Pull function error".
If there is any known solution, please let me know.
Thank you.
That is not a valid method of playing back the video. Facebook disables RTMP pull; they only use RTMPS for push, i.e. sending video to Facebook, not getting video from Facebook. There is nothing you can do to make that work.

Record a live HTTP stream to a file using Xamarin

I have a live HTTP stream URL and I want to record the video for some period of time.
How can I do this in Xamarin Android code?
thank you
