Is there a way to detect when a participant has muted their audio in opentok.js? - opentok

I went through the publisher docs which has the methods publishVideo(value) and publishAudio(value).
Corresponding to the video part, the subscriber receives a videoDisabled or videoEnabled event with the reason publishVideo, which lets me determine whether the subscribed participant has intentionally turned off their video. However, I can't find anything similar for audio, such as audioDisabled or audioEnabled. The audioBlocked event apparently only covers blocks by the browser's autoplay policy: "Dispatched when the subscriber's audio is blocked because of the browser's autoplay policy."
The audioLevelUpdated event provides the current audio level, but a level of zero could just be silence rather than an intentional mute, so it doesn't look ideal for this purpose.
I want to show an audio-muted icon on the subscribed participant's element when they have intentionally turned off their audio by calling the publishAudio() method. How can that be achieved?
Referenced docs:
Subscriber events: https://tokbox.com/developer/sdks/js/reference/Subscriber.html#events
Publisher methods: https://tokbox.com/developer/sdks/js/reference/Publisher.html#methods

Each stream has a hasAudio attribute that returns false if the user's audio is muted or their microphone is disabled. Similarly, streams also have a hasVideo attribute. You can reference the Stream docs at https://tokbox.com/developer/sdks/js/reference/Stream.html.
I personally use it like so:
session.streams.forEach((stream) => {
  const name = stream.name;
  const video = stream.hasVideo;
  const audio = stream.hasAudio;
});
You can listen for these changes with the session.on('streamPropertyChanged') event: https://tokbox.com/developer/sdks/js/reference/StreamPropertyChangedEvent.html
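Putting those two pieces together, here is a minimal sketch. The element id scheme ('mute-icon-' + streamId) and the 'hidden' CSS class are assumptions; adapt them to your own markup:

```javascript
// Pure helper: decide icon visibility from a streamPropertyChanged payload.
// Returns true/false for "hide the muted icon", or null for non-audio changes.
function muteIconHidden(changedProperty, newValue) {
  if (changedProperty !== 'hasAudio') return null; // ignore hasVideo etc.
  return newValue; // hasAudio === true means audio is on, so hide the icon
}

// Wire-up (assumes an active OpenTok session object named `session`):
if (typeof session !== 'undefined') {
  session.on('streamPropertyChanged', (event) => {
    const hidden = muteIconHidden(event.changedProperty, event.newValue);
    if (hidden !== null) {
      document
        .getElementById('mute-icon-' + event.stream.streamId)
        .classList.toggle('hidden', hidden);
    }
  });
}
```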

Have you tried listening for audioLevelUpdated and checking the audio level?
If the level is 0, the subscriber is muted.
https://tokbox.com/developer/sdks/js/reference/Subscriber.html#getAudioVolume
So the steps are: listen for audioLevelUpdated and check the audio volume, which gives you the subscriber's current level.
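For completeness, a sketch of that approach. Note the caveat from the question: a level of 0 can also be plain silence, so the hasAudio-based approach is more reliable for detecting intentional mutes. The window size and the helper names here are my own assumptions:

```javascript
// Tracks recent audioLevelUpdated samples; reports "probably muted" only
// after a full window of zero-level samples (windowSize is an assumption).
function makeLevelTracker(windowSize = 20) {
  const samples = [];
  return function probablyMuted(level) {
    samples.push(level);
    if (samples.length > windowSize) samples.shift();
    const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
    return samples.length === windowSize && avg === 0;
  };
}

// Wire-up (assumes an OpenTok subscriber object named `subscriber`):
if (typeof subscriber !== 'undefined') {
  const probablyMuted = makeLevelTracker();
  subscriber.on('audioLevelUpdated', (event) => {
    if (probablyMuted(event.audioLevel)) {
      console.log('audio level has been 0 for a while, possibly muted');
    }
  });
}
```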

Set the OTSubscriberKitNetworkStatsDelegate on the subscriber:
subscriber.networkStatsDelegate = self
Then the delegate method below is called whenever the subscriber's audio stats update, so you can check the microphone status there:
func subscriber(_ subscriber: OTSubscriberKit, audioNetworkStatsUpdated stats: OTSubscriberKitAudioNetworkStats) {
    // check stream.hasAudio here
}
See also the OpenTok documentation for audioNetworkStatsUpdated.

Related

ThreeJS Positional Audio with WebRTC streams produces no sound

I am trying to use ThreeJS positional audio with WebRTC to build a sort of 3D Room audio chat feature. I am able to get the audio streams sent across the clients. However, the positional audio does not seem to work. Irrespective of where the user (camera) moves, the intensity of the audio remains the same. Some relevant code is being posted below:
The getUserMedia promise handler contains the following:
// create the listener
listener = new THREE.AudioListener();
// add it to the camera object
camera.object3D.add(listener);
// store the local stream to be sent to other peers
localStream = stream;
Then, on each WebRTC peer connection, I attach the received stream to a mesh using PositionalAudio:
// create the sound object
sound = new THREE.PositionalAudio(listener); // using the listener created earlier
const soundSource = sound.context.createMediaStreamSource(stream);
// set the media stream as the sound source
sound.setNodeSource(soundSource);
// assume that I have a handle to the obj where I need to set the sound
obj.object3D.add(sound);
This is done for each of the clients, and these local streams are sent to one another via WebRTC, yet there is no sound from the speakers. Thanks.

google-cast-sdk audio and subtitles manual handling on sender and receiver

We encountered an issue manually handling audio tracks and subtitles on the web and mobile senders (v3 for both).
Basically, we are able to add some track info before loading the media. We can find the added tracks on the receiver, but the tracks that come from the manifest are also present, in two formats (AF and a standard object).
Is there a way to handle them in one place and to remove the originals that come from the manifest on the receiver side?
Additionally, would the senders then be notified of the change (e.g. only the manually added audio track being visible)?
Many thanks for your support.
You can use message interception:
https://developers.google.com/cast/docs/caf_receiver_features#message-interception
Your interceptor should return the modified request or a Promise that resolves with the modified request value.
You can add your own tracks:
request.media.contentId = mediaUrl;
request.media.contentType = 'application/dash+xml';
request.media.tracks = [{
  trackId: 1,
  trackContentId: captionUrl,
  trackContentType: 'text/vtt',
  type: cast.framework.messages.TrackType.TEXT
}];
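For context, a minimal sketch of how such a snippet could be wired into a LOAD interceptor on a CAF receiver. The URLs and the replaceTracks helper are hypothetical, and whether replacing request.media.tracks fully suppresses the manifest's own tracks depends on your stream:

```javascript
// Hypothetical helper: rewrite a LOAD request so that only our own
// text track remains in request.media.tracks.
function replaceTracks(request, captionUrl, textType) {
  request.media.tracks = [{
    trackId: 1,
    trackContentId: captionUrl,
    trackContentType: 'text/vtt',
    type: textType
  }];
  return request;
}

// Registration (only runs inside a CAF receiver context):
if (typeof cast !== 'undefined') {
  const context = cast.framework.CastReceiverContext.getInstance();
  context.getPlayerManager().setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD,
    (request) => replaceTracks(
      request,
      'https://example.com/captions.vtt', // hypothetical URL
      cast.framework.messages.TrackType.TEXT
    )
  );
  context.start();
}
```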

Masking voice while in an tokbox session

I have an application using TokBox to create 1:1 video calls with users. Is it possible to mask/morph a user's voice (e.g. with a pitch modifier) as they speak during a TokBox session?
It's possible but not using the officially supported API. You will need to intercept the getUserMedia call, make modifications to the audio track of the intercepted stream, and pass through the modified stream to opentok.js.
See https://tokbox.com/blog/camera-filters-in-opentok-for-web/ for an example on how to intercept the getUserMedia call to make modifications to the video track of the stream.
Here is a basic example of using the mockGetUserMedia function from the blog post to replace the audio track with a simple sine wave:
mockGetUserMedia((originalStream) => {
  const audioContext = new window.AudioContext();
  const destination = audioContext.createMediaStreamDestination();
  const customStream = destination.stream;
  // keep the original video track(s)
  originalStream.getVideoTracks().forEach(videoTrack => customStream.addTrack(videoTrack));
  // replace the audio with a simple sine wave
  const oscillator = audioContext.createOscillator();
  oscillator.connect(destination);
  oscillator.start(audioContext.currentTime);
  return customStream;
});
Remember: This is not an officially supported API, use at your own risk.
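A variation on the same idea, as a sketch: instead of discarding the original audio, route it through a Web Audio node chain. The low-pass BiquadFilter here is only a stand-in for real voice morphing (an actual pitch shifter would need an AudioWorklet or similar), and the function name and cutoff frequency are my own:

```javascript
// Routes the original stream's audio through a filter node (stand-in for
// a pitch shifter) and returns a new stream carrying the processed audio
// plus the untouched video tracks.
function buildProcessedStream(audioContext, originalStream) {
  const source = audioContext.createMediaStreamSource(originalStream);
  const filter = audioContext.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 800; // crude muffling; swap in a real pitch shifter
  const destination = audioContext.createMediaStreamDestination();
  source.connect(filter);
  filter.connect(destination);
  const customStream = destination.stream;
  originalStream.getVideoTracks().forEach((t) => customStream.addTrack(t));
  return customStream;
}
```

You would call this from inside the mockGetUserMedia callback in place of the oscillator setup, returning its result as the custom stream.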

Control Chromecast Audio volume using Chrome API

I need to display a proper volume level in the UI on the client (sender) side when working with a Chromecast Audio. I see there are two ways of receiving (and probably setting) the volume from the Chromecast: via the Receiver and Media namespaces. In my understanding, the Receiver namespace stores the device's general volume, while the Media namespace stores the volume of the currently playing track.
It seems that I can't get the media volume with a GET_STATUS request on the Media namespace before I load any tracks with a LOAD request. So how do I correctly display the volume that will be used before media is loaded? Switching the UI from the RECEIVER volume to the MEDIA volume once media is loaded doesn't look like a good solution and would surprise users.
I also fail to control the volume using a SET_VOLUME request on the Receiver namespace: I get no reply from the Chromecast.
Json::Value msg, response;
msg["type"] = "SET_VOLUME";
msg["requestId"] = ++request_id;
msg["volume"]["level"] = value; // float
response = send("urn:x-cast:com.google.cast.receiver", msg);
If the following lines are used instead of the last one, media volume is controlled OK:
msg["mediaSessionId"] = m_media_session_id;
response = send("urn:x-cast:com.google.cast.media", msg);
What am I doing wrong here?
In order to set the volume on the receiver, you should use the SDK's APIs instead of sending a hand-crafted message; for example, use setReceiverVolumeLevel(). Also, use the receiver volume, not the stream volume.
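With the Chrome sender SDK that looks roughly like the sketch below. The clampVolume helper and the castSession variable name are my own assumptions; setReceiverVolumeLevel() is the Session method the answer refers to:

```javascript
// Receiver volume must be a float in [0, 1]; clamp before sending.
function clampVolume(level) {
  return Math.min(1, Math.max(0, level));
}

// Wire-up (assumes a connected chrome.cast Session named `castSession`):
if (typeof castSession !== 'undefined') {
  castSession.setReceiverVolumeLevel(
    clampVolume(0.5),
    () => console.log('receiver volume set'),
    (err) => console.error('failed to set receiver volume', err)
  );
}
```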

Listen to all html video events

Playing around with HTML5 video, and I was wondering if there is a way to list its events as they happen.
I have one for ended: myVideo.addEventListener('ended', videoEnded, false);
which works fine, but how can I create a listener that will listen to every event and name it? Something like:
myVideo.addEventListener('ALL', AddToLog, false);
function AddToLog() {
  console.log(eventname);
}
Any pointers welcome.
Dan.
Subscribe to all <video> or <audio> events at once:
function addListenerMulti(el, s, fn) {
  s.split(' ').forEach(e => el.addEventListener(e, fn, false));
}
var video = document.getElementById('myVideo');
addListenerMulti(video, 'abort canplay canplaythrough durationchange emptied encrypted ended error interruptbegin interruptend loadeddata loadedmetadata loadstart mozaudioavailable pause play playing progress ratechange seeked seeking stalled suspend timeupdate volumechange waiting', function(e){
console.log(e.type);
});
It's also a good idea to study the media events list in the specification.
Credit for the multiple-event listener goes to @RobG's solution.
You can't; you need to listen for specific events.
You can find a list of the available events in the specification.
