Google Cast SDK: manual handling of audio tracks and subtitles on sender and receiver - Chromecast

We ran into an issue manually handling audio tracks and subtitles on the web and mobile senders (v3 for both).
Basically, we are able to add track info before loading the media, and we do find the added tracks on the receiver, but the tracks that come from the manifest are also present, in two formats (AF and standard object).
Is there a way to handle them in one place and to remove the originals that come from the manifest on the receiver side?
Additionally, will the senders be notified of the change this way (e.g. only the manually added audio track being visible)?
Many thanks for your support.

You can use message interception:
https://developers.google.com/cast/docs/caf_receiver_features#message-interception
Your interceptor should return the modified request or a Promise that resolves with the modified request value.
You can add your own tracks inside that interceptor, e.g.:
const playerManager =
  cast.framework.CastReceiverContext.getInstance().getPlayerManager();
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, request => {
    request.media.contentId = mediaUrl;
    request.media.contentType = 'application/dash+xml';
    request.media.tracks = [{
      trackId: 1,
      trackContentId: captionUrl,
      trackContentType: 'text/vtt',
      type: cast.framework.messages.TrackType.TEXT
    }];
    return request;
  });
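If you also want senders to see only the manually added tracks, one option (a sketch, not from the original answer) is to intercept the outgoing MEDIA_STATUS messages as well and filter the track list before it is broadcast. This assumes you can recognize your own tracks, e.g. by a trackId convention you define yourself:
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.MEDIA_STATUS, status => {
    // Keep only the tracks we injected ourselves; trackId < 100 is a
    // hypothetical convention, adjust it to however you mark your tracks.
    if (status.media && status.media.tracks) {
      status.media.tracks = status.media.tracks.filter(
        track => track.trackId < 100);
    }
    return status;
  });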

Is there a way to detect when a participant has muted their audio in opentok.js?

I went through the Publisher docs, which have the publishVideo(value) and publishAudio(value) methods.
For the video part, the subscriber receives the videoDisabled or videoEnabled event with the reason publishVideo, which lets me determine whether the subscribed participant intentionally turned off their video. But I can't find anything similar for audio, such as audioDisabled or audioEnabled. The audioBlocked event supposedly only covers blocks by the browser's autoplay policy: "Dispatched when the subscriber's audio is blocked because of the browser's autoplay policy."
The audioLevelUpdated event provides the current audio level, but that could just be silence, not an intentional mute, so that doesn't look ideal for this purpose.
I want to show an audio-muted icon on the subscribed participant's element when they have intentionally turned off their audio by calling the publishAudio() method. How can that be achieved?
Referenced docs:
Subscriber events: https://tokbox.com/developer/sdks/js/reference/Subscriber.html#events
Publisher methods: https://tokbox.com/developer/sdks/js/reference/Publisher.html#methods
Each stream has a hasAudio attribute that returns false if the user's audio is muted or their microphone is muted. Similarly, streams also have a hasVideo attribute. You can reference the Stream docs at https://tokbox.com/developer/sdks/js/reference/Stream.html.
I personally use it like so:
session.streams.forEach((stream) => {
  const name = stream.name;
  const video = stream.hasVideo;
  const audio = stream.hasAudio;
});
You can listen for these changes with the session.on('streamPropertyChanged') event: https://tokbox.com/developer/sdks/js/reference/StreamPropertyChangedEvent.html
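A minimal sketch of that listener (showMutedIcon is a hypothetical UI helper, not part of the SDK):
session.on('streamPropertyChanged', (event) => {
  // hasAudio flips when the remote publisher calls publishAudio(false/true)
  if (event.changedProperty === 'hasAudio') {
    showMutedIcon(event.stream.streamId, !event.newValue);
  }
});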
Have you tried audioLevelUpdated and checking the audio level?
If the level is 0, then it's muted.
https://tokbox.com/developer/sdks/js/reference/Subscriber.html#getAudioVolume
So the steps are: listen for audioLevelUpdated and check the audio volume; the audio volume should give you the subscriber's level.
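If you go that route, a sketch (with the caveat from the question that a level of 0 can also be plain silence; updateMutedIcon is a hypothetical helper):
subscriber.on('audioLevelUpdated', (event) => {
  // event.audioLevel is in the range 0.0 - 1.0
  updateMutedIcon(subscriber.stream.streamId, event.audioLevel === 0);
});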
Set OTSubscriberKitNetworkStatsDelegate on the subscriber:
subscriber.networkStatsDelegate = self
Then the delegate function below is called whenever the subscriber's audio stats change, and you can check the stream's audio state from there:
func subscriber(_ subscriber: OTSubscriberKit, audioNetworkStatsUpdated stats: OTSubscriberKitAudioNetworkStats) {
    // check subscriber.stream?.hasAudio here
}
So on iOS you can also detect mutes through OpenTok's audioNetworkStatsUpdated.

Why is Chromecast unable to stream this HLS video? "Neither ID3 nor ADTS header was found" / Error NETWORK/315

I'm trying to stream some URLs to my Chromecast through a sender app. They're HLS/m3u8 URLs.
Here's one such example URL: https://qa-apache-php7.dev.kaltura.com/p/1091/sp/109100/playManifest/entryId/0_wifqaipd/protocol/https/format/applehttp/flavorIds/0_h65mfj7f,0_3flmvnwc,0_m131krws,0_5407xm9j/a.m3u8
However they never seem to load on the Chromecast, despite other HLS/m3u8 URLs working (example of an HLS stream that does work).
It's not related to CORS as they indeed have the proper CORS headers.
I notice they have separate audio groups in the root HLS manifest file.
When I hook it up to a custom receiver app, the relevant log lines are (I think): Neither ID3 nor ADTS header was found at 0 and cast.player.api.ErrorCode.NETWORK/315 (which I believe is a result of the first).
These are perfectly valid, working HLS URLs: they play back fine in Safari on iOS and desktop, as well as in VLC.
Is there something I need to be doing (either in my sender app or my receiver app) to enable something like the audio tracks? The docs seem to indicate something about that.
I also found this Google issue where a person had a similar issue, but solved it somehow that I can't understand. https://issuetracker.google.com/u/1/issues/112277373
How would I playback this URL on Chromecast properly? Am I to do something in code?
This already has a solution here but I will add this answer in case someone looks up the exact error message / code.
The problem lies in the hlsSegmentFormat which is initialized to TS for multiplexed segments but currently defaults to packed audio for HLS with alternate audio tracks.
The solution is to intercept the CAF LOAD request and set the correct segment format:
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

// Intercept the LOAD request and force the HLS segment format to TS.
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat =
      cast.framework.messages.HlsSegmentFormat.TS;
    return loadRequestData;
  });

context.start();
Source: Google Cast issue tracker
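If you control the sender, the same hint can be sent from there instead of (or in addition to) the receiver-side interceptor. A sketch for a CAF Web Sender, assuming the hlsSegmentFormat field is honored by your receiver (mediaUrl is a placeholder):
const mediaInfo = new chrome.cast.media.MediaInfo(mediaUrl, 'application/x-mpegurl');
// Tell the receiver that the HLS segments are MPEG-TS, not packed audio.
mediaInfo.hlsSegmentFormat = chrome.cast.media.HlsSegmentFormat.TS;
const request = new chrome.cast.media.LoadRequest(mediaInfo);
cast.framework.CastContext.getInstance()
  .getCurrentSession()
  .loadMedia(request);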
For those who manage multiple video sources in various formats and don't want to unconditionally force the HLS fragment format to TS, I suggest catching the error and setting a flag that forces the format on the next retry (by default, the receiver retries 3 times before giving up).
First, declare a global flag that enables the HLS segment format override:
let setHlsSegmentFormat = false;
Then detect the error:
playerManager.addEventListener(cast.framework.events.EventType.ERROR,
  event => {
    if (event.detailedErrorCode ==
        cast.framework.events.DetailedErrorCode.HLS_NETWORK_INVALID_SEGMENT) {
      // Failed parsing HLS fragments; retry with the HLS segment format
      // forced to 'TS'.
      setHlsSegmentFormat = true;
    }
  });
Finally, handle the flag when intercepting the playback request:
playerManager.setMediaPlaybackInfoHandler(
  (loadRequest, playbackConfig) => {
    if (setHlsSegmentFormat) {
      loadRequest.media.hlsSegmentFormat =
        cast.framework.messages.HlsSegmentFormat.TS;
      // Clear the flag so the format is not forced for subsequent
      // playback requests.
      setHlsSegmentFormat = false;
    }
    return playbackConfig;
  });
The playback will quickly fail the first time and will succeed at the next attempt. The loading time is a bit longer but the HLS segment format is only set when required.

YouTube Live API: custom slate image for a scheduled video

I'm working on desktop live streaming software and I'd like to add my own custom thumbnail/image for a scheduled live video (known as the "slateImage" in YouTube's API - https://developers.google.com/youtube/v3/live/getting-started).
I found that the liveBroadcasts resource used by liveBroadcasts.insert contains a parameter called snippet.thumbnails.(key).
However, it doesn't work for me: the video keeps the same default slate image, and yes, I remembered to set contentDetails.startWithSlate = true.
Has anybody faced the same issue?
If you check the documentation for liveBroadcasts.insert:
Provide a liveBroadcast resource in the request body. For that resource:
You must specify a value for these properties:
snippet.title
snippet.scheduledStartTime
status.privacyStatus
You can set values for these properties:
snippet.title
snippet.description
snippet.scheduledStartTime
snippet.scheduledEndTime
status.privacyStatus
contentDetails.monitorStream.enableMonitorStream
contentDetails.monitorStream.broadcastStreamDelayMs
contentDetails.enableDvr
contentDetails.enableContentEncryption
contentDetails.enableEmbed
contentDetails.recordFromStart
contentDetails.startWithSlate
contentDetails.enableClosedCaptions
The same is stated under liveBroadcasts.update. I would say that snippet.thumbnails.(key) is read-only; you are not allowed to write to it through this API.
contentDetails.startWithSlate
This setting indicates whether the broadcast should automatically begin with an in-stream slate when you update the broadcast's status to live. After updating the status, you then need to send a liveCuepoints.insert request that sets the cuepoint's eventState to end to remove the slate and make your broadcast stream visible to viewers. When you update a broadcast, this property must be set if your API request includes the contentDetails part in the part parameter value. However, when you insert a broadcast, the property is optional and has a default value of false.
Note: This property cannot be updated once the broadcast is in the testing or live state.
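As a side note (not covered in the answer above), if the goal is the video's thumbnail rather than the in-stream slate, the Data API does have a separate thumbnails.set endpoint for videos you own. A hedged sketch with the googleapis Node.js client; oauth2Client and VIDEO_ID are placeholders you must supply:
const {google} = require('googleapis');
const fs = require('fs');

// Sketch: upload a custom thumbnail for the broadcast's video via
// thumbnails.set (assumes an already-authorized OAuth2 client).
const youtube = google.youtube({version: 'v3', auth: oauth2Client});
youtube.thumbnails.set({
  videoId: VIDEO_ID,
  media: {body: fs.createReadStream('thumbnail.png')},
}).then(() => console.log('thumbnail uploaded'));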

Control Chromecast Audio volume using Chrome API

I need to draw a proper volume level in the client (sender) UI when working with Chromecast Audio. I see there are two ways of receiving (and probably setting) volume from a Chromecast - through the Receiver and Media namespaces. In my understanding, the Receiver namespace stores the general device volume, while the Media namespace stores the volume of the currently playing track.
It seems that I can't get the media volume with a GET_STATUS request on the Media namespace before I load any tracks with a LOAD request. So how do I correctly display the volume that will be used before the media is loaded? Switching the UI from the RECEIVER volume to the MEDIA volume after the media is loaded doesn't look like a good solution and would surprise users.
I also fail to control the volume using a SET_VOLUME request on the Receiver namespace - I get no reply from the Chromecast:
Json::Value msg, response;
msg["type"] = "SET_VOLUME";
msg["requestId"] = ++request_id;
msg["volume"]["level"] = value; // float
response = send("urn:x-cast:com.google.cast.receiver", msg);
If the following lines are used instead of the last one, media volume is controlled OK:
msg["mediaSessionId"] = m_media_session_id;
response = send("urn:x-cast:com.google.cast.media", msg);
What am I doing wrong here?
In order to set the volume on the receiver, you should be using the SDK's APIs instead of sending a hand-crafted message. For example, you should use setReceiverVolumeLevel(). Also, use the receiver volume and not the stream volume.
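In a Chrome (JavaScript) sender that looks roughly like the sketch below; session is the current chrome.cast.Session and the 0.5 level is an arbitrary example:
// Set the device (receiver) volume through the SDK, not a raw message.
session.setReceiverVolumeLevel(
  0.5,
  () => console.log('volume set'),
  (err) => console.error('failed to set volume', err));
// The current device volume is available on the session for drawing the UI.
console.log(session.receiver.volume.level);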

Identify http request / response by id

I am building an extension with Firefox's Add-on SDK (v1.9) that will be able to read all HTTP requests / responses and calculate the time they took to load. This includes not only the main frame but any other loaded resource (sub-frame, script, CSS, image, etc.).
So far, I am able to use the "observer-service" module to listen for:
"http-on-modify-request" when a HTTP request is created.
"http-on-examine-response" when a HTTP response is received
"http-on-examine-cached-response" when a HTTP response is received entirely from cache
"http-on-examine-merged-response" when a HTTP response is received partially from cache
My application follows this sequence:
A request is created and registered through the observer.
I save the current time and mark it as the start_time of the request load.
A response for a request is received and registered through one of the observers.
I save the current time and use the previously saved time to calculate the load time of the request.
Problem:
I am not able to link the start and end times of the load, since I cannot find a request ID (or other unique value) that ties the request to the response.
I am currently using the URL of the request / response to tie them together, but this is not correct: it creates a race condition when two or more identical URLs are loading at the same time. Google Chrome solves this issue by providing unique requestIds, but I have not been able to find similar functionality in Firefox.
I am aware of two ways to recognize a channel that you receive in this observer. The "old" solution is to use nsIWritablePropertyBag interface to attach data to the channel:
var {Ci} = require("chrome");
var channelId = 0;
...
// Attach channel ID to a channel
if (channel instanceof Ci.nsIWritablePropertyBag)
  channel.setProperty("myExtension-channelId", ++channelId);
...
// Read out channel ID for a channel
if (channel instanceof Ci.nsIPropertyBag)
  console.log(channel.getProperty("myExtension-channelId"));
The other solution would be using WeakMap API (only works properly starting with Firefox 13):
var channelMap = new WeakMap();
var channelId = 0;
...
// Attach channel ID to a channel
channelMap.set(channel, ++channelId);
...
// Read out channel ID for a channel
console.log(channelMap.get(channel));
I'm not sure whether WeakMap is available in the context of Add-on SDK modules; you might have to "steal" it from a regular JavaScript module:
var {Cu} = require("chrome");
var {WeakMap} = Cu.import("resource://gre/modules/FileUtils.jsm", null);
Obviously, in both cases you can attach more data to the channel than a simple number.
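Putting the two halves together with the question's observer-service setup, a rough sketch of timing requests via the WeakMap approach (untested against SDK 1.9; the module name and the (subject, data) callback signature follow the SDK's observer-service docs):
var {Ci} = require("chrome");
var observers = require("observer-service");

var startTimes = new WeakMap();

observers.add("http-on-modify-request", function(subject, data) {
  // subject is the nsIHttpChannel for the outgoing request
  if (subject instanceof Ci.nsIHttpChannel)
    startTimes.set(subject, Date.now());
});

observers.add("http-on-examine-response", function(subject, data) {
  if (!(subject instanceof Ci.nsIHttpChannel))
    return;
  var start = startTimes.get(subject);
  if (start !== undefined)
    console.log(subject.URI.spec + " loaded in " + (Date.now() - start) + " ms");
});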
Firebug does what you're thinking of by implementing a central observer for these events:
https://github.com/firebug/firebug/blob/master/extension/modules/firebug-http-observer.js
This might be a good place to start, although eventually Firefox will ship a more complete network monitor / debugger by default. I think I read somewhere that it will be based on Firebug's.
