Chromecast Receiver CAF, infinite loading of MPEG-DASH stream, ShakaPlayer access - chromecast

I've got a problem with Chromecast playing an MPEG-DASH livestream. The infinite loading occurs because the manifest lacks a UTCTiming tag. This is a known problem with Shaka Player; it's the first item in the FAQ: https://github.com/google/shaka-player/blob/master/docs/tutorials/faq.md
On Chromecast, however, I can't access Shaka Player directly (or maybe there is a way I'm not familiar with). There are two solutions to the problem that I can think of:
Modify the manifest dynamically:
this.playbackConfig.manifestHandler = (manifest) => {
  // Parse the manifest, append a UTCTiming element, re-serialize:
  const doc = new DOMParser().parseFromString(manifest, 'application/xml');
  const customUTC = doc.createElement('UTCTiming');
  customUTC.setAttribute('schemeIdUri', 'urn:mpeg:dash:utc:http-head:2014');
  customUTC.setAttribute('value', this.manifestUrl);
  doc.documentElement.appendChild(customUTC);
  return new XMLSerializer().serializeToString(doc);
};
This, however, doesn't change the behaviour of the Chromecast player; the infinite loading still occurs. Am I doing something wrong here?
Using the older player (Media Player Library) by setting useLegacyDashSupport makes Chromecast play the stream normally, but breaks the UI a little. Can I switch to the legacy player dynamically, only when it's needed? Based on the manifest, for example, or during the loadRequest from the sender app.

The UTCTiming element is required because Shaka Player needs to know what time it is on the server so it can play at the right point in the live stream. If the client and the server have different clock times, the video will likely fail to play. It isn't really a requirement of Shaka Player so much as a requirement of DASH in general.
But if you can't set the element in the manifest, you can use the manifest.dash.clockSyncUri configuration parameter (see the docs) to set a clock-sync URL. For example:
player.configure({manifest: {dash: {clockSyncUri: 'https://example.com/clock'}}});
Note that the URL used for clock sync needs to have a correct Date header on the response (be careful of caching) and if the request is cross-origin, you'll need to expose the header or there will be CORS errors.
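To illustrate what that clock sync buys you, here is a minimal sketch, not Shaka's actual implementation, of how a client can derive its clock offset from the Date header of such a response (the header value below is made up):

```javascript
// Sketch: deriving a clock offset from an HTTP Date header, the way an
// 'urn:mpeg:dash:utc:http-head:2014' UTCTiming source is used. The
// header value below is illustrative, not from a real server.
function clockOffsetMs(serverDateHeader, localNowMs) {
  // Positive offset means the server clock is ahead of the client.
  return Date.parse(serverDateHeader) - localNowMs;
}

// e.g. a client whose clock reads the Unix epoch, against a server
// reporting ten seconds past it:
const offset = clockOffsetMs('Thu, 01 Jan 1970 00:00:10 GMT', 0);
```

The player then shifts its notion of "now" by this offset when computing the live edge.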
Also, shaka-player#999 is a feature request to help with drift. After that feature lands, the Player will use the segments in the manifest to guess the live edge instead of using the clock time. This means you won't have to set up clock sync.

I agree with you. It is quite annoying that Shaka forces you to use UTCTiming.
If you have the option of modifying Shaka Player's code in your fork, I would suggest calling the setClockOffset method after the manifest is initialized (check here). The manifest has a presentationTimeline, which has the setClockOffset method. Otherwise, you can reach the manifest from here. The setClockOffset method is normally triggered by UTCTiming. If you cannot set up UTCTiming in your manifest, setting the offset manually might be the best option for your case.
A code sample would be:
player.load(manifestUri).then(() => {
  const manifest = player.getManifest();
  const presentationTimeline = manifest.presentationTimeline;
  presentationTimeline.setClockOffset(10 /* find a suitable offset */);
});
Good luck!


"Translate" utag.link (tealium tracking function) into _satellite.track (Adobe Launch tracking)

We are migrating Tealium web analytics tracking to Adobe Launch.
Part of the website is tagged with the utag.link method, e.g.
utag.link({
  "item1" : "item1_value",
  "item2" : "item2_value",
  "event" : "event_value"
})
and we need to "translate" it into Adobe Launch syntax, to save developers time, e.g.
_satellite.track("event_value",{item1:"item1_value",item2:"item2_value"})
How would you approach it? Is it doable?
Many thanks
Pavel
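The mechanical part of the requested translation can be sketched as a small helper (the function name is made up, and it assumes every utag.link payload carries an `event` key) that splits a utag payload into the two arguments `_satellite.track` takes:

```javascript
// Hypothetical helper: turn a utag.link payload into the
// (eventName, payload) argument pair for _satellite.track.
function toSatelliteArgs(utagData) {
  const { event, ...detail } = utagData;
  return [event, detail];
}

// Usage on a page where Launch provides _satellite:
//   _satellite.track(...toSatelliteArgs({
//     "item1": "item1_value", "item2": "item2_value", "event": "event_value"
//   }));
```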
Okay, this is a bit more complex than it looks. Technically, this answers your question completely: https://experienceleaguecommunities.adobe.com/t5/adobe-experience-platform-launch/satellite-track-and-passing-related-information/m-p/271467
Hooowever! This will make the tracking accessible only to Launch/DTM. Other TMSes, or even global JS on the page, will end up relying on Launch if they need a piece of that data too. And imagine what happens when, in five years, you want to migrate away from Launch like you are now doing with Tealium: you will have to do the same needless work again. If your Tealium implementation had been done more carefully, you wouldn't need to waste time on this migration now.
Therefore, I would suggest not using _satellite.track(). I would suggest using pure JS CustomEvents with payloads in details. Launch natively has triggers for native JS events and the ability to access their details through CJS: event.details. But even if I need to use that in GTM, I can deploy a simple event listener in GTM that will re-route all the wonderful CustomEvents into DL events and their payloads in neat DL vars.
Having this, you will never need to bother front-end devs when you need to make tracking available for a different TMS whether as a result of migration or parity tracking into a different analytics system.
In general, I agree with BNazaruk's answer/philosophy that the best way to future-proof your implementation is to create a generic data layer and broadcast it via custom JavaScript events. Virtually all modern tag managers have a way to subscribe to them and map them to their equivalent of environment variables, event rules, etc.
Having said that, here is an overview of Adobe's current best practice for Adobe Experience Platform Data Collection (Launch), using the Adobe Client Data Layer Extension.
Once you install the extension, you change your utag calls, e.g.
utag.link({
  "item1" : "item1_value",
  "item2" : "item2_value",
  "event" : "event_value"
})
to this:
window.adobeDataLayer = window.adobeDataLayer || [];
window.adobeDataLayer.push({
  "item1" : "item1_value",
  "item2" : "item2_value",
  "event" : "event_value"
});
A few notes about this:
adobeDataLayer is the default array name the Launch extension will look for. You can change this to something else within the extension's config (though Adobe does not recommend this, because reasons).
You can keep the payload structure you used for Tealium and work with that, though longer term you should consider restructuring your data layer. Things get a little complicated when dealing with Tealium's data layer syntax/conventions vs. Launch's; for example, Tealium's convention of multiple comma-delimited events in one event string vs. Launch's Event Rules, which expect a single event in the string. There are workarounds for this (ask a separate question if you need help), but again, the better long-term path is to change the data layer structure to something more standard.
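One such workaround can be sketched like this (the function name is made up): fan out Tealium's comma-delimited event string into one data layer push per event, so each Launch Event Rule still sees a single event name.

```javascript
// Sketch: one adobeDataLayer-style push per comma-delimited Tealium
// event. Function name and event values are illustrative.
function pushTealiumPayload(dataLayer, payload) {
  const { event = '', ...rest } = payload;
  const events = event.split(',').map((s) => s.trim()).filter(Boolean);
  for (const name of events) {
    dataLayer.push({ ...rest, event: name });
  }
}

const dl = [];
pushTealiumPayload(dl, { event: 'scAdd,scCheckout', item1: 'x' });
// dl now holds two entries, one per event
```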
Then, within Launch, you can create Data Elements to map to a given data point passed in the adobeDataLayer.push call.
Meanwhile, you can make a Rule with an event that listens for the pushed data, based on various criteria. A common example is to listen for a Specific Event, which corresponds to the event value you pushed.
Then, in the Rule's Conditions and Actions, you can reference the Data Elements you made. For example, you can add a Condition that triggers the Rule if event equals "event_value" (above) AND item2 equals "item2_value".
Another example: an Action that sets Adobe Analytics eVar1 to the value of item2.
I would advise removing any TMS dependencies from your platform code and migrating to a generic data layer. That way your developers will not have any issues migrating to another TMS in the future.
See this article about a generic data layer that is not TMS-provider specific: https://dev.to/alcazes/generic-data-layer-1i90

Audio Unit (AUv3) in macOS only works in selected track in DAW

When I build an Audio Unit extension (instrument/aumu) using the template in Xcode (New Project → App, then New Target → Audio Unit Extension) and then build and run it with either Logic Pro X or GarageBand, the plugin only functions when the track it's inserted on is selected.
If any other track is selected, breakpoints in e.g. the overridden process or handleMIDIEvent functions never get triggered. (Plus, the unselected tracks start to output a constant, short-period glitch noise if they were actually outputting sound before the selected track changed.)
Any idea why this happens? I would suspect a fault on Xcode's or the DAW's part, but I have seen other macOS AUv3 plugins (still an extremely rare breed, unfortunately) work just fine, so I know it's definitely possible.
After much fiddling, I finally found the problem. (I REALLY wish there was more knowledge widely available online on AUv3...)
It seems that both Logic Pro X and GarageBand, on each render cycle, ask the plugin for blocks of different lengths depending on whether the plugin is on the selected track or not. If the track is selected, the requested block will be the length set in the DAW's settings (I/O Buffer Size), presumably for highest-priority rendering. Unselected tracks are asked for 1024 frames (the longest Logic's buffers can go, it seems), regardless of the I/O Buffer Size setting.
1024 frames is longer than the
AUAudioFrameCount maxFramesToRender = 512;
that the Audio Unit Extension template stubs in DSPKernel.hpp, so rendering fails only on unselected tracks. (The short-period glitch noise I mentioned appears to be whatever values were left in the output buffer since the last playback, re-output once every 1024 frames.)
Setting maxFramesToRender = 1024; fixes that problem.
And now for a heavily opinionated rant:
I can't help but feel this default maxFramesToRender value is setting newbies (like me) up for failure, since 1) it's never mentioned in the official tutorials or documentation (AFAIK), 2) it doesn't play nice with Apple's own DAWs, presumably the most obvious places to test, and 3) it initially works, but only until you try to play two tracks at once, by which point you probably already have a lot of code written and are all the more susceptible to confusion.
But oh well, I guess it is what it is.

Control Chromecast buffering at start

Is there a way to control the amount of buffering CC devices do before they start playback?
My sender app sends real-time FLAC audio, and the CC waits 10+ seconds before starting to play. I've built a custom receiver and tried changing autoPauseDuration and autoResumeDuration, but it does not seem to matter. I assume they are only used when an underflow event happens, not at startup.
I realize that forcing a start at a low buffering level might end up in an underflow, but that's a "risk" much preferable to always waiting so long before playback starts. And if it does happen, the autoPause/Resume hysteresis would allow a larger re-buffer at that point.
If you are using the Media Player Library, take a look at player.getBufferDuration. The docs cover more details about how you can customize the player behavior: https://developers.google.com/cast/docs/player#frequently-asked-questions
Finally, it turned out to be a problem with the way I sent audio to the default receiver. I was streaming FLAC, and since it is a streamable format, I did not include any header (you can start anywhere in the stream; it's just a matter of finding sync). But the FLAC decoder in the CC does not like that and was taking 10+ seconds to start. As soon as I added a STREAMINFO header, the problem went away.

libtorrent new piece alerts

I am developing an application that will stream multimedia files over torrents.
The backend needs to serve new pieces to the frontend as they arrive.
I need a mechanism to get notified when new pieces have arrived and been verified. From what I can tell, I could do this using block_finished_alerts. I would keep track of which blocks have arrived for a given piece, and read the piece when all blocks have arrived.
This solution seems kind of roundabout and I was wondering if there was a better way.
What you're asking for is called piece_finished_alert. It's posted every time a new piece completes downloading and passes the hash-check. To read a piece from disk, you may use torrent_handle::read_piece() (and get the result in read_piece_alert).
However, if you want to stream media, you probably want to use torrent_handle::set_piece_deadline() and set the flag to send read_piece_alerts as pieces come in. This will invoke the built-in streaming feature of libtorrent.

Flex 4 > spark.components.VideoPlayer > How to switch bit rate?

The VideoPlayer component (possibly VideoDisplay as well) is capable of somehow automatically picking the best-quality video from the list it's given. An example is here:
http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/spark/components/mediaClasses/DynamicStreamingVideoItem.html#includeExamplesSummary
I cannot find the answers to the questions below.
Assuming that the server that streams recorded videos is capable of switching across same videos with different bit rates and streaming them from any point within their timelines:
Is the bandwidth test/calculation within this component done only before the video starts playing, at which point it picks the best video source and never uses the others? Or does it run its bandwidth tests continuously or periodically and switch between video sources accordingly during playback?
Does it support setting the video source through code and can its automatic switching between video sources be turned off (in case I want to provide this functionality to the user in the form of some button/dropdown or similar)? I know that the preferred video source can be set, but this only means that that video source will be tested/attempted first.
What other media servers can be used with this component, besides the one provided by Adobe, to achieve automated and manual switching between different quality of same video?
Obviously, I'd like to create a player that is smart enough to switch automatically between different qualities of the same video, and that will support manual instructions about which source to play, both without interrupting the playback, or at least without restarting it (minor interruptions are acceptable). Also, playback needs to be able to start at any given point within the video once enough data has been buffered (of course), but most importantly, I want to be able to start playback beyond what's buffered. A note or two about fast-forwarding wouldn't hurt, if anyone knows anything.
Thank you for your time.
