Does Cobalt support YouTube 360 video (spherical video)? - cobalt

Does Cobalt support YouTube 360 video (spherical video)? If so, how is it implemented, and is there any documentation for it? Does the platform need to do anything extra to support it?

Almost. There is still some small remaining work that prevents this from being a simple yes, but the vast bulk of the work is done and has been shown to function.
A document will soon be appearing in the source tree explaining all this, but here is a preview...
In order to support spherical video, a platform must support decode-to-texture, which was introduced in Starboard API version 4. Cobalt chooses between punch-out and decode-to-texture when creating an SbPlayer, based on whether a mesh transform has been applied to the video tag. In decode-to-texture mode, the video texture is queried from the player every frame and rendered into the UI graphics plane with the current transform applied.
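To make that concrete, here is a rough C++ sketch of the selection and per-frame query logic, not Cobalt's actual code. The Starboard names (SbPlayerOutputMode, kSbPlayerOutputModePunchOut, kSbPlayerOutputModeDecodeToTexture, SbPlayer, SbDecodeTarget, SbPlayerGetCurrentFrame, SbDecodeTargetIsValid, SbDecodeTargetRelease) are taken from the Starboard player and decode-target headers as of API version 4, so check them against your Starboard version; everything else (VideoElement, Transform, HasMeshTransform, RenderVideoFrameToUiPlane) is a hypothetical stand-in.

    // Illustrative sketch only; see the caveats above.
    #include "starboard/decode_target.h"
    #include "starboard/player.h"

    struct VideoElement;  // hypothetical: the video tag being played
    struct Transform;     // hypothetical: the current mesh/view transform

    bool HasMeshTransform(const VideoElement& video);            // assumed helper
    void RenderVideoFrameToUiPlane(SbDecodeTarget frame,
                                   const Transform& transform);  // assumed helper

    // Decide the output mode when the SbPlayer is created.
    SbPlayerOutputMode ChooseOutputMode(const VideoElement& video) {
      // A mesh transform on the video tag means spherical playback, which needs
      // the decoded frame as a texture; otherwise the cheaper punch-out is used.
      return HasMeshTransform(video) ? kSbPlayerOutputModeDecodeToTexture
                                     : kSbPlayerOutputModePunchOut;
    }

    // Called once per rendered frame while in decode-to-texture mode.
    void RenderFrame(SbPlayer player, const Transform& current_transform) {
      SbDecodeTarget frame = SbPlayerGetCurrentFrame(player);
      if (SbDecodeTargetIsValid(frame)) {
        // Draw the video texture into the UI graphics plane with the transform.
        RenderVideoFrameToUiPlane(frame, current_transform);
        SbDecodeTargetRelease(frame);
      }
    }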

Related

Camera texture in Unity with multithreaded rendering

I'm trying to do pretty much what TangoARScreen does, but with multithreaded rendering enabled in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Perhaps OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV2RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader, for instance? The documentation is pretty blank in these areas, and every experiment means several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 Gankino release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread ... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

Daydream Async Reprojection Nexus6P

Even though the Google Nexus 6P isn't classified as "Daydream-ready", it can be used as a development platform for Daydream. Has anyone tested async reprojection on the Nexus 6P and can confirm that it supports front-buffer (i.e. single-buffer) rendering, or that it supports the EGL_MUTABLE_RENDER_BUFFER_BIT_KHR extension on Android 7?
Confirmed it on the 6P. Async reprojection is the front-buffer rendering feature; latency is about 20 ms compared to 80+ ms without it.
It renders with timewarp on a separate thread.
I found this site, http://opengles.gpuinfo.org/gles_generatereport.php?reportID=932, which lists EGL extensions for most phones. According to the specs, EGL_MUTABLE_RENDER_BUFFER_BIT_KHR, i.e. the EGL_KHR_mutable_render_buffer extension, is supported on the Nexus 6P. The other phones that support front-buffer rendering, and so would be capable of async reprojection, are the Nexus 6P, Nexus 5X and Google Pixel. Surprisingly these are the only ones, even though as of 28.11.2016 the Moto Z is advertised as "Daydream-ready"; probably the database entry for the Moto Z just hasn't been updated yet. So the hardware of the Nexus 6P is capable of async reprojection (and thus sub-20 ms motion-to-photon latency) and probably supports Daydream async reprojection, even though it is not classified as Daydream-ready.
I can now also confirm that on both the Nexus 5X and the Nexus 6P it is possible to create a valid EGL config that allows rendering to the front buffer. Either add
EGL_SURFACE_TYPE, EGL_MUTABLE_RENDER_BUFFER_BIT_KHR
to the config attribute list and then toggle between front and back buffer, or simply add
EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER to the surface attribute list.
The latter creates a surface that only works in single-buffer mode and may also work on all Android 7 devices, even devices without the "mutable" extension, although I couldn't test that second approach on phones running Android 7 without the extension. Both approaches are sketched below.
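A minimal sketch of the two approaches (assuming an Android NDK / EGL setup; EGL_MUTABLE_RENDER_BUFFER_BIT_KHR comes from the EGL_KHR_mutable_render_buffer extension in eglext.h; error handling and the remaining config attributes are omitted for brevity):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <cstring>

    // Check whether the device advertises the "mutable" extension at all.
    bool HasMutableRenderBuffer(EGLDisplay dpy) {
      const char* ext = eglQueryString(dpy, EGL_EXTENSIONS);
      return ext && std::strstr(ext, "EGL_KHR_mutable_render_buffer");
    }

    // Approach 1: request a mutable render buffer in the config, then toggle
    // between back- and front-buffer rendering on the surface.
    EGLSurface CreateMutableSurface(EGLDisplay dpy, EGLNativeWindowType win,
                                    EGLConfig* out_config) {
      const EGLint config_attribs[] = {
          EGL_SURFACE_TYPE, EGL_WINDOW_BIT | EGL_MUTABLE_RENDER_BUFFER_BIT_KHR,
          EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
          EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
          EGL_NONE};
      EGLint num_configs = 0;
      eglChooseConfig(dpy, config_attribs, out_config, 1, &num_configs);
      EGLSurface surface = eglCreateWindowSurface(dpy, *out_config, win, nullptr);
      // Switch to front-buffer (single-buffer) rendering; pass EGL_BACK_BUFFER
      // instead to go back to double buffering.
      eglSurfaceAttrib(dpy, surface, EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER);
      return surface;
    }

    // Approach 2: create the surface directly in single-buffer mode.
    EGLSurface CreateSingleBufferedSurface(EGLDisplay dpy, EGLConfig config,
                                           EGLNativeWindowType win) {
      const EGLint surface_attribs[] = {EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER,
                                        EGL_NONE};
      return eglCreateWindowSurface(dpy, config, win, surface_attribs);
    }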
Of course, since Google decided not to classify the Nexus 5X as "Daydream-ready", async reprojection only works on the Nexus 6P (and the various other Daydream-ready phones).
But achieving sub-20 ms motion-to-photon latency by drawing directly into the front buffer and syncing the rendering of each eye with the display scan-out is also possible on the Nexus 5X, as I found out when I developed a method I call "eye-alternating front buffer rendering with vertex displacement distortion correction".

How to render and retrieve image data on Google Project Tango?

I'd like to render the live image data on a GL surface (as shown in various Project Tango samples), and at the same time record (encode) it via a MediaCodec.
(On an Android Lollipop device, I've accomplished that using the camera2 interface and multiple surface targets, which works fine, but thus far Tango is pre-Lollipop...)
From other answers, it appears that you have to use the C API to access the image data.
The C API provides two camera frame functions -- TangoService_connectTextureId() and TangoService_connectOnFrameAvailable(). However, the documentation states "Use either TangoService_connectTextureId() or TangoService_connectOnFrameAvailable() but not both."
Why not both?
How do I best render and retrieve the image data?
The Pythagoras release now allows simultaneous use of the color frame and color texture callbacks. That said, you want to use connectOnFrameAvailable if you want to process the image; you would end up doing extra, unnecessary work if you tried to peel it back out of the texture.
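For the connectOnFrameAvailable route, here is a hedged C++ sketch against the Tango C API (tango_client_api.h). The field and constant names are written from memory of that API, so verify them against your SDK version; EncodeFrame() is a hypothetical hook where the buffer would be handed to your encoder (e.g. MediaCodec via JNI):

    #include <tango_client_api.h>

    // Hypothetical: hand the YUV buffer to your encoder.
    void EncodeFrame(const TangoImageBuffer* buffer);

    // Raw-frame callback; keep it fast and copy the data out if needed later.
    void OnFrameAvailable(void* /*context*/, TangoCameraId /*id*/,
                          const TangoImageBuffer* buffer) {
      // buffer->data holds the YUV pixels; width, height, stride and timestamp
      // describe the frame.
      EncodeFrame(buffer);
    }

    bool ConnectColorCamera() {
      // Register for raw frames on the color camera; rendering can still use
      // the texture-id path in parallel on Pythagoras and later.
      return TangoService_connectOnFrameAvailable(TANGO_CAMERA_COLOR, nullptr,
                                                  OnFrameAvailable) == TANGO_SUCCESS;
    }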

Handling image size across multiple device displays in a Cordova-Ionic-Angular app

I'm building a new app with this great tool and I have a question to solve.
What is the best way to handle image size for multiple screens and multiple devices?
Apple = retina and non-retina
Android = ldpi, mdpi, hdpi, xhdpi, xxhdpi and tablets (all this with multiple screen resolution)
BlackBerry10 = one resolution (but not equal to the others)
WindowsPhone8 = one resolution (but not equal to the others)
For this case, what is the best way?
Use SVG images (optimization, performance, size of app)?
Dedicated CSS rules per device pixel ratio (CSS image replacement)? (the designer can kill me :smile: lol) See the list at http://bjango.com/articles/min-device-pixel-ratio/
A CSS sprite sheet?
Another way?
Before deciding, think about what works best across all devices.
Thanks in advance
There really isn't a single way to do this since each implementation of an image will require a different approach depending on its purpose. SVGs are great but not everything works as an SVG.
Media queries will be your ally.
See this: supporting multiple resolution and density of images in phonegap
and this for an alternate approach: Angular.js data-bind background images using media queries
There are also some nice polyfills for the HTML5 picture element which you might find useful.
See: https://github.com/scottjehl/picturefill
...and its AngularJS directive implementation: https://github.com/tinacious/angular-picturefill

Controlling the aspect ratio in DirectShow (full screen mode)

I'm using DirectShow with a simple approach (IGraphBuilder::RenderFile) and try to control everything else by querying supplemental interfaces.
The option in question is the aspect ratio. I thought it was maintained by default, but in fact the same program behaves differently on different machines (maybe different versions of DirectX). This is not a huge problem for video in a window, because I can maintain the aspect ratio of the window myself (based on the video size), but for full-screen mode I can't see how to control it.
I found that there are at least two complex options: one for VMR video and one that adds an overlay mixer, but is there a known way of doing this for video rendered via IGraphBuilder::RenderFile?
When you call IGraphBuilder::RenderFile, it internally adds a video renderer filter to the graph. This is typically the VMR-7 Video Renderer filter:
"In Windows XP and later, the Video Mixing Renderer 7 (VMR-7) is the default video renderer. It is called the VMR-7 because internally it uses DirectDraw 7."
At this point you can enumerate the graph's filters, locate the VMR-7, and use its interfaces, such as IVMRAspectRatioControl, to specify the mode of interest.
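For example, a minimal (untested) sketch of that lookup: it walks the graph's filters, asks each one for IVMRAspectRatioControl (only the VMR-7 renderer will answer), and requests letterboxing. Error handling is reduced to early returns, and you need to link against strmiids.lib for the IID:

    #include <dshow.h>

    HRESULT SetLetterboxAspectRatio(IGraphBuilder* graph) {
      IEnumFilters* enum_filters = nullptr;
      HRESULT hr = graph->EnumFilters(&enum_filters);
      if (FAILED(hr)) return hr;

      IBaseFilter* filter = nullptr;
      hr = E_NOINTERFACE;
      while (enum_filters->Next(1, &filter, nullptr) == S_OK) {
        IVMRAspectRatioControl* arc = nullptr;
        // Only the VMR-7 video renderer exposes this interface.
        if (SUCCEEDED(filter->QueryInterface(IID_IVMRAspectRatioControl,
                                             reinterpret_cast<void**>(&arc)))) {
          hr = arc->SetAspectRatioMode(VMR_ARMODE_LETTER_BOX);
          arc->Release();
          filter->Release();
          break;
        }
        filter->Release();
      }
      enum_filters->Release();
      return hr;
    }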
