Daydream Async Reprojection on the Nexus 6P

Even though the Google Nexus 6P isn't classified as "Daydream-ready", it can be used as a development platform for Daydream. Has anyone tested async reprojection on the Nexus 6P and can confirm that it supports front-buffer (i.e. single-buffer) rendering, or that it supports the EGL_MUTABLE_RENDER_BUFFER_BIT_KHR extension on Android 7?

Confirmed it on the 6P. Async reprojection is the front-buffer rendering feature. The latency is about 20 ms, compared to 80+ ms without it.
It renders with timewarping on a separate thread.

I found this site http://opengles.gpuinfo.org/gles_generatereport.php?reportID=932 which lists EGL extensions for most phones. According to the specs, EGL_MUTABLE_RENDER_BUFFER_BIT_KHR, i.e. EGL_KHR_mutable_render_buffer, is supported on the Nexus 6P. The only other phones that support front-buffer rendering, and would therefore be capable of async reprojection, are the Nexus 5X and the Google Pixel. Surprisingly these are the only ones, even though as of 2016-11-28 the Moto Z is advertised as "Daydream-ready"; the database entry for the Moto Z probably just hasn't been updated yet. So the hardware of the Nexus 6P is capable of async reprojection (and therefore sub-20 ms motion-to-photon latency) and probably supports Daydream async reprojection, even though it is not classified as Daydream-ready.
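If you want to verify this on a particular device rather than rely on the database, you can query the EGL extension string at runtime. A minimal C sketch (error handling and eglTerminate omitted):

#include <EGL/egl.h>
#include <stdbool.h>
#include <string.h>

/* Returns true if the default EGL display advertises
 * EGL_KHR_mutable_render_buffer, i.e. front-buffer mode can be toggled. */
bool hasMutableRenderBuffer(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))
        return false;

    const char *ext = eglQueryString(dpy, EGL_EXTENSIONS);
    return ext != NULL && strstr(ext, "EGL_KHR_mutable_render_buffer") != NULL;
}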

I can now also confirm that on both the Nexus 5X and the Nexus 6P it is possible to create a valid EGL config that allows rendering to the front buffer. Either by adding
EGL_SURFACE_TYPE, EGL_MUTABLE_RENDER_BUFFER_BIT_KHR
to the config attribute list and then toggling between front and back buffer at runtime, or by simply adding
EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER
to the surface attribute list. The second approach creates a surface that only works in single-buffer mode and may also work on all Android 7 devices, even those without the "mutable" extension, but I couldn't test it on phones that run Android 7 and lack that extension.
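To make that concrete, here is a minimal C sketch of both approaches (assuming an already-initialized EGLDisplay and a native window; error handling omitted, and the constant fallback is taken from the EGL_KHR_mutable_render_buffer spec):

#include <stddef.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Value from the EGL_KHR_mutable_render_buffer spec, in case the header is old. */
#ifndef EGL_MUTABLE_RENDER_BUFFER_BIT_KHR
#define EGL_MUTABLE_RENDER_BUFFER_BIT_KHR 0x1000
#endif

/* Approach 1: choose a config whose window surfaces may switch between
 * back- and front-buffer rendering at runtime. */
static EGLConfig chooseMutableConfig(EGLDisplay dpy)
{
    const EGLint cfgAttribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT | EGL_MUTABLE_RENDER_BUFFER_BIT_KHR,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg = NULL;
    EGLint numCfgs = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs);
    return numCfgs > 0 ? cfg : NULL;
}

/* ...then toggle an existing window surface into front-buffer mode with
 *   eglSurfaceAttrib(dpy, surface, EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER);
 * (per the extension spec, the switch takes effect at the next eglSwapBuffers). */

/* Approach 2: create a surface that is single-buffered from the start. */
static EGLSurface createSingleBufferedSurface(EGLDisplay dpy, EGLConfig cfg,
                                              EGLNativeWindowType window)
{
    const EGLint surfAttribs[] = {
        EGL_RENDER_BUFFER, EGL_SINGLE_BUFFER,
        EGL_NONE
    };
    return eglCreateWindowSurface(dpy, cfg, window, surfAttribs);
}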
Of course, since Google decided not to classify the Nexus 5X as "Daydream-ready", async reprojection only works out of the box on the Nexus 6P (and the various actual Daydream-ready phones).
But achieving sub-20 ms motion-to-photon latency by drawing directly into the front buffer and syncing the rendering of each eye with the display scan-out is also possible on the Nexus 5X, as I found out while developing a method I call "eye-alternating front buffer rendering with vertex displacement distortion correction".

Related

Limit the frame rate in an A-Frame project

I am developing an A-Frame project on my MacBook Pro (late 2013). When running the project, the fan of my computer always spins fast, regardless of which browser I use (Firefox, Safari, Chrome) and of the project size (it also happens with a project containing just a simple a-box).
aframe-stats shows me that my project (1028244 vertices, 342748 faces) still runs at 20 fps.
Is it somehow possible to limit the frame rate to 10 fps in order to keep my computer quiet? Or is there any other way to limit the FLOP consumption of the A-Frame project? I already tried a native approach with sudo cputhrottle plugin-container 10, but that throttled not just the A-Frame renderer but the whole Firefox browser. Can I pull the brake somewhere in the JavaScript or in the browser settings?
It's difficult to say without your project code. Large data sets will simply max out even a high-spec MacBook Pro. I have found it helpful to pause rendering whenever possible to quiet users' machines.
I personally removed the automatic render-every-animation-frame loop in favor of waiting for controls and objects to change.
For example:
this.controls.addEventListener( 'change', function(e){ addToRenderStack(); });
A simple function addToRenderStack pushes a new entry onto a list of render requests, with the expectation that the render will occur at some point in the future rather than right away. The list can also be used to log who requested the render in the call stack, which helps narrow down performance hogs.
In the requestAnimationFrame loop, if the list has any length, a single render of the scene is called and the list is cleared immediately rather than processed entry by entry. If controls or animations keep making render requests, the list will have a length again on the next frame and is handled the same way with another render.
In this way, the code only renders when absolutely required. This saved me much grinding on frame rate; the fans only come on during intensive operations and then shut down when it's complete, much like a typical 3D game experience.
Your mileage may vary depending on what's happening in your app. I work in engineering, so the view of the 3D world is often static while an engineer examines or presents a model.

Does Cobalt support YouTube 360 video (spherical video)?

Does Cobalt support YouTube 360 video (spherical video)? If yes, how has it been implemented, and is there any documentation for it? Does the platform need to do anything extra to support it?
Almost. There is still some small remaining work that prevents this from being a simple yes, but the vast bulk of the work is done and has been shown to function.
A document will soon be appearing in the source tree explaining all this, but here is a preview...
In order to support spherical video, a platform will have to support decode-to-texture, introduced in Starboard API version 4. Cobalt will choose between punch-out and decode-to-texture when creating an SbPlayer, based on whether a mesh transform has been applied to the video tag. In decode-to-texture mode, the video texture will then be queried from the player every frame and rendered into the UI graphics plane with the current transform applied.

GL_FEEDBACK mode doesn't work on specific Windows platforms

We've encountered a strange problem on newer laptops with integrated graphics.
In order to draw TrueType fonts, we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK).
Afterwards we parse the feedback buffer. This has worked for many years.
Now we have a problem with glyphs containing holes (only on platforms with integrated graphics):
wglUseFontOutlines works perfectly. If we just draw the returned display lists, everything is fine. However, the token stream generated with GL_FEEDBACK is corrupt. The debugger shows nothing unusual, all functions return success, and the parsing itself works fine too. It is really the binary data generated in GL_FEEDBACK mode that is wrong.
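For illustration, the feedback pass described above follows this general pattern (a simplified C sketch, not the poster's actual code; the buffer size and token handling are placeholders):

#include <GL/gl.h>
#include <stdio.h>

/* Render the glyph display lists in GL_FEEDBACK mode and walk the token stream. */
void captureGlyphFeedback(GLuint listBase, const char *text, GLsizei len)
{
    /* Required size depends on glyph complexity; 64k floats is arbitrary here. */
    static GLfloat feedback[65536];
    glFeedbackBuffer(65536, GL_2D, feedback);

    glRenderMode(GL_FEEDBACK);
    glListBase(listBase);
    glCallLists(len, GL_UNSIGNED_BYTE, text);
    GLint count = glRenderMode(GL_RENDER);      /* number of floats written */

    for (GLint i = 0; i < count; ) {
        GLfloat token = feedback[i++];
        if (token == GL_POLYGON_TOKEN) {
            GLint n = (GLint)feedback[i++];      /* vertex count */
            printf("polygon with %d vertices\n", n);
            i += n * 2;                          /* GL_2D: x,y per vertex */
        } else if (token == GL_LINE_TOKEN || token == GL_LINE_RESET_TOKEN) {
            i += 4;                              /* two 2D vertices */
        } else if (token == GL_POINT_TOKEN) {
            i += 2;                              /* one 2D vertex */
        } else {
            break;                               /* unexpected token */
        }
    }
}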
Has anyone else encountered this problem?
And is there an alternative way to obtain outlines and fillings for true type fonts on Windows?
I'm just guessing into the blue here: the GL_SELECT and GL_FEEDBACK rendering modes were usually not supported by widespread GPU driver OpenGL implementations. Only a handful of graphics cards from the previous century actually supported these rendering modes in hardware, so you would almost always fall back to a software implementation when using them.
However, given modern GPUs' vastly more flexible feedback mechanisms, the latest drivers could actually be trying to implement those rendering modes using GPU features (somewhat weird, because those modes have been removed from modern OpenGL profiles). Anyway, this could be the reason why you're experiencing these problems.
In order to draw TrueType fonts, we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK). Afterwards we parse the feedback buffer.
That's a cool Rube Goldberg machine. Why don't you simply cut out the middleman and obtain the glyph outlines directly using the appropriate Windows GDI function, GetGlyphOutline? That is what wglUseFontOutlines uses internally anyway.
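For example, a minimal C sketch of fetching a glyph's native outline with GetGlyphOutline (assuming an HDC with the desired font already selected; the result is just dumped to stdout):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Fetch the native (quadratic-spline) outline of one glyph.
 * GetGlyphOutline requires an identity MAT2 transform. */
static void dumpGlyphOutline(HDC hdc, WCHAR ch)
{
    GLYPHMETRICS gm;
    MAT2 identity = { {0, 1}, {0, 0}, {0, 0}, {0, 1} };   /* 1.0 in 16.16 FIXED */

    /* First call: query the required buffer size. */
    DWORD size = GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, 0, NULL, &identity);
    if (size == GDI_ERROR || size == 0)
        return;

    BYTE *buf = (BYTE *)malloc(size);
    if (GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, size, buf, &identity) == GDI_ERROR) {
        free(buf);
        return;
    }

    /* The buffer is a sequence of contours: each TTPOLYGONHEADER is followed
     * by TTPOLYCURVE records (TT_PRIM_LINE or TT_PRIM_QSPLINE segments). */
    DWORD offset = 0;
    while (offset < size) {
        TTPOLYGONHEADER *contour = (TTPOLYGONHEADER *)(buf + offset);
        DWORD curveOffset = offset + sizeof(TTPOLYGONHEADER);

        printf("contour start (%d, %d)\n",
               (int)contour->pfxStart.x.value, (int)contour->pfxStart.y.value);

        while (curveOffset < offset + contour->cb) {
            TTPOLYCURVE *curve = (TTPOLYCURVE *)(buf + curveOffset);
            printf("  %s with %u points\n",
                   curve->wType == TT_PRIM_LINE ? "polyline" : "quadratic spline",
                   curve->cpfx);
            curveOffset += sizeof(TTPOLYCURVE) + (curve->cpfx - 1) * sizeof(POINTFX);
        }
        offset += contour->cb;
    }
    free(buf);
}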

How to render and retrieve image data on Google Project Tango?

I'd like to render the live image data on a GL surface (as shown in various Project Tango samples), and at the same time record (encode) it via a MediaCodec.
(On an Android Lollipop device, I've accomplished that using the camera2 interface and multiple surface targets, which works fine, but thus far Tango is pre-Lollipop...)
From other answers, it appears that you have to use the C API to access the image data.
The C API provides two camera frame functions -- TangoService_connectTextureId() and TangoService_connectOnFrameAvailable(). However, the documentation states "Use either TangoService_connectTextureId() or TangoService_connectOnFrameAvailable() but not both."
Why not both?
How do I best render and retrieve the image data?
The Pythagoras release now allows simultaneous use of the color frame and color texture callbacks. That said, you want to use TangoService_connectOnFrameAvailable() if you want to process the image; you'd end up doing extra, unnecessary work if you tried to peel it back out of the texture.
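For reference, registering the color-camera callback with the Tango C API looks roughly like this (a sketch under the assumption of the standard tango_client_api.h signatures; feedToMediaCodec is a hypothetical hand-off to your own encoder, not a Tango function):

#include <stdint.h>
#include <tango_client_api.h>

/* Hypothetical hook into your encoder pipeline; replace with whatever
 * queues the YUV data for MediaCodec on the Java side. */
void feedToMediaCodec(const uint8_t *yuv, uint32_t width, uint32_t height,
                      uint32_t stride, double timestamp);

/* Invoked by the Tango service for every new color frame. */
static void onFrameAvailable(void *context, TangoCameraId id,
                             const TangoImageBuffer *buffer)
{
    (void)context;
    if (id != TANGO_CAMERA_COLOR)
        return;
    feedToMediaCodec(buffer->data, buffer->width, buffer->height,
                     buffer->stride, buffer->timestamp);
}

/* Call once after TangoService_connect() succeeds. */
TangoErrorType registerColorFrameCallback(void)
{
    return TangoService_connectOnFrameAvailable(TANGO_CAMERA_COLOR,
                                                NULL /* context */,
                                                onFrameAvailable);
}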

jQuery: changing an image source to animate the rotation of a line. Faster alternatives?

I am making digital instruments for a car. These instruments will be constantly updated with information delivered via AJAX from a server on board the vehicle, over WLAN (fast), to my iPhone 3G. It is imperative to the success of the project that the updating of the tachometer is smooth and very responsive; otherwise it will look sluggish.
The first problem I encountered was with this demo, where the tachometer moves quickly back and forth between zero and a thousand RPM: http://www.kingoslo.com/instruments/ When viewed on my iPhone 3G, the arrow simply doesn't move back and forth smoothly enough.
The JavaScript works by changing the source of the arrow img element (which, by the way, is a semi-transparent 4-colour PNG floating on top of the static picture of the scale, a 16-colour PNG).
I've been made aware of the new image-editing features in HTML5 and wondered whether any of those, or any other method, would be more responsive. I am also getting an iPhone 4 for Christmas, so that may be a bit faster, but I have the feeling it will still fall short for the current build, especially once I add the constant AJAX updating required to keep the instruments changing values as the driver drives along.
Thank you for your time. Any help is greatly appreciated.
Kind regards,
Marius
I think using canvas will result in much faster animation; it was created to handle drawing, whereas manipulating DOM elements is comparatively expensive.
Mobile Safari supports canvas.
Alternatively, you could try incorporating all the angles into one large CSS sprite and then just manipulating its background-position CSS property (element.style.backgroundPosition in the JavaScript DOM API).
