ARCore ArSession_setCameraTextureNames - opengl-es

If I call ArSession_setCameraTextureNames with a list of textures, say 2, like this:
GLuint textureHandles[2] = { 0, 0 };
glGenTextures(2, textureHandles);
ArSession_setCameraTextureNames(ar_session, 2, textureHandles);
How does the application know which texture is being used for the latest frame?
Is there any chance that the order of the textures gets mixed up, so that the ring buffer of textures ends up mapped to the wrong frames?
Thanks
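One way the application can tell, as a minimal C sketch only: it assumes ArFrame_getCameraTextureName is present in the installed arcore_c_api.h (newer ARCore NDK releases expose it), which reports which of the registered textures the current frame was written into.
#include <stdint.h>
#include <GLES2/gl2.h>
#include "arcore_c_api.h"

// Sketch only: ArFrame_getCameraTextureName is assumed to exist in the installed
// ARCore SDK; check arcore_c_api.h for the exact name and signature.
void RegisterTexturesAndQuery(ArSession* ar_session, ArFrame* ar_frame) {
  GLuint textureHandles[2] = { 0, 0 };
  glGenTextures(2, textureHandles);
  ArSession_setCameraTextureNames(ar_session, 2, textureHandles);

  // Each render tick: update the session, then ask the frame which texture it used.
  if (ArSession_update(ar_session, ar_frame) == AR_SUCCESS) {
    uint32_t texture_id = 0;
    ArFrame_getCameraTextureName(ar_session, ar_frame, &texture_id);
    // texture_id now names the GL texture holding this frame's camera image.
  }
}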

Related

AMD OpenGL and HDR display problem on Windows

I have been using OpenGL to display HDR content, following this explanation from NVIDIA:
https://on-demand.gputechconf.com/gtc/2017/presentation/s7394-tom-true-programming-for-high-dynamic-range.pdf
It works great, but only on NVIDIA GPUs.
Using the same method:
Specify WGL_PIXEL_TYPE_ARB = WGL_TYPE_RGBA_FLOAT_ARB
with color depth 16 (WGL_RED_BITS_ARB = 16, WGL_GREEN_BITS_ARB = 16, WGL_BLUE_BITS_ARB = 16)
On AMD GPUs it displays an SDR image.
That is to say, it clamps the fragment shader values to 1.0, while NVIDIA GPUs allow values up to ~25.0 (or 10,000 nits, as I understand it) and display them correctly.
This is with the same TV (LG B9) and the same OS (Windows 10).
Note that other apps, like Chrome, display HDR content correctly on AMD GPUs, and DirectX test apps do as well.
I have tried a bunch of different AMD GPUs, driver settings, texture formats, pixel types, etc., with no luck.
I have read through the whole of https://gpuopen.com/ for clues, with no luck.
Does anyone have an idea or an example of how to create a proper OpenGL HDR context/configuration?
I'll put a minimal example here, but it's part of a larger process and written in Delphi, so it is for orientation only:
const
PixelaAttribList: array[0..20] of Integer =( //
WGL_DRAW_TO_WINDOW_ARB, 1, //
WGL_DOUBLE_BUFFER_ARB, 1, //
WGL_SUPPORT_OPENGL_ARB, 1, //
WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB, //
WGL_SWAP_METHOD_ARB, WGL_SWAP_EXCHANGE_ARB, //
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_FLOAT_ARB, //
WGL_RED_BITS_ARB, 16, //
WGL_GREEN_BITS_ARB, 16, //
WGL_BLUE_BITS_ARB, 16, //
WGL_ALPHA_BITS_ARB, 0, //
0);
var
piFormats: array[0..99] of GLint;
nNumFormats: GLuint;
hrc: HGLRC;
begin
wglChoosePixelFormatARB(DC, @PixelaAttribList, nil, 100, @piFormats[0], @nNumFormats);
if nNumFormats = 0 then
exit;
if not SetPixelFormat(DC, piFormats[0], nil) then
exit;
hrc := wglCreateContextAttribsARB(DC, 0, nil);
if hrc <> 0 then
ActivateRenderingContext(DC, hrc);
end;
After this code I tested the chosen format with wglGetPixelFormatAttribivARB and I get 16 bits per color channel, so exactly what's needed.
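For reference, a minimal C sketch of that check (dc and pixelFormat stand in for the device context and the format chosen above; the extension function pointer is assumed to have been loaded already):
#include <windows.h>
#include <GL/wglext.h>  // WGL_*_ARB tokens and the PFNWGL... typedefs

// Assumed to be loaded elsewhere via wglGetProcAddress("wglGetPixelFormatAttribivARB").
extern PFNWGLGETPIXELFORMATATTRIBIVARBPROC wglGetPixelFormatAttribivARB;

// Sketch only: confirm what the driver actually handed back for the chosen format.
void CheckHdrPixelFormat(HDC dc, int pixelFormat)
{
    const int attribs[4] = {
        WGL_PIXEL_TYPE_ARB, WGL_RED_BITS_ARB, WGL_GREEN_BITS_ARB, WGL_BLUE_BITS_ARB
    };
    int values[4] = { 0, 0, 0, 0 };
    if (wglGetPixelFormatAttribivARB(dc, pixelFormat, 0, 4, attribs, values))
    {
        // Expect WGL_TYPE_RGBA_FLOAT_ARB and 16/16/16 here. If AMD reports the same
        // values as NVIDIA, the format selection itself is fine and the clamping is
        // happening somewhere later in the pipeline.
    }
}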
The fragment shader is simple:
gl_FragColor = vec4(25.0,25.0,25.0,1.0);
Regards

Reading Pixels in WebGL 2 as Float values

I need to read the pixels of my framebuffer as float values.
My goal is to get a fast transfer of lots of particles between CPU and GPU and process them in realtime. For that I store the particle properties in a floating point texture.
Whenever a new particle is added, I want to get the current particle array back from the texture, add the new particle properties and then fit it back into the texture (this is the only way I could think of to dynamically add particles and process them GPU-wise).
I am using WebGL 2 since it supports reading back pixels to a PIXEL_PACK_BUFFER target. I test my code in Firefox Nightly. The code in question looks like this:
// Initialize the WebGLBuffer
this.m_particlePosBuffer = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
...
// In the renderloop, bind the buffer and read back the pixels
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.readBuffer(gl.COLOR_ATTACHMENT0); // Framebuffer texture is bound to this attachment
gl.readPixels(0, 0, _texSize, _texSize, gl.RGBA, gl.FLOAT, 0);
I get this error in my console:
TypeError: Argument 7 of WebGLRenderingContext.readPixels could not be converted to any of: ArrayBufferView, SharedArrayBufferView.
But looking at the current WebGL 2 Specification, this function call should be possible. Using the type gl.UNSIGNED_BYTE also returns this error.
When I try to read the pixels in an ArrayBufferView (which I want to avoid since it seems to be way slower) it works with the format/type combination of gl.RGBA and gl.UNSIGNED_BYTE for a Uint8Array() but not with gl.RGBA and gl.FLOAT for a Float32Array() - this is as expected since it's documented in the WebGL Specification.
I am thankful for any suggestions on how to get my float pixel values from my framebuffer or on how to otherwise get this particle pipeline going.
Did you try using this extension?
var ext = gl.getExtension('EXT_color_buffer_float');
The gl you have is WebGL 1, not WebGL 2. Try:
var gl = document.getElementById("canvas").getContext('webgl2');
In WebGL 2 the signature for readPixels is
void gl.readPixels(x, y, width, height, format, type, ArrayBufferView pixels, GLuint dstOffset);
so
let data = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight, gl.RGBA, gl.UNSIGNED_BYTE, data, 0);
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/readPixels

Win32 LayeredWindow gives bad visual effect

I'm developing a UI system that has all those smart features like panel tear-off and docking, etc. Right now my task is to create an overlay on the screen that shows the position where the torn-off or dockable panel would land. Pretty much the same thing that Visual Studio has.
For that I'm using a custom layered window class that shows up when it is needed. After that I started digging into how to achieve the needed effect.
I was working with standard GDI functions before and basically they are OK. But this time I followed the documentation's advice to use UpdateLayeredWindow and to load a 32-bit image from a bitmap instead of drawing it with GDI functions.
So here I have a 128x128 pixel BMP with 222 in the alpha channel and (255, 0, 0) in RGB.
Here are the methods I use for initialization and drawing.
void Init(HDC in_hdc,HWND in_hwnd)
{
bf = { 0, 0, 200, AC_SRC_ALPHA };
hwnd = in_hwnd;
hdc_mem = CreateCompatibleDC(in_hdc);
hBitmap_mem = CreateCompatibleBitmap(in_hdc, canvas_size.cx, canvas_size.cy);
hBitmap_mem_default = (HBITMAP)SelectObject(hdc_mem, hBitmap_mem);
hdc_bitmap = CreateCompatibleDC(in_hdc);
}
void DrawArea(RECT& in_rect)
{
hBitmap_area_default = (HBITMAP)SelectObject(hdc_bitmap, hBitmap_area);
AlphaBlend(hdc_mem, in_rect.left, in_rect.top, in_rect.right, in_rect.bottom, hdc_bitmap, 0, 0, 2, 2, bf);
hBitmap_area = (HBITMAP)SelectObject(hdc_bitmap, hBitmap_area_default);
}
void Update()
{
POINT p = { 0, 0 };
HDC hdc_screen = GetDC(0);
UpdateLayeredWindow(hwnd, hdc_screen, &p, &canvas_size, hdc_mem, &p, 0, &bf, ULW_ALPHA);
}
The window style has these extras:
WS_EX_LAYERED|WS_EX_TRANSPARENT|WS_EX_TOPMOST
And here is what I get.
So as you can see, the blending that takes place DOES take per-pixel alpha into account, but it does a bad blending job.
Any ideas on how to tune it?
I suspect the problem is in the source bitmap. This is the kind of effect you get when the RGB values aren't premultiplied with the alpha. But ignore that because there is a far simpler way of doing this.
Create a layered window with a solid background colour by setting hbrBackground in the WNDCLASSEX structure.
Make the window partially transparent by calling SetLayeredWindowAttributes.
Position the window where you want it.
That's it.
This answer has code that illustrates the technique for a slightly different purpose.
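A minimal C sketch of those steps, assuming a window class registered elsewhere with a solid hbrBackground; the class name, the geometry parameters, and the alpha value 140 are illustrative:
#include <windows.h>

// Sketch only: a plain-coloured, click-through overlay faded as a whole with
// SetLayeredWindowAttributes, instead of per-pixel alpha via UpdateLayeredWindow.
HWND CreateDockOverlay(HINSTANCE hInstance, int x, int y, int w, int h)
{
    HWND overlay = CreateWindowEx(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST,
        TEXT("DockOverlayClass"),   // assumed: registered with a solid hbrBackground
        NULL, WS_POPUP,
        x, y, w, h,
        NULL, NULL, hInstance, NULL);

    // Uniform ~55% opacity for the whole window; no premultiplication needed.
    SetLayeredWindowAttributes(overlay, 0, 140, LWA_ALPHA);
    ShowWindow(overlay, SW_SHOWNOACTIVATE);
    return overlay;
}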

Does WebGL drawArrays() empty/discard the buffers?

Trying to speed up the display of many near-identical objects in WebGL, I tried (naively, I guess) to re-use the buffer contents. In the drawing routine of each object, I have (somewhat simplified):
if (! dataBuffered) {
dataBuffered = true;
:
: gl stuff here: texture loading, buffer binding and filling
:
}
// set projection and model-view matrices
gl.uniformMatrix4fv (shaderProgram.uPMatrix, false, pMatrix);
gl.uniformMatrix4fv (shaderProgram.uMVMatrix, false, mvMatrix);
// draw rectangle filled with texture
gl.drawArrays(gl.TRIANGLE_STRIP, 0, starVertexPositionBuffer.numItems);
My idea was that the texture, vertex, and texture coordinate buffers are the same, but the model-view matrix changes (same object in different places). But, alas, nothing shows up. When I comment out the dataBuffered = true, it is visible.
So my question is: does drawArrays() discard or empty the buffers? What else could be happening? (I'm working along the lessons at learningwebgl.com, if that matters.)
The short answer is: yes, you can reuse all the state you've set up across more than one gl.drawArrays() call.
http://omino.com/experiments/webgl/simplestWebGlReuseBuffers.html is a little example where it just changes one uniform float (Y-scale) and redraws, twice for every tick.
(In this case there are no textures, but the other state stays sticky.)
Hope that helps!
uniformSetFloat(gl, prog, "scaleY", 1.0);
gl.drawArrays(gl.TRIANGLES, 0, posPoints.length / 3);
uniformSetFloat(gl, prog, "scaleY", 0.2);
gl.drawArrays(gl.TRIANGLES, 0, posPoints.length / 3);

android OpenGLES 1.x CameraPreview to Surfacetexture

I am trying to send the camera preview to a SurfaceTexture object and render it on a square. I have running code for GLES20 but didn't find anything for 1.x.
Basically it should work like this, right?
// setup texture
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0]);
gl.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, ...);
...
// setup surfacetexture object
surface = new SurfaceTexture(textures[0]);
surface.setOnFrameAvailableListener(this);
// setup camera
mCamera = Camera.open(0);
Camera.Parameters param = mCamera.getParameters();
List<Size> psize = param.getSupportedPreviewSizes();
//find previewsize to match glsurface from renderer
param.setPreviewSize(psize.get(i).width, psize.get(i).height);
mCamera.setParameters(param);
// set the texture and start preview
mCamera.setPreviewTexture(surface);
mCamera.startPreview();
// in the "onFrameAvailable" handler, i switch a flag to mark a new frame
updateSurface = true;
// and in the renderloop i update and redraw
if (updateSurface) {
surface.updateTexImage();
updateSurface = false;
}
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0]);
// Draw square
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBufferFloor);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
The square gets drawn but is completely white. I don't get any GL errors or other exceptions. The onFrameAvailable handler gets called, too.
If I use glTexImage with a loaded bitmap, it is drawn correctly on the square.
ANY ideas? Thank you!
I'm facing the same problem. Maybe I'm wrong, but it seems that SurfaceTexture is not compatible with GLES10. SurfaceTexture uses GL_TEXTURE_EXTERNAL_OES, so it needs a custom fragment shader that can sample this texture type ("#extension GL_OES_EGL_image_external : require").
As glUseProgram(...), etc. are not available in GLES10, we cannot use custom shaders.
As I said, maybe I'm wrong... Good luck
EDIT: I finally got it to work. You should use gl.glEnable(GLES11Ext.GL_TEXTURE_EXTERNAL_OES);
