Using GL_RED_INTEGER for object picking (GLES 3.0) [duplicate] - opengl-es

This question already has answers here:
How to render to an unsigned integer format
(3 answers)
I have a scene where I draw a bunch of objects, and I want the user to be able to select each object with the mouse. My plan is to render the scene to a texture, drawing each object in a single solid color that encodes its ID. Then I read back the texture's bits, take the ID at the mouse position, and use that to select the object.
I'm creating my offscreen buffer like so:
XGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, mWidth, mHeight, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, mBits));
But my problem is... I have no idea how to make the shader (GLSL ES 3.00) write an integer instead of normalized floats. I could encode the integer ID into a normalized 4-float color and convert it back when I read the pixels, but I worry that floating-point "fuzziness" will cause small inaccuracies in the round trip (e.g. ID (int)3 becomes ID (float)2.999999999999).
Is there a way to simply tell the shader, "here are the 4 bytes I want to write into the color buffer, never mind the floating-point conversion stuff"?

The format of the framebuffer has a single channel and is integral, so define an unsigned integer output variable of type uint in the fragment shader:
#version 300 es
layout(location = 0) out uint fragOutput;

void main() {
    fragOutput = ...;
}
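To read the ID back on the CPU, attach the texture to a framebuffer object and use glReadPixels with an integer pixel-transfer format. A minimal sketch; the mFbo, mTexture, mouseX, and mouseY names are illustrative, not from the question. Note that for unsigned-integer color buffers, ES 3.0 only guarantees the GL_RGBA_INTEGER / GL_UNSIGNED_INT combination, and that glClear does not work on integer buffers:
// attach the R16UI texture to an FBO once
XGL(glBindFramebuffer(GL_FRAMEBUFFER, mFbo));
XGL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mTexture, 0));
// integer buffers must be cleared with glClearBufferuiv, not glClear
GLuint clearId[4] = { 0, 0, 0, 0 };
XGL(glClearBufferuiv(GL_COLOR, 0, clearId));
// ... draw the scene, writing one ID per object ...
// read the pixel under the mouse (flip Y: GL rows start at the bottom)
GLuint pixel[4];
XGL(glReadPixels(mouseX, mHeight - 1 - mouseY, 1, 1, GL_RGBA_INTEGER, GL_UNSIGNED_INT, pixel));
GLuint objectId = pixel[0]; // the R channel holds the ID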

Related

Visual C++: Good way to draw and animate fill paths to screen?

I want to use Visual C++ to animate fill paths to screen. I have done it with C# before, but am now switching to C++ for better performance, and I want to do more complex work in the future.
Here is the concept in C#:
In a Canvas I have a number of Path objects. These paths are closed geometries built from LineTo and QuadraticBezierTo calls.
First, I fill every path with Silver.
Then, for each path, I fill with Green from one end to the other (up/down/left/right direction), like a progress bar increasing from min to max. I do this by setting the path's Fill brush to a LinearGradientBrush of Green and Silver with the same offset, then increasing the offset from 0 to 1 with a Timer.
When a path is fully green, I continue with the next path.
When all paths are filled with Green, I start over from the first step.
I want to do the same thing in Visual C++. I need an effective way to:
Create and store paths in a collection for reuse. Because the paths contain quite a lot of points, recreating them repeatedly wastes a lot of CPU.
Draw all paths to a window.
Animate the fill as in steps 2-4 of the concept above.
So, what I need is:
A suitable way to create and store closed paths. Note: paths are built from points connected by functions equivalent to C#'s LineTo and QuadraticBezierTo.
Draw the paths to the screen with an animated fill.
Can you suggest a way to do the steps above? (Just outline what I have to read; I can study it myself.) I know the basics of Visual C++ and the Win32 GUI, and a little about device contexts (HDC) and GDI, but I am only starting to learn graphics/drawing.
Sorry about my English! If anything I explained is unclear, please let me know.
How many points is "quite a lot"? What is the target framerate? For low enough counts you can use GDI for this; otherwise you need HW acceleration like OpenGL or DirectX.
I assume 2D, so you need to:
Store your path as a list of segments, for example like this:
struct path_segment
{
    int p0[2], p1[2], p2[2]; // points
    int type;                // line/bezier
    float length;            // length in pixels or whatever
};
const int MAX = 1024;    // max number of segments
path_segment path[MAX];  // list of segments; can use any template like List<path_segment> path; instead
int paths = 0;           // actual number of segments
float length = 0.0;      // whole path length in pixels or whatever
Write functions to load and render path[]. A length sketch follows below.
The render is just a visual check that your load is OK ... for now at least.
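For filling the length field during load, a line segment is just the Euclidean distance, and a quadratic Bézier is commonly flattened into a few straight steps that are summed. A minimal sketch, assuming type 0 is a line from p0 to p1 and type 1 is a Bézier with control point p1 (this encoding is an assumption, not fixed by the struct above):
#include <math.h>
float seg_length(const path_segment &s)
{
    if (s.type == 0) // line: distance p0 -> p1
    {
        float dx = float(s.p1[0] - s.p0[0]);
        float dy = float(s.p1[1] - s.p0[1]);
        return sqrtf(dx*dx + dy*dy);
    }
    // quadratic bezier: flatten into N straight steps and sum them
    const int N = 16;
    float len = 0.0f, px = float(s.p0[0]), py = float(s.p0[1]);
    for (int i = 1; i <= N; i++)
    {
        float t = float(i) / float(N), u = 1.0f - t;
        // B(t) = u^2*p0 + 2*u*t*p1 + t^2*p2
        float x = u*u*s.p0[0] + 2.0f*u*t*s.p1[0] + t*t*s.p2[0];
        float y = u*u*s.p0[1] + 2.0f*u*t*s.p1[1] + t*t*s.p2[1];
        len += sqrtf((x-px)*(x-px) + (y-py)*(y-py));
        px = x; py = y;
    }
    return len;
}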
Rewrite the render so it takes float t = <0,1> as an input parameter and renders the part of the path below t with one color and the rest with the other. Something like this:
int i;
float l = 0.0, q, l0 = t * length; // l0 = separation length
for (i = 0; i < paths; i++)
{
    q = l + path[i].length;
    if (q >= l0)
    {
        // split/render path[i] over <0, l0-l> with color1
        // split/render path[i] over <l0-l, q-l> with color2
        // if you need the split parameter in <0,1>, it is (l0-l)/path[i].length
        i++; break;
    }
    else
    {
        // render path[i] with color1
    }
    l = q;
}
for (; i < paths; i++)
{
    // render path[i] with color2
}
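The split in the comments above can be done with De Casteljau's algorithm; for the quadratic case it is just three lerps. A minimal sketch (the function name and the two-curve output layout are illustrative):
// split a quadratic bezier (p0,p1,p2) at parameter t in <0,1>
// into two quadratics, left and right, via De Casteljau
void split_quad(const int p0[2], const int p1[2], const int p2[2], float t,
                float left[3][2], float right[3][2])
{
    for (int j = 0; j < 2; j++)
    {
        float a = p0[j] + t*(p1[j] - p0[j]); // lerp p0 -> p1
        float b = p1[j] + t*(p2[j] - p1[j]); // lerp p1 -> p2
        float c = a + t*(b - a);             // the point on the curve
        left [0][j] = (float)p0[j]; left [1][j] = a; left [2][j] = c;
        right[0][j] = c;            right[1][j] = b; right[2][j] = (float)p2[j];
    }
}
For a line segment the split is a single lerp at t. Note that for a Bézier the parameter t is not exactly proportional to arc length, so (l0-l)/path[i].length is an approximation of the true split point.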
Use a backbuffer for speedup.
Render the whole path with color2 into a bitmap once; on each animation step, render only the newly added color1 part into it. On each redraw, just copy the bitmap to the screen instead of rendering the same geometry over and over. Of course, if you have zoom/pan/resize capabilities, you need to redraw the bitmap fully on each of those changes ...
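A minimal Win32 GDI sketch of that backbuffer idea (hwnd, width, and height are assumed to exist; error handling omitted):
// create the backbuffer once (e.g. on WM_CREATE or on resize)
HDC hdcScreen   = GetDC(hwnd);
HDC hdcBack     = CreateCompatibleDC(hdcScreen);
HBITMAP hbmBack = CreateCompatibleBitmap(hdcScreen, width, height);
HBITMAP hbmOld  = (HBITMAP)SelectObject(hdcBack, hbmBack);
ReleaseDC(hwnd, hdcScreen);
// animation step: draw only the new geometry into hdcBack, then invalidate
// WM_PAINT handler: just copy the bitmap instead of re-rendering everything
PAINTSTRUCT ps;
HDC hdcPaint = BeginPaint(hwnd, &ps);
BitBlt(hdcPaint, 0, 0, width, height, hdcBack, 0, 0, SRCCOPY);
EndPaint(hwnd, &ps);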

Reading Pixels in WebGL 2 as Float values

I need to read the pixels of my framebuffer as float values.
My goal is to get a fast transfer of lots of particles between CPU and GPU and process them in realtime. For that I store the particle properties in a floating point texture.
Whenever a new particle is added, I want to get the current particle array back from the texture, add the new particle properties and then fit it back into the texture (this is the only way I could think of to dynamically add particles and process them GPU-wise).
I am using WebGL 2 since it supports reading back pixels to a PIXEL_PACK_BUFFER target. I test my code in Firefox Nightly. The code in question looks like this:
// Initialize the WebGLBuffer
this.m_particlePosBuffer = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
...
// In the renderloop, bind the buffer and read back the pixels
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.readBuffer(gl.COLOR_ATTACHMENT0); // Framebuffer texture is bound to this attachment
gl.readPixels(0, 0, _texSize, _texSize, gl.RGBA, gl.FLOAT, 0);
I get this error in my console:
TypeError: Argument 7 of WebGLRenderingContext.readPixels could not be converted to any of: ArrayBufferView, SharedArrayBufferView.
But looking at the current WebGL 2 Specification, this function call should be possible. Using the type gl.UNSIGNED_BYTE also returns this error.
When I try to read the pixels in an ArrayBufferView (which I want to avoid since it seems to be way slower) it works with the format/type combination of gl.RGBA and gl.UNSIGNED_BYTE for a Uint8Array() but not with gl.RGBA and gl.FLOAT for a Float32Array() - this is as expected since it's documented in the WebGL Specification.
I am thankful for any suggestions on how to get my float pixel values from my framebuffer or on how to otherwise get this particle pipeline going.
Did you try using this extension?
var ext = gl.getExtension('EXT_color_buffer_float');
The gl you have is WebGL1, not WebGL2. Try:
var gl = document.getElementById("canvas").getContext('webgl2');
In WebGL2 the syntax for gl.readPixels is
void gl.readPixels(x, y, width, height, format, type, ArrayBufferView pixels, GLuint dstOffset);
so
let data = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight, gl.RGBA, gl.UNSIGNED_BYTE, data, 0);
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/readPixels

Export a Uint8 array as an image using Images in Julia

I recently asked how to convert Float32 or Uint8 arrays into images in the Images package. I got an answer for the Float32 case, but am still having trouble figuring out how to save a Uint8 array.
As an example, let's create a random Uint8 array using the traditional Matlab scheme where the dimensions are (m,n,3):
array = rand(Uint8, 50, 50, 3);
img = convert(Image, array);
Using the same approach that works for the Float32 case,
imwrite(img, "out.png")
fails with message
ERROR: method 'mapinfo' has no method matching mapinfo(::Type{ImageMagick}, ::Image{Uint8, 3, Image{Uint8, 3, Array{Uint8, 3}}}).
I checked the documentation, and it says
If data encodes color information along one of the dimensions of the array (as opposed to using a ColorValue array, from the Color.jl package), be sure to specify the "colordim" and "colorspace" in properties.
However, inspecting the img object previously created shows that it has colordim = 3 and colorspace = RGB already set up, so this can't be the problem.
I then searched the documentation for all instances of MapInfo. In core.md there is one occurrence:
scalei: a property that controls default contrast scaling upon display. This should be a MapInfo value, to be used for setting the contrast upon display. In the absence of this property, the range 0 to 1 will be used.
But there was no information on what exactly a MapInfo object is, so I looked further, and in function_reference.md it says:
Here is how to directly construct the major concrete MapInfo types:
MapNone(T), indicating that the only form of scaling is conversion to type T. This is not very safe, as values "wrap around": for example, converting 258 to a Uint8 results in 0x02, which would look dimmer than 255 = 0xff.
...
and some other examples. So I tried to specify scalei = MapNone(Uint8) as follows:
img2 = Image(img, colordim = 3, colorspace = "RGB", scalei = MapNone(Uint8));
imwrite(img2, "out.png")
but got the same error again.
How do you encode Uint8 image data using Images in Julia?
You can convert back and forth between arrays of primitive types such as UInt8 and arrays of color types. These conversions are achieved in a unified way via two functions: colorview and channelview.
Example
Convert array of UInt8 to array of RGB:
arr = rand(UInt8, 3, 50, 50)
img = colorview(RGB, arr / 255)
Convert back to channel view:
channelview(img)
Notes
In this example the RGB color type requires that the entries of the array live in [0,1] as floating point. I manually converted UInt8 to Float64 using an explicit division by 255. There is probably a more generic way of achieving this result with reinterpret or some other function in Images.jl
The colorview and channelview functions assume that the channel dimension is the first dimension of the array. You can use permutedims in case your channels live in a different dimension, or use some function in Images.jl (maybe reinterpretc?) to do it efficiently without memory copies.

Drawing PixelFormat32bppPARGB images with GDI+ uses conventional formula instead of premultiplied one

Here is some minimal code to show an issue:
static const int MAX_WIDTH = 320;
static const int MAX_HEIGHT = 320;
Gdiplus::Bitmap foregroundImg(MAX_WIDTH,MAX_HEIGHT,PixelFormat32bppPARGB);
{
Gdiplus::Graphics g(&foregroundImg);
g.Clear(Gdiplus::Color(10,255,255,255));
}
Gdiplus::Bitmap softwareBitmap(MAX_WIDTH,MAX_HEIGHT,PixelFormat32bppPARGB);
Gdiplus::Graphics g(&softwareBitmap);
g.SetCompositingMode(Gdiplus::CompositingModeSourceOver);
g.SetCompositingQuality(Gdiplus::CompositingQualityDefault);
g.Clear(Gdiplus::Color(255,0,0,0));
g.DrawImage(foregroundImg,0,0);
CLSID encoder;
GetEncoderClsid(L"image/png",&encoder);
softwareBitmap.Save(L"d:\\image.png",&encoder);
As a result I'm getting an image filled with RGB values equal to 10. It seems GDI+ uses the conventional algorithm:
255*(10/255) + 0*(1-10/255) == 10.
But I expected the premultiplied algorithm to be used (because the foreground image has the premultiplied PixelFormat32bppPARGB format):
255 + 0*(1-10/255) == 255
So my question is: why does GDI+ use the conventional formula when the image is in a premultiplied-alpha format? And is there a workaround to make GDI+ use the premultiplied-alpha algorithm?
The format of your foreground image doesn't matter (given that it has alpha) because you're setting it to a Gdiplus::Color. Color values are defined as non-premultiplied, so gdiplus multiplies the components by the alpha value when it clears the foreground image. The alternative would be for Color values to have different meaning depending on the format of the render target, and that way lies madness.
You might be able to do what you intend by setting the source image bits directly, or you might not. Components with values greater than 100% aren't really valid in gdiplus's rendering model, so I'd not be surprised if it caps them during rendering. If you really want this level of control over the rendering, you'll have to lock the bitmap bits and do it yourself.
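If you want to experiment with the "set the bits directly" route, Bitmap::LockBits gives you the raw PARGB memory. Whether GDI+ preserves components greater than alpha during compositing is exactly the open question above, so treat this as a probe, not a guaranteed workaround:
Gdiplus::BitmapData bd;
Gdiplus::Rect rc(0, 0, MAX_WIDTH, MAX_HEIGHT);
foregroundImg.LockBits(&rc, Gdiplus::ImageLockModeWrite, PixelFormat32bppPARGB, &bd);
for (int y = 0; y < MAX_HEIGHT; ++y)
{
    UINT32* row = (UINT32*)((BYTE*)bd.Scan0 + y * bd.Stride);
    for (int x = 0; x < MAX_WIDTH; ++x)
        row[x] = 0x0AFFFFFF; // raw PARGB: A=10, R=G=B=255 (color > alpha)
}
foregroundImg.UnlockBits(&bd);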

Does WebGL drawArrays() empty/discard the buffers?

Trying to speed up the display of many near-identical objects in WebGL, I tried (naively, I guess) to re-use the buffer contents. In the drawing routine of each object, I have (somewhat simplified):
if (! dataBuffered) {
dataBuffered = true;
:
: gl stuff here: texture loading, buffer binding and filling
:
}
// set projection and model-view matrices
gl.uniformMatrix4fv (shaderProgram.uPMatrix, false, pMatrix);
gl.uniformMatrix4fv (shaderProgram.uMVMatrix, false, mvMatrix);
// draw rectangle filled with texture
gl.drawArrays(gl.TRIANGLE_STRIP, 0, starVertexPositionBuffer.numItems);
My idea was that the texture, vertex, and texture-coordinate buffers stay the same, but the model-view matrix changes (same object in different places). But, alas, nothing shows up. When I comment out the dataBuffered = true, everything is visible.
So my question is: does drawArrays() discard or empty the buffers? What else is happening? (I'm working through the lessons at learningwebgl.com, if that matters.)
The short answer is: yes, you can reuse all the state you've set up across more than one gl.drawArrays() call.
http://omino.com/experiments/webgl/simplestWebGlReuseBuffers.html is a little example where it just changes one uniform float (Y-scale) and redraws, twice for every tick.
(In this case there are no textures, but some other state stays sticky.)
Hope that helps!
uniformSetFloat(gl, prog, "scaleY", 1.0);
gl.drawArrays(gl.TRIANGLES, 0, posPoints.length / 3);
uniformSetFloat(gl, prog, "scaleY", 0.2);
gl.drawArrays(gl.TRIANGLES, 0, posPoints.length / 3);
