Texture crop replacement in OpenGL ES 2.0? - opengl-es

I am trying to upgrade code that uses OpenGL ES 1 to ES 2. The conversion is more or less straightforward, but for these lines I could not find an alternative in ES 2. I'm not even sure why they are necessary.
GLint crop[4] = { 0, h, w, -h };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);

If it is for drawing animation frames, use UV (texture) coordinates instead; it is faster.
If you only need part of the texture, use:
Bitmap newbitmap = Bitmap.createBitmap(oldbitmap, x, y, w, h);
GLUtils.texImage2D(GL_TEXTURE_2D, 0, newbitmap, 0);
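For context, that crop rect belongs to the ES 1 glDrawTexOES path (OES_draw_texture); in ES 2 the usual replacement is to draw a textured quad and select the same sub-rectangle through texture coordinates. A minimal sketch of the conversion, assuming texW and texH hold the texture dimensions and crop is the array from the question (the negative height simply flips the rectangle vertically):
// Convert the ES 1 crop rect { x, y, w, h } into normalized UVs for a quad.
GLfloat u0 = (GLfloat)crop[0] / (GLfloat)texW;
GLfloat v0 = (GLfloat)crop[1] / (GLfloat)texH;
GLfloat u1 = (GLfloat)(crop[0] + crop[2]) / (GLfloat)texW;
GLfloat v1 = (GLfloat)(crop[1] + crop[3]) / (GLfloat)texH;
// Two triangles covering the destination rectangle; feed these through a
// vertex attribute instead of GL_TEXTURE_CROP_RECT_OES + glDrawTexOES.
GLfloat texCoords[] = {
    u0, v0,   u1, v0,   u0, v1,
    u1, v0,   u1, v1,   u0, v1,
};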

Related

Using SetDIBitsToDevice with RGB array. Is it faster than BitBlt?

I have an array defined as:
COLORREF* array = (COLORREF*)malloc(SCR_WIDTH * SCR_HEIGHT * sizeof(COLORREF));
An algorithm updates this array with RGB colors continuously.
I'm just looking for a fast way to display this image on the screen, in particular BitBlt vs. SetDIBitsToDevice.
I've been successful using BitBlt:
map = CreateBitmap(SCR_WIDTH, SCR_HEIGHT, 1, 32, (void*)array);
src = CreateCompatibleDC(hdc);
oldBitmap = SelectObject(src, map);
BitBlt(hdc, 0, 0, SCR_WIDTH, SCR_HEIGHT, src, 0, 0, SRCCOPY);
SelectObject(src, oldBitmap);
DeleteDC(src);
DeleteObject(map);
However, I think that SetDIBitsToDevice would be faster since you can draw the pixels directly to the screen without the need to create a compatible DC (correct me if I'm wrong).
The problem that I'm having is that SetDIBitsToDevice needs these two arguments and I can't find how to obtain/create them from the RGB array:
const VOID *lpvBits,
const BITMAPINFO *lpbmi,
Any help is appreciated. I'm also open to suggestions that could work even faster (preferably using WINGDI).
Thanks a lot!
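Not a definitive answer, but a minimal sketch of the two arguments the question asks about: lpvBits is simply the pixel array itself, and the BITMAPINFO describes its layout. Assuming a 32-bpp, top-down DIB matching the existing array:
// Describe the COLORREF array as a 32-bpp top-down DIB and push it to the DC.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = SCR_WIDTH;
bmi.bmiHeader.biHeight      = -SCR_HEIGHT;  // negative height => top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;
SetDIBitsToDevice(hdc,
                  0, 0, SCR_WIDTH, SCR_HEIGHT,  // destination rectangle
                  0, 0,                         // source origin
                  0, SCR_HEIGHT,                // first scan line, number of lines
                  array, &bmi, DIB_RGB_COLORS); // lpvBits is just the array
Note that COLORREF keeps red in the low byte while a 32-bpp DIB keeps blue there, so red and blue may appear swapped unless the array is already filled in DIB order.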

How can I properly create an array texture in OpenGL (Go)?

I have a total of two textures. The first is used as a framebuffer to work with inside a compute shader, which is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
    gl.TEXTURE_2D_ARRAY,
    0,        // level
    gl.RGBA8, // internal format
    16,       // width
    16,       // height
    22*48,    // depth (number of layers)
    0,        // border
    gl.RGBA, gl.UNSIGNED_BYTE,
    gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as a *image.NRGBA
The compute-shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with a single color.
So my question is: What did I do wrong during the shader creation and what needs to be corrected?
For any open code questions, here's the corresponding repo
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, i.e. the texture space is [0,1] in the s and t directions (the third component is the layer index and is correct here); coordinates outside that range are handled via the GL_TEXTURE_WRAP_... modes you specified (repeat, clamp to edge, clamp to border). Since iCoords.x % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need full texture sampling (texture filtering, sRGB conversion, etc.), you have to use normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer coordinates, e.g. something like texelFetch(texAtlas, ivec3(iCoords % 16, 7), 0).
Note: since you set the texture filter to GL_LINEAR, you seem to want filtering; however, your coordinates appear as if you want to access the texel centers. So if you're going the texture route, then vec3(vec2(iCoords.xy) / vec2(16.0) + vec2(1.0 / 32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT), but then the GL_LINEAR filtering would yield results identical to GL_NEAREST.

Reading depth buffer using OpenGLES3

In OpenGL I am able to read the z-buffer values, using glReadPixels, like so:
glReadPixels(scrx, scry, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
If I do the same in OpenGL ES 3.2 I get a GL_INVALID_OPERATION error.
Checking the specification, I see that OpenGL allows GL_DEPTH_COMPONENT, but OpenGL ES 3 does not.
As a work-around, I copied the fragment depth to the alpha value in the colour buffer using this GLSL:
#version 320 es
...
outCol = vec4(psCol.rgb, gl_FragCoord.z);
After doing glReadPixels() on the GL_RGBA part of the framebuffer, I use rgba[3]/255.0 as the depth value.
Although this works, the 8-bit precision on the alpha value is insufficient for my purpose of picking what is under the mouse cursor.
Is there a way to get Z values from the framebuffer in OpenGL ES3?
There is an OpenGL ES extension, NV_read_depth, which allows reading from the depth buffer with glReadPixels. The extension is written against the OpenGL ES 2.0 specification, but it is still not standard in OpenGL ES 3.2.
You can get the set of OpenGL ES extension names like this:
std::set<std::string> ogl_es_extensions;
GLint no_of_extensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &no_of_extensions);
for ( int i = 0; i < no_of_extensions; ++i )
{
std::string ext_name = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
ogl_es_extensions.insert(ext_name);
}
Note that you can either try to read the depth buffer (NV_read_depth) or the depth and stencil buffer (NV_read_depth_stencil).
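With that set in hand, here is a minimal sketch (reusing scrx/scry from the question) of reading a single depth value only when the extension is actually present:
// Only attempt the depth read if NV_read_depth was reported; otherwise fall
// back to another strategy (e.g. the alpha-channel workaround above).
if (ogl_es_extensions.count("GL_NV_read_depth") > 0)
{
    GLfloat depth = 0.0f;
    glReadPixels(scrx, scry, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    // depth now holds the window-space z value in [0, 1] (default depth range)
}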

Seeing through triangles in GLKit

I am working on a simple iOS application to learn about OpenGL ES 2.0. In the project, I'm rendering 4 triangles in the shape of a pyramid, with sliders to adjust the height of the apex of the pyramid and to rotate the modalViewMatrix about the y axis. I am trying to find the reason why, after rotating this object counter-clockwise to the point where triangles appear in front of other triangles, I can see through the near triangles. However, when rotating in the clockwise direction to the same point, the near triangles are opaque and occlude the furthest triangles.
I assumed that the reason was a lack of a depth render buffer, but after setting the property view.drawableDepthFormat = GLKViewDrawableDepthFormat16; the behavior persists.
For reference, this is my drawRect function where the drawing is done. The only other code is in viewDidLoad and in the global scope of the Xcode project here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
[self.baseEffect prepareToDraw];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER,pos);
glEnableVertexAttribArray(GLKVertexAttribPosition);
const GLvoid * off1 = NULL + offsetof(SceneVertex, position);
glVertexAttribPointer(GLKVertexAttribPosition, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off1);
glEnableVertexAttribArray(GLKVertexAttribNormal);
const GLvoid * off2 = NULL + offsetof(SceneVertex, normal);
glVertexAttribPointer(GLKVertexAttribNormal, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off2);
GLenum error = glGetError();
if(GL_NO_ERROR != error)
{
NSLog(#"GL Error: 0x%x", error);
}
int sizeOfTries = sizeof(triangles);
int sizeOfSceneVertex = sizeof(SceneVertex);
int numArraysToDraw = sizeOfTries / sizeOfSceneVertex;
glDrawArrays(GL_TRIANGLES, 0, numArraysToDraw);
}
It's not enough just to have a depth buffer; you need to tell OpenGL how you want to use it. Try adding the following lines:
glEnable(GL_DEPTH_TEST); // Enable depth testing
glDepthMask(GL_TRUE); // Enable depth write
glDepthFunc(GL_LEQUAL); // Choose the depth comparison function
While we're here, I'd recommend GLKViewDrawableDepthFormat24 over GLKViewDrawableDepthFormat16 for most use cases (better precision).
I'd also recommend familiarizing yourself with Xcode's frame capture feature (doc); it really is an invaluable way to figure out what is going on when rendering is not working as intended.

Storing floats in a texture in OpenGL ES

In WebGL, I am trying to create a texture with texels each consisting of 4 float values. Here I attempt to create a simple texture with one vec4 in it.
var textureData = new Float32Array(4);
var texture = gl.createTexture();
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
// target, level, internal format, width, height
gl.TEXTURE_2D, 0, gl.RGBA, 1, 1,
// border, data format, data type, pixels
0, gl.RGBA, gl.FLOAT, textureData
);
My intent is to sample it in the shader using a sampler like so:
uniform sampler2D data;
...
vec4 retrieved = texture2D(data, vec2(0.0, 0.0));
However, I am getting an error during gl.texImage2D:
WebGL: INVALID_ENUM: texImage2D: invalid texture type
WebGL error INVALID_ENUM in texImage2D(TEXTURE_2D, 0, RGBA, 1, 1, 0, RGBA, FLOAT,
[object Float32Array])
Comparing the OpenGL ES spec and the OpenGL 3.3 spec for texImage2D, it seems like I am not allowed to use gl.FLOAT. In that case, how would I accomplish what I am trying to do?
You can create a byte array from your float array. Each float takes 4 bytes (it is a 32-bit float). This array can then be uploaded as a texture using the standard RGBA format with unsigned byte type. That creates a texture where each texel contains a single 32-bit floating-point number, which seems to be exactly what you want.
The only problem is that the float value is split into four values (the RGBA channels) when you retrieve it from the texture in your fragment shader. So what you are looking for is most likely "how to convert a vec4 into a single float".
You should note that what you are trying to do, an RGBA internal format consisting of 32-bit floats, will not work here, as your texture will still be 32 bits per texel, so forcing floats into it would result in clamping or precision loss. And even if each texel did consist of four 32-bit RGBA floats, your shader would most likely treat them as lowp when sampling with texture2D at some point.
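To illustrate the packing idea above in plain C-style GL (the question itself is WebGL, so treat this purely as a sketch of the technique, not working WebGL code): the raw bytes of the float array are uploaded as an N x 1 RGBA8 texture, one float per texel, and the shader later reassembles each value from the four channels.
// Reinterpret an array of 32-bit floats as bytes and upload it as RGBA8,
// so each texel holds the four bytes of one float.
std::vector<float> values = { 0.5f, 1.25f, -3.0f, 42.0f };
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Filtering must be NEAREST: interpolating raw float bytes is meaningless.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)values.size(), 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, values.data());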
The solution to my problem is actually quite simple! I just needed to type
var float_texture_ext = gl.getExtension('OES_texture_float');
Now WebGL can use floating-point textures!
This MDN page tells us why:
Note: In WebGL, unlike in other GL APIs, extensions are only available if explicitly requested.
