OpenGLES2: How to load and access a big float array - opengl-es

I have a large WxH float array:
float floatArray[W][H];
I want to access it in a fragment shader, and due to its size I need to load and access it through a texture:
vec4 v4 = texture2D(tex, v_texCoord);
//Getting v4.x as floatArray[v_texCoord.x * W][v_texCoord.y * H]
I load the texture like this:
GLuint texturenames[1];
glGenTextures(1, texturenames);
glActiveTexture(GL_TEXTURE0 + texturenames[0]); // note: this uses the texture *name* as a texture *unit* index
glBindTexture(GL_TEXTURE_2D, texturenames[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0, GL_LUMINANCE, GL_FLOAT, floatArray);
glUniform1i(glGetUniformLocation(program_, "tex"), texturenames[0]);
I don't get the right values. Note that the third (internalformat) and seventh (format) parameters of glTexImage2D are GL_LUMINANCE.
void glTexImage2D(GLenum target,
                  GLint level,
                  GLint internalformat,
                  GLsizei width,
                  GLsizei height,
                  GLint border,
                  GLenum format,
                  GLenum type,
                  const GLvoid *data);
How can I load and access a big float array in OpenGLES2?

Short answer: you can't. OpenGL ES 2.0 doesn't support floating-point texturing (unless the optional OES_texture_float extension happens to be available).
Given that you only want a single channel, you could encode each value across the four bytes of an RGBA unorm texture and recover it arithmetically in the shader, but that sounds horribly expensive on a mobile GPU.
OpenGL ES 3.0 does support float texturing, so you may have more luck there.
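For reference, a minimal sketch of that encoding workaround, assuming the floats lie in [0, 1]: split each value into four base-255 digits on the CPU, upload them as a GL_RGBA/GL_UNSIGNED_BYTE texture with GL_NEAREST filtering (GL_LINEAR would blend the encoded bytes into garbage), and recombine them in the shader with a dot product. packFloatRGBA and unpackFloat are made-up names for illustration.
// CPU side: pack one float in [0, 1] into four bytes (floorf is from <math.h>)
void packFloatRGBA(float v, unsigned char out[4])
{
    for (int i = 0; i < 4; ++i) {
        float scaled = v * 255.0f;
        float digit = floorf(scaled);
        out[i] = (unsigned char)digit;
        v = scaled - digit;            // carry the remainder into the next byte
    }
}
The matching shader-side decode: each sampled channel is digit/255, so a dot product with the inverse powers of 255 reconstructs the value.
// fragment shader
float unpackFloat(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}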

Related

Scale OpenGL texture and return bitmap in CPU memory

I have a texture on the GPU defined by an OpenGL textureID and target.
For further processing I need a 300-pixel-wide bitmap in CPU memory (width 300 pixels, height scaled in proportion to the source).
The pixel format should be RGBA, ARGB or BGRA with float components.
How can this be done?
Thanks for your reply.
I tried the following, but I get only white pixels back:
glEnable(GL_TEXTURE_2D);
// create render texture
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, (GLsizei)analyzeWidth, (GLsizei)analyzeHeight, 0,GL_RGBA, GL_FLOAT, 0);
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTexture, 0);
// draw texture
glBindTexture(GL_TEXTURE_2D, inTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Draw a textured quad
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(0, 0, 0);
glTexCoord2f(0, 1); glVertex3f(0, 1, 0);
glTexCoord2f(1, 1); glVertex3f(1, 1, 0);
glTexCoord2f(1, 0); glVertex3f(1, 0, 0);
glEnd();
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status == GL_FRAMEBUFFER_COMPLETE)
{
}
unsigned char *buffer = CGBitmapContextGetData(mainCtx);
glReadPixels(0, 0, (GLsizei)analyzeWidth, (GLsizei)analyzeHeight, GL_RGBA, GL_FLOAT, buffer);
glDisable(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
1. Create a second texture with the target size: 300 pixels wide and 300*height/width pixels high.
2. Create a framebuffer object and attach the new texture as its color buffer.
3. Set appropriate texture filters for the (unscaled) source texture. You have the choice between point sampling (GL_NEAREST) and bilinear filtering (GL_LINEAR). If you are downscaling by more than a factor of 2, you might also consider mipmapping: call glGenerateMipmap on the source texture first and use one of the GL_..._MIPMAP_... minification filters. However, the availability of mipmapping depends on how the source texture was created; if it is an immutable texture object without the mipmap pyramid, this won't work.
4. Render a textured object (with the original source texture) to the new texture. The most intuitive geometry is a viewport-filling rectangle; the most efficient is a single triangle.
5. Read back the scaled texture with glReadPixels (via the FBO) or glGetTexImage (directly from the texture). For improved performance, consider asynchronous readbacks via pixel buffer objects. (A sketch of these steps follows below.)
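A minimal sketch, assuming a GL context is current; inTextureId, srcW/srcH, and drawTexturedQuad() stand in for the question's source texture, its dimensions, and whatever call renders a viewport-filling textured primitive:
GLsizei dstW = 300;
GLsizei dstH = (GLsizei)(300.0f * srcH / srcW);  // keep the source aspect ratio
// 1. destination texture at the target size
GLuint dstTex;
glGenTextures(1, &dstTex);
glBindTexture(GL_TEXTURE_2D, dstTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dstW, dstH, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// 2. FBO with the new texture as its color attachment
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, dstTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    return;  // check completeness *before* drawing, not after
// 3 + 4. filter the source texture and render it into the FBO
glViewport(0, 0, dstW, dstH);
glBindTexture(GL_TEXTURE_2D, inTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
drawTexturedQuad();  // assumed helper
// 5. read the scaled result back to CPU memory (use GL_FLOAT for float components)
unsigned char *pixels = (unsigned char *)malloc((size_t)dstW * dstH * 4);
glReadPixels(0, 0, dstW, dstH, GL_RGBA, GL_UNSIGNED_BYTE, pixels);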

GPUImage replace colors with colors from textures

Looking at GPUImagePosterizeFilter, it seems like an easy adaptation to replace colors with pixels from textures. Say I have an image that is made from 10 grayscale colors. I would like to replace each of the pixel ranges from the 10 colors with pixels from 10 different texture swatches.
What is the proper way to create the textures? I am using the code below (I am not sure about the alpha arguments passed to CGBitmapContextCreate).
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte * spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texName;
What is the proper way to pass the texture to the filter? In my fragment shader I have added:
uniform sampler2D fill0Texture;
In the code below, texture is what's passed back from the function above.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(fill0Uniform, 1);
Whenever I try to get an image from the spriteContext it's nil, and when I try using pixels from fill0Texture they are always black. I have thought about doing this with 10 chroma-key iterations, but I think replacing all the pixels in a modified GPUImagePosterizeFilter is the way to go.
In order to match colors against the output from the PosterizeFilter, I am using the following code.
float testValue = 1.0 - (float(idx) / float(colorLevels));
vec4 keyColor = vec4(testValue, testValue, testValue, 1.0);
vec4 replacementColor = texture2D( tx0, textureCoord(idx));
float select = step(distance(keyColor,srcColor),.1);
return select * replacementColor;
If the (already posterized) color passed in matches, the replacement color is returned. The textureCoord(idx) call looks up the replacement color from a GL texture.
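To show how that matching might be driven, here is a hedged sketch of a complete fragment shader: inputImageTexture and textureCoordinate are GPUImage's usual names, the 10 swatches are assumed to be packed side by side into one atlas texture bound as tx0, and textureCoord() below is a stand-in for the question's lookup helper:
precision mediump float;
uniform sampler2D inputImageTexture;  // the already-posterized source
uniform sampler2D tx0;                // swatch atlas (assumed 10 x 1 layout)
uniform int colorLevels;
varying vec2 textureCoordinate;
// assumption: map this fragment's coordinate into the idx-th cell of the atlas
vec2 textureCoord(int idx)
{
    return vec2((float(idx) + textureCoordinate.x) / 10.0, textureCoordinate.y);
}
// the per-level test from above
vec4 matchColor(int idx, vec4 srcColor)
{
    float testValue = 1.0 - (float(idx) / float(colorLevels));
    vec4 keyColor = vec4(testValue, testValue, testValue, 1.0);
    vec4 replacementColor = texture2D(tx0, textureCoord(idx));
    return step(distance(keyColor, srcColor), 0.1) * replacementColor;
}
void main()
{
    vec4 srcColor = texture2D(inputImageTexture, textureCoordinate);
    vec4 outColor = vec4(0.0);
    for (int idx = 0; idx < 10; idx++) {  // GLSL ES requires a constant bound
        outColor += matchColor(idx, srcColor);
    }
    gl_FragColor = outColor;
}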

Texture is all black

As far as I can tell, my first attempt to draw a texture on a triangle is being setup correctly, but it shows up as all black.
I am sending the image to OpenGL as such:
GLuint gridTexture;
glGenTextures(1, &gridTexture);
glBindTexture(GL_TEXTURE_2D, gridTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x,
size.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
While I'm not sure how to verify that "pixels" holds what I'd expect, I do know that size.x and size.y log correctly for the PNG I'm using, so I assume the pixel data is fine as well, since both are extracted together in my resource loader.
My shaders are simple:
attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec2 TextureCoordinate;
varying vec4 DestinationColor;
varying vec2 TextureCoordOut;
uniform mat4 Projection;
uniform mat4 Modelview;
void main(void)
{
DestinationColor = SourceColor;
gl_Position=Projection*Modelview*Position;
TextureCoordOut = TextureCoordinate;
}
fragment:
varying lowp vec4 DestinationColor;
varying mediump vec2 TextureCoordOut;
uniform sampler2D Sampler;
void main(void)
{
gl_FragColor = texture2D(Sampler, TextureCoordOut) * DestinationColor;
// gl_FragColor = DestinationColor; //this works and I see varied colors fine
}
I send texture coordinates from client memory like this:
glEnableVertexAttribArray(textCoordAttribute);
glVertexAttribPointer(textCoordAttribute, 2, GL_FLOAT, GL_FALSE, sizeof(vec2),&texs[0]);
The triangle and its vertices with texture coordinates are like this; I know the coordinates aren't polished, I just want to see something on the screen:
// Structures that hold my vertex data are omitted, but you can see the vertices and the texture coordinates I associate with them. The triangle draws fine, and if I disable the texture2D() call in the fragment shader I can see the vertex colors, so everything appears to work except the texture itself.
top.Color=vec4(1,0,0,1);
top.Position=vec3(0,300,0);
texs.push_back(vec2(0,1));
right.Color=vec4(0,1,0,1);
right.Position=vec3(300,0,0);
texs.push_back(vec2(1,0));
left.Color=vec4(0,0,1,1);
left.Position=vec3(-300,0,0);
texs.push_back(vec2(0,0));
verts.push_back(top);
verts.push_back(right);
verts.push_back(left);
For good measure I tried binding the texture again with glBindTexture before drawing, to make sure it was "active", but that made no difference.
I think there is probably a very simple step I am not doing somewhere but I can't find it anywhere.
For people suffering from textures showing up black: another cause I found is not setting these simple parameters when creating the texture (before glTexImage2D); without them the texture shows up black:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE); // desktop GL only; this parameter does not exist in ES 2.0
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
The issue is resolved by making my texture dimensions a power of two in width and height. (OpenGL ES 2.0 only guarantees non-power-of-two textures with GL_CLAMP_TO_EDGE wrapping and no mipmapping, so the GL_REPEAT parameters above would break an NPOT texture.)
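If resizing to a power of two is not an option, a sketch of the ES 2.0-safe parameters for an NPOT texture:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no GL_*_MIPMAP_* minification filters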
You must have data bound to the texture before setting the filters; call glTexImage2D first:
GLuint gridTexture;
glGenTextures(1, &gridTexture);
glBindTexture(GL_TEXTURE_2D, gridTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x,
size.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

How to efficiently copy depth buffer to texture on OpenGL ES

I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears that glCopyTexImage2D does not support depth formats on ES. Reading a related thread, it seems I can use the framebuffer and a fragment shader to extract the depth data. So I'm trying to write the depth component to the color buffer, then copy it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
if(bRenderDepth) {
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
} else {
gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
}
}
I have experimented with removing the 'bRenderDepth' branch, and it doesn't speed things up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which obviously is not acceptable. If I pull out the copy, it's way above 30 fps. I'm getting two suggestions from the Xcode OpenGL ES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work on resolving the two issues above (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
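// A typical choice here (assumed, since depth textures are usually sampled
// without filtering and must not wrap):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);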
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
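A hedged usage sketch of the depth pass under that setup; DrawScene(), width, and height are the question's, and mainFramebuffer stands in for whatever framebuffer you normally render to:
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glViewport(0, 0, width, height);
glClear(GL_DEPTH_BUFFER_BIT);
DrawScene();  // depth values land directly in g_uiDepthBuffer
// rebind your main framebuffer (on iOS there is no default FBO 0) and
// sample g_uiDepthBuffer in the shadow pass, with no copy in between
glBindFramebuffer(GL_FRAMEBUFFER, mainFramebuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);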

Fastest possible OpenCV 2 OpenGL context

I've been searching the net for a few days looking for the fastest possible way to take an OpenCV webcam capture and display it in an OpenGL context. So far this seems to work OK, until I need to zoom.
void Camera::DrawIplImage1(IplImage *image, int x, int y, GLfloat xZoom, GLfloat yZoom)
{
GLenum format;
switch(image->nChannels) {
case 1:
format = GL_LUMINANCE;
break;
case 2:
format = GL_LUMINANCE_ALPHA;
break;
case 3:
format = GL_BGR;
break;
default:
return;
}
yZoom = -yZoom; // flip vertically: OpenCV rows are top-down, glDrawPixels draws bottom-up
glRasterPos2i(x, y);
glPixelZoom(xZoom, yZoom); //Slow when not (1.0f, 1.0f);
glDrawPixels(image->width, image->height, format, GL_UNSIGNED_BYTE, image->imageData);
}
I've heard that the FBO approach might be even faster. Any ideas on the fastest possible way to get an OpenCV webcam capture into an OpenGL context? I will test everything suggested and post results.
Are you sure your OpenGL implementation needs power-of-two textures? Even very poor PC implementations (yes, Intel) can manage arbitrary sizes now.
Then the quickest route is probably an OpenGL pixel buffer object (PBO).
Sorry, the code is from Qt, so the function names are slightly different, but the sequence is the same.
Allocate the OpenGL texture:
glEnable(GL_TEXTURE_2D);
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D,texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, glFormat, width, height, 0, glFormatExt, glType, NULL );
glDisable(GL_TEXTURE_2D);
Now map the pixel buffer to get a pointer to its memory:
glbuffer.bind();
unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::ReadWrite);
// creates an openCV image but the pixel data is stored in an opengl buffer
cv::Mat opencvImage(rows, cols, CV_TYPE, dest); // CV_TYPE is a placeholder, e.g. CV_8UC3 for BGR
.... do stuff ....
glbuffer.unmap(); // pointer is no longer valid - so neither is openCV image
Then to draw it; this should be essentially instant, because the data was already transferred to the GPU when the buffer was unmapped above:
glBindTexture(GL_TEXTURE_2D,texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, width, height, glFormatExt, glType, 0);
glbuffer.release();
By using different types for glFormat and glFormatExt you can have the graphics card automatically convert between OpenCV's BGR and typical RGBA display formats in hardware.
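For readers not using Qt, here is a rough raw-GL equivalent of the same sequence, as a sketch; GL_PIXEL_UNPACK_BUFFER is the PBO target for texture uploads, and glFormat/glFormatExt are pinned to GL_RGB/GL_BGR for a 3-channel OpenCV image:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 3, NULL, GL_STREAM_DRAW);
// map, wrap in a cv::Mat header (no copy), fill, unmap
unsigned char *dest = (unsigned char *)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
cv::Mat opencvImage(height, width, CV_8UC3, dest);
// ... write the webcam frame into opencvImage ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);  // pointer (and Mat) invalid from here on
// upload from the PBO; the final argument is an offset into the bound buffer, not a pointer
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);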
