How to use glReadPixels to read pixels to bitmap in Android NDK? - opengl-es

I use glReadPixels to read pixel data into a bitmap, but I get a wrong bitmap.
The main code is below:
JNI code:
jint size = width * height * 4;
GLubyte *pixels = static_cast<GLubyte *>(malloc(size));
glReadPixels(
0,
0,
width,
height,
GL_RGBA,
GL_UNSIGNED_BYTE,
pixels
);
Kotlin code:
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
val dataBuf = ByteBuffer.wrap(pixels)
dataBuf.rewind()
bitmap.copyPixelsFromBuffer(dataBuf)
And I get a wrong bitmap like the one below.
The correct one should look like this.
Can anyone tell me what is wrong?

The reason is that the texture has been rotated, so the order in which the pixels are read back no longer matches the order the Bitmap expects. In particular, glReadPixels returns rows starting from the bottom of the framebuffer, while Bitmap.copyPixelsFromBuffer expects the first row to be the top one, so the data has to be flipped vertically before it is copied.
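For illustration only, here is a minimal C++ sketch of flipping the rows in place right after glReadPixels, reusing the pixels, width and height variables from the JNI code above (the helper name flipRowsVertically is hypothetical, not part of the original code):
#include <cstring>
#include <vector>
#include <jni.h>
#include <GLES2/gl2.h>
// Hypothetical helper: swap rows top-for-bottom so the buffer matches the
// top-down row order that Bitmap.copyPixelsFromBuffer() expects.
static void flipRowsVertically(GLubyte *pixels, jint width, jint height) {
    const size_t stride = static_cast<size_t>(width) * 4;  // RGBA_8888, 4 bytes per pixel
    std::vector<GLubyte> tmp(stride);
    for (jint y = 0; y < height / 2; ++y) {
        GLubyte *top = pixels + static_cast<size_t>(y) * stride;
        GLubyte *bottom = pixels + static_cast<size_t>(height - 1 - y) * stride;
        std::memcpy(tmp.data(), top, stride);
        std::memcpy(top, bottom, stride);
        std::memcpy(bottom, tmp.data(), stride);
    }
}
Alternatively the flip (and any rotation) can be done on the Kotlin side by drawing the bitmap through an android.graphics.Matrix; the in-place C++ version above just avoids creating a second bitmap.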

Related

glReadPixels always returns a black image

Some time ago I wrote code to draw an OpenGL scene to a bitmap in Delphi RAD Studio XE7, and it worked well. The code draws and finalizes a scene, then gets the pixels using the glReadPixels function. I recently tried to compile exactly the same code in Lazarus, but I get only a black image.
Here is the code:
// create main render buffer
glGenFramebuffers(1, @m_OverlayFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_OverlayFrameBuffer);
// create and link color buffer to render to
glGenRenderbuffers(1, @m_OverlayRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
m_OverlayRenderBuffer);
// create and link depth buffer to use
glGenRenderbuffers(1, @m_OverlayDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER,
m_OverlayDepthBuffer);
// check if render buffers were created correctly and return result
Result := (glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE);
...
// flush OpenGL
glFinish;
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
// create pixels buffer
SetLength(pixels, (m_pOwner.ClientWidth * m_Factor) * (m_pOwner.ClientHeight * m_Factor) * 4);
// is alpha blending or antialiasing enabled?
if (m_Transparent or (m_Factor <> 1)) then
// notify that pixels will be read from color buffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
// copy scene from OpenGL to pixels buffer
glReadPixels(0,
0,
m_pOwner.ClientWidth * m_Factor,
m_pOwner.ClientHeight * m_Factor,
GL_RGBA,
GL_UNSIGNED_BYTE,
pixels);
I have already verified, and I am 100% sure, that something is really drawn in my scene (on the Lazarus side as well), that GLext is properly initialized, and that the framebuffer is built correctly (the condition glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE does return true). I would be very grateful if someone could point out what I am doing wrong in my code, keeping in mind that on the same computer it works well in Delphi RAD Studio XE7 but not in Lazarus.
Regards
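For reference, the read path that this code relies on looks roughly like the following C++ sketch (not the poster's Pascal; m_OverlayFrameBuffer, width and height are borrowed from the question, while the function name and the error check are additions). The key point is that the framebuffer must still be bound and glReadBuffer must select the colour attachment at the moment glReadPixels is called, independently of the m_Transparent / m_Factor condition.
#include <cstdio>
#include <vector>
#include <GL/glew.h>  // or any loader that provides the framebuffer entry points
// C++ sketch of the read-back step only (not the poster's code).
std::vector<GLubyte> readOverlayPixels(GLuint m_OverlayFrameBuffer, int width, int height) {
    glBindFramebuffer(GL_FRAMEBUFFER, m_OverlayFrameBuffer); // must be bound when reading
    glReadBuffer(GL_COLOR_ATTACHMENT0);                      // select the attachment unconditionally
    glPixelStorei(GL_PACK_ALIGNMENT, 1);                     // avoid row-padding surprises
    glFinish();                                              // make sure rendering has completed
    std::vector<GLubyte> pixels(static_cast<size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    if (GLenum err = glGetError())
        std::fprintf(stderr, "glReadPixels failed: 0x%x\n", err);
    return pixels;
}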

GPUImage replace colors with colors from textures

Looking at GPUImagePosterizeFilter it seems like an easy adaptation to replace colors with pixels from textures. Say I have an image that is made from 10 greyscale colors. I would like to replace each of the pixel ranges from the 10 colors with pixels from 10 different texture swatches.
What is the proper way to create the textures? I am using the code below (I am not sure about the alpha arguments passed to CGBitmapContextCreate).
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte * spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texName;
What is the proper way to pass the texture to the filter? In my main I have added:
uniform sampler2D fill0Texture;
In the code below, texture is what is returned by the function above.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(fill0Uniform, 1);
Whenever I try to get an image from the spriteContext it is nil, and when I try using pixels from fill0Texture they are always black. I have thought about doing this with 10 chroma-key iterations, but I think replacing all the pixels in a modified GPUImagePosterizeFilter is the way to go.
In order to match colors against the output from the PosterizeFilter, I am using the following code.
float testValue = 1.0 - (float(idx) / float(colorLevels));
vec4 keyColor = vec4(testValue, testValue, testValue, 1.0);
vec4 replacementColor = texture2D( tx0, textureCoord(idx));
float select = step(distance(keyColor,srcColor),.1);
return select * replacementColor;
If the (already posterized) color passed in matches, then the replacement color is returned. The textureCoord(idx) call looks up the replacement color from a GL texture.
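One detail worth checking in the texture-loading code above, offered only as a hedged guess: on OpenGL ES 2.0 a texture whose width or height is not a power of two is only complete with CLAMP_TO_EDGE wrapping and no mipmaps; with the default GL_REPEAT wrap it samples as constant black, which matches the fill0Texture symptom. The extra state, as plain GL calls that can go right after the glBindTexture in the loader:
// Hedged suggestion, not the original code: make the texture NPOT-safe on
// OpenGL ES 2.0 by using non-mipmapped filters and clamped wrapping.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, spriteData);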

Save image from opengl (pyqt) in high resolution

I have a PyQt application in which a QtOpenGL.QGLWidget shows a wind turbine.
I would like to save the turbine as an image in high resolution, but so far I am only able to save the rendered screen image, i.e. in my case 1680x1050 minus borders, toolbars, etc.
glPixelStorei(GL_PACK_ALIGNMENT, 1)
data = glReadPixels(0, 0, self.width, self.height, GL_RGBA, GL_UNSIGNED_BYTE)
How can I get around this limitation?
EDIT
I have tried using a framebuffer,
from __future__ import division
import OpenGL
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from PIL import Image
import time, sys
import numpy as np
WIDTH = 400
HEIGHT = 300
def InitGL():
glMatrixMode(GL_PROJECTION)
gluPerspective(45.0, float(WIDTH) / float(HEIGHT), 0.1, 100.0)
glMatrixMode(GL_MODELVIEW)
def DrawGLScene():
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glLoadIdentity()
glTranslatef(0, 0., -3)
glutWireTeapot(1)
glFlush()
def capture_screen():
DrawGLScene()
glPixelStorei(GL_PACK_ALIGNMENT, 1)
data = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
image = Image.fromstring("RGBA", (WIDTH, HEIGHT), data)
image.transpose(Image.FLIP_TOP_BOTTOM).show()
def capture_fbo(width=800, height=600):
fbo = glGenFramebuffers(1)
render_buf = glGenRenderbuffers(1)
glBindRenderbuffer(GL_RENDERBUFFER, render_buf)
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
glTranslatef(1, 1., -3)
glScale(width / WIDTH, height / HEIGHT, 1)
glutWireTeapot(1.0)
glFlush()
glReadBuffer(GL_COLOR_ATTACHMENT0);
data = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
image = Image.fromstring("RGBA", (width, height), data)
image.transpose(Image.FLIP_TOP_BOTTOM).show()
glDeleteFramebuffers(1, [fbo]);
glDeleteRenderbuffers(1, [render_buf]);
glutInit(sys.argv)
glutInitDisplayMode(GLUT_RGBA | GLUT_DEPTH)
glutInitWindowSize(WIDTH, HEIGHT)
window = glutCreateWindow("")
glutDisplayFunc(DrawGLScene)
InitGL()
DrawGLScene()
capture_screen()
capture_fbo()
glutMainLoop()
but nothing is drawn in the area outside the normal screen window area.
The way glReadPixels works is it reads the pixels from the currently selected framebuffer.
Not sure what a framebuffer is? Here's the wiki page: http://www.opengl.org/wiki/Framebuffer_Object
A quick explanation: you can think of a framebuffer as a "canvas" that your graphics card draws on. You can have as many framebuffers as you want (sort of... most drivers define a limit somewhere), but only one, the default framebuffer, is displayed, and it can only be as large as the window you are showing on screen.
Here's a tutorial on making a framebuffer: http://www.lighthouse3d.com/tutorials/opengl-short-tutorials/opengl_framebuffer_objects/
Modify it so that the framebuffer you are creating is the resolution that you want to take a screenshot from.
Once you have the framebuffer set up, you can draw to it like so:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, buffer);
/* drawing code */
glReadPixels();
If you want to be able to still see something on screen, you will need to draw to the default frame buffer after drawing to the new framebuffer:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); //"0" means return to default framebuffer
/* same drawing code */
Note that this means you will be drawing your entire scene twice! Make sure your framerate can handle it.
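To make the above concrete, here is a hedged C++ sketch of the capture path; the names fbo, fboWidth and fboHeight are illustrative, and the FBO is assumed to already have a colour (and depth) attachment as in the snippets earlier on this page. Two details that are easy to miss, and that would also explain the cropped result in the edit above: glBindFramebuffer(GL_FRAMEBUFFER, ...) binds the FBO for both drawing and reading, and the viewport is separate context state, so it has to be resized to the FBO's dimensions before drawing into it.
#include <vector>
#include <GL/glew.h>  // any loader that provides the framebuffer entry points
// Hedged sketch: render one frame into an offscreen FBO that may be larger
// than the window, then read the pixels back.
std::vector<GLubyte> captureOffscreen(GLuint fbo, int fboWidth, int fboHeight) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);     // bind for drawing *and* reading
    glViewport(0, 0, fboWidth, fboHeight);      // viewport must match the FBO, not the window
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene here ...
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    std::vector<GLubyte> pixels(static_cast<size_t>(fboWidth) * fboHeight * 4);
    glReadPixels(0, 0, fboWidth, fboHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);       // back to the default framebuffer
    return pixels;
}
Remember to restore the viewport to the window size before drawing the normal on-screen frame afterwards.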

How to use renderInContext: with CGBitmapContextCreate and Retina?

I have manually created a CGBitmapContext:
bitmapContext = CGBitmapContextCreate( myImageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
And drawing a layer to it:
[self.myView.layer renderInContext:bitmapContext];
However, on Retina my layer renders only at half the original size.
Setting the contentScaleFactor property on the UIView doesn't change anything.
What's the right way to do this?
Of course the answer came to me the minute I asked the question. Simply do this:
float scale = self.myView.contentScaleFactor;
CGContextScaleCTM(context, scale, scale);
[self.myView.layer renderInContext:context];

Placing an image on a window while controlling its size - directX, c++

All code is pseudocode.
I have an image sized 1024 x 768.
It is a jpg
I have a window sized 1024 x 768.
I create a sprite, get a texture for it, then draw it to the screen.
for example:
D3DXCreateSprite( gfx.d3dDevice, &sprite );
D3DXCreateTextureFromFile( gfx.d3dDevice, _fileName, &gTexture );
....
pos.x = 0.0f;
pos.y = 0.0f;
pos.z = 0.0f;
sprite->Draw( scenes[ 0 ]->backgroundImage->gTexture, NULL, NULL, &pos, 0xFFFFFFFF );
When it comes out onto the screen, the picture is warped and does not look how it should.
My question would be:
How do I control the size of the output of the image when none of the texture, sprite or draw functions seem to have functionality for it..?
I thought maybe it was something to do with Rect but that just clips images //shrugs//
Maybe this is what I need to use:
http://msdn.microsoft.com/en-us/library/ms887494.aspx
directx texture dimensions
result = D3DXCreateTextureFromFileEx(
gfx.d3dDevice,
_fileName,
1024,
768,
D3DX_DEFAULT,
0,
D3DFMT_UNKNOWN,
D3DPOOL_MANAGED,
D3DX_DEFAULT,
D3DX_DEFAULT,
0,
NULL,
NULL,
&gTexture
);
D3DXCreateTextureFromFileEx allows the texture to be created at its exact size, without the power-of-two rounding (and the resulting stretching) that D3DXCreateTextureFromFile can apply by default:
this was helpful: directx texture dimensions
and this was extremely helpful: http://msdn.microsoft.com/en-us/library/ms887494.aspx
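If the drawn size still needs adjusting after the texture is created at its real dimensions, ID3DXSprite can also scale at draw time through SetTransform. A brief sketch, reusing the sprite and gTexture names from the question; the 0.5 scale factors and the Begin/End placement are illustrative only:
// Hedged sketch: scale the sprite at draw time instead of (or in addition to)
// forcing the texture size at load time.
D3DXMATRIX scaling;
D3DXMatrixScaling(&scaling, 0.5f, 0.5f, 1.0f);   // e.g. draw at half size
D3DXVECTOR3 pos(0.0f, 0.0f, 0.0f);
sprite->Begin(D3DXSPRITE_ALPHABLEND);
sprite->SetTransform(&scaling);                  // applies to the Draw calls that follow
sprite->Draw(gTexture, NULL, NULL, &pos, 0xFFFFFFFF);
sprite->End();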
