How to use renderInContext: with CGBitmapContextCreate and Retina? (CALayer)

I have manually created a CGBitmapContext:
bitmapContext = CGBitmapContextCreate( myImageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
And I am drawing a layer into it:
[self.myView.layer renderInContext:bitmapContext];
However, on Retina displays my layer renders at only half its original size.
Setting the contentScaleFactor property on the UIView doesn't change anything.
What's the right way to do this?

Of course the answer came to me the minute I asked the question. Simply do this:
float scale = self.myView.contentScaleFactor;
CGContextScaleCTM(bitmapContext, scale, scale);
[self.myView.layer renderInContext:bitmapContext];
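One detail the snippet glosses over (an assumption on my part, not stated in the original): the bitmap itself must be allocated at the scaled pixel size, or the scaled render will be clipped. A minimal sketch reusing the names from the question:
// Sketch only: width/height are the view's size in points, scale is its
// contentScaleFactor (2.0 on Retina); myImageData must hold
// pixelWidth * pixelHeight * 4 bytes.
size_t pixelWidth  = (size_t)(width  * scale);
size_t pixelHeight = (size_t)(height * scale);
CGContextRef bitmapContext = CGBitmapContextCreate( myImageData, pixelWidth, pixelHeight,
    8, 4 * pixelWidth, colorSpace,
    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGContextScaleCTM( bitmapContext, scale, scale );   // draw in points, store in pixels
// ...then call renderInContext: on the layer exactly as above.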

Related

How to use glReadPixels to read pixels to bitmap in Android NDK?

I use glReadPixels to read pixel data into a bitmap, but I get a wrong bitmap.
The main code is below.
JNI code:
jint size = width * height * 4;
GLubyte *pixels = static_cast<GLubyte *>(malloc(size));
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
Kotlin code:
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
var dataBuf = ByteBuffer.wrap(pixels)
dataBuf.rewind()
bitmap.copyPixelsFromBuffer(dataBuf)
And I get a wrong bitmap like the one below.
The correct one should look like this:
Can anyone tell me what is wrong?
The reason is that the texture ends up flipped/rotated: the order in which glReadPixels returns the pixels does not match the order the Bitmap expects.
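To make that concrete, here is one way to fix it on the JNI side (my own sketch, not the original poster's code), assuming the usual cause: glReadPixels returns the image bottom-up, while Bitmap.copyPixelsFromBuffer expects the top row first, so the rows have to be reversed before wrapping the buffer.
#include <cstdlib>   // malloc / free
#include <cstring>   // memcpy

// pixels, width and height are the same variables as in the snippet above.
const int stride = width * 4;                          // bytes per RGBA row
GLubyte *tmpRow = static_cast<GLubyte *>(malloc(stride));
for (int y = 0; y < height / 2; ++y) {
    GLubyte *top    = pixels + y * stride;
    GLubyte *bottom = pixels + (height - 1 - y) * stride;
    memcpy(tmpRow, top, stride);                       // swap row y with its mirror row
    memcpy(top, bottom, stride);
    memcpy(bottom, tmpRow, stride);
}
free(tmpRow);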

Unity3d UI issue with Xiaomi

On Xiaomi devices, an image is drawn outside of the camera's letterbox.
On other devices everything is correct.
I attached both Samsung and Xiaomi screenshots; the one that looks wrong is the Xiaomi device, while the Samsung one looks as intended.
float targetaspect = 750f / 1334f;
// determine the game window's current aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height;
// current viewport height should be scaled by this amount
float scaleheight = windowaspect / targetaspect;
// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();
// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
    Rect rect = camera.rect;
    rect.width = 1.0f;
    rect.height = scaleheight;
    rect.x = 0;
    rect.y = (1.0f - scaleheight) / 2.0f;
    camera.rect = rect;
}
Try setting the image's wrap mode to Clamp instead of Repeat.
This will give you black borders, but you won't get that weird stretched texture.
I don't know what caused the problem, but I solved it in a somewhat hacky way: I added a second camera that only displays a black background. Only my main camera's viewport is letterboxed, not the second camera, so the display ends up looking correct.

glTexSubImage2D shifting NSImage by a pixel

I’m working on an app that creates its own texture atlas. The elements on the atlas can vary in size but are placed in a grid pattern.
It’s all working fine except for the fact that when I write over the section of the atlas with a new element (the data from an NSImage), the image is shifted a pixel to the right.
The code I’m using to write the pixels onto the atlas is:
-(void)writeToPlateWithImage:(NSImage*)anImage atCoord:(MyGridPoint)gridPos;
{
static NSSize insetSize; //ultimately this is the size of the image in the box
static NSSize boundingBox; //this is the size of the box that holds the image in the grid
static CGFloat multiplier;
multiplier = 1.0;
NSSize plateSize = NSMakeSize(atlas.width, atlas.height);//Size of entire atlas
MyGridPoint _gridPos;
//make sure the column and row position is legal
_gridPos.column= gridPos.column >= m_numOfColumns ? m_numOfColumns - 1 : gridPos.column;
_gridPos.row = gridPos.row >= m_numOfRows ? m_numOfRows - 1 : gridPos.row;
_gridPos.column = _gridPos.column < 0 ? 0 : _gridPos.column;
_gridPos.row = _gridPos.row < 0 ? 0 : _gridPos.row;
insetSize = NSMakeSize(plateSize.width / m_numOfColumns, plateSize.height / m_numOfRows);
boundingBox = insetSize;
//…code here to calculate the size to make anImage so that it fits into the space allowed
//on the atlas.
//multiplier var will hold a value that sizes up or down the image…
insetSize.width = anImage.size.width * multiplier;
insetSize.height = anImage.size.height * multiplier;
//provide a padding around the image so that when mipmaps are created the image doesn’t ‘bleed’
//if it’s the same size as the grid’s boxes.
insetSize.width -= ((insetSize.width * (insetPadding / 100)) * 2);
insetSize.height -= ((insetSize.height * (insetPadding / 100)) * 2);
//roundUp() is a handy function I found somewhere (I can't remember now)
//that makes the first param a multiple of the second..
//here we make sure the image rows are aligned: as it's RGBA we make
//it a multiple of 4
insetSize.width = (CGFloat)roundUp((int)insetSize.width, 4);
insetSize.height = (CGFloat)roundUp((int)insetSize.height, 4);
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:insetSize];
NSData *insetData = [insetImage TIFFRepresentation];
GLubyte *data = malloc(insetData.length);
memcpy(data, [insetData bytes], insetData.length);
insetImage = NULL;
insetData = NULL;
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, atlas.textureIndex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //have also tried 2,4, and 8
GLint Xplace = (GLint)(boundingBox.width * _gridPos.column) + (GLint)((boundingBox.width - insetSize.width) / 2);
GLint Yplace = (GLint)(boundingBox.height * _gridPos.row) + (GLint)((boundingBox.height - insetSize.height) / 2);
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
free(data);
glBindTexture(GL_TEXTURE_2D, 0);
glGetError();
}
The images are RGBA, 8bit (as reported by PhotoShop), here's a test image I've been using:
and here's a screen grab of the result in my app:
Am I unpacking the image incorrectly...? I know the resizeImage: function works, as I've saved its result to disk as well as bypassed it, so the problem is somewhere in the GL code...
EDIT: just to clarify, the section of the atlas being rendered is larger than the box diagram. So the shift is occurring within the area that's written to with glTexSubImage2D.
EDIT 2: Sorted, finally, by offsetting the copied data that goes into the section of the atlas.
I don't fully understand why that is, perhaps it's a hack instead of a proper solution but here it is.
//resize the image to fit into the section of the atlas
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:NSMakeSize(insetSize.width, insetSize.height)];
//pointer to the raw data
NSData *insetData = [insetImage TIFFRepresentation];
const GLubyte *insetDataPtr = [insetData bytes];
//for debugging, I placed the offset value next
int offset = 8;//it needed a 2 pixel (2 * 4 byte for RGBA) offset
//copy the data with the offset into a temporary data buffer
memcpy(data, insetDataPtr + offset, insetData.length - offset);
/*
.
. Calculate its position within the texture
.
*/
//And finally overwrite the texture
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
You may be running into the issue I answered already here: stackoverflow.com/a/5879551/524368
It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 come to lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the border between the pixels at the texture wrap. If you build your texture atlas under the assumption that 0 and 1 are on pixel centers, then using that same addressing scheme in OpenGL will lead to either a blurry picture or pixel shifts. You need to account for this.
I still don't understand how that makes a difference to a sub-section of the texture that's being rendered.
It helps a lot to understand that to OpenGL, textures are not so much images as support samples for an interpolator (hence the "sampler" uniforms in shaders). So to get really crisp-looking images you have to choose the texture coordinates you sample from in such a way that the interpolator evaluates exactly at the positions of the support samples. The positions of those samples, however, are neither integer coordinates nor simply fractions (i/N).
Note that newer versions of GLSL provide the texture sampling function texelFetch which completely bypasses the interpolator and addresses texture pixels directly. If you need pixel perfect texturing you might find this easier to use (if available).
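To make the "support samples" point concrete, here is a small illustration of my own (the names are hypothetical, not from the answer): the center of texel x in a texture that is texW texels wide is not at x / texW, it is at (x + 0.5) / texW.
// Texture coordinate of the CENTER of texel x in a texture texW texels wide.
float texelCenterU(int x, int texW)
{
    return (x + 0.5f) / (float)texW;   // e.g. texel 0 of a 256-wide texture -> 0.5 / 256
}
// For an atlas cell starting at texel cellX and cellW texels wide, a quad drawn exactly
// cellW screen pixels wide and aligned to pixel boundaries can use
//     u = cellX / (float)texW  ..  (cellX + cellW) / (float)texW;
// the screen-pixel centers then evaluate the interpolator exactly at the texel centers,
// which is what gives a crisp, unshifted result.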

Save image from OpenGL (PyQt) in high resolution

I have a PyQt application in which a QtOpenGL.QGLWidget shows a wind turbine.
I would like to save the turbine as an image in high resolution, but so far I am only able to save the rendered screen image, i.e. in my case 1680x1050 minus borders, toolbars, etc.
glPixelStorei(GL_PACK_ALIGNMENT, 1)
data = glReadPixels(0, 0, self.width, self.height, GL_RGBA, GL_UNSIGNED_BYTE)
How can I get around this limitation?
EDIT
I have tried using a framebuffer object:
from __future__ import division
import OpenGL
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from PIL import Image
import time, sys
import numpy as np
WIDTH = 400
HEIGHT = 300
def InitGL():
    glMatrixMode(GL_PROJECTION)
    gluPerspective(45.0, float(WIDTH) / float(HEIGHT), 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)

def DrawGLScene():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    glTranslatef(0, 0., -3)
    glutWireTeapot(1)
    glFlush()

def capture_screen():
    DrawGLScene()
    glPixelStorei(GL_PACK_ALIGNMENT, 1)
    data = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
    image = Image.fromstring("RGBA", (WIDTH, HEIGHT), data)
    image.transpose(Image.FLIP_TOP_BOTTOM).show()

def capture_fbo(width=800, height=600):
    fbo = glGenFramebuffers(1)
    render_buf = glGenRenderbuffers(1)
    glBindRenderbuffer(GL_RENDERBUFFER, render_buf)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf)
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glTranslatef(1, 1., -3)
    glScale(width / WIDTH, height / HEIGHT, 1)
    glutWireTeapot(1.0)
    glFlush()
    glReadBuffer(GL_COLOR_ATTACHMENT0)
    data = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
    image = Image.fromstring("RGBA", (width, height), data)
    image.transpose(Image.FLIP_TOP_BOTTOM).show()
    glDeleteFramebuffers(1, [fbo])
    glDeleteRenderbuffers(1, [render_buf])

glutInit(sys.argv)
glutInitDisplayMode(GLUT_RGBA | GLUT_DEPTH)
glutInitWindowSize(WIDTH, HEIGHT)
window = glutCreateWindow("")
glutDisplayFunc(DrawGLScene)
InitGL()
DrawGLScene()
capture_screen()
capture_fbo()
glutMainLoop()
but nothing is drawn in the areas outside the normal screen window area.
The way glReadPixels works is it reads the pixels from the currently selected framebuffer.
Not sure what a framebuffer is? Here's the wiki page: http://www.opengl.org/wiki/Framebuffer_Object
A quick explanation: you can think of a framebuffer as a "canvas" that your graphics card draws on. You can have as many framebuffers as you want (sort of; most drivers define a limit somewhere), but only the default framebuffer is displayed, and it can only be as large as the window you are displaying on screen.
Here's a tutorial on making a framebuffer: http://www.lighthouse3d.com/tutorials/opengl-short-tutorials/opengl_framebuffer_objects/
Modify it so that the framebuffer you are creating is the resolution that you want to take a screenshot from.
Once you have the framebuffer set up, you can draw to it like so:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, buffer);
/* drawing code */
glReadPixels();
If you want to be able to still see something on screen, you will need to draw to the default frame buffer after drawing to the new framebuffer:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); //"0" means return to default framebuffer
/* same drawing code */
Note that this means you will be drawing your entire scene twice! Make sure your framerate can handle it.
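Putting those pieces together, here is a minimal sketch of the off-screen path described above, in plain C-style OpenGL (the resolution, drawScene() and the error handling are placeholders of mine, not code from the question):
#include <cassert>
#include <vector>
// plus your usual OpenGL header/loader; an active GL context is assumed.

std::vector<unsigned char> captureHighRes(int W, int H)
{
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &colorRb);
    glGenRenderbuffers(1, &depthRb);

    // Color and depth storage at the target resolution.
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, W, H);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
    assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

    glViewport(0, 0, W, H);       // render to the full FBO size, not the window size
    drawScene();                  // placeholder for your drawing code

    std::vector<unsigned char> pixels(W * H * 4);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window's framebuffer
    glDeleteRenderbuffers(1, &colorRb);
    glDeleteRenderbuffers(1, &depthRb);
    glDeleteFramebuffers(1, &fbo);
    return pixels;
}
Note the glViewport call in particular: if the viewport is left at the window size, everything outside that rectangle is clipped, which would match the symptom described in the edit above. Renderbuffer sizes are also capped by GL_MAX_RENDERBUFFER_SIZE, so very large captures may have to be tiled.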

Placing an image on a window while controlling its size (DirectX, C++)

All code is pseudocode.
I have an image sized 1024 x 768.
It is a jpg
I have a window sized 1024 x 768.
I create a sprite, get a texture for it, then draw it to the screen.
for example:
D3DXCreateSprite( gfx.d3dDevice, &sprite );
D3DXCreateTextureFromFile( gfx.d3dDevice, _fileName, &gTexture );
....
pos.x = 0.0f;
pos.y = 0.0f;
pos.z = 0.0f;
sprite->Draw( scenes[ 0 ]->backgroundImage->gTexture, NULL, NULL, &pos, 0xFFFFFFFF );
When it comes out onto the screen, the picture is warped and does not look how it should:
My question would be:
How do I control the size of the output of the image when none of the texture, sprite or draw functions seem to have functionality for it..?
I thought maybe it was something to do with Rect but that just clips images //shrugs//
Maybe this is what I need to use:
http://msdn.microsoft.com/en-us/library/ms887494.aspx
directx texture dimensions
result = D3DXCreateTextureFromFileEx(
    gfx.d3dDevice,
    _fileName,
    1024,              // width  - matches the source image, no rounding
    768,               // height - matches the source image, no rounding
    D3DX_DEFAULT,      // mip levels
    0,                 // usage
    D3DFMT_UNKNOWN,    // format - take it from the file
    D3DPOOL_MANAGED,   // memory pool
    D3DX_DEFAULT,      // filter
    D3DX_DEFAULT,      // mip filter
    0,                 // color key (none)
    NULL,              // source image info
    NULL,              // palette
    &gTexture
);
D3DXCreateTextureFromFileEx allows a texture to be created without any size warping: unlike the plain D3DXCreateTextureFromFile, which typically rounds the texture up to power-of-two dimensions and stretches the image to fit, the Ex version lets you request the exact 1024 x 768 size.
this was helpful: directx texture dimensions
and this was extremely helpful: http://msdn.microsoft.com/en-us/library/ms887494.aspx
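As an aside (not part of the original answer): if you do need to scale a sprite at draw time, ID3DXSprite also accepts a world transform, so something along these lines is another option; treat it as a sketch rather than the accepted fix.
// Scale the texture to whatever on-screen size you want before drawing it.
D3DXMATRIX scaling;
D3DXMatrixScaling( &scaling, 0.5f, 0.5f, 1.0f );   // e.g. draw at half size
sprite->Begin( D3DXSPRITE_ALPHABLEND );
sprite->SetTransform( &scaling );                  // applies to the following Draw calls
sprite->Draw( gTexture, NULL, NULL, &pos, 0xFFFFFFFF );
sprite->End();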
