In Ogre3D, how to export texture pixel values to system memory

I am using Ogre3D.
I have a texture defined as:
rtt_texture = Ogre::TextureManager::getSingleton().createManual(
    "RttTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    Ogre::TEX_TYPE_2D, texWidth, texHeight, 0, Ogre::PF_R8G8B8,
    Ogre::TU_RENDERTARGET);
I am trying to use the following code to copy its pixel values to memory, but the data I get differs from what has been rendered:
unsigned char* data = new unsigned char[texWidth * texHeight * 3];
data = (unsigned char*)rtt_texture->getBuffer()->lock(0, texWidth*texHeight*3, Ogre::HardwareBuffer::HBL_READ_ONLY);
Are there any errors here?

I found that I actually need to replace
texWidth * texHeight * 3
everywhere with
texWidth * texHeight * 4
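The likely reason is that the render system promotes PF_R8G8B8 to a 32-bit format internally, so the hardware buffer holds 4 bytes per pixel rather than 3. Instead of guessing the internal layout, you can ask Ogre to convert the buffer into a format you choose. A minimal sketch (Ogre 1.x API, untested):
unsigned char* data = new unsigned char[texWidth * texHeight * 4];
Ogre::PixelBox dst(texWidth, texHeight, 1, Ogre::PF_A8R8G8B8, data);
// blitToMemory converts the buffer's internal format and row pitch into the layout described by dst
rtt_texture->getBuffer()->blitToMemory(dst);
// ... read data here ...
delete[] data;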

Related

stbir_resize_uint8 crashing on memory access

I'm using stb_image to upload an image to the GPU. If I just upload the image with stbi_load, I can confirm (via NVIDIA Nsight) that the image is correctly stored in GPU memory. However, there are some images I'd like to resize before uploading them to the GPU, and in that case I get a crash. This is the code:
int textureWidth;
int textureHeight;
int textureChannelCount;
stbi_uc* pixels = stbi_load(fullPath.string().c_str(), &textureWidth, &textureHeight, &textureChannelCount, STBI_rgb_alpha);
if (!pixels) {
    char error[512];
    sprintf_s(error, "Failed to load image %s!", pathToTexture);
    throw std::runtime_error(error);
}
stbi_uc* resizedPixels = nullptr;
uint32_t imageSize = 0;
if (scale > 1.0001f || scale < 0.9999f) {
    stbir_resize_uint8(pixels, textureWidth, textureHeight, 0, resizedPixels, textureWidth * scale, textureHeight * scale, 0, textureChannelCount);
    stbi_image_free(pixels);
    textureWidth *= scale;
    textureHeight *= scale;
    imageSize = textureWidth * textureHeight * textureChannelCount;
} else {
    resizedPixels = pixels;
    imageSize = textureWidth * textureHeight * textureChannelCount;
}
// Upload the image to the gpu
When this code is run with scale set to 1.0f, it works fine. However, when I set the scale to 0.25f, the program crashes in method stbir_resize_uint8. The image I'm providing in both cases is a 1920x1080 RGBA PNG. Alpha channel is set to 1.0f across the whole image.
Which function do I have to use to resize the image?
EDIT: If I allocate the memory myself, the function no longer crashes and works fine. But I thought stb handled all memory allocation internally. Was I wrong?
I see you found and solved the problem in your edit, but here's some useful advice anyway:
It seems like the comments in the source (which are also the documentation) don't explicitly mention that you have to allocate the memory for the resized image yourself, but it becomes clear when you take a closer look at the function's signature:
STBIRDEF int stbir_resize_uint8( const unsigned char *input_pixels , int input_w , int input_h , int input_stride_in_bytes,
unsigned char *output_pixels, int output_w, int output_h, int output_stride_in_bytes,
int num_channels);
Think about how you yourself would return the address of a memory chunk that you allocated in a function. The easiest would be to return the pointer directly like so:
unsigned char* allocate_memory( int size )
{
    return (unsigned char*) malloc(size);
}
However, the return value seems to be reserved for error codes, so your only option is to set the pointer as a side effect. To do that, you'd need to pass a pointer to it (a pointer to a pointer):
int allocate_memory( unsigned char** pointer_to_array, int size )
{
    *pointer_to_array = (unsigned char*) malloc(size);
    /* Check if allocation was successful and do other stuff... */
    return 0;
}
If you take a closer look at the resize function's signature, you'll notice that there's no such parameter passed, so there's no way for it to return the address of internally allocated memory. (unsigned char* output_pixels instead of unsigned char** output_pixels). As a result, you have to allocate the memory for the resized image yourself.
I hope this helps you in the future.
There is a mention of memory allocation in the docs but as far as I understand, it's about allocations required to perform the resizing, which is unrelated to the output.
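In other words, the output buffer has to exist before the call. A hedged sketch of what the fixed branch could look like, using the variable names from the question (note that stbi_load with STBI_rgb_alpha always returns 4-channel data, whatever it writes into textureChannelCount):
int outWidth = (int)(textureWidth * scale);
int outHeight = (int)(textureHeight * scale);
// allocate the destination buffer yourself; 4 channels because of STBI_rgb_alpha
stbi_uc* resizedPixels = (stbi_uc*)malloc((size_t)outWidth * outHeight * 4);
if (resizedPixels != nullptr) {
    stbir_resize_uint8(pixels, textureWidth, textureHeight, 0,
                       resizedPixels, outWidth, outHeight, 0, 4);
}
stbi_image_free(pixels);
textureWidth = outWidth;
textureHeight = outHeight;
imageSize = textureWidth * textureHeight * 4;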

iOS 8 CGContextRef unsupported parameter combination

Anyone know how to update this code for iOS 8? I am getting this error message:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaPremultipliedFirst; 4294967289 bytes/row.
CGContextRef CreateBitmapContenxtFromSizeWithData(CGSize s, void* data)
{
    int w = s.width, h = s.height;
    int bitsPerComponent = 8;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int components = 4;
    int bytesPerRow = (w * bitsPerComponent * components + 7)/8;
    CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return result;
}
Bytes per row is calculated incorrectly in the snippet above.
To calculate the bytes per row, you can just take the width of your image and multiply it by the number of bytes per pixel, which is four in your case.
int bytesPerRow = w * 4;
Be careful, though: if data points to image data stored as RGB, you have three bytes per pixel. You will also need to pass the kCGImageAlphaNoneSkipFirst flag as the last parameter to CGBitmapContextCreate so the alpha channel is omitted.
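With the bytes-per-row fix applied, the helper from the question could look roughly like this (an untested sketch that keeps the original ARGB setup; swap in kCGImageAlphaNoneSkipFirst if your buffer has no alpha):
CGContextRef CreateBitmapContenxtFromSizeWithData(CGSize s, void* data)
{
    int w = s.width, h = s.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerRow = w * 4; // 4 bytes per pixel: 8 bits per component, ARGB
    CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return result;
}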

How to Convert the Image to Bitmap in BlackBerry?

In my application, I am getting the Image from the server. Now I want to convert that Image to a Bitmap to display it on the screen. For that I am using the code below, but it is not giving me the same image: the dimensions and clarity don't match, and it comes out smaller than the original image.
I have used the code below:
private Bitmap getBitmapFromImg(Image img) {
    Bitmap bmp = null;
    try {
        Logger.out(TAG, "It is inside the the image conversion " + img);
        Image image = Image.createImage(img);
        byte[] data = BMPGenerator.encodeBMP(image);
        Logger.out(TAG, "It is inside the the image conversion---333333333" + data);
        bmp = Bitmap.createBitmapFromBytes(data, 0, data.length, 1);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return bmp;
}
Here is the BMPGenerator class:
public final class BMPGenerator {

    /**
     * @param image
     * @return
     * @throws IOException
     * @see {@link #encodeBMP(int[], int, int)}
     */
    public static byte[] encodeBMP(Image image) throws IOException {
        int width = image.getWidth();
        int height = image.getHeight();
        int[] rgb = new int[height * width];
        image.getRGB(rgb, 0, width, 0, 0, width, height);
        return encodeBMP(rgb, width, height);
    }
    /**
     * A self-contained BMP generator, which takes a byte array (without any unusual
     * offsets) extracted from an {@link Image}. The target platform is J2ME. You may
     * wish to use the convenience method {@link #encodeBMP(Image)} instead of this.
     * <p>
     * A BMP file consists of 4 parts:
     * <ul>
     * <li>header</li>
     * <li>information header</li>
     * <li>optional palette</li>
     * <li>image data</li>
     * </ul>
     * At this time only 24 bit uncompressed BMPs with Windows V3 headers can be created.
     * Future releases may become much more space-efficient, but will most likely be
     * ditched in favour of a PNG generator.
     *
     * @param rgb
     * @param width
     * @param height
     * @return
     * @throws IOException
     * @see http://en.wikipedia.org/wiki/Windows_bitmap
     */
    public static byte[] encodeBMP(int[] rgb, int width, int height)
            throws IOException {
        int pad = (4 - (width % 4)) % 4;
        // the size of the BMP file in bytes
        int size = 14 + 40 + height * (pad + width * 3);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream(size);
        DataOutputStream stream = new DataOutputStream(bytes);
        // HEADER
        // the magic number used to identify the BMP file: 0x42 0x4D
        stream.writeByte(0x42);
        stream.writeByte(0x4D);
        stream.writeInt(swapEndian(size));
        // reserved
        stream.writeInt(0);
        // the offset, i.e. starting address of the bitmap data
        stream.writeInt(swapEndian(14 + 40));
        // INFORMATION HEADER (Windows V3 header)
        // the size of this header (40 bytes)
        stream.writeInt(swapEndian(40));
        // the bitmap width in pixels (signed integer).
        stream.writeInt(swapEndian(width));
        // the bitmap height in pixels (signed integer).
        stream.writeInt(swapEndian(height));
        // the number of colour planes being used. Must be set to 1.
        stream.writeShort(swapEndian((short) 1));
        // the number of bits per pixel, which is the colour depth of the image.
        stream.writeShort(swapEndian((short) 24));
        // the compression method being used.
        stream.writeInt(0);
        // image size. The size of the raw bitmap data. 0 is valid for uncompressed.
        stream.writeInt(0);
        // the horizontal resolution of the image. (pixel per meter, signed integer)
        stream.writeInt(0);
        // the vertical resolution of the image. (pixel per meter, signed integer)
        stream.writeInt(0);
        // the number of colours in the colour palette, or 0 to default to 2^n.
        stream.writeInt(0);
        // the number of important colours used, or 0 when every colour is important;
        // generally ignored.
        stream.writeInt(0);
        // PALETTE
        // none for 24 bit depth
        // IMAGE DATA
        // starting in the bottom left, working right and then up
        // a series of 3 bytes per pixel in the order B G R.
        for (int j = height - 1; j >= 0; j--) {
            for (int i = 0; i < width; i++) {
                int val = rgb[i + width * j];
                stream.writeByte(val & 0x000000FF);
                stream.writeByte((val >>> 8) & 0x000000FF);
                stream.writeByte((val >>> 16) & 0x000000FF);
            }
            // number of bytes in each row must be padded to multiple of 4
            for (int i = 0; i < pad; i++) {
                stream.writeByte(0);
            }
        }
        byte[] out = bytes.toByteArray();
        bytes.close();
        // quick consistency check
        if (out.length != size)
            throw new RuntimeException("bad math");
        return out;
    }
    /**
     * Swap the Endian-ness of a 32 bit integer.
     *
     * @param value
     * @return
     */
    private static int swapEndian(int value) {
        int b1 = value & 0xff;
        int b2 = (value >> 8) & 0xff;
        int b3 = (value >> 16) & 0xff;
        int b4 = (value >> 24) & 0xff;
        return b1 << 24 | b2 << 16 | b3 << 8 | b4 << 0;
    }

    /**
     * Swap the Endian-ness of a 16 bit integer.
     *
     * @param value
     * @return
     */
    private static short swapEndian(short value) {
        int b1 = value & 0xff;
        int b2 = (value >> 8) & 0xff;
        return (short) (b1 << 8 | b2 << 0);
    }
}
What is wrong in my code? Is there any other way to do the same thing?
I believe you don't need this part:
Image image = Image.createImage(img);
byte[] data = BMPGenerator.encodeBMP(image);
Logger.out(TAG, "It is inside the the image conversion---333333333"+data);
bmp = Bitmap.createBitmapFromBytes(data, 0, data.length, 1);
Just use Image to get the raw ARGB data:
int width = image.getWidth();
int height = image.getHeight();
int[] argbData = new int[height * width];
image.getRGB(argbData, 0, width, 0, 0, width, height);
Then use argbData to create a Bitmap:
Bitmap b = new Bitmap(width, height);
b.setARGB(argbData, 0, width, 0, 0, width, height);
Thanks Armihed for your reply. I was able to resolve this in a different way. I am using these four lines of code:
Image image = Image.createImage(img);
byte[] data = BMPGenerator.encodeBMP(image);
EncodedImage encodeimage = EncodedImage.createEncodedImage(data, 0, data.length);
bmp = encodeimage.getBitmap();
Now I am getting the same quality image. I did not check your solution; maybe that also works.

Drawing RAW buffer to CGBitmapContext

I have a raw image buffer in RGB format. I need to draw it into a CGContext so that I get a new buffer in ARGB format. I currently accomplish this in the following way:
I create a data provider from the raw buffer using CGDataProviderCreateWithData, and then create an image from the data provider with CGImageCreate.
I then draw this image back into the CGBitmapContext using CGContextDrawImage.
Instead of creating an intermediate image, is there any way of writing the buffer directly to the CGContext so that I can avoid the image creation phase?
Thanks
If all you want is to take RGB data with no alpha component and turn it into ARGB data with full opacity (alpha = 1.0 at all points), why not just copy the data yourself into a new buffer?
// assuming 24-bit RGB (1 byte per color component)
unsigned char *rgb = /* ... */;
size_t rgb_bytes = /* ... */;
const size_t bpp_rgb = 3; // bytes per pixel - rgb
const size_t bpp_argb = 4; // bytes per pixel - argb
const size_t npixels = rgb_bytes / bpp_rgb;
unsigned char *argb = malloc(npixels * bpp_argb);
for (size_t i = 0; i < npixels; ++i) {
    const size_t argbi = bpp_argb * i;
    const size_t rgbi = bpp_rgb * i;
    argb[argbi] = 0xFF; // alpha - full opacity
    argb[argbi + 1] = rgb[rgbi]; // r
    argb[argbi + 2] = rgb[rgbi + 1]; // g
    argb[argbi + 3] = rgb[rgbi + 2]; // b
}
If you are using a CGBitmapContext then you can get a pointer to the bitmap buffer using the CGBitmapContextGetData() function. You can then write your data directly to the buffer.
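A minimal sketch of that second approach, assuming ctx is an 8-bits-per-component ARGB bitmap context and rgb, width and height describe the raw RGB buffer (untested):
unsigned char *dst = (unsigned char *)CGBitmapContextGetData(ctx);
const size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx);
for (size_t y = 0; y < height; ++y) {
    unsigned char *row = dst + y * bytesPerRow;          // respect the context's row pitch
    for (size_t x = 0; x < width; ++x) {
        const unsigned char *src = rgb + (y * width + x) * 3;
        row[4 * x]     = 0xFF;    // A - full opacity
        row[4 * x + 1] = src[0];  // R
        row[4 * x + 2] = src[1];  // G
        row[4 * x + 3] = src[2];  // B
    }
}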

Extracting 32-bit RGBA value from NSColor

I've got an NSColor, and I really want the 32-bit RGBA value that it represents. Is there any easy way to get this, besides extracting the float components, then multiplying and ORing and generally doing gross, endian-dependent things?
Edit: Thanks for the help. Really, what I was hoping for was a Cocoa function that already did this, but I'm cool with doing it myself.
Another more brute force approach would be to create a temporary CGBitmapContext and fill with the color.
NSColor *someColor = {whatever};
uint8_t data[4];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate((void*)data, 1, 1, 8, 4, colorSpace, kCGImageAlphaFirst | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ctx, [someColor redComponent], [someColor greenComponent], [someColor blueComponent], [someColor alphaComponent]);
CGContextFillRect(ctx, CGRectMake(0,0,1,1));
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
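With that setup (alpha first, 32-bit big-endian byte order), data[0] through data[3] should hold the A, R, G, B bytes after the fill; if you want them packed into a single 32-bit value, something like this should do (untested):
// pack the four bytes written by the 1x1 fill into one ARGB value
uint32_t argb = ((uint32_t)data[0] << 24) | ((uint32_t)data[1] << 16) |
                ((uint32_t)data[2] << 8)  |  (uint32_t)data[3];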
FWIW, there are no endian issues with an 8-bit-per-component color value. Endianness only comes into play with 16-bit or larger integers. You can lay out the memory any way you want, and the 8-bit integer values are the same on a big-endian or little-endian machine. (ARGB is the default 8-bit format for Core Graphics and Core Image, I believe.)
Why not just this?:
uint32_t r = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor redComponent])) * 255.0f);
uint32_t g = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor greenComponent])) * 255.0f);
uint32_t b = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor blueComponent])) * 255.0f);
uint32_t a = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor alphaComponent])) * 255.0f);
uint32_t value = (r << 24) | (g << 16) | (b << 8) | a;
Then you know exactly how it is laid out in memory.
Or this, if it's clearer to you:
uint8_t r = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor redComponent])) * 255.0f);
uint8_t g = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor greenComponent])) * 255.0f);
uint8_t b = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor blueComponent])) * 255.0f);
uint8_t a = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor alphaComponent])) * 255.0f);
uint8_t data[4];
data[0] = r;
data[1] = g;
data[2] = b;
data[3] = a;
Not all colors have an RGBA representation. They may have an approximation in RGBA, but that may or may not be accurate. Furthermore, there are "colors" that are drawn by Core Graphics as patterns (for example, the window background color on some releases of Mac OS X).
Converting the 4 floats to their integer representation, however you want to accomplish that, is the only way.
