I noticed when I try to run BitBlt, the resulting data buffer is unexpected in two ways:
It is flipped along the y axis (the origin seems to be bottom left instead of top left)
In each RGBA grouping, the R and B values seem to be switched.
I noticed the first issue when testing against my command prompt: with the prompt in the upper left portion of the screen, the tool only reported black when my cursor was in the lower left portion. I worked around the y-axis inversion by changing int offset = (y * monitor_width + x) * 4; to int offset = ((monitor_height - 1 - y) * monitor_width + x) * 4;. That fixed the pixel location issue, since black now showed up where I expected black.
However, the colors were still wrong. I tested by sampling pixels of known color and noticed every blue pixel had a very high R value and every red pixel had a very high B value. That's when I compared against an existing tool I had and found that the red and blue values appear to be swapped in every pixel. At first I thought it was a byte-order or alignment issue, so I also verified against a cluster of non-uniform pixels to make sure I was sampling the right position; the positions were exactly right, just with the colors swapped.
Full simplified code below (originally my tool was getting my cursor position and printing the pixel color via hotkey press; this is a simplified version that gets one specific point).
#include <windows.h>
#include <format>
#include <iostream>

BYTE* my_pixel_data;
HDC hScreenDC = GetDC(GetDesktopWindow());
int BitsPerPixel = GetDeviceCaps(hScreenDC, BITSPIXEL);
HDC hMemoryDC = CreateCompatibleDC(hScreenDC);
int monitor_width = GetSystemMetrics(SM_CXSCREEN);
int monitor_height = GetSystemMetrics(SM_CYSCREEN);
std::cout << std::format("monitor width height: {}, {}\n", monitor_width, monitor_height);
BITMAPINFO info = {}; // zero-initialize so the header fields we don't set are 0
info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
info.bmiHeader.biWidth = monitor_width;
info.bmiHeader.biHeight = monitor_height;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = BitsPerPixel;
info.bmiHeader.biCompression = BI_RGB;
HBITMAP hbitmap = CreateDIBSection(hMemoryDC, &info, DIB_RGB_COLORS, (void**)&my_pixel_data, 0, 0);
SelectObject(hMemoryDC, hbitmap);
BitBlt(hMemoryDC, 0, 0, monitor_width, monitor_height, hScreenDC, 0, 0, SRCCOPY);
int x = 12, y = 12;
int offset = ((monitor_height - 1 - y) * monitor_width + x) * 4; // with the manual y flip described above
std::cout << std::format("debug: ({}, {}): ({}, {}, {})\n", x, y, (int)my_pixel_data[offset], (int)my_pixel_data[offset + 1], (int)my_pixel_data[offset + 2]);
system("pause");
The output of this will be debug: (12, 12): (199, 76, 133) even though another program has verified the colors are actually (133, 76, 199).
I can easily fix this in my code by flipping the y axis and swapping each R and B value, and the program will work perfectly well. However, I am baffled by how this happened, and I wonder whether there's a more elegant fix.
I can answer the RGB part (and it looks like Hans answered the inverted y axis in a comment). Remember that a pixel is stored as 0xAARRGGBB: in that 32-bit little-endian value, BB is byte 0, GG is byte 1, RR is byte 2 (and AA is byte 3, if you use it). So when you index in at +0, +1 and +2, the data is correct; you're just reading B, G, R in that order. When we say "RGB" we name the channels in the opposite order from how they sit in memory.
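To make that concrete, here is a minimal sketch (reusing the variable names from the question) that reads one pixel in the right channel order. It also uses a negative biHeight, which requests a top-down DIB and removes the y flip entirely; that part assumes you are happy to change the bitmap layout rather than compensate in the offset math:

info.bmiHeader.biHeight = -monitor_height; // negative height = top-down DIB, origin at top left
...
int offset = (y * monitor_width + x) * 4; // no manual flip needed with a top-down DIB
BYTE b = my_pixel_data[offset + 0]; // byte 0 is blue
BYTE g = my_pixel_data[offset + 1]; // byte 1 is green
BYTE r = my_pixel_data[offset + 2]; // byte 2 is red
BYTE a = my_pixel_data[offset + 3]; // byte 3 is alpha (not meaningful after BitBlt)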
I was analyzing a 12-bit-per-pixel, GRBG, little-endian, 1920x1280 raw image, but I am confused about how the data/RGB pixels are stored. The image size is 4915200 bytes, and 4915200 / (1920 x 1280) = 2. That means each pixel takes 2 bytes, with 4 of the 16 bits used for padding. I tried to edit the image with a hex editor, but I have no idea how the pixels are stored in the image. Please share if you have any idea.
Image Link
That means each pixel takes 2 bytes, with 4 of the 16 bits used for padding
Well, sort of. It means each sample is stored in two consecutive bytes, with 4 bits of padding. But in raw images, samples usually aren't pixels, not exactly; raw images have not been demosaiced yet, they are raw after all. For GRBG, the 2x2 tile of the Bayer pattern looks like this:

G R
B G
What's in the file is a 1920x1280 grid of 12+4-bit samples, arranged in the same order the pixels would have been, but each sample has only one channel, namely the one that corresponds to its position in the Bayer pattern.
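So pulling a single sample out of the buffer looks like this (a sketch; it assumes the 12 bits occupy the low bits of each little-endian 16-bit word, which matches how the decoders below treat the data):

// byte offset of the sample at (x, y): two bytes per sample, 1920 samples per row
size_t off = (size_t)(y * 1920 + x) * 2;
unsigned value = data[off] | (data[off + 1] << 8); // little-endian 16-bit read
// 'value' is the 12-bit sample for whichever channel (G, R, B or G)
// the Bayer position (x % 2, y % 2) selects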
Additionally, the color space is probably linear, not gamma-compressed. The color balance is unknown unless you reverse-engineer it. A proper decoder would have a calibrated color matrix, but I don't have one.
I combined these two things, guessed a color balance, and did a really basic decoding (with bad demosaicing, just to demonstrate that the above information is probably accurate), using this C# code:
Bitmap bm = new Bitmap(1920, 1280);
for (int y = 0; y < 1280; y += 2)
{
    int i = y * 1920 * 2; // byte offset of this pair of rows (2 bytes per sample)
    for (int x = 0; x < 1920; x += 2)
    {
        const int stride = 1920 * 2; // one row of samples, in bytes
        // read the four little-endian samples of one 2x2 GRBG tile
        int d0 = data[i] + (data[i + 1] << 8);                       // G (top left)
        int d1 = data[i + 2] + (data[i + 3] << 8);                   // R (top right)
        int d2 = data[i + stride] + (data[i + stride + 1] << 8);     // B (bottom left)
        int d3 = data[i + stride + 2] + (data[i + stride + 3] << 8); // G (bottom right)
        i += 4;
        // sqrt approximates gamma compression; the scale factors are a guessed color balance
        int r = Math.Min((int)(Math.Sqrt(d1) * 4.5), 255);
        int b = Math.Min((int)(Math.Sqrt(d2) * 9), 255);
        int g0 = Math.Min((int)(Math.Sqrt(d0) * 5), 255);
        int g3 = Math.Min((int)(Math.Sqrt(d3) * 5), 255);
        int g1 = Math.Min((int)(Math.Sqrt((d0 + d3) * 0.5) * 5), 255);
        // write the tile back out, reusing r and b across all four pixels (the bad demosaicing)
        bm.SetPixel(x, y, Color.FromArgb(r, g0, b));
        bm.SetPixel(x + 1, y, Color.FromArgb(r, g1, b));
        bm.SetPixel(x, y + 1, Color.FromArgb(r, g1, b));
        bm.SetPixel(x + 1, y + 1, Color.FromArgb(r, g3, b));
    }
}
You can load your image into a NumPy array and reshape it correctly like this:
import numpy as np

# Load image and reshape
img = np.fromfile('Image_12bpp_grbg_LittleEndian_1920x1280.raw', dtype=np.uint16).reshape((1280, 1920))
print(img.shape)   # (1280, 1920)
Then you can demosaic and scale to get a 16-bit PNG. Note that I don't know your calibration coefficients so I guessed:
#!/usr/bin/env python3
# Demosaicing Bayer Raw image
# https://stackoverflow.com/a/68823014/2836621
import cv2
import numpy as np
filename = 'Image_12bpp_grbg_LittleEndian_1920x1280.raw'
# Set width and height
w, h = 1920, 1280
# Read mosaiced image as GRGRGR...
# BGBGBG...
bayer = np.fromfile(filename, dtype=np.uint16).reshape((h,w))
# Extract g0, g1, b, r from mosaic
g0 = bayer[0::2, 0::2] # every second pixel down and across starting at 0,0
g1 = bayer[1::2, 1::2] # every second pixel down and across starting at 1,1
r = bayer[0::2, 1::2] # every second pixel down and across starting at 0,1
b = bayer[1::2, 0::2] # every second pixel down and across starting at 1,0
# Apply (guessed) color matrix for 16-bit PNG
R = np.sqrt(r) * 1200
B = np.sqrt(b) * 2300
G = np.sqrt((g0+g1)/2) * 1300 # very crude
# Stack into 3 channel
BGR16 = np.dstack((B,G,R)).astype(np.uint16)
# Save result as 16-bit PNG
cv2.imwrite('result.png', BGR16)
Keywords: Python, raw, image processing, Bayer, de-Bayer, mosaic, demosaic, de-mosaic, GRBG, 12-bit.
[The final fix, which works unconditionally: use SetDIBitsToDevice, not BitBlt, to copy out the post-text-draw image data. With this change, all occurrences of the problem are gone.]
I fixed the problem I'm having, but for the life of me I can't figure out why it occurred.
1. Create a bitmap with CreateDIBitmap. Get a pointer to the bitmap bits.
2. Select the bitmap into a memory DC.
3. Background-fill the bitmap by directly writing the bitmap memory.
4. TextOut.
5. No text displays.
What fixed the problem: changing item 3 from a direct fill to a call to FillRect. All is well; it works perfectly.
This is under Windows 10, but from what little I could find on the web, it spans all versions of Windows. NO operations work on the bitmap after the manual write, not even FillRect. No savvy, Kimosabe. Elsewhere in the app I even build gradient fills by directly writing to that bitmap memory, with no problem. But once TextOut is called after the manual fill, the bitmap is effectively locked and no further functions work on it, nor do any of them return an error.
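One thing I haven't ruled out (so treat this as an untested guess): GDI batches drawing calls, and the CreateDIBSection documentation says that anyone mixing GDI drawing with direct writes to the section's memory must call GdiFlush() to synchronize the two. The ordering I'd test looks like this (FillDirectly is a made-up stand-in for my manual fill, memDC for the memory DC):

GdiFlush();                          // flush any batched GDI drawing before touching the bits
FillDirectly(pvBits, width, height); // hypothetical stand-in for the manual background fill
TextOut(memDC, 0, 0, TEXT("test"), 4);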
I'm using a font with a 90-degree escapement. I have not tried it with a "normal" font (0-degree escapement). DrawTextEx with DT_CALCRECT specifically states it only works with 0-degree-escapement fonts, so I had to use TextOut for this reason.
Very bizarre.
No, there were no stupid mistakes like using the same text color as the background color; I've spent too long on this for that. One option: the endless energy that would normally be spent destroying the question and/or the person who asked it could instead be spent writing a few lines of code and trying it for yourself.
Here's a function to make a bitmap. Don't pass a plain colour, pass a gradient fill, say going from white to pinkish.
Does it display correctly? If so, does the TextOut call on top of that work?
static HBITMAP MakeBitmap(unsigned char *rgba, int width, int height, VOID **buff)
{
    VOID *pvBits; // pointer to DIB section
    HBITMAP answer;
    BITMAPINFO bmi;
    HDC screenDC, hdc;
    int x, y;
    int red, green, blue, alpha;

    // setup bitmap info (zeroed first so the fields we don't set below are valid)
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = height; // positive height = bottom-up DIB
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32; // four 8-bit components
    bmi.bmiHeader.biCompression = BI_RGB;
    bmi.bmiHeader.biSizeImage = width * height * 4;

    screenDC = GetDC(0);
    hdc = CreateCompatibleDC(screenDC);
    ReleaseDC(0, screenDC); // don't leak the screen DC
    answer = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &pvBits, NULL, 0x0);

    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width; x++)
        {
            red = rgba[(y*width + x) * 4];
            green = rgba[(y*width + x) * 4 + 1];
            blue = rgba[(y*width + x) * 4 + 2];
            alpha = rgba[(y*width + x) * 4 + 3];
            // premultiply by alpha (>> 8 is a cheap approximation of / 255)
            red = (red * alpha) >> 8;
            green = (green * alpha) >> 8;
            blue = (blue * alpha) >> 8;
            // write as 0xAARRGGBB; flip y because a bottom-up DIB stores the bottom row first
            ((UINT32 *)pvBits)[(height - y - 1) * width + x] = (alpha << 24) | (red << 16) | (green << 8) | blue;
        }
    }
    DeleteDC(hdc);

    *buff = pvBits;
    return answer;
}
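If you want to run that test without wiring it into an app, a throwaway harness like this should do; everything in it (the 256x256 size, the gradient, the text) is made up for illustration:

// Build a white-to-pinkish gradient, wrap it in a bitmap, then TextOut onto it.
unsigned char *rgba = (unsigned char *)malloc(256 * 256 * 4);
for (int y = 0; y < 256; y++)
    for (int x = 0; x < 256; x++)
    {
        unsigned char *p = rgba + (y * 256 + x) * 4;
        p[0] = 255;         // red stays full
        p[1] = 255 - y / 2; // green falls off toward the bottom
        p[2] = 255 - y / 2; // blue falls off too, giving white -> pinkish
        p[3] = 255;         // fully opaque
    }
VOID *bits;
HBITMAP hbm = MakeBitmap(rgba, 256, 256, &bits);
HDC memDC = CreateCompatibleDC(NULL);
HGDIOBJ old = SelectObject(memDC, hbm);
TextOutA(memDC, 10, 10, "does this survive?", 18); // then blit memDC somewhere visible to check
SelectObject(memDC, old);
DeleteDC(memDC);
DeleteObject(hbm);
free(rgba);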
The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window, but translated, rotated, and scaled. I am trying to create an effect like the feedback loop you get when you point a camera that's plugged into a TV at that TV.
I have tried everything I can think of and logged every variable I could, and it still seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Pic of the double image, mirrored about the x axis:
PGraphics f;
int rect_size;
int midX;
int midY;
void setup(){
size(1000, 1000, P2D);
f = createGraphics(width, height, P2D);
midX = width/2;
midY = height/2;
rect_size = 300;
imageMode(CENTER);
rectMode(CENTER);
smooth();
background(0,0,0);
fill(0,0);
stroke(255,255);
}
void draw(){
fade_and_copy_pixels(f); //fades window pixels and then copies pixels to f
background(0,0,0); // without this the corners don't get repainted.
//transform display window (instead of f)
pushMatrix();
float scaling = 0.90; // x>1 makes image bigger
float rot = 5; //angle in degrees
translate(midX,midY); //makes it so rotations are always around the center
rotate(radians(rot));
scale(scaling);
imageMode(CENTER);
image(f,0,0); // weird double image; whatever is wrong must be around here
popMatrix();//returns window matrix to normal
int x = mouseX;
int y = mouseY;
rectMode(CENTER);
rect(x,y,rect_size,rect_size);
}
//fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f){
loadPixels(); // load the window's pixels. (Maybe not needed, since I am only reading them?)
f.loadPixels(); //loads feedback loops pixels
// Loop through every pixel in window
// it is faster to grab data from the pixels[] array, so don't use get() and set()
for (int i = 0; i < pixels.length; i++) {
//////////////FADE PIXELS in window and COPY to f:///////////////
color p = pixels[i];
//get color values, mask then shift
int r = (p & 0x00FF0000) >> 16;
int g = (p & 0x0000FF00) >> 8;
int b = p & 0x000000FF; //no need for shifting
// reduce the value of each color proportionally
// fade_percent is between 0 and 1: 0 means no fade, 1 removes everything at once
// the minimum useful value is about 0.0039 (when using floor() and a 255 colorMode)
float fade_percent = 0.005; // 0.005 = 0.5%
int r_new = floor(float(r) - (float(r) * fade_percent));
int g_new = floor(float(g) - (float(g) * fade_percent));
int b_new = floor(float(b) - (float(b) * fade_percent));
// maybe later rewrite to track the remainder and round adaptively, e.g. faster at first and slower later
// round() doesn't work because it never subtracts the first 1 to get the ball rolling
// floor() always subtracts at least 1 from each nonzero value each pass; can't just subtract 1 every n loops
// keep a list of all the pixels as floats? too much memory?
// I'll stick with floor for now
// the lowest percent that makes a difference with floor is ~0.0039, slightly more than 1/255
//shift back and or together
p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new; // or-ing all the new hex together back into AARRGGBB
f.pixels[i] = p;
////////pixels now copied
}
f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;
void setup() {
size(500, 500, P2D);
f = createGraphics(width, height, P2D);
}
void draw() {
background(0);
rect(mouseX, mouseY, 100, 100);
copyPixels(f);
image(f, 0, 0);
}
void copyPixels(PGraphics f) {
loadPixels();
f.loadPixels();
for (int i = 0; i < pixels.length; i++) {
color p = pixels[i];
f.pixels[i] = p;
}
f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, they go from screen space to OpenGL space... or something. That definitely seems buggy, though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier fix, and it works like a charm:
add f.beginDraw(); before and f.endDraw(); after using f:
loadPixels(); // load the window's pixels (only reading them here)
f.loadPixels(); // load the feedback buffer's pixels
// Loop through every pixel in the window
// (faster to grab data from the pixels[] array than to use get() and set())
f.beginDraw();
and
f.updatePixels();
f.endDraw();
Processing must know when it is drawing into a buffer and when it is not.
In this image you can see that it works:
I need to import a PNG and display it on screen in a Motif application. For reasons best known to myself, I don't want to use any more libraries than I need to, and I'd like to stick with just Motif and pnglib.
I've been battling with this for a couple of days now, and I'd like to put aside my pride and ask for some help. This screenshot shows the problem:
https://s3.amazonaws.com/gtrebol264929/pnglib_fail.png
The window on the right shows what the image should look like; the window on the left is my Motif application showing how it renders in my app. Clearly I've got the image data more or less intact, as the basic content of the picture is recognizable. But just as clearly, I've messed up how I move the pixel data from pnglib into an XImage. Below is my code:
char * xdata = malloc(width * height * (channels + 1));
memset(xdata,100,width * height * channels);
int colc = 0;
int bytec = 0;
while (colc < width) {
int rowc = 0;
while(rowc < height) {
png_byte * row = png.row_pointers[rowc];
memcpy(&xdata[bytec],&row[colc],1);
bytec += 4;
rowc += 1;
}
colc += 1;
}
XImage * img = XCreateImage(display, CopyFromParent, depth * channels, ZPixmap, 0, xdata, width, height, 32, bytes_per_line);
printf("PNG %ix%i (depth: %i x %i) img: %p\n",width,height,depth,channels,img);
XPutImage (display, win, gc, img, 0, 0, 0, 0, width, height); // 0, 0, 0, 0 are src x,y and dst x,y
png.row_pointers is the pixel data from pnglib.
I'm pretty sure I've just misunderstood how the pixel data is stored, but I can't quite work out what I've done wrong. Any help is very much appreciated.
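For reference, this is the layout I believe XCreateImage expects for a 32-bit ZPixmap (a sketch, assuming an 8-bit-per-channel PNG and a little-endian BGRX visual; I haven't verified this against my display):

int bytec = 0;
for (int rowc = 0; rowc < height; rowc++) { // rows are contiguous, so iterate them in the outer loop
    png_byte * row = png.row_pointers[rowc];
    for (int colc = 0; colc < width; colc++) {
        png_byte * px = &row[colc * channels]; // each pixel spans 'channels' bytes within the row
        xdata[bytec + 0] = px[2]; // blue
        xdata[bytec + 1] = px[1]; // green
        xdata[bytec + 2] = px[0]; // red
        xdata[bytec + 3] = 0;     // unused / padding
        bytec += 4;
    }
}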
All the best
Garry
With the code snippet below I create a scene with 100,000 rectangles.
The performance is fine; the view responds with no delays.
QGraphicsScene * scene = new QGraphicsScene;
for (int y = -50000; y < 50000; y++) {
scene->addRect(0, y * 25, 40, 20);
}
...
view->setScene(scene);
And now the 2nd snippet sucks:
for (int y = 0; y < 100000; y++) {
scene->addRect(0, y * 25, 40, 20);
}
For the 1st half of the scene elements, the view is slow to respond to mouse and key events, while for the other half it seems to be OK?!?
The former scene has sceneRect (x, y, w, h) = (0, -1250000, 40, 2499995).
The latter scene has sceneRect (x, y, w, h) = (0, 0, 40, 2499995).
I don't know why the sceneRect affects performance, since the BSP index is based on relative item coordinates. Am I missing something? I didn't find any information in the documentation. Also, the Qt demo 40000 Chips distributes its elements around (0, 0) without explaining the reason for that choice:
// Populate scene
int xx = 0;
int nitems = 0;
for (int i = -11000; i < 11000; i += 110) {
++xx;
int yy = 0;
for (int j = -7000; j < 7000; j += 70) {
++yy;
qreal x = (i + 11000) / 22000.0;
qreal y = (j + 7000) / 14000.0;
...
I have a solution for you, but promise not to ask me why it works, because I really don't know :-)
QGraphicsScene * scene = new QGraphicsScene;
// Define a fake symmetrical scene rectangle
scene->setSceneRect(0, -(25*100000+20), 40, 2 * (25*100000+20) );
for (int y = 0; y < 100000; y++) {
scene->addRect(0, y * 25, 40, 20);
}
view->setScene(scene);
// Tell the view to display only the actual scene-objects area
view->setSceneRect(0, 0, 40, 25*100000+20);
For the common case, the default index method BspTreeIndex works fine. If your scene uses many animations and you are experiencing slowness, you can disable indexing by calling setItemIndexMethod(NoIndex). (Qt doc)
You will need to call setItemIndexMethod(QGraphicsScene::NoIndex) before insertion:
scene->setItemIndexMethod(QGraphicsScene::NoIndex);
for (int y = 0; y < 100000; y++) {
scene->addRect(0, y * 25, 40, 20);
}
//...
It could be due to loss of precision with float. A 32-bit float has a 23-bit mantissa (or significand), 1 sign bit, and 8 exponent bits. It works like scientific notation: you have 23 significant bits (really 24, due to an implicit leading 1) and a power-of-two exponent 2^exp, where exp can range from -126 to 127 (the remaining values are reserved for things like NaN and Inf). So you can represent really large numbers, like 2^24 * 2^127, but the closest floating-point number to that one is (2^24 - 1) * 2^127, which is 170 billion billion billion billion away. If you try to add a smaller amount (like 1000) to such a number, it doesn't change, because the format has no way to represent the difference.
This becomes significant in computer graphics because you need some of your significant bits left over to make a fractional part. When your scene ranges up to 1250000.0, you can add 0.1 to that and get (approximately) 1250000.1. If you take 2500000.0 + 0.1, you get exactly 2500000.0. The problem is magnified by any scaling or rotation that occurs, and it can lead to obvious visual problems if you actually fly out to those coordinates and look at your scene.
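For example, a quick check in C++ (this is standard IEEE-754 single-precision behavior, so the results are predictable):

#include <cstdio>

int main() {
    float a = 1250000.0f;
    float b = 2500000.0f;
    printf("%.3f\n", a + 0.1f); // prints 1250000.125: changed, since floats are 0.125 apart here
    printf("%.3f\n", b + 0.1f); // prints 2500000.000: unchanged, floats here are 0.25 apart and 0.1 rounds away
    return 0;
}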
Why does centering around 0 help? Because there's a separate sign bit in the floating-point representation. In floating point there are "more numbers" between -x and +x than there are between 0 and 2x. If I'm right, it would also work if you simply scaled your entire scene down by 1/2. That moves the most significant bit down by one, leaving it free for precision on the other end.
Why would this lead to poor performance? I can only speculate without reading the Qt source, but consider a data structure that stores objects by location: what might you have to do differently when two objects touch (or overlap) due to loss of precision that you didn't have to do when they did not overlap?