QImage creation - image

I have started working with images in the radiolocation field using the Qt library, and I have some questions. I have to create a black-and-white QImage from a byte array of 0s and 1s like this:
0000000000000000000000
0000001100000000000000
0000001111000000000000
0000011111110000000000
0000011111111110000000
0000000111111111000000
I do
QImage pIm((uchar *)bIm.constData(), width, height, nBitsPerLine, QImage::Format_Mono);
where 0 should be black and 1 white, but the image is incorrect. How should I transform the colors of this image?

I transform the data into pixels like this:
QImage pIm(nWidth, nHeight, QImage::Format_ARGB32);
ncount = 0;
for (uint i = 0; i < nWidth; i++)
{
    for (uint j = 0; j < nHeight; j++)
    {
        uint c = (uchar)imData[ncount++];
        c *= 255;
        pIm.setPixel(i, j, qRgb(c, c, c));
    }
}
Earlier I thought pixel colors could be described by values normalized to 1.0, but that assumption was not correct, so I transform them to the 0-255 range.
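As a side note on the original Format_Mono attempt: Format_Mono packs 8 pixels into each byte, so a buffer that stores one whole byte per pixel (each 0 or 1) is misread by it. A minimal sketch of a byte-per-pixel alternative, assuming the imData, width and height names from the post, is Format_Indexed8 with a two-entry color table:
QImage img(width, height, QImage::Format_Indexed8);
img.setColorTable({qRgb(0, 0, 0), qRgb(255, 255, 255)}); // index 0 = black, 1 = white
for (int y = 0; y < height; ++y)
{
    uchar *line = img.scanLine(y); // scanLine() accounts for any row padding QImage adds
    for (int x = 0; x < width; ++x)
        line[x] = imData[y * width + x]; // each source byte is 0 or 1
}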


PImage background color specified not working - SOLVED, see below

I am working my way through Ira Greenberg's original version of "Processing: Creative Coding and Computational Art." On page 195 there is an example of using the PImage class to draw lines with pixels. The code is fairly simple and does not even use the setup and draw functions. My problem is that I cannot get the sketch to work with a white background and black lines; I have to change it to a black background and white lines. Note that the background() call should make it white, but it turns out black. I cannot find a relevant example online. Here is the code; the changed lines (5, 9, and 10) are marked with comments.
/* program: p195_lines_with_pixels.pde
   This program shows how to create lines with pixels. This seems like it has been covered.
   However, this program uses the PImage library. */
size(500, 300);
background(255); // not working as specified
// used by diagonal lines
float slope = float(height)/float(width);
PImage img = createImage(width, height, RGB);
//color c = color(0, 0, 0); // original code
color c = color(255, 255, 255); // it works now but I do not know why - and inverted color
// horizontal line
for (int i = 0; i < width; i++) {
  img.set(i, height/2, c);
}
// vertical line
for (int i = 0; i < height; i++) {
  img.set(width/2, i, c);
}
// diagonal line (TL-BR)
for (float i = 0; i < width; i++) {
  img.set(int(i), int(i*slope), c);
}
// diagonal line (BL-TR)
for (float i = 0; i < width; i++) {
  img.set(int(i), int(height - i*slope), c);
}
image(img, 0, 0);
By looking at other examples online, for instance the PImage tutorials found on the Processing site, I figured out what was going on. The original code does not work as intended with current Java and Processing, and, as per usual, making mistakes can often lead to figuring out how things work. Code placement in the sketch can affect how the sketch looks. Specifying the slope as a global variable did not work properly, and I ended up putting it in the draw function: the sketch could not find the height and width when slope was specified as a global variable.
And image(img, 0, 0) places the image at the (0, 0) starting point. So, if you have an image that is smaller than the window, you can see that it does not fill the whole window. This is what led me to realize that the slope was not getting the correct values when it was initialized: it was a 1/1 slope when it should have been less. Below is the code that I ended up writing to work with this example:
/* program: p195_lines_with_pixels_3.pde
   This program uses the PImage class to work with images. It allows for
   direct manipulation of pixels in the image. You can use an image from
   an external source or create an image in the sketch. Here, the image
   had to be created in the sketch.
   This sketch was originally written as an example in Ira Greenberg's
   book, "Processing: Creative Coding and Computational Art," 1st ed, 2007.
   The code had to be changed a little, I imagine because some things
   have changed with Java and Processing over the years. The current
   year is 2022. See the original code below. */
// used by diagonal lines
// float slope = float(height)/float(width);
/* I think that part of the problem is that the image created below is
   actually black, not a white background. So, the created image is
   overlaying the image of the background. Maybe the PImage class had
   transparency to it in 2007. */
//PImage img = createImage(width, height, RGB);
PImage img; // declare img here, initialize it below
color c = color(0, 0, 0);
color cBackground = color(255, 255, 255);

void setup() {
  size(500, 300);
  background(255); // technically not necessary here as the image fills the window
}

void draw() {
  // create the image called img
  img = createImage(width, height, RGB);
  // fill the image with white pixels
  float slope = float(height)/float(width);
  for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
      img.set(i, j, cBackground);
    }
  }
  // horizontal line
  for (int i = 0; i < width; i++) {
    img.set(i, height/2, c);
  }
  // vertical line
  for (int i = 0; i < height; i++) {
    img.set(width/2, i, c);
  }
  // diagonal line (TL-BR)
  for (float i = 0; i < width; i++) {
    img.set(int(i), int(i*slope), c);
  }
  // diagonal line (BL-TR)
  for (float i = 0; i < width; i++) {
    img.set(int(i), int(height - i*slope), c);
  }
  image(img, 0, 0);
}
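As a side note, a shorter way to paint the whole image white (a sketch, not part of the original post) is to write the pixels array directly instead of calling set() in a nested loop:
img.loadPixels();
java.util.Arrays.fill(img.pixels, color(255)); // one call instead of width*height set() calls
img.updatePixels();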

Unity texture2d getPixel return wrong color

I'm loading a PNG as a texture:
byte[] bytes = File.ReadAllBytes(path);
Texture2D texture = new Texture2D(1, 1);
texture.LoadImage(bytes);
The problem is with the pixel retrieved by texture.GetPixel(23, 23).
In the debugger it seems to be white; debug log: texture.GetPixel(23, 23) "RGBA(1.000, 1.000, 1.000, 0.000)" UnityEngine.Color
But it should be a kind of blue, according to what I see in the image.
I have no idea how I can get the right value of this pixel.
I put photos here of the textures and of what I obtained by drawing from the pixel colors.
Code used:
public void JustTest()
{
    ClearGrid();
    byte[] bytes = File.ReadAllBytes(path);
    Texture2D texture = new Texture2D(1, 1);
    texture.LoadImage(bytes);
    for (int i = 0; i < texture.width; i++) // valid pixel indices are 0..width-1
    {
        for (int j = 0; j < texture.height; j++)
        {
            GameObject sa = Instantiate(_testTilePrefab, new Vector3(i, 0.2f, j), _testTilePrefab.transform.rotation);
            sa.GetComponent<Renderer>().material.color = texture.GetPixel(i, j);
        }
    }
}
The issue is very likely that textures get reduced in resolution according to what you have configured in the bottom section of the Texture Import Settings -> Platform-specific overrides -> Max Size, which sets the maximum imported texture dimensions in pixels. Artists often prefer to work with huge textures, but you can scale the texture down to a suitable size.
For your tile scene I'm almost sure that your tiles are too big and overlap each other, hence the strange-looking image.
It is also likely a Y-axis index issue: it could be a top-to-bottom versus bottom-to-top flip.
Edit:
You are missing texture.Apply() after loading the byte[]. Without the Apply call, your texture is still white, its default color.
texture.Apply();
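For completeness, a minimal load-and-read sketch combining the suggestions above (path comes from the question; x and y are hypothetical coordinates). Note that Unity's GetPixel treats (0, 0) as the bottom-left corner, while most image editors put (0, 0) at the top-left, so a vertical flip may be needed when comparing against the source image:
byte[] bytes = File.ReadAllBytes(path);
Texture2D texture = new Texture2D(2, 2); // size and format are replaced by LoadImage
texture.LoadImage(bytes);
texture.Apply();                         // push the pixel data to the GPU
// read the pixel that an image editor shows at (x, y) from the top-left:
Color c = texture.GetPixel(x, texture.height - 1 - y);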

Using Processing for image visualization: pixel color thresholds

The image to be manipulated; I am hoping to identify each white dot in the picture with a counter.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
}

void draw() {
  loadPixels();
  blk.loadPixels();
  int i = 0;
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      int loc = x + y*width;
      pixels[loc] = blk.pixels[loc];
      if (blk.pixels[loc] == 0) {
        if (blk.pixels[loc] + 1 != 0) {
          i++;
        }
      }
      float r = red(blk.pixels[loc]);
      float g = green(blk.pixels[loc]);
      float b = blue(blk.pixels[loc]);
      pixels[loc] = color(r, g, b);
    }
  }
  System.out.println(i);
  updatePixels();
}
The main problem is within my if statement; I'm not sure how to approach it logically.
I'm not sure where exactly this is going, but I can help you find the white pixels. Here, I counted 7457 "white" pixels (then I turned them red so you can see where they are and can adjust the threshold if you want to get more or fewer of them):
Of course, this is just a proof of concept which you should be able to adapt to your needs.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
  blk.loadPixels();
  int whitePixelsCount = 0;
  // I'm doing this in the 'setup()' method because I don't need to do it 60 times per second.
  // Once it's done once I can just use the image as modified, unless you want several
  // different versions (which you can calculate once anyway, then store in different PImages).
  for (int i = 0; i < blk.width * blk.height; i++) {
    float r = red(blk.pixels[i]);
    float g = green(blk.pixels[i]);
    float b = blue(blk.pixels[i]);
    // In RGB, the brightness of each color is represented by its intensity.
    // So here I'm checking the "average intensity" of the color to see how bright it is,
    // and I compare it to 100 since 255 is the max and I wanted this simple, but you can
    // play with this threshold as much as you like.
    if ((r + g + b) / 3 > 100) {
      whitePixelsCount++;
      // Here I'm making those pixels red so you can see where they are.
      // It's easier to adjust the threshold if you can see what you're doing.
      blk.pixels[i] = color(255, 0, 0);
    }
  }
  println(whitePixelsCount);
  blk.updatePixels(); // update the PImage itself, not the sketch's pixels
}

void draw() {
  image(blk, 0, 0);
}
In short (you'll read this in the comments too), we count the pixels according to a threshold we can adjust. To make things more obvious for you, I colored the "white" pixels red. You can lower or raise the threshold according to what you see this way, and once you know what you want you can get rid of the color.
There is a difficulty here, which is that the image isn't really "black and white" but more greyscale - which is totally normal, but makes things harder for what you seem to be trying to do. You'll probably have to tinker a lot to get to the exact ratio that interests you. It could help a lot to edit the original image in GIMP or another image editor that lets you adjust contrast and brightness. It's kind of cheating, but if it doesn't work right off the bat, this strategy could save you some work.
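As a small variation (a sketch, not part of the answer above), Processing's built-in brightness() can replace the manual (r+g+b)/3 average; it returns the HSB brightness, which is the strongest of the three channels, so the threshold behaves slightly differently:
for (int i = 0; i < blk.pixels.length; i++) {
  if (brightness(blk.pixels[i]) > 100) { // 0..255 in the default color mode
    blk.pixels[i] = color(255, 0, 0);
  }
}
blk.updatePixels();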
Have fun!

Qt-OpenCV:How to display grayscale images(opencv) in Qt

I have a piece of code here.
This is a camera-capture application using OpenCV and Qt (for the GUI).
void MainWindow::on_pushButton_clicked()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return;
    //namedWindow("edges", 1);
    QVector<QRgb> colorTable;
    for (int i = 0; i < 256; i++) colorTable.push_back(qRgb(i, i, i));
    QImage img;
    img.setColorTable(colorTable);
    for (;;)
    {
        cap >> image;
        cvtColor(image, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, cv::Size(7, 7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        //imshow("edges", edges);
        if (cv::waitKey(30) >= 0) break;
        // change color channel ordering
        //cv::cvtColor(image, image, CV_BGR2RGB);
        img = QImage((const unsigned char*)(edges.data),
                     image.cols, image.rows, QImage::Format_Indexed8);
        // display on label
        ui->label->setPixmap(QPixmap::fromImage(img, Qt::AutoColor));
        // resize the label to fit the image
        ui->label->resize(ui->label->pixmap()->size());
    }
}
Initially "edges" is displayed in red with a green background; then it switches to a blue background. This switching happens randomly.
How can I display white edges on a black background in a stable manner?
In short, add img.setColorTable(colorTable); just before the // display on label comment.
In more detail: you create your image and assign the color table at the beginning of your code:
QImage img;
img.setColorTable(colorTable);
Then in the infinite loop, you are doing the following:
img = QImage((const unsigned char*)(edges.data), image.cols, image.rows, QImage::Format_Indexed8);
What happens is that you replace the image created at the beginning of your code; the color table for this new image is not set, so it falls back to the default, resulting in a colored output.
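Put together, the loop body would look something like this (a sketch using the question's variables; passing edges.cols, edges.rows and edges.step rather than image's dimensions is an extra precaution, since QImage needs the grayscale Mat's own geometry and row stride):
img = QImage((const unsigned char*)edges.data,
             edges.cols, edges.rows, (int)edges.step, QImage::Format_Indexed8);
img.setColorTable(colorTable); // reattach the grayscale table to the new image
// display on label
ui->label->setPixmap(QPixmap::fromImage(img));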

QT QImage pixel manipulation

I am building a Qt GUI application and use QImage for opening images.
My problem is that I can't figure out how to use QImage's bits() and scanLine()
methods to get access at the per-pixel level.
I've seen this post: Qt QImage pixel manipulation problems
but that only covers the first pixel of each row. Is this correct, or did I get it all wrong?
Thanks in advance.
The scanlines correspond to the height of the image; the columns correspond to the width of the image.
According to the docs, the prototype looks like uchar *QImage::scanLine(int i), or a similar const version.
But, as a commenter pointed out, because the data is dependent on the machine architecture and image, you should NOT use the uchar * directly. Instead, use something like the following:
QRgb *rowData = (QRgb*)img.scanLine(row);
QRgb pixelData = rowData[col];
int red = qRed(pixelData);
It may not be immediately obvious from Kaleb's post, but the following works for setting a pixel on a Format_RGB32 image.
// Get the line we want
QRgb *line = (QRgb *)image->scanLine(row_index);
// Go to the pixel we want
line += col_index;
// Actually set the pixel
*line = qRgb(qRed(color), qGreen(color), qBlue(color));
The answer above did not work for me; it looks like the data is not 32-bit aligned on my system. To get the correct data, on my system I had to do this:
// NOTE: this assumes a tightly packed 3-bytes-per-pixel layout in
// B, G, R order; 32-bit formats carry a fourth (alpha) byte per pixel.
for (uint32_t Y = 0; Y < mHeight; ++Y)
{
    uint8_t* pPixel = Image.scanLine(Y);
    for (uint32_t X = 0; X < mWidth; ++X)
    {
        const int Blue  = *pPixel++;
        const int Green = *pPixel++;
        const int Red   = *pPixel++;
        // luma-style weighted average of the three channels
        uint8_t GrayscalePixel = (0.21f * Red) + (0.72f * Green) + (0.07f * Blue);
    }
}
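For comparison, a sketch that avoids the byte-order guesswork (assuming a Format_RGB32 or Format_ARGB32 image named img): treat each scanline as QRgb values and let Qt's qGray() compute the luminance:
for (int y = 0; y < img.height(); ++y)
{
    QRgb *line = reinterpret_cast<QRgb *>(img.scanLine(y));
    for (int x = 0; x < img.width(); ++x)
    {
        int gray = qGray(line[x]);        // weighted RGB average, 0..255
        line[x] = qRgb(gray, gray, gray); // write the grayscale value back
    }
}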
