Qt-OpenCV: How to display grayscale images (OpenCV) in Qt

I have a piece of code here.
It is a camera-capture application using OpenCV and Qt (for the GUI).
void MainWindow::on_pushButton_clicked()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return;
    //namedWindow("edges",1);
    QVector<QRgb> colorTable;
    for (int i = 0; i < 256; i++) colorTable.push_back(qRgb(i, i, i));
    QImage img;
    img.setColorTable(colorTable);
    for (;;)
    {
        cap >> image;
        cvtColor(image, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, cv::Size(7, 7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        //imshow("edges", edges);
        if (cv::waitKey(30) >= 0) break;
        // change color channel ordering
        //cv::cvtColor(image, image, CV_BGR2RGB);
        img = QImage((const unsigned char*)(edges.data),
                     image.cols, image.rows, QImage::Format_Indexed8);
        // display on label
        ui->label->setPixmap(QPixmap::fromImage(img, Qt::AutoColor));
        // resize the label to fit the image
        ui->label->resize(ui->label->pixmap()->size());
    }
}
Initially, "edges" is displayed in red on a green background; then it switches to a blue background. This switching happens randomly.
How can I display white edges on a black background in a stable manner?

In short, add the img.setColorTable(colorTable); just before the // display on label comment.
In more detail: you create your image and assign the color table at the beginning of your code:
QImage img;
img.setColorTable(colorTable);
Then, in the infinite loop, you do the following:
img = QImage((const unsigned char*)(edges.data), image.cols, image.rows, QImage::Format_Indexed8);
What happens is that this destroys the image created at the beginning of your code; the color table for the new image is never set, so it falls back to the default one, resulting in colored output.

Related

PImage background color specified not working - SOLVED, see below

I am working my way through Ira Greenberg's original version of "Processing: Creative Coding and Computational Art." On page 195 there is an example of using the PImage class to draw lines with pixels. The code is fairly simple and does not even use the setup and draw functions. My problem is that I cannot get the sketch to work with a white background and black lines; I have to change it to a black background with white lines. Note that the background call should make the background white, but it turns out black. I cannot find a relevant example online. Here is the code, with the changed lines marked in the comments.
/* program: p195_lines_with_pixels.pde
   This program shows how to create lines with pixels. This seems like it has
   been covered before; however, this program uses the PImage class. */
size(500, 300);
background(255); // not working as specified
// used by diagonal lines
float slope = float(height)/float(width);
PImage img = createImage(width, height, RGB);
//color c = color(0, 0, 0); // original code
color c = color(255, 255, 255); // it works now but I do not know why - and inverted color
// horizontal line
for (int i = 0; i < width; i++) {
    img.set(i, height/2, c);
}
// vertical line
for (int i = 0; i < height; i++) {
    img.set(width/2, i, c);
}
// diagonal line (TL-BR)
for (float i = 0; i < width; i++) {
    img.set(int(i), int(i*slope), c);
}
// diagonal line (BL-TR)
for (float i = 0; i < width; i++) {
    img.set(int(i), int(height - i*slope), c);
}
image(img, 0, 0);
By looking at some other examples online, for instance the PImage tutorials on the Processing site, I figured out what was going on. The original code does not work as intended with current Java and Processing, and, as usual, making mistakes can often lead to figuring out how things work. Code placement in the sketch affects how the sketch looks. Specifying the slope as a global variable did not work, and I ended up computing it in the draw function: the sketch could not see the final height and width when slope was initialized as a global.
Also, image(img, 0, 0) places the image at the (0, 0) starting point, so if the image is smaller than the window, you can see that it does not fill the whole window. This is what led me to realize that slope was not getting the correct values when it was initialized: it was a 1/1 slope when it should have been less. Below is the code I ended up writing for this example:
/* program: p195_lines_with_pixels_3.pde
   This program uses the PImage class to work with images. It allows for direct
   manipulation of pixels in the image. You can use an image from an external
   source or create an image in the sketch. Here, the image had to be created
   in the sketch.
   This sketch was originally written as an example in Ira Greenberg's book,
   "Processing: Creative Coding and Computational Art," 1st ed, 2007. The code
   had to be changed a little, I imagine because some things have changed with
   Java and Processing over the years. The current year is 2022. See the
   original code above. */
// used by diagonal lines
// float slope = float(height)/float(width);
/* I think that part of the problem is that the image created below is actually
   black, not a white background, so it overlays the image of the background.
   Maybe the PImage class had transparency to it in 2007. */
//PImage img = createImage(width, height, RGB);
PImage img; // declare img here, initialize it in draw below
color c = color(0, 0, 0);
color cBackground = color(255, 255, 255);

void setup() {
    size(500, 300);
    background(255); // technically not necessary here as the image fills the window
}

void draw() {
    // create the image called img
    img = createImage(width, height, RGB);
    // used by diagonal lines
    float slope = float(height)/float(width);
    // fill the image with white pixels
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            img.set(i, j, cBackground);
        }
    }
    // horizontal line
    for (int i = 0; i < width; i++) {
        img.set(i, height/2, c);
    }
    // vertical line
    for (int i = 0; i < height; i++) {
        img.set(width/2, i, c);
    }
    // diagonal line (TL-BR)
    for (float i = 0; i < width; i++) {
        img.set(int(i), int(i*slope), c);
    }
    // diagonal line (BL-TR)
    for (float i = 0; i < width; i++) {
        img.set(int(i), int(height - i*slope), c);
    }
    image(img, 0, 0);
}

Alpha channel in C++Builder

In Borland/Embarcadero C++Builder with VCL, I am trying to develop an application with an image where some parts (in fact, circles) fade in or out over time.
My code is mostly as follows:
void __fastcall TfmMain::FormCreate(TObject *Sender)
{
    img = new TBitmap;
    img->Width = 800;
    img->Height = 600;
    fmMain->DoubleBuffered = true;
    ...
}

void __fastcall TfmMain::tmMainTimer(TObject *Sender)
{
    for (int i = 0; i < nbParts; i++) {
        ...
        // alpha() returns 0 to 0xff, depending on the required fade level at time t_time
        img->Brush->Color = clRed | (alpha(t_time) << 24);
        img->Canvas->Ellipse(....);
    }
    fmMain->Canvas->Draw(0, 0, img);
}
But the result is not at all what I want: for example, a part that is supposed to fade out alternates between red and black, and the same happens for a part supposed to fade in.
I tried DrawTransparent(), but got the error:
DrawTransparent is not accessible
Besides, it takes a transparency value for the whole bitmap, not for individual parts.
I tried a separate bitmap for each part, but I may have hundreds of them, and the animation becomes too slow.
Can someone please tell me what I should do?

Why doesn't the RoundRect path with gradient fill produce the correct corners on right side?

I came up with a routine to create a gradient-filled rounded rectangle (button); however, if I omit the code that draws the outline, the lower-right corner looks square and the upper-right seems not quite right either. Why is that?
note: The owner-draw button was created 23x23.
//-------------------------------------------------------------------------
// Purpose: Draw a rounded rectangle for owner-draw button
//
// Input: dis - [i] owner-draw information structure
// undermouse - [i] flag if button is under mouse
//
// Output: na
//
// Notes: This creates a standard grey type rounded rectangle for owner
// drawn buttons.
//
// This routine does not currently use undermouse to change
// gradient
//
void DrawRoundedButtonRectangle(const DRAWITEMSTRUCT& dis, BOOL undermouse)
{
    UNREFERENCED_PARAMETER(undermouse);
    // save DC before we modify it.
    SaveDC(dis.hDC);
    // create a path for the round rectangle (right/bottom is RECT format of +1)
    BeginPath(dis.hDC);
    RoundRect(dis.hDC, dis.rcItem.left, dis.rcItem.top, dis.rcItem.right, dis.rcItem.bottom, 6, 6);
    EndPath(dis.hDC);
    // save DC before changing clipping region
    SaveDC(dis.hDC);
    // set clipping region to be the path
    SelectClipPath(dis.hDC, RGN_COPY);
    TRIVERTEX vertices[2];
    // setup the starting location and color (light grey)
    vertices[0].x = dis.rcItem.left;
    vertices[0].y = dis.rcItem.top;
    vertices[0].Red = MAKEWORDHL(211, 0);
    vertices[0].Green = MAKEWORDHL(211, 0);
    vertices[0].Blue = MAKEWORDHL(211, 0);
    vertices[0].Alpha = 0xffff;
    // setup the ending location and color (grey)
    vertices[1].x = dis.rcItem.right;   // should this be -1 ?
    vertices[1].y = dis.rcItem.bottom;  // should this be -1 ?
    vertices[1].Red = MAKEWORDHL(150, 0);
    vertices[1].Green = MAKEWORDHL(150, 0);
    vertices[1].Blue = MAKEWORDHL(150, 0);
    vertices[1].Alpha = 0xffff;
    // setup index to use for left to right
    GRADIENT_RECT r[1];
    r[0].UpperLeft = 0;
    r[0].LowerRight = 1;
    // fill the DC with a vertical gradient
    GradientFill(dis.hDC, vertices, _countof(vertices), r, _countof(r), GRADIENT_FILL_RECT_V);
    // go back to original clipping area
    RestoreDC(dis.hDC, -1);
    // change the path to be the outline border
    if (WidenPath(dis.hDC)) {
        // set clipping region to be the path
        SelectClipPath(dis.hDC, RGN_COPY);
        // create a gradient on the outline
        GradientFill(dis.hDC, vertices, _countof(vertices), r, _countof(r), GRADIENT_FILL_RECT_V);
    }
    // put back the DC as we received it
    RestoreDC(dis.hDC, -1);
}
The red in the pics show the background.
The bad button is generated when the WidenPath section is removed.
According to your description, I think you may be talking about this situation.
BeginPath(dis.hDC);
// RoundRect(dis.hDC, dis.rcItem.left, dis.rcItem.top, dis.rcItem.right, dis.rcItem.bottom, 6, 6);
EndPath(dis.hDC);
Let me first analyze why this shape appears.
When you redraw the button, if the width and height of the redrawn area are smaller than those of the button itself, only part of the button gets redrawn.
case WM_CREATE:
{
    // Button width: 230, button height: 230
    button = CreateRoundRectButton(hWnd, 500, 200, 230, 230, 30, 30, BTN_ID);
    return 0;
}
break;
case WM_DRAWITEM:
{
    DRAWITEMSTRUCT dis;
    dis.CtlType = ODT_BUTTON;
    dis.CtlID = BTN_ID;
    dis.hDC = GetDC(button);
    dis.rcItem.left = 0;
    dis.rcItem.top = 0;
    dis.rcItem.right = 200;   // width of the redraw
    dis.rcItem.bottom = 200;  // height of the redraw
    DrawRoundedButtonRectangle(dis, TRUE);
}
To see the effect more clearly, I widened the width and height.
If I omit the code that draws the outline, only the following code runs to produce the gradient:
// fill the DC with a vertical gradient
GradientFill(dis.hDC, vertices, _countof(vertices), r, _countof(r), GRADIENT_FILL_RECT_V);
The same happens if I change the X/Y coordinates of the redraw.
In fact, when you disable RoundRect, the only call that has any effect is GradientFill.
Update:
The redrawn area is based on rcItem. When you fill a path, only its interior is considered and the outline is not; WidenPath then turns the outline itself into the path, and that yields the true rounded-rect area.

Programmatically Load Texture in Image and Set Border to the Image Unity

I am developing an endless game and want to take a snapshot when the player dies. I've almost done that using Texture2D: I load the texture into an image programmatically, but now I want to set a border on the image. How can I do that at run-time?
This code loads the texture into the image at run-time when my player dies.
void LoadImage() {
    byte[] bytes = File.ReadAllBytes(Application.dataPath + "/GameOverScreenShot" + "/BirdDiedScreenShot.png");
    Texture2D texture = new Texture2D(900, 900, TextureFormat.RGB24, false);
    texture.filterMode = FilterMode.Trilinear;
    texture.LoadImage(bytes);
    Sprite sprite = Sprite.Create(texture, new Rect(0, 0, 700, 380), new Vector2(0.5f, 0.0f), 1.0f);
    imgObject.GetComponent<UnityEngine.UI.Image>().sprite = sprite;
}
I want to set a border on that image at run-time. Can anyone help? I'd really appreciate it. Thanks in advance.
Do this after you load the image into the Texture2D variable; it will change the border of the image to whatever color you want.
// color should be a variable that holds the color of the border you want.
for (int i = 0; i < texture.width; i++) {
    texture.SetPixel(i, 0, color);                  // bottom border (y = 0 is the bottom row)
    texture.SetPixel(i, texture.height - 1, color); // top border
}
for (int j = 0; j < texture.height; j++) {
    texture.SetPixel(0, j, color);                  // left border
    texture.SetPixel(texture.width - 1, j, color);  // right border
}
texture.Apply();
This replaces the pixels on the edge of your original image, so if you need those edge pixels you will have to look for another solution. Also, texture.Apply takes a while to run, so if you needed to apply this border constantly you might see a slowdown; but since you mentioned it only happens when the player dies, this should not be an issue.

pupil detection using opencv, with infrared image

I am trying to detect the pupil in an infrared image and calculate its center.
In my setup, I used a camera sensitive to infrared light, added a visible-light filter to the lens, and placed two infrared LEDs around the camera.
However, the image I get is blurry and not very clear, perhaps because of the camera's low resolution (its maximum is about 700x500).
In the processing, the first thing I did was convert the RGB image to grayscale, but the result is terrible and nothing is detected.
int main()
{
    // load image (check for failure before using it)
    cv::Mat src = cv::imread("11_13_2013_15_36_09.jpg");
    if (src.empty())
    {
        std::cout << "failed to find the image";
        return -1;
    }
    cv::namedWindow("original");
    cv::imshow("original", src);
    cv::waitKey(10);
    // invert the source image and convert to grayscale
    cv::Mat gray;
    cv::cvtColor(~src, gray, CV_BGR2GRAY);
    cv::imshow("image1", gray);
    cv::waitKey(10);
    // convert to a binary image by thresholding it
    cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);
    cv::imshow("image2", gray);
    cv::waitKey(10);
    // find all contours
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(gray.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    // fill holes in each contour
    cv::drawContours(gray, contours, -1, CV_RGB(255, 255, 255), -1);
    cv::imshow("image3", gray);
    cv::waitKey(10);
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        cv::Rect rect = cv::boundingRect(contours[i]);
        int radius = rect.width / 2;
        // if the contour is big enough and has a round shape,
        // then it is the pupil
        if (area >= 800 &&
            std::abs(1 - ((double)rect.width / (double)rect.height)) <= 0.3 &&
            std::abs(1 - (area / (CV_PI * std::pow(radius, 2)))) <= 0.3)
        {
            cv::circle(src, cv::Point(rect.x + radius, rect.y + radius), radius, CV_RGB(255, 0, 0), 2);
        }
    }
    cv::imshow("image", src);
    cv::waitKey(0);
}
The gray image after conversion is terrible; does anyone know a better solution? I am completely new to this. If you have any comments on the rest of the code, which finds the circle, please tell me. I also need to extract the positions of the two glints (the light points) in the original image; does anyone have any ideas?
Thanks.
Try equalizing and filtering your source image before thresholding it ;)
