Not loading white pixels in picture in Processing 2.0 - performance

This is for the programming language Processing (2.0).
Say I wish to load a non-square image (let's use a green circle for the example). If I load this on a black background, you can visibly see the white square of the image (i.e. all the parts of the image that aren't the green circle). How would I go about efficiently removing them?
I can't think of an efficient way to do it, and I will be doing it to hundreds of pictures about 25 times a second (since they will be moving).
Any help would be greatly appreciated; the more efficient the code the better.

As #user3342987 said, you can loop through the image's pixels to see whether each pixel is white. However, it's worth noting that 255 is white (not 0, which is black). You also shouldn't hardcode the replacement color as they suggested -- what if the image is moving over a striped background? The best approach is to change all the white pixels into transparent pixels using the image's alpha channel. Also, since you mentioned you'd be drawing the images "about 25 times a second", you shouldn't repeat these checks on every frame -- the result is the same every time, so rechecking would be wasteful. Instead, do it once, when the images are first loaded, something like this (untested):
PImage[] images;
void setup(){
  size(400,400);
  images = new PImage[10];
  for(int i = 0; i < images.length; i++){
    // example filenames
    PImage img = loadImage("img" + i + ".jpg");
    img.format = ARGB; // JPEGs load without an alpha channel; mark the image as ARGB so the transparency is honored
    img.loadPixels();
    for(int p = 0; p < img.pixels.length; p++){
      // color(255,255,255) is white
      if(img.pixels[p] == color(255,255,255)){
        img.pixels[p] = color(0,0); // set it to transparent (first number is meaningless)
      }
    }
    img.updatePixels();
    images[i] = img;
  }
}
void draw(){
  // draw the images as normal; the white pixels are now transparent
}
So, this will lead to no lag during draw() because you edited out the white pixels in setup(). Whatever you're drawing the images on top of will show through.
It's also worth mentioning that some image filetypes have an alpha channel built in (e.g., the PNG format), so you could also change the white pixels to transparent in some image editor and use those edited files for your sketch. Then your sketch wouldn't have to edit them every time it starts up.

Pixels are stored in the pixels[] array; you can use a for loop to check whether each value is 0 (aka white). If it is white, load it as the black background.

Related

Detecting circles and shots from paper target

I'm making a small project where I have to detect the points scored on a given image of a paper target, something similar to the TargetScan app for iPhone.
I'm using OpenCV for processing the image, and there are basically two parts to this: one is detecting the circles on the target (which works pretty well with the Hough Circle Transform), and the second is detecting the shots. I need some ideas on how to detect those shots from a given image. Here is an example image with circle detection ON (green lines for the detected circles and a red point for the center). What algorithms from OpenCV can be used to detect those shots?
Here is another example image.
Algo:
1. create/clear mask for image
2. binarize image (to black and white by some intensity threshold)
3. process all pixels
   count how many pixels of the same color there are in the x and y directions; call them wx,wy
4. detect circles, shots and the mid section
   circles are thin, so wx or wy should be less than the thin threshold while the other one is bigger; shots are big, so both wx and wy must be in the shot diameter range; the mid section is black with both wx,wy above all thresholds (you can compute the avg point here); store this info into the mask
5. recolor image with mask info
6. compute the center and radii of the circles from the found points
   the center is the avg point of the mid section area; now process all the green points and compute a radius for each; build a histogram of all found radii and sort it by count descending; the count should be consistent with 2*PI*r, otherwise ignore such points
7. group shot pixels together
   segmentate or flood-fill recolor each hit to avoid counting a single shot multiple times
I coded #1..#6 for fun in C++; here is the code:
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
// pic2 - mask
int x,y,i,n,wx,wy;
int r0=3;   // thin curve width threshold [pixels]
int r1a=15; // shot diameter min threshold [pixels]
int r1b=30; // shot diameter max threshold [pixels]
int x0,y0;  // avg point == center
// init output as source image but in grayscale intensity only
pic1=pic0;
pic1.rgb2i();
// init mask (size of source image)
pic2.resize(pic0.xs,pic0.ys);
pic2.clear(0);
// binarize image and convert back to RGB
for (y=r0;y<pic1.ys-r0-1;y++)
 for (x=r0;x<pic1.xs-r0-1;x++)
  if (pic1.p[y][x].dd<=500)        // Black/White threshold <0,765>
       pic1.p[y][x].dd=0x00000000; // Black in RGB
  else pic1.p[y][x].dd=0x00FFFFFF; // White in RGB
// process pixels
x0=0; y0=0; n=0;
for (y=r1b;y<pic1.ys-r1b-1;y++)
 for (x=r1b;x<pic1.xs-r1b-1;x++)
    {
    wy=1; // count the same color pixels in the column
    for (i=1;i<=r1b;i++) if (pic1.p[y-i][x].dd==pic1.p[y][x].dd) wy++; else break;
    for (i=1;i<=r1b;i++) if (pic1.p[y+i][x].dd==pic1.p[y][x].dd) wy++; else break;
    wx=1; // count the same color pixels in the line
    for (i=1;i<=r1b;i++) if (pic1.p[y][x-i].dd==pic1.p[y][x].dd) wx++; else break;
    for (i=1;i<=r1b;i++) if (pic1.p[y][x+i].dd==pic1.p[y][x].dd) wx++; else break;
    if ((wx<r0)||(wy<r0))    // if thin
     if ((wx>=r0)||(wy>=r0)) // but still a line
        {
        pic2.p[y][x].dd=1;   // thin line
        }
    if (pic1.p[y][x].dd==0)  // black
     if ((wx>=r0)&&(wy>=r0)) // and thick in both axes
        {
        pic2.p[y][x].dd=2;   // middle section
        x0+=x; y0+=y; n++;
        }
    if (pic1.p[y][x].dd)     // white (background color)
     if ((wx>r1a)&&(wy>r1a)) // size in range of shot
      if ((wx<r1b)&&(wy<r1b))
        {
        pic2.p[y][x].dd=3;   // shot
        }
    }
if (n) { x0/=n; y0/=n; }
// add mask data (recolor) to output image
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
    {
    if (pic2.p[y][x].dd==1) pic1.p[y][x].dd=0x0000FF00; // green thin line
    if (pic2.p[y][x].dd==2) pic1.p[y][x].dd=0x000000FF; // blue middle section
    if (pic2.p[y][x].dd==3) pic1.p[y][x].dd=0x00FF0000; // red shots
    }
// center cross
i=25;
pic1.bmp->Canvas->Pen->Color=0x0000FF;
pic1.bmp->Canvas->MoveTo(x0-i,y0);
pic1.bmp->Canvas->LineTo(x0+i,y0);
pic1.bmp->Canvas->MoveTo(x0,y0-i);
pic1.bmp->Canvas->LineTo(x0,y0+i);
I use my own picture class for images, so some of its members are:
xs,ys - size of the image in pixels
p[y][x].dd - pixel at (x,y) position, as a 32-bit integer type
clear(color) - clears the entire image
resize(xs,ys) - resizes the image to a new resolution
This is the recolored result:
green - thin circles
blue - mid section
red cross - center of circles
red - shots
As you can see, it needs the further processing from bullet #7, and since your image has no shot outside the mid section, shot detection outside the mid section may need some tweaking too.
[edit1] radii
// create & clear the radius histogram
n=pic2.xs; if (n<pic2.ys) n=pic2.ys;
int *hist=new int[n];
for (i=0;i<n;i++) hist[i]=0;
// compute the histogram
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
  if (pic2.p[y][x].dd==1) // thin pixels
    {
    i=sqrt(((x-x0)*(x-x0))+((y-y0)*(y-y0)));
    hist[i]++;
    }
// merge neighbouring radii
for (i=0;i<n;i++)
 if (hist[i])
    {
    for (x=i;x<n;x++) if (!hist[x]) break;
    for (wx=0,y=i;y<x;y++) { wx+=hist[y]; hist[y]=0; }
    hist[(i+x-1)>>1]=wx; i=x-1;
    }
// draw the valid circles
pic1.bmp->Canvas->Pen->Color=0xFF00FF; // magenta
pic1.bmp->Canvas->Pen->Width=r0;
pic1.bmp->Canvas->Brush->Style=bsClear;
for (i=0;i<n;i++)
 if (hist[i])
    {
    // a circle of radius i should contribute about 2*PI*i thin pixels
    float a=float(hist[i])/(2.0*M_PI*float(i));
    if ((a>=0.3)&&(a<=2.1))
     pic1.bmp->Canvas->Ellipse(x0-i,y0-i,x0+i,y0+i);
    }
pic1.bmp->Canvas->Brush->Style=bsSolid;
pic1.bmp->Canvas->Pen->Width=1;
delete[] hist;
The detected circles are in magenta ... pretty good, I think. The mid section screws it up a bit. You can compute the average radius step and interpolate the missing circles ...
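A rough sketch of that interpolation (an untested illustration using the same picture class and variables as above; it must run before hist[] is freed, and the 64-circle cap is an arbitrary assumption):
// collect the radii that passed the 2*PI*r test, estimate the average
// radius step, then draw any circle missing between two accepted ones
int rad[64],nr=0;
for (i=0;i<n;i++)
 if (hist[i])
    {
    float a=float(hist[i])/(2.0*M_PI*float(i));
    if ((a>=0.3)&&(a<=2.1)&&(nr<64)) rad[nr++]=i; // same validity test as above
    }
if (nr>1)
    {
    int step=(rad[nr-1]-rad[0])/(nr-1);  // average radius step
    for (i=1;i<nr;i++)
     for (x=rad[i-1]+step;x+(step>>1)<rad[i];x+=step)
      pic1.bmp->Canvas->Ellipse(x0-x,y0-x,x0+x,y0+x); // uses the magenta pen set above
    }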

OpenCV : Transparent area of imported .png file is now white

I'm trying to develop a small and simplistic webcam-controlled game, where the user moves a figure along the x-axis by tracking a light source with the webcam (a flashlight, e.g.).
So far my code generates a target object every couple of seconds at a random location in the picture.
That object is stored as a Mat via
Mat target = imread("target.png");
In order to paint the object onto the background image, I'm using
bgClear.copyTo(temp);
for(int i = targetX; i < target.cols + targetX; i++){
    for(int j = targetY; j < target.rows + targetY; j++){
        temp.at<Vec3b>(j,i) = target.at<Vec3b>(j-targetY,i-targetX);
    }
}
temp.copyTo(bg);
where bgClear represents the clean background, temp the background copy that is being edited, and bg the final background that is shown, including the object.
targetX and targetY are the starting coordinates of the object, relative to the background (targetX is randomly generated beforehand so that the object spawns at a random location in the upper half of the image), so I'm not iterating through the whole background, only the range of the object.
It works so far, but I have a problem:
The transparent area of the imported image is now white, and I don't seem to be able to fix it by checking the pixel values with something like
if(target.at<Vec3b>(Point(j-targetY,i-targetX))[0] != 255 &&
   target.at<Vec3b>(Point(j-targetY,i-targetX))[1] != 255 &&
   target.at<Vec3b>(Point(j-targetY,i-targetX))[2] != 255)
before I actually replace the pixel.
I've also tried loading the .png file with the -1 flag (alpha channel), but then the image just seems ghostly and can barely be seen.
In case you have trouble imagining what I'm talking about, here's a partial screenshot of it: Screenshot
Any advice on how I might fix this?
Regards,
Daniel
You need to handle the transparency manually. The general idea is, while copying to temp, to copy only pixels that are opaque, i.e. whose alpha value is high:
use CV_LOAD_IMAGE_UNCHANGED (= -1) in imread.
split target into four single-channel images using split.
merge the first three channels to form a BGR image using merge.
in the paint loop, use the newly formed BGR image as the source and the unmerged fourth channel (alpha) as the mask.
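A minimal sketch of those four steps (untested; assumes the OpenCV 2.x C++ API, and bg, targetX, targetY as in the question):
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

Mat target = imread("target.png", CV_LOAD_IMAGE_UNCHANGED); // 4-channel BGRA
std::vector<Mat> ch;
split(target, ch);            // ch[0..3] = B, G, R, A
Mat alpha = ch[3];
ch.pop_back();                // keep only B, G, R
Mat targetBGR;
merge(ch, targetBGR);         // 3-channel BGR image
// copy only the opaque pixels by using alpha as the mask
targetBGR.copyTo(bg(Rect(targetX, targetY, target.cols, target.rows)), alpha);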
...as I was mentioning in my comment to asif's helpful answer:
Mat target = imread("target.png", CV_LOAD_IMAGE_UNCHANGED); // load image with alpha
Mat targetBGR(target.rows, target.cols, CV_8UC3);   // create BGR mat
Mat targetAlpha(target.rows, target.cols, CV_8UC1); // create alpha mat
Mat out[] = {targetBGR, targetAlpha};               // create array of matrices
int from_to[] = { 0,0, 1,1, 2,2, 3,3 };             // create array of index pairs
mixChannels( &target, 1, out, 2, from_to, 4 );      // finally split target into 3-channel BGR plus 1-channel alpha
...as described in this example (minus the R-B channel swapping).
...later, in the pixel-processing loop:
if(targetAlpha.at<uchar>(j-targetY,i-targetX) > 0)
    temp.at<Vec3b>(j,i) = targetBGR.at<Vec3b>(j-targetY,i-targetX);
Working like a charm!

Why is font size different in vertical direction

I created two rulers - one vertical and one horizontal:
Now in the vertical ruler, the 'size' of the text is visually larger (approx. 5-6 pixels longer).
Why?
Relevant code:
WM_CREATE:
LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");
if (!g_pGRI->bHorizontal)
{
    Lf.lfEscapement = 900; // <---- for the vertical ruler!
}
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);
SelectFont(g_pGRI->hdRuler, g_pGRI->hfRuler);
WM_PAINT:
SetTextColor(g_pGRI->hdRuler, g_pGRI->cBorder);
SetBkColor(g_pGRI->hdRuler, g_pGRI->cBackground);
SetTextAlign(g_pGRI->hdRuler, TA_CENTER);
#define INCREMENT 10
WCHAR wText[16] = {0};
if (g_pGRI->bHorizontal)
{
    INT ixTicks = RECTWIDTH(g_pGRI->rRuler) / INCREMENT;
    for (INT ix = 0; ix < ixTicks + 1; ix++)
    {
        MoveToEx(g_pGRI->hdRuler, INCREMENT * ix, 0, NULL);
        if (ix % INCREMENT == 0)
        {
            // This is a major tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMajor);
            wsprintfW(wText, L"%d", INCREMENT * ix);
            TextOutW(g_pGRI->hdRuler, INCREMENT * ix + 1, g_pGRI->lMajor + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            // This is a minor tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMinor);
        }
    }
}
else
{
    INT iyTicks = RECTHEIGHT(g_pGRI->rRuler) / INCREMENT;
    for (INT iy = 0; iy < iyTicks + 1; iy++)
    {
        MoveToEx(g_pGRI->hdRuler, 0, INCREMENT * iy, NULL);
        if (iy % INCREMENT == 0)
        {
            // This is a major tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMajor, INCREMENT * iy);
            wsprintfW(wText, L"%d", INCREMENT * iy);
            TextOutW(g_pGRI->hdRuler, g_pGRI->lMajor + 1, INCREMENT * iy + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            // This is a minor tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMinor, INCREMENT * iy);
        }
    }
}
Background
There are several different schemes for rasterizing text in a legible way when the text is small relative to the size of a pixel. For example, if the stroke width is supposed to be 1.25 pixels wide, you either have to round it off to a whole number of pixels, use antialiasing, or use subpixel rendering (like ClearType). Rounding is usually controlled by "hints" built into the font by the font designer.
Hinting is the main reason why text width doesn't always scale exactly with the text height. For example, if, because of rounding, the left hump of a lowercase m is a pixel wider than the right one, a hint might tell the renderer to round the width up to make the letter symmetric. The result is that the character is a tad wider relative to its height than the ideal character.
This issue
What's likely happening here is that when GDI renders the string horizontally, each subsequent character may start at a fractional position, which is simulated by antialiasing or subpixel (ClearType) rendering. But, when rendering vertically, it appears that each subsequent character's starting position is rounded up to the next whole pixel, which tends to make the vertical text a couple pixels "longer" than its horizontal counterpart. Effectively, the kerning is always rounded up to the next whole pixel.
It's likely that more effort was put into the common case of horizontal text rendering, making it easier to read (and possibly faster to render). The general case of rendering at any other angle may have been implemented in a simpler manner, working glyph-by-glyph instead of with the entire string.
Things to Try
If you want them to look the same, you'll probably have to make a small compromise in the visual quality of the horizontal labels. Here are a few things I can think of to try:
Render the labels with regular antialiasing instead of ClearType subpixel rendering. (You can do this by setting the lfQuality field in the LOGFONT; see the sketch after this list.) You would then draw the horizontal labels in the normal manner. For the vertical labels, draw them to an offscreen buffer horizontally, rotate it, and then blit the buffer to the screen. This gives you labels that look identical. The reason I suggest regular antialiasing is that it's invariant to rotation, whereas ClearType rendering has an inherent orientation and thus cannot be rotated without creating fringing. I've used this approach for graph labels with good results.
Render the horizontal labels character by character, rounding the starting point up to the next whole pixel. This should make the horizontal labels look like the vertical ones. Typographically, they won't look as good, but for small labels like this, it's probably less distracting than having the horizontal and vertical labels visually mismatched.
Another answer suggested rendering the horizontal labels with a very small, but non-zero, escapement and orientation, forcing those to go through the same rendering pipeline as the vertical labels. This may be the easiest solution for short labels like yours. If you had to handle longer strings of text, I'd suggest one of the first two methods.
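For the first suggestion, the lfQuality change might look like this (a minimal sketch reusing the LOGFONT setup from the question's WM_CREATE handler):
LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");
Lf.lfQuality = ANTIALIASED_QUALITY; // grayscale antialiasing instead of ClearType
if (!g_pGRI->bHorizontal)
{
    Lf.lfEscapement = 900;
}
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);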
When using lfEscapement, you will often get strange behaviour, as it renders text through a fairly different pipeline.
A trick would be to set lfEscapement for both rulers: one with 900, and one with a very low value (such as 1 or even 10). Once you have both rendering with escapement, you should be good.
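For example (a hypothetical sketch based on the question's WM_CREATE code; lfEscapement is in tenths of a degree, so 10 means 1 degree):
Lf.lfEscapement  = g_pGRI->bHorizontal ? 10 : 900; // near-zero vs. 90 degrees
Lf.lfOrientation = Lf.lfEscapement;                // keep glyph rotation in sync
g_pGRI->hfRuler  = CreateFontIndirectW(&Lf);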
If you're still having issues with smoothing, try doing something like this:
BOOL bSmooth;
// Get the previous smoothing setting.
SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &bSmooth, 0);
// Turn smoothing off.
SystemParametersInfo(SPI_SETFONTSMOOTHING, 0, NULL, 0);
// Draw the text.
// Restore the previous smoothing setting.
SystemParametersInfo(SPI_SETFONTSMOOTHING, bSmooth, NULL, 0);

HTML5 how to draw transparent pixel image in canvas

I'm drawing an image using RGB pixel data. I need to set a transparent background color for that image. What value can I set for alpha to make the image transparent? Or is there any other solution for this?
If I understand what you need, you basically want to turn specific colors in an image transparent. To do that you need to use getImageData; check out MDN for an explanation of pixel manipulation.
Here's some sample code:
var imgd = ctx.getImageData(0, 0, imageWidth, imageHeight),
    pix = imgd.data;
for (var i = 0, n = pix.length; i < n; i += 4) {
    var r = pix[i],
        g = pix[i+1],
        b = pix[i+2];
    if (g > 150) {
        // If the green component value is higher than 150,
        // make the pixel transparent, because i+3 is the alpha component.
        // Values 0-255 work; 255 is solid.
        pix[i + 3] = 0;
    }
}
ctx.putImageData(imgd, 0, 0);
With the above code you could check for fuchsia by using
if(r == 255 && g == 0 && b == 255)
I think you want the clearRect canvas method:
http://www.w3schools.com/html5/canvas_clearrect.asp
This will let you clear pixels to transparent (or any other RGBA color) without fuss or pixel manipulation.
An alpha of 0 indicates that the pixel is completely transparent; an alpha value of 255 is completely opaque, meaning it will have no transparency.
If portions of your image are completely transparent (an alpha of 0), it doesn't matter what you use for the RGB values, as long as you use an alpha of 0. On a side note, some older Windows programs I have used assume that a specific pixel, such as the upper-left or lower-right one, gives the transparency color; they then loop through all of the pixels and set the alpha to 0 wherever they encounter that specific RGB value.
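That legacy trick is easy to sketch; here is a hypothetical C++ illustration (the helper name and the packed-RGBA byte layout are assumptions for the example, not taken from any of the programs mentioned):
#include <cstdint>
#include <vector>

// Treat the top-left pixel's RGB value as the transparency key and
// zero the alpha of every pixel that matches it.
void keyOutTopLeft(std::vector<uint8_t> &rgba) {
    if (rgba.size() < 4) return;
    const uint8_t keyR = rgba[0], keyG = rgba[1], keyB = rgba[2];
    for (size_t i = 0; i + 3 < rgba.size(); i += 4)
        if (rgba[i] == keyR && rgba[i+1] == keyG && rgba[i+2] == keyB)
            rgba[i + 3] = 0; // fully transparent
}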
If you use an alpha of 127 and the image appears on top of another image, it will look like the two images are equally visible, or like the bottom image is bleeding 50% of its colors through to the top image.
Set a variable for the alpha if you want to test and see what it looks like when you apply it to the entire image.

Retrieve color information from images

I need to determine the amount/quality of color in an image in order to compare it with other images and recommend to the user (the owner of the image) that maybe it should be printed in black and white instead of color.
So far I'm analyzing the image and extracting some data from it:
The number of different colors I find in the image
The percentage of color in the whole page (color pixels / total pixels)
For further analysis I may need other characteristics of these images. Do you know what else is important (or what I'm missing here) in image analysis?
After some time I found a missing (and very important) characteristic which helped me a lot with the analysis of the images. I don't know if there is a name for it, but I called it the average color of the image:
While looping over all the pixels of the image and counting each color, I also retrieved the RGB values and summed up all the reds, greens and blues of all the pixels, just to come up with this average color which, again, saved my life when I wanted to compare certain kinds of images.
The code is something like this:
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

File f = new File("image.jpg");
BufferedImage im = ImageIO.read(f);
int tot = 0;
int red = 0;
int green = 0;
int blue = 0;
int w = im.getWidth();
int h = im.getHeight();
// Going over all the pixels
for (int i = 0; i < w; i++) {
    for (int j = 0; j < h; j++) {
        int pix = im.getRGB(i, j);
        if (!sameARGB(pix)) { // user-defined helper: compares the RGB values
            Color c = new Color(pix); // unpack the packed RGB int
            tot += 1;
            red += c.getRed();
            green += c.getGreen();
            blue += c.getBlue();
        }
    }
}
And you should get results like this:
// Percentage of color in the image
double per = (double)tot/(h*w);
// Average color <-------------
Color avg = new Color(red/tot, green/tot, blue/tot); // integer division gives 0-255 components
