Area of scanned (2D) figure - image

Let's say I have 100 single-colored A4 sheets of paper that are cut into different shapes and figures (2D), scanned, saved as image files, and then need to be sorted in ascending order of area.
Is there an effective way to find the area of the figures and arrange them?

If all pictures have the same size and all shapes the same color (that's the situation, if I don't misunderstand your question), you can calculate the average color value of each image.
The closer that average comes to the figure's color, the bigger the shape in the image is.
Some code:
private Color GetAverageImageColor(Image img)
{
    double[] rgb = new double[3];
    Color col;
    Bitmap bmp = new Bitmap(img);

    // Sum the R, G and B values of every pixel
    for (int y = 0; y < bmp.Size.Height; y++)
    {
        for (int x = 0; x < bmp.Size.Width; x++)
        {
            col = bmp.GetPixel(x, y);
            rgb[0] += col.R;
            rgb[1] += col.G;
            rgb[2] += col.B;
        }
    }

    // Divide by the number of pixels to get the average of each channel
    for (int i = 0; i < 3; i++)
    {
        rgb[i] /= (bmp.Size.Height * bmp.Size.Width);
        rgb[i] = Math.Round(rgb[i]);
    }

    return Color.FromArgb((int) rgb[0], (int) rgb[1], (int) rgb[2]);
}
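To actually sort the scans, one option is to score each image by how far its average color is from the known figure color and order by that distance: a closer average means a larger figure, so descending distance gives ascending area. A minimal sketch, assuming the figure color and the list of file paths are known; SortScansByArea and the Euclidean color distance are my own illustration (it needs System.Drawing and System.Linq):
// Sketch: rank the scanned images by how close their average color is to the
// figure color. A closer average means a larger figure, so ordering by
// descending distance gives ascending area.
private List<string> SortScansByArea(IEnumerable<string> files, Color figureColor)
{
    return files
        .OrderByDescending(file =>
        {
            using (Image img = Image.FromFile(file))
            {
                // Distance between the image's average color and the figure color:
                // the larger the distance, the smaller the figure on that sheet.
                Color avg = GetAverageImageColor(img);
                double dr = avg.R - figureColor.R;
                double dg = avg.G - figureColor.G;
                double db = avg.B - figureColor.B;
                return Math.Sqrt(dr * dr + dg * dg + db * db);
            }
        })
        .ToList();
}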

Colored Letters with Colorcycle | Processing

I found a program that generates random letters in a grid and gives them a random color.
How can I have the letters change in color or brightness while the program is running?
(source code: https://happycoding.io/examples/processing/for-loops/letters)
I tried making the fill(r, g, b) use an 'r' that cycles from 1 to 255 and back while keeping 'g' and 'b' at 0, but I couldn't get it to update the color. I'm kinda new to programming, so I'd love to know how I could make that happen.
First, let's change the fill() call to use random RGB values:
fill(random(256), random(256), random(256));
To change the colors while the program is running, the changes must be made inside the draw() method, which loops constantly and updates the canvas (see the Processing reference for further information about draw()). I believe the following code outputs what you asked for:
int rows = 10;
int cols = 10;
int cellHeight;
int cellWidth;

void setup() {
    size(500, 500);
    cellHeight = height/rows;
    cellWidth = width/cols;
    textAlign(CENTER, CENTER);
    textSize(28);
}

void draw() {
    background(32);
    for (int y = 0; y < rows; y++) {
        for (int x = 0; x < cols; x++) {
            //get a random ascii letter
            char c = '!';
            c += random(93);
            //calculate cell position
            int pixelX = cellWidth * x;
            int pixelY = cellHeight * y;
            //add half to center letters
            pixelX += cellWidth/2;
            pixelY += cellHeight/2;
            fill(random(256), random(256), random(256));
            text(c, pixelX, pixelY);
        }
    }
    delay(100);
}

ray tracer objects stretch when off center

I am writing a ray tracer program for my computer graphics class. So far I only have spheres and a shadow ray implemented. The current problem is that when I move my sphere off center it stretches. Here is the code I use to calculate whether a ray intersects a sphere:
bool Sphere::onSphere(Ray r)
{
    float b = (r.dir*2).innerProduct(r.pos + centre*-1);
    float c = (r.pos + centre*-1).innerProduct(r.pos + centre*-1) - radius*radius;
    return b*b - 4*c >= 0;
}
Here is the code I use to spawn each ray:
for(int i = -cam.width/2; i < cam.width/2; i++)
{
    for(int j = -cam.height/2; j < cam.height/2; j++)
    {
        float normi = (float)i;
        float normj = (float)j;
        Vector pixlePos = cam.right*normi + cam.up*normj + cam.forward*cam.dist + cam.pos*1;
        Vector direction = pixlePos + cam.pos*-1;
        direction.normalize();
        Vector colour = recursiveRayTrace(Ray(pixlePos, direction), 30, 1, 0);
        float red = colour.getX()/255;
        float green = colour.getY()/255;
        float blue = colour.getZ()/255;
        fwrite(&red, sizeof(float), 1, myFile);
        fwrite(&green, sizeof(float), 1, myFile);
        fwrite(&blue, sizeof(float), 1, myFile);
    }
}
recursiveRayTrace:
Vector Scene::recursiveRayTrace(Ray r, float maxDist, int maxBounces, int bounces)
{
    if(maxBounces < bounces)
        return Vector(0,0,0);
    int count = 0;
    for(int i = 0; i < spheres.size(); i++)
    {
        if(spheres.at(i).onSphere(r))
        {
            Vector colour(ambiant.colour);
            for(int j = 0; j < lights.size(); j++)
            {
                Vector intersection(r.pos + r.dir*spheres.at(i).getT(r));
                Ray nRay(intersection, lights.at(i).centre + intersection*-1);
                colour = colour + lights.at(i).colour;
            }
            return colour;
        }
    }
    return Vector(0,0,0);
}
What I get is a sphere that is stretched in the direction of the vector from the center of the image to the center of the circle. I'm not looking for anyone to do my homework; I'm just having a really hard time debugging this one. Any hints are appreciated :) Thanks!
Edit: cam.dist is the distance from the camera to the view plane
The stretching is actually a natural consequence of perspective projection, and it is exaggerated if you have a very wide field of view. In other words, moving the camera back from your image plane should make it look more natural.
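As a rough check with the names from the question (and assuming cam.right and cam.up are unit vectors), the horizontal field of view of this setup is

fov_x = 2 * atan((cam.width / 2) / cam.dist)

so increasing cam.dist relative to cam.width narrows the field of view and reduces how much spheres near the edge of the image appear stretched.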

XNA: changing 200 or so tiles' pixels

Hey guys, I thought this was going to be a one-second thing to do. Currently I have a tileset for a 2D game, and depending on the terrain it draws from these tilesets. The tilesets are colour coded: pink is the material and blue is the material adjacent to the tile.
So far, when the terrain changes or on load-up, I draw the tiles on the screen plus any new tiles as I move. This all works very well, but when I have, say, 50 grass tiles that need to be changed from pink to green and 75 dirt tiles that need to be changed from pink to grey, I have an issue:
it only appears that the first tile is changed, and then every tile drawn before or after it ends up the same colour as that first one.
I'll give you an example in code of what I'm doing:
Color[] storePixels = new Color[50 * 50];
for(int y = 0; y < 100; y++)
{
    for(int x = 0; x < 100; x++)
    {
        tileTexture[y,x].GetData(storePixels);
        for(int i = 0; i < storePixels.Length; i++)
        {
            if(tileMaterial[y,x] == "DIRT")
            {
                storePixels[i] = new Color(100,100,100);
            }
            if(tileMaterial[y,x] == "GRASS")
            {
                storePixels[i] = new Color(0,255,0);
            }
        }
        tileTexture[y,x].SetData(storePixels);
    }
}
I can't see why this won't work. I assumed maybe I needed to reset storePixels, which I have tried, but it still does not turn green if it's grass or grey if it's dirt.
Please let me know if you know why this is not working. Thank you for your time, and thank you in advance :)
OK! I figured it out: I'm loading the same image from my content folder for every tile:
for(int y = 0; y < 100; y++)
{
    for(int x = 0; x < 100; x++)
    {
        tileTexture[y,x] = Content.Load<Texture2D>("tile");
    }
}
So now we are getting somewhere; we know why the problem is being caused. When I change the Texture2D through SetData(), it actually changes that Texture2D directly, and everything else that references it changes with it.
I can't reload it in a loop; I only load it at startup. I could try storing a copy of the Texture2D and then changing the copy.
I dunno, does anyone have any further solutions to help solve this problem? Thanks for your time :)
for(int y = 0; y < 100; y++)
{
    for(int x = 0; x < 100; x++)
    {
        spriteBatch.Draw(tileTexture[y,x], rectangle, Color.White);
    }
}
This is how I am drawing my grid.
That code is a bad idea...
You can work with only one white texture:
WhiteTex = new Texture2D(GraphicsDevice, 1, 1);
WhiteTex.SetData<Color>(new Color[] { Color.White });
You can draw the tiles this way:
enum MaterialTypes { Dirt, Grass }

static readonly Dictionary<int, Color> MaterialTypeToColor = new Dictionary<int, Color>()
{
    { (int) MaterialTypes.Dirt,  new Color(100, 100, 100) },
    { (int) MaterialTypes.Grass, new Color(0, 255, 0) }
};

// Assumes tileMaterial now holds MaterialTypes values rather than the "DIRT"/"GRASS" strings
for(int y = 0; y < 100; y++)
{
    for(int x = 0; x < 100; x++)
    {
        batch.Draw(whiteTex, new Rectangle(x * 50, y * 50, 50, 50), null, MaterialTypeToColor[(int) tileMaterial[y, x]]);
    }
}
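If you do want a separate, editable texture per tile instead (the "store a copy" idea from the question), a minimal sketch of cloning the loaded texture could look like this; CloneTexture is a hypothetical helper, and the GraphicsDevice reference and the "tile" asset name are taken from the question:
// Hypothetical helper: copy the pixel data of a loaded texture into a brand new
// Texture2D, so SetData() on the copy never touches the shared Content instance.
private Texture2D CloneTexture(GraphicsDevice device, Texture2D source)
{
    Color[] pixels = new Color[source.Width * source.Height];
    source.GetData(pixels);

    Texture2D copy = new Texture2D(device, source.Width, source.Height);
    copy.SetData(pixels);
    return copy;
}

// At startup: give every tile its own copy instead of the one shared "tile" texture.
Texture2D shared = Content.Load<Texture2D>("tile");
for(int y = 0; y < 100; y++)
{
    for(int x = 0; x < 100; x++)
    {
        tileTexture[y, x] = CloneTexture(GraphicsDevice, shared);
    }
}
Bear in mind that 100x100 texture copies use far more memory than the single tinted white texture above; the sketch only illustrates the copying idea.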

Pixel reordering is wrong when trying to process and display image copy with lower res

I'm currently making an application in Processing intended to take an image and apply an 8-bit style to it, that is, to make it look pixelated. To do this it has a method that takes a style and a window size as parameters (the style is the shape the window is drawn with: rect, ellipse, cross, etc., and the window size is a number between 1 and 10, squared), to produce results similar to the iPhone app pxl ( http://itunes.apple.com/us/app/pxl./id499620829?mt=8 ). This method then counts through the image's pixels window by window, averages the colour of each window, and displays a rect (or whichever shape/style was chosen) at the equivalent position on the other side of the sketch window (when run, the sketch is supposed to display the original image on the left, mirrored by the processed version on the right).
The problem I'm having is that when drawing the averaged-colour rects, the order in which they display becomes skewed.
Although the results are rather amusing, they are not what I want. Here's the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;

//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color [] c){
    // RGB Variables
    float r = 0;
    float g = 0;
    float b = 0;
    // Iterator
    int i = 0;
    int sizeOfWindow = c.length;
    // Count through the window's pixels, store the
    // red, green and blue values in the RGB variables
    // and sum them into the average variables
    for(i = 0; i < c.length; i++){
        r = red  (c[i]);
        g = green(c[i]);
        b = blue (c[i]);
        avR += r;
        avG += g;
        avB += b;
    }
    // Divide the sum of the red, green and blue
    // values by the number of pixels in the window
    // to obtain the average
    avR = avR / sizeOfWindow;
    avG = avG / sizeOfWindow;
    avB = avB / sizeOfWindow;
    // Return the colour
    return new sRGB(avR, avG, avB);
}

public void eightBitIT(int style, int windowSize){
    img.loadPixels();
    for(int wx = 0; wx < img.width; wx += (sqrt(windowSize))){
        for(int wy = 0; wy < img.height; wy += (sqrt(windowSize))){
            color [] tempCols = new color[windowSize];
            int i = 0;
            for(int x = 0; x < (sqrt(windowSize)); x++){
                for(int y = 0; y < (sqrt(windowSize)); y++){
                    int loc = (wx+x) + (y+wy)*(img.width-windowSize);
                    tempCols[i] = img.pixels[loc];
                    // println("Window loc X: "+(wx+(img.width+5))+" Window loc Y: "+(wy+5)+" Window pix X: "+x+" Window Pix Y: "+y);
                    i++;
                }
            }
            // this is meant to be in a switch test (0 = rect, 1 = ellipse etc)
            styleColour = new sRGB(averageWindowColour(tempCols));
            // println("R: "+ red(styleColour.returnColourScaled())+" G: "+green(styleColour.returnColourScaled())+" B: "+blue(styleColour.returnColourScaled()));
            rectMode(CORNER);
            noStroke();
            fill(styleColour.returnColourScaled());
            // println("Rect Loc X: "+(wx+(img.width+5))+" Y: "+(wy+5));
            ellipse(wx+(img.width+5), wy+5, sqrt(windowSize), sqrt(windowSize));
        }
    }
}

public PImage load(String s){
    PImage temp = loadImage(s);
    temp.resize(600, 470);
    return temp;
}

void setup(){
    background(0);
    // Load the image and set size of screen to its size*2 + the borders
    // and display the image.
    img = loadImage("oscilloscope.jpg");
    size(img.width*2+15, (img.height+10));
    frameRate(25);
    image(img, 5, 5);
    // Draw the borders
    strokeWeight(5);
    stroke(255);
    rectMode(CORNERS);
    noFill();
    rect(2.5, 2.5, img.width+3, height-3);
    rect(img.width+2.5, 2.5, width-3, height-3);
    stroke(255, 0, 0);
    strokeWeight(1);
    rect(5, 5, 9, 9); // window example
    // process the image
    eightBitIT(BLOCKS, 16);
}

void draw(){
    //eightBitIT(BLOCKS, 4);
    //println("X: "+mouseX+" Y: "+mouseY);
}
This has been bugging me for a while now, as I can't see where in my code I'm offsetting the coordinates so that they display like this. I know it's probably something very trivial, but I can't seem to work it out. If anyone can spot why this skewed reordering is happening, I would be much obliged, as I have quite a lot of other ideas I want to implement and this is holding me back...
Thanks,

How to join some objects in a digital image?

I'm looking for an algorithm to join objects, for example to combine an apple into a tree in a digital image, plus some demo in Matlab. Please show me some materials on that. Thanks for reading and helping me!
I'm not sure if I understand your question, but if you are looking to do some image overlapping, as Photoshop layers do, you can use some characteristic of the image to determine the degree of transparency.
For example, consider two RGB images. Image A will be overlapped by image B. To do that, we'll use image B's brightness to determine the transparency degree (255 = 100%).
Intensity = PixelB / 255;
NewPixel = (PixelB * (1 - Intensity)) + (PixelA * Intensity);
As the intensity is a percentage, and the two pixels are weighted by that percentage and its complement, the resulting sum can never overflow past 255 (the maximum grey level).
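For example, in a grayscale image with PixelA = 40 and PixelB = 200:

Intensity = 200 / 255 ≈ 0.78
NewPixel ≈ (200 * 0.22) + (40 * 0.78) ≈ 44 + 31 = 75

which stays comfortably inside the 0..255 range.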
int WidthA = imageA.Width * channels;
int WidthB = imageB.Width * channels;
int width  = Min(imageA.Width, imageB.Width) * channels;
int height = Min(imageA.Height, imageB.Height);

byte *ptrA;
byte *ptrB;

for (int y = 0; y < height; y++)
{
    // Point both pointers at the start of row y (the two images may have different row widths)
    ptrA = imageA.Buffer + (y * WidthA);
    ptrB = imageB.Buffer + (y * WidthB);

    for (int x = 0; x < width; x += channels, ptrA += channels, ptrB += channels)
    {
        // Take the intensity of the pixel. If RGB (channels = 3), intensity = (R+G+B) / 3.
        // If grayscale, the pixel value is the intensity itself.
        int avg = 0;
        for (int j = 0; j < channels; ++j)
        {
            avg += ptrB[j];
        }
        // Obtain the intensity as a value between 0..100%
        double intensity = (double)(avg / channels) / 255;

        for (int j = 0; j < channels; ++j)
        {
            // Write into image A the resulting pixel, obtained by multiplying image B's pixel
            // by (100% - intensity) plus image A's pixel multiplied by the intensity
            ptrA[j] = (byte) ((ptrB[j] * (1.0 - intensity)) + (intensity * ptrA[j]));
        }
    }
}
You can also change this algorithm to place the overlap at a different position; here I'm assuming that image B's coordinate (0, 0) lands on image A's coordinate (0, 0).
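For instance (a sketch of my own, assuming pixel offsets offsetX and offsetY, and that image B fits inside image A at that position), the row pointers in the loop above would become:

ptrA = imageA.Buffer + ((y + offsetY) * WidthA) + (offsetX * channels);
ptrB = imageB.Buffer + (y * WidthB);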
But once again, I'm not sure if this is what you are looking for.
