Count the number of black pixels using ByteBuffer (javacv)

I have used the code below. I am new to javacv, and I need to read the pixels one by one in a region and get the colour of each pixel. Can I please know how to do this using a ByteBuffer? A ByteBuffer can be read pixel by pixel, and I need to check whether each pixel is black or white.
Can anyone please help with this? I am really stuck here.
IplImage img = cvLoadImage("img\\ROI.jpg");
CvScalar black = cvScalar(0, 0, 0, 0);
CvScalar white = cvScalar(255, 255, 255, 255);
ByteBuffer buffer = img.getByteBuffer();

for (int y = 0; y < img.height(); y++) {
    for (int x = 0; x < img.width(); x++) {
        int index = y * img.widthStep() + x * img.nChannels();
        // Read the pixel value - the 0xFF mask is needed to convert
        // the signed byte to an unsigned int.
        int value = buffer.get(index) & 0xFF;
        // Write the value back (greyscale).
        buffer.put(index, (byte) value);
    }
}
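The index arithmetic above is enough to count black pixels. Here is a minimal sketch of the counting loop, written against a plain byte[] in place of the javacv ByteBuffer so it is self-contained; widthStep and nChannels mirror IplImage's BGR layout, and the helper name countBlack is just illustrative:

```java
public class BlackPixelCount {
    // Count pixels whose B, G and R bytes are all 0 (pure black).
    static int countBlack(byte[] data, int width, int height,
                          int widthStep, int nChannels) {
        int count = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int index = y * widthStep + x * nChannels;
                // 0xFF masks each signed byte into an unsigned 0..255 int.
                int b = data[index] & 0xFF;
                int g = data[index + 1] & 0xFF;
                int r = data[index + 2] & 0xFF;
                if (b == 0 && g == 0 && r == 0) count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 2x2 BGR image with no row padding: one black pixel, three white.
        byte[] img = {
            0, 0, 0, (byte) 255, (byte) 255, (byte) 255,
            (byte) 255, (byte) 255, (byte) 255,
            (byte) 255, (byte) 255, (byte) 255
        };
        System.out.println(countBlack(img, 2, 2, 6, 3)); // prints 1
    }
}
```

With a real IplImage the same loop body works on img.getByteBuffer(), using img.widthStep() and img.nChannels() for the two strides.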

Related

How to slow down random color generator in Processing?

Hi everyone – I want to draw a grid of rectangles, each filled with a random colour picked from an array. I can get it working the way I want, but the random selection changes far too quickly.
I tried to slow everything down with frameRate(), but that slows down the whole animation (a problem if, for example, I want to add something else). Then I tried slowing it down with if (frameCount % 20 == 0) {…}, but that doesn't keep the drawn grid on screen – it only appears for a single frame every 20 frames. Does anyone have an idea how I could slow down this, let's call it "colour noise"? Thank you for any kind of help!
float size = 20;
color cbw = color(0, 0, 0);       // black
color cg  = color(0, 255, 0);     // green
color cb  = color(0, 0, 255);     // blue
color cw  = color(255, 255, 255); // white
color[] colors = { cbw, cg, cb, cw }; // the colours to pick from randomly

void setup() {
  size(1080, 1080);
}

void draw() {
  background(255);
  for (int x = 0; x < width/size; x++) {
    for (int y = 0; y < height/size; y++) {
      color c1 = colors[int(random(0, 4))]; // assign a random colour
      fill(c1);
      noStroke();
      rect(size*x, size*y, size, size);
    }
  }
}
You're on the right track with frameCount % 20 (alternatively, you can use millis()).
The main issue is that the colour selection is tightly coupled with the rectangle drawing.
In plain English: currently you can only select random colours and render at the same time, not select colours and render independently (e.g. at different times).
One option is to use an array that stores the colour of every rectangle, which you can use twice:
to write values to: when picking random colours
to read values from: when rendering the rectangles
Here's a modified version of your sketch that illustrates the idea above:
float size = 20;
color cbw = color(0, 0, 0);       // black
color cg  = color(0, 255, 0);     // green
color cb  = color(0, 0, 255);     // blue
color cw  = color(255, 255, 255); // white
color[] colors = { cbw, cg, cb, cw }; // the colours to pick from randomly
// all colours for each rect
color[][] rectColors;

void setup() {
  size(1080, 1080);
  // allocate individual rect colours
  rectColors = new color[width/(int)size][height/(int)size];
}

void draw() {
  background(255);
  if (frameCount % 20 == 0) {
    // randomize colours
    int numColors = colors.length;
    for (int x = 0; x < width/size; x++) {
      for (int y = 0; y < height/size; y++) {
        rectColors[x][y] = colors[int(random(0, numColors))];
      }
    }
  }
  for (int x = 0; x < width/size; x++) {
    for (int y = 0; y < height/size; y++) {
      color c1 = rectColors[x][y]; // read the stored colour
      fill(c1);
      noStroke();
      rect(size*x, size*y, size, size);
    }
  }
}
Personally, I would do a few extra things to make this easier to read and potentially re-use in other sketches:
change float size = 20; to int size = 20;, assuming you want the grid cells to land on whole pixels. This removes the need to cast (e.g. width/(int)size)
cache/store data that is re-used often (such as the grid's rows and columns)
encapsulate the loops that randomize colours and render rectangles in separate functions, even something as simple as functions that take no arguments and return no values (much like void setup(), for example)
Here is what that could look like:
int size = 20;
color cbw = color(0, 0, 0);       // black
color cg  = color(0, 255, 0);     // green
color cb  = color(0, 0, 255);     // blue
color cw  = color(255, 255, 255); // white
color[] colors = { cbw, cg, cb, cw }; // the colours to pick from randomly
// all colours for each rect
color[][] rectColors;
// grid dimensions
int cols;
int rows;

void setup() {
  size(1080, 1080);
  // compute grid dimensions
  cols = width / size;
  rows = height / size;
  // allocate individual rect colours
  rectColors = new color[cols][rows];
  // initialise the colours once
  randomizeColors();
}

// pick a random colour for every grid cell
void randomizeColors() {
  // read the array length, avoiding the previously hardcoded value (4)
  int numColors = colors.length;
  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      rectColors[x][y] = colors[int(random(0, numColors))];
    }
  }
}

void drawRectangles() {
  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      color c1 = rectColors[x][y]; // read the stored colour
      fill(c1);
      noStroke();
      rect(size * x, size * y, size, size);
    }
  }
}

void draw() {
  background(255);
  if (frameCount % 20 == 0) {
    randomizeColors();
  }
  drawRectangles();
}
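The millis() alternative mentioned above works the same way: update only when a fixed interval has elapsed, while still drawing every frame. A minimal sketch of that timer logic in plain Java (the class and method names are just illustrative; inside a Processing sketch, 'now' would be millis()):

```java
public class UpdateTimer {
    long lastUpdate = 0;
    final long intervalMs;

    UpdateTimer(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    // Returns true (and rearms) only when the interval has elapsed.
    boolean shouldUpdate(long now) {
        if (now - lastUpdate >= intervalMs) {
            lastUpdate = now;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        UpdateTimer t = new UpdateTimer(500);
        System.out.println(t.shouldUpdate(500));  // true: 500 ms elapsed
        System.out.println(t.shouldUpdate(700));  // false: only 200 ms since last update
        System.out.println(t.shouldUpdate(1000)); // true: another 500 ms elapsed
    }
}
```

In draw() you would then call something like if (timer.shouldUpdate(millis())) randomizeColors();, which decouples the update rate from the frame rate entirely.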

Can't isolate pixels from av_frame_copy_to_buffer

I'm trying to pull the YUV pixel data out of an AVFrame, modify the pixels, and hand it back to FFmpeg.
I'm currently using this to retrieve the YUV buffer:
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(base->format);
int baseSize = av_image_get_buffer_size(base->format, base->width, base->height, 32);
uint8_t *baseBuffer = (uint8_t*)malloc(baseSize);
av_image_copy_to_buffer(baseBuffer, baseSize, base->data, base->linesize, base->format, base->width, base->height, 32);
But I can't seem to correctly target pixels in that buffer. From the source code, the planes appear to be stacked on top of each other, which led me to try this:
int width = base->width;
int height = base->height;
int chroma2h = desc->log2_chroma_h;
int linesizeY = base->linesize[0];
int linesizeU = base->linesize[1];
int linesizeV = base->linesize[2];
int chromaHeight = (height + (1 << chroma2h) - 1) >> chroma2h;
int x = 100;
int y = 100;
uint8_t *vY = baseBuffer;
uint8_t *vU = baseBuffer + (linesizeY * height);
uint8_t *vV = baseBuffer + (linesizeY * height) + (linesizeU * chromaHeight);
vY += x + (y * linesizeY);
vU += x + (y * linesizeU);
vV += x + (y * linesizeV);
Using that, if I try to modify pixels in the range (300,300)-(400,400), I get a small box darker than the rest of the video, along with horizontal stripes of darkness across the video. The original colour is still there, so I think I'm still touching the Y plane with all three pointers.
How can I actually hit the pixels I want to hit?
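One likely source of the stripes: the pointer math above uses base->linesize, but av_image_copy_to_buffer with align = 32 writes each plane with its linesize rounded up to the alignment, so the row strides and plane offsets inside baseBuffer may differ from the frame's own linesizes. Below is a sketch of that offset arithmetic (in Java purely for illustration), assuming 8-bit YUV420P, i.e. log2_chroma_w == log2_chroma_h == 1; verify the aligned-stride assumption against your FFmpeg version:

```java
public class PlaneOffsets {
    // Round v up to the next multiple of align.
    static int alignUp(int v, int align) {
        return (v + align - 1) / align * align;
    }

    // Returns {offsetY, offsetU, offsetV} into the packed buffer for
    // pixel (x, y), assuming 4:2:0 subsampling and byte-sized samples.
    static int[] offsets(int width, int height, int align, int x, int y) {
        int strideY = alignUp(width, align);       // aligned luma stride
        int strideC = alignUp(width >> 1, align);  // aligned chroma stride
        int chromaHeight = (height + 1) >> 1;
        int planeU = strideY * height;                 // U starts after Y
        int planeV = planeU + strideC * chromaHeight;  // V starts after U
        int offY = y * strideY + x;
        int offU = planeU + (y >> 1) * strideC + (x >> 1);
        int offV = planeV + (y >> 1) * strideC + (x >> 1);
        return new int[]{offY, offU, offV};
    }

    public static void main(String[] args) {
        int[] o = offsets(1920, 1080, 32, 100, 100);
        System.out.println(o[0] + " " + o[1] + " " + o[2]);
    }
}
```

Note that the chroma coordinates are halved (x >> 1, y >> 1): touching U and V at the full-resolution (x, y) is another way to land outside the intended pixel. A simpler route, if a copy isn't required, is to edit the planes in place through base->data[0..2] using base->linesize directly.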

Find the white/black pixels in a specific region (javacv)

I have tried the code below. 540 is the left-most x value of the box, 3 is the top-most y value, 262 is the width, and 23 is the height of the region over which I want to calculate the white/black pixel ratio. What I really want to do is find the ratio of white to black pixels in a specific region. I calculated the coordinates for each cell (the regions I specify) and tried this code, but the count comes out wrong.
Could I please get some advice on this issue?
I am really stuck here with my final year project.
CvSize cvSize = cvSize(img.width(), img.height());
IplImage image = cvCreateImage(cvSize, IPL_DEPTH_8U, 1);
IplImage image2 = cvCreateImage(cvSize, IPL_DEPTH_8U, 3);
cvCvtColor(image2, image, CV_RGB2GRAY);
cvSetImageROI(image2, cvRect(540, 3, 262, 23));

CvLineIterator iterator = new CvLineIterator();
double sum = 0;
CvPoint p2 = new CvPoint(802, 3);
CvPoint p1 = new CvPoint(540, 26);
int lineCount = cvInitLineIterator(image2, p1, p2, iterator, 8, 0);
for (int i = 0; i < lineCount; i++) {
    sum += iterator.ptr().get() & 0xFF;
}
System.out.println("sum................" + sum);
CV_NEXT_LINE_POINT(iterator);
It gave the result sum................0.0.
I am really stuck with this; can anyone please suggest a solution?
Move CV_NEXT_LINE_POINT(iterator); line inside the for loop. Then it should work.
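For the ratio itself, once the image is thresholded, the counting is plain index arithmetic over the region. A self-contained sketch against a byte[] standing in for the grayscale IplImage data (the rectangle and threshold values here are illustrative, not the ones from the question):

```java
public class RoiRatio {
    // Count pixels >= threshold (white) and < threshold (black) inside
    // the rectangle (rx, ry, rw, rh); returns {white, black}.
    static int[] whiteBlack(byte[] gray, int widthStep,
                            int rx, int ry, int rw, int rh, int threshold) {
        int white = 0, black = 0;
        for (int y = ry; y < ry + rh; y++) {
            for (int x = rx; x < rx + rw; x++) {
                int v = gray[y * widthStep + x] & 0xFF; // unsigned value
                if (v >= threshold) white++; else black++;
            }
        }
        return new int[]{white, black};
    }

    public static void main(String[] args) {
        // 4x4 grayscale image: top half white (255), bottom half black (0).
        byte[] img = new byte[16];
        for (int i = 0; i < 8; i++) img[i] = (byte) 255;
        int[] wb = whiteBlack(img, 4, 0, 0, 4, 4, 128);
        System.out.println(wb[0] + "/" + wb[1]); // prints 8/8
    }
}
```

With javacv, the same loop runs over image.getByteBuffer() using image.widthStep() as the stride, after cvThreshold has binarised the grayscale image.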

Pixel reordering is wrong when trying to process and display image copy with lower res

I'm currently making an application in Processing that takes an image and applies an 8-bit style to it, i.e. makes it look pixelated. It has a method that takes a style and a window size as parameters (the style is the shape the window is drawn as – rect, ellipse, cross, etc. – and the window size is a number between 1 and 10, squared), to produce results similar to the iPhone app pxl ( http://itunes.apple.com/us/app/pxl./id499620829?mt=8 ). The method steps through the image's pixels window by window, averages the colour of each window, and draws a rect (or whichever shape was chosen) at the equivalent position on the other side of the sketch window (when run, the sketch should display the original image on the left and mirror it with the processed version on the right).
The problem I'm having is that when the averaged-colour rects are drawn, the order in which they appear is skewed.
Although the results are rather amusing, they are not what I want. Here is the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;

//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color[] c) {
  // RGB variables
  float r = 0;
  float g = 0;
  float b = 0;
  int sizeOfWindow = c.length;
  // Count through the window's pixels, read the red, green and blue
  // values and sum them into the average variables
  for (int i = 0; i < c.length; i++) {
    r = red(c[i]);
    g = green(c[i]);
    b = blue(c[i]);
    avR += r;
    avG += g;
    avB += b;
  }
  // Divide the sums of the red, green and blue values by the number
  // of pixels in the window to obtain the averages
  avR = avR / sizeOfWindow;
  avG = avG / sizeOfWindow;
  avB = avB / sizeOfWindow;
  // Return the colour
  return new sRGB(avR, avG, avB);
}

public void eightBitIT(int style, int windowSize) {
  img.loadPixels();
  for (int wx = 0; wx < img.width; wx += sqrt(windowSize)) {
    for (int wy = 0; wy < img.height; wy += sqrt(windowSize)) {
      color[] tempCols = new color[windowSize];
      int i = 0;
      for (int x = 0; x < sqrt(windowSize); x++) {
        for (int y = 0; y < sqrt(windowSize); y++) {
          int loc = (wx+x) + (y+wy)*(img.width-windowSize);
          tempCols[i] = img.pixels[loc];
          i++;
        }
      }
      // this is meant to be in a switch statement (0 = rect, 1 = ellipse etc.)
      styleColour = new sRGB(averageWindowColour(tempCols));
      rectMode(CORNER);
      noStroke();
      fill(styleColour.returnColourScaled());
      ellipse(wx+(img.width+5), wy+5, sqrt(windowSize), sqrt(windowSize));
    }
  }
}

public PImage load(String s) {
  PImage temp = loadImage(s);
  temp.resize(600, 470);
  return temp;
}

void setup() {
  background(0);
  // Load the image, set the sketch size to twice its width plus
  // borders, and display it.
  img = loadImage("oscilloscope.jpg");
  size(img.width*2+15, img.height+10);
  frameRate(25);
  image(img, 5, 5);
  // Draw the borders
  strokeWeight(5);
  stroke(255);
  rectMode(CORNERS);
  noFill();
  rect(2.5, 2.5, img.width+3, height-3);
  rect(img.width+2.5, 2.5, width-3, height-3);
  stroke(255, 0, 0);
  strokeWeight(1);
  rect(5, 5, 9, 9); // window example
  // Process the image
  eightBitIT(BLOCKS, 16);
}

void draw() {
  //eightBitIT(BLOCKS, 4);
}
This has been bugging me for a while, as I can't see where in my code I'm offsetting the coordinates so that they display like this. It's probably something very trivial, but I can't seem to work it out. If anyone can spot why this skewed reordering is happening I'd be much obliged, as I have quite a few other ideas I want to implement and this is holding me back.
Thanks,
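One thing worth checking is the stride used to compute loc in eightBitIT(): for a row-major pixels[] array, the row stride must be the full image width, and the code above uses (img.width - windowSize) instead. A minimal sketch of the difference (the 600 matches the resized width in load() and the 16 matches the windowSize passed in setup(); both are illustrative):

```java
public class WindowIndex {
    // Row-major index of pixel (x, y) in an image of the given width.
    static int loc(int x, int y, int imageWidth) {
        return x + y * imageWidth;
    }

    public static void main(String[] args) {
        int width = 600;
        // Pixel (10, 2) in a 600-pixel-wide image:
        System.out.println(loc(10, 2, width));     // 1210
        // With a stride of width - 16, each row read drifts 16 pixels
        // further left, shearing the sampled windows diagonally:
        System.out.println(10 + 2 * (width - 16)); // 1178
    }
}
```

Any stride other than the image's own width makes each successive row of the window come from a horizontally shifted position, which is exactly the kind of diagonal skew described.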

How to join objects in a digital image?

I'm looking for an algorithm to join objects – for example, combining an apple into a tree in a digital image – and a demo in Matlab. Please point me to some materials on this. Thanks for reading and helping me!
I'm not sure I understand your question, but if you are looking to do image overlapping, as Photoshop layers do, you can use some characteristic of the image to determine the degree of transparency.
For example, consider two RGB images, where image A will be overlapped by image B. We'll use image B's brightness to determine the degree of transparency (255 = 100% transparent):
Intensity = PixelB / 255;
NewPixel  = (PixelA * Intensity) + (PixelB * (1 - Intensity));
Since the intensity is a percentage and each pixel is multiplied by the complement of that percentage, the resulting sum can never overflow past 255 (the maximum grey level).
int WidthA = imageA.Width * channels;
int WidthB = imageB.Width * channels;
int width  = Min(imageA.Width, imageB.Width) * channels;
int height = Min(imageA.Height, imageB.Height);
byte *ptrA = imageA.Buffer;
byte *ptrB = imageB.Buffer;

for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x += channels, ptrA += channels, ptrB += channels)
    {
        // Take the intensity of the pixel. If RGB (channels = 3),
        // intensity = (R + G + B) / 3. If grayscale, the pixel value
        // is the intensity itself.
        int avg = 0;
        for (int j = 0; j < channels; ++j)
        {
            avg += ptrB[j];
        }
        // Obtain the intensity as a value between 0..100%
        double intensity = (double)(avg / channels) / 255;
        for (int j = 0; j < channels; ++j)
        {
            // Write into image A the blend of image B's pixel weighted by
            // (100% - intensity) and image A's pixel weighted by intensity
            ptrA[j] = (byte)((ptrB[j] * (1.0 - intensity)) + (intensity * ptrA[j]));
        }
    }
    // Jump to the start of the next row in each image (their strides may differ)
    ptrA = imageA.Buffer + ((y + 1) * WidthA);
    ptrB = imageB.Buffer + ((y + 1) * WidthB);
}
You can also change this algorithm to overlap image A over B, or to place the overlay at a different position. I'm assuming here that image B's coordinate (0, 0) will overlap image A's coordinate (0, 0).
But once again, I'm not sure if this is what you are looking for.
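The per-pixel formula is easy to sanity-check in isolation. A self-contained single-channel sketch (Java here purely for illustration), where B's brightness sets its own transparency, so a white B pixel vanishes and a black B pixel fully replaces A:

```java
public class BrightnessBlend {
    // Blend one grayscale pixel of B over A: B's brightness is its
    // transparency (255 = fully transparent, 0 = fully opaque).
    static int blend(int a, int b) {
        double intensity = b / 255.0;
        return (int) (a * intensity + b * (1.0 - intensity));
    }

    public static void main(String[] args) {
        System.out.println(blend(100, 255)); // 100: white B is fully transparent, A shows
        System.out.println(blend(100, 0));   // 0: black B is fully opaque, B shows
        System.out.println(blend(200, 128)); // mid-grey B: a weighted mix
    }
}
```

Because each term is weighted by intensity and its complement, the result stays within 0..255 for any inputs in that range, matching the overflow argument above.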
