Processing: Make only some random pixels change colour

I have created some white noise, which I would like to decrease over time (changes starting after 2 seconds, intensifying after 10 seconds, etc.), slowly tending towards a black screen.
What I can't figure out is: how can I make only some random pixels (say, 50% of all pixels) change colour, while the rest stay black, within the same frame?
So far, I could only make ALL of them change randomly, or ALL of them stay black. Any help would be much appreciated, thank you!!
void setup() {
  size(1000, 800);
}

void draw() {
  if (millis() < 2000) {
    loadPixels();
    for (int i = 0; i < pixels.length; i++)
      pixels[i] = color(random(255));
    updatePixels();
  }
  if (millis() > 2000) {
    loadPixels();
    if (random(1) >= 0.5) {
      for (int i = 0; i < pixels.length; i++)
        pixels[i] = color(random(255));
      updatePixels();
    } else {
      loadPixels();
      for (int i = 0; i < pixels.length; i++)
        pixels[i] = color(0);
      updatePixels();
    }
  }
  if (millis() > 10000) {
    loadPixels();
    for (int i = 0; i < pixels.length; i++)
      pixels[i] = color(random(255));
    updatePixels();
  }
}
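For reference, a minimal sketch of the per-pixel idea: instead of one coin flip per frame, flip a coin for each pixel, so roughly 50% flicker while the rest stay black (the 0.5 threshold stands in for the "50%" from the question, and could be eased towards 0 over time):

void setup() {
  size(1000, 800);
}

void draw() {
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    // decide per pixel: ~50% get a random grey, the rest stay black
    if (random(1) < 0.5) {
      pixels[i] = color(random(255));
    } else {
      pixels[i] = color(0);
    }
  }
  updatePixels();
}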

A simple way would be to take into account that random() returns a random value within a range: if you give it a low maximum, you'll get a low random value.
If you use that value as a colour, the lower the value, the closer to black you are, which might work well in your case.
If you let the randomness go up to 255, you increase the chances of having bright pixels; otherwise (with low random values), pixels will be dark:
//noise image
PImage noise;
//amount of noise: 0 = 0%, 255 = 100%
int noiseAmt = 255;

void setup() {
  noise = createImage(width, height, RGB);
}

void draw() {
  //decrease noise over time
  noiseAmt--;
  if (noiseAmt < 0) noiseAmt = 255;
  //apply noise based on the noise amount
  noiseImage();
  //render image
  image(noise, 0, 0);
}

void noiseImage() {
  int numPixels = noise.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    //random(noiseAmt) is the key - low values = darker pixels
    noise.pixels[i] = color(random(noiseAmt));
  }
  noise.updatePixels();
}
To get the hang of this, here's a slightly modified version of the code which uses the UP/DOWN arrow keys to control the noise:
//noise image
PImage noise;
//amount of noise: 0 = 0%, 255 = 100%
int noiseAmt = 127;

void setup() {
  noise = createImage(width, height, RGB);
}

void draw() {
  //apply noise based on the noise amount
  noiseImage();
  //render image
  image(noise, 0, 0);
}

void noiseImage() {
  int numPixels = noise.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    //random(noiseAmt) is the key - low values = darker pixels
    noise.pixels[i] = color(random(noiseAmt));
  }
  noise.updatePixels();
}

void keyPressed() {
  if (keyCode == UP) noiseAmt += 5;
  if (keyCode == DOWN) noiseAmt -= 5;
  noiseAmt = constrain(noiseAmt, 0, 255);
  println("noiseAmt: " + noiseAmt);
}
Back to the matter of time, you can have a look at this answer, which covers tracking time using millis(). The only extra part is mapping the fade time to the noise amount, which would be some ratio. It might be easier if we map the time passed as a normalised value (from 0.0 to 1.0), which can easily scale to 0.0 to 255.0 simply by multiplying by 255:
//noise image
PImage noise;
//amount of noise: 0 = 0%, 255 = 100%
int noiseAmt = 255;
int timestamp, fadeTime = 10000; //fade to black in 10s

void setup() {
  noise = createImage(width, height, RGB);
  timestamp = millis();
}

void draw() {
  //decrease noise over time
  int now = millis();
  //if the difference between an initial timestamp and the current time is less than 10s
  if (now - timestamp <= fadeTime) {
    //compute the ratio between the time difference and the total fadeTime, which will be from 0.0 to 1.0;
    //subtract this from 1.0 to flip the ratio direction from 0.0 -> 1.0 to 1.0 -> 0.0
    float fadeRatio = 1.0 - ((float)(now - timestamp) / fadeTime);
    //this ratio multiplied by 255 gives the noise amount
    noiseAmt = (int)(fadeRatio * 255);
  }
  //apply noise based on the noise amount
  noiseImage();
  //render image
  image(noise, 0, 0);
}

void noiseImage() {
  int numPixels = noise.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    //random(noiseAmt) is the key - low values = darker pixels
    noise.pixels[i] = color(random(noiseAmt));
  }
  noise.updatePixels();
}
Processing has some nice functions for dealing with mapping and constraining number ranges:
//noise image
PImage noise;
//amount of noise: 0 = 0%, 255 = 100%
int noiseAmt = 255;
int timestamp, fadeTime = 10000; //fade to black in 10s

void setup() {
  noise = createImage(width, height, RGB);
  timestamp = millis();
}

void draw() {
  //decrease noise over time
  int now = millis();
  //map the elapsed time (0 to fadeTime) to the noise amount (255 down to 0)
  noiseAmt = (int)map(now - timestamp, 0, fadeTime, 255, 0);
  noiseAmt = constrain(noiseAmt, 0, 255);
  //apply noise based on the noise amount
  int numPixels = noise.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    //random(noiseAmt) is the key - low values = darker pixels
    noise.pixels[i] = color(random(noiseAmt));
  }
  noise.updatePixels();
  //render image
  image(noise, 0, 0);
}
Note the fadeTime is set to 10s (10000 milliseconds). Feel free to tinker with the fadeTime value.

Instead of posting all of your code in an external website, boil your problem down to an MCVE and include it directly in your question.
That being said, you have two options:
Option 1: Store all of your pixels in some kind of data structure. You might have a 2D array of MyPixel objects, where MyPixel is a class you create that contains all of the information you need to know which instances in that array to change the color of.
Option 2: Draw directly to a PImage. Then you can iterate through that PImage to find a non-black pixel and change it.
Which approach you take is entirely up to you. I'd personally choose the first option, but that's just my personal preference. Try one of these approaches, and post an MCVE when you get stuck. Note that this should be as few lines as possible while still demonstrating the problem, not your whole sketch - we don't need to see your timing logic, for example.
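For instance, a minimal sketch of option 1 might look like this (MyPixel is a hypothetical class here, holding just a colour and a 'flickering' flag; adapt its fields to whatever state you actually need):

// hypothetical per-pixel state class for option 1
class MyPixel {
  color c;
  boolean flickering; // whether this pixel changes colour every frame

  MyPixel(boolean flickering) {
    this.flickering = flickering;
    this.c = color(0);
  }
}

MyPixel[][] grid;

void setup() {
  size(1000, 800);
  // note: one object per pixel is memory-hungry at this resolution;
  // a coarser grid may serve just as well
  grid = new MyPixel[width][height];
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      // mark roughly half the pixels as flickering, the rest stay black
      grid[x][y] = new MyPixel(random(1) < 0.5);
    }
  }
}

void draw() {
  loadPixels();
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      MyPixel p = grid[x][y];
      if (p.flickering) p.c = color(random(255));
      pixels[y * width + x] = p.c;
    }
  }
  updatePixels();
}

Storing the state per pixel means the set of changing pixels stays fixed between frames, and you can shrink it over time (e.g. by flipping flickering flags off) to get the fade described in the question.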

Related

Using Processing for image visualization: pixel color thresholds

Here is the image to be manipulated; I'm hoping to identify each white dot in the picture with a counter.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
}

void draw() {
  loadPixels();
  blk.loadPixels();
  int i = 0;
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      int loc = x + y*width;
      pixels[loc] = blk.pixels[loc];
      if (blk.pixels[loc] == 0) {
        if (blk.pixels[loc]+1 != 0) {
          i++;
        }
      }
      float r = red(blk.pixels[loc]);
      float g = green(blk.pixels[loc]);
      float b = blue(blk.pixels[loc]);
      pixels[loc] = color(r, g, b);
    }
  }
  System.out.println(i);
  updatePixels();
}
The main problem is within my if statement; I'm not sure how to approach it logically.
I'm unsure where this is exactly going, but I can help you find the white pixels. Here, I just counted 7457 "white" pixels (then I turned them red so you can see where they are and adjust the threshold if you want to get more or less of them):
Of course, this is just a proof of concept which you should be able to adapt to your needs.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
  blk.loadPixels();
  int whitePixelsCount = 0;
  // I'm doing this in the 'setup()' method because I don't need to do it 60 times per second.
  // Once it's done once I can just use the image as modified, unless you want several
  // different versions (which you can calculate once anyway, then store in different PImages)
  for (int i = 0; i < blk.width * blk.height; i++) {
    float r = red(blk.pixels[i]);
    float g = green(blk.pixels[i]);
    float b = blue(blk.pixels[i]);
    // In RGB, the brightness of each color is represented by its intensity.
    // So here I'm checking the "average intensity" of the color to see how bright it is,
    // and I compare it to 100 since 255 is the max and I wanted this simple, but you can
    // play with this threshold as much as you like
    if ((r + g + b) / 3 > 100) {
      whitePixelsCount++;
      // Here I'm making those pixels red so you can see where they are.
      // It's easier to adjust the threshold if you can see what you're doing
      blk.pixels[i] = color(255, 0, 0);
    }
  }
  println(whitePixelsCount);
  blk.updatePixels();
}

void draw() {
  image(blk, 0, 0);
}
In short (you'll read this in the comments too), we count the pixels according to a threshold we can adjust. To make things more obvious for you, I colored the "white" pixels red. You can lower or raise the threshold according to what you see this way, and once you know what you want you can get rid of the color.
There is a difficulty here, which is that the image isn't "black and white" but more greyscale - which is totally normal, but makes things harder for what you seem to be trying to do. You'll probably have to tinker a lot to get to the exact ratio which interests you. It could help a lot if you edited the original image in GIMP or another image editor which lets you adjust contrast and brightness. It's kinda cheating, but if it doesn't work right off the bat this strategy could save you some work.
Have fun!

How to spread the audio spectrum into a grid

I'm trying to use Processing to take an audio input and create an audio spectrum that is broken into multiple rows and fits uniformly within the width of the sketch.
I want the ellipse to be spread out in a grid like fashion and also represent different parts of the spectrum.
import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
FFT fft;
AudioInput mic;

void setup()
{
  size(512, 512, P3D);
  minim = new Minim(this);
  mic = minim.getLineIn();
  fft = new FFT(mic.bufferSize(), mic.sampleRate());
}

void draw()
{
  background(0);
  stroke(255);
  fft.forward(mic.mix);
  for (int i = 0; i < fft.specSize(); i++)
  {
    float size = fft.getBand(i);
    float x = map(i, 0, fft.specSize(), 0, height);
    float y = i;
    ellipse(x, y, size, size);
  }
}
The fft data is a 1D signal and you want to visualise the data as a 2D grid.
If you know how many rows and columns you want your grid to have, you can use arithmetic to calculate the x and y grid location based on the index.
Let's say you have 100 elements and you want to display them in a 10x10 grid:
use the 1D array counter and modulo (%) the number of columns to calculate the 2D x index and divide (/) by the number of columns to calculate the 2D y index:
for (int i = 0; i < 100; i++) {
  println(i, i % 10, i / 10);
}
Here's a longer commented example:
// fft data placeholder
float[] values = new float[100];
// fill with 100 random values
for (int i = 0; i < values.length; i++) {
  values[i] = random(0.0, 1.0);
}
// how many rows/cols
int rows = 10;
int cols = 10;
// how large will a grid element be (including spacing)
float widthPerSquare = (width / cols);
// grid elements offset from top left
float offsetX = widthPerSquare * 0.5;
float offsetY = widthPerSquare * 0.5;

noStroke();
smooth();

println("i,gridX,gridY,value");
// traverse data
for (int i = 0; i < 100; i++) {
  // calculate x,y indices
  int gridX = i % rows;
  int gridY = i / rows;
  println(i + "," + gridX + "," + gridY + "," + values[i]);
  // calculate on screen x,y position based on grid element size
  float x = offsetX + (gridX * widthPerSquare);
  float y = offsetY + (gridY * widthPerSquare);
  // set the size to only be 75% of the grid element (to leave some spacing)
  float size = values[i] * widthPerSquare * 0.75;
  //fill(values[i] * 255);
  ellipse(x, y, size, size);
}
In your case, let's say fft.specSize() is around 512 and you want to draw a square grid, you could do something like this:
import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
FFT fft;
AudioInput mic;

int rows;
int cols;
float xSpacing;
float ySpacing;

void setup()
{
  size(512, 512, P3D);
  noStroke();
  minim = new Minim(this);
  mic = minim.getLineIn();
  fft = new FFT(mic.bufferSize(), mic.sampleRate());
  // define your own grid size or use an estimation based on square root of your FFT data
  rows = cols = (int)sqrt(fft.specSize());
  println(rows, rows * rows);
  xSpacing = width / cols;
  ySpacing = height / rows;
}

void draw()
{
  background(0);
  fft.forward(mic.mix);
  for (int i = 0; i < fft.specSize(); i++)
  {
    float size = fft.getBand(i) * 90;
    float x = (i % rows) * xSpacing;
    float y = (i / rows) * ySpacing;
    ellipse(x, y, size, size);
  }
}
Notice that the example isn't applying the offset and the grid is 22 x 22 (484 != 512),
but hopefully it will give you some ideas.
The other thing to bear in mind is the contents of that FFT array.
You might want to scale that logarithmically to account for how we perceive sound.
Check out Processing > Examples > Contributed Libraries > Minim > Analysis > SoundSpectrum and have a look at logAverages(). Playing with minBandwidth and bandsPerOctave might help you get a nicer visualisation.
If you want to go a bit deeper into visualisation, check out wakjah's excellent answer here and, if you have time, go through Dan Ellis' amazing Music Signal Computing course.
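As a rough sketch of the logarithmic-averaging idea (same Minim setup as above; the values 22 and 3 for minBandwidth and bandsPerOctave, and the * 4 scale factor, are just starting points to tinker with):

import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
FFT fft;
AudioInput mic;

void setup()
{
  size(512, 512, P3D);
  noStroke();
  minim = new Minim(this);
  mic = minim.getLineIn();
  fft = new FFT(mic.bufferSize(), mic.sampleRate());
  // group FFT bands into octave-based averages: fewer, perceptually spaced bands
  fft.logAverages(22, 3);
}

void draw()
{
  background(0);
  fft.forward(mic.mix);
  // one column per averaged band (a single row for simplicity;
  // the same modulo/divide trick applies if you want a grid)
  float bandWidth = (float)width / fft.avgSize();
  for (int i = 0; i < fft.avgSize(); i++)
  {
    float size = fft.getAvg(i) * 4;
    ellipse(i * bandWidth + bandWidth * 0.5, height * 0.5, size, size);
  }
}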

Extract Pixel Data in Processing

Using video capture in Processing, I want to understand how to set up a small section of the camera feed that the camera will constantly scan. Within that defined section, I want the camera to look for a change in brightness (i.e. the section becoming darker). If the brightness changes, I just want it to return 'shadow detected.' Can anyone help me get started? I am very new to this language.
You can easily get a small section of the camera (or any image) using PImage's get() method, to which you pass coordinates describing your section rectangle (x, y, width, height).
This is also known as a region of interest (ROI) in computer vision.
Once you retrieve this region, you can process it.
Here's a minimal example showing how to get the ROI and process it (in this case simply applying a threshold based on the mouse position):
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;

PImage roi;

void setup() {
  size(w, h);
  cam = new Capture(this, w, h);
  cam.start();
}

void draw() {
  image(cam, 0, 0);
  if (roi != null) {
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD, (float)mouseX/width);
    //display output
    image(roi, roiX, roiY);
  }
}

void captureEvent(Capture c) {
  c.read();
  roi = c.get(roiX, roiY, roiW, roiH);
}
You can get the brightness of a pixel using the brightness() function.
This means you can get the average brightness of your ROI by adding the brightness levels for each pixels, then dividing the result by the total number of pixels:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;

PImage roi;

void setup() {
  size(w, h);
  fill(127);
  cam = new Capture(this, w, h);
  cam.start();
}

void draw() {
  image(cam, 0, 0);
  if (roi != null) {
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD, (float)mouseX/width);
    //display output
    image(roi, roiX, roiY);
    text("ROI brightness:" + brightness(roi), 10, 15);
  }
}

void captureEvent(Capture c) {
  c.read();
  roi = c.get(roiX, roiY, roiW, roiH);
}

float brightness(PImage in) {
  float brightness = 0.0;
  int numPixels = in.pixels.length;
  for (int i = 0; i < numPixels; i++) brightness += brightness(in.pixels[i]);
  return brightness/numPixels;
}
If you've set your ROI to cover the bright area, you should see the average brightness go down as the shadow appears. Simply using a threshold value in a condition should allow to act on it:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;

PImage roi;

float brightness = 0.0;
float shadowThresh = 127.0;

void setup() {
  size(w, h);
  fill(127);
  cam = new Capture(this, w, h);
  cam.start();
}

void draw() {
  image(cam, 0, 0);
  if (roi != null) {
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD, (float)mouseX/width);
    brightness = brightness(roi);
    if (brightness < shadowThresh) println("shadow detected");
    //display output
    image(roi, roiX, roiY);
    text("ROI brightness:" + brightness, 10, 15);
  }
}

void captureEvent(Capture c) {
  c.read();
  roi = c.get(roiX, roiY, roiW, roiH);
}

float brightness(PImage in) {
  float brightness = 0.0;
  int numPixels = in.pixels.length;
  for (int i = 0; i < numPixels; i++) brightness += brightness(in.pixels[i]);
  return brightness/numPixels;
}
Hopefully these examples are easy to read and understand.
Note that these examples aren't as fast as they could be.
Be sure to also check out the video examples that come with Processing (Examples > Libraries > video > Capture), especially these: BrightnessThresholding, BrightnessTracking.
If you want to learn more about techniques like these you should look into computer vision and the OpenCV library. There is a very nice OpenCV Processing library which you can now easily install via Sketch > Import Library... > Add Library... and select OpenCV for Processing. It also comes with examples on using brightness.
This covers the pixel manipulation side, but another important aspect of doing this sort of development is setup. It's crucial to have a reliable setup: it will make your life easier. What I mean by that is, in your case:
having control over the camera: being able to control auto white balance/brightness/etc. as automatic adjustments may throw off your values.
having control over the scene: making sure you reduce the risk of accidental lights messing with your tracking, or of something bumping into the camera or the object you're tracking.
Assuming the camera data you are analysing is a PImage, you can apply filters to the data to get it into a Black/White or grey scale form. The docs on PImage Filter modes: https://processing.org/reference/filter_.html should be useful.
You will probably have to do a pixel analysis - there may be a library to help here, but you can get an array of pixels from the filtered PImage, loop through it, and check the values against your baseline values to see if they are brighter or darker. If they are greyscale on the 0 - 255 scale, you can tell a pixel is lighter if the number is higher than the baseline, or darker if the number is lower.
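As a minimal sketch of that comparison (the helper name is illustrative, and baseline stands in for a reference frame you captured earlier, e.g. via get()):

// counts how many pixels of the current frame are darker than the baseline;
// both images are assumed to be the same size and already GRAY-filtered
int countDarkerPixels(PImage baseline, PImage current) {
  baseline.loadPixels();
  current.loadPixels();
  int darker = 0;
  for (int i = 0; i < current.pixels.length; i++) {
    // brightness() returns the 0-255 grey level of a pixel
    if (brightness(current.pixels[i]) < brightness(baseline.pixels[i])) {
      darker++;
    }
  }
  return darker;
}

If the returned count exceeds some fraction of the total pixel count, you could report 'shadow detected'.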

saveFrame can't keep up with frameRate in processing

I am using saveFrame to create an image sequence to bring into After Effects. At each loop, I'm upping the frameRate - which I'm sure is not the best way to go about things. At the end of each loop, I'm saving the frame, but saveFrame can't keep up with the progressively higher frameRate I'm trying to save at. Anyone have an idea how to achieve the effect I'm going for without upping the frameRate, so that saveFrame can keep up? Here's my code:
```
int w = 640; // canvas size
int h = 480;
int n = 10; // number of grid cells
int d = w/n; // diameter of a grid cell
float depth = 0.5; // relative cell depth
int fr = 100;
int iterator = 0;
boolean doSaveFrames = false;

void setup() {
  size(w, h, P3D);
  rectMode(CENTER);
  background(0);
  fill(51, 255, 0);
  noStroke();
  frameRate(fr);
}

void draw() {
  // get coordinates
  int xy = frameCount % (n*n);
  // shift image in z-direction
  if (xy == 0) {
    PImage img = get();
    background(0);
    pushMatrix();
    translate(0, 0, -d * depth);
    tint(255, 127);
    image(img, 0, 0);
    popMatrix();
    // fr += iterator*10;
    // frameRate(fr); //MH - really cool but I can't export fast enough
    iterator++;
  }
  // scale and rotate the square
  scale(d);
  translate(xy%n + .5, xy/n + .5, -depth * .5);
  rotate(QUARTER_PI - HALF_PI * int(random(2)));
  rotateX(HALF_PI);
  // draw the square
  rect(0, 0, sqrt(2), depth);
  if (doSaveFrames) {
    saveFrame("frames/line-######.tga");
  }
}
```
Instead of basing your animation off of the frameCount variable, create your own variable that you increment at your own speed, and then increase that speed over time. Keep your framerate the same, but increase your animation speed.
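A minimal sketch of that idea (animSpeed and the placeholder rectangle are illustrative; slot your own drawing code in where indicated):

int n = 10;              // grid cells, as in the original sketch
float animCounter = 0;   // replaces frameCount as the animation clock
float animSpeed = 1.0;   // animation steps advanced per drawn frame

void setup() {
  size(640, 480, P3D);
  frameRate(30); // constant frame rate, so saveFrame() can keep up
}

void draw() {
  background(0);
  // advance the animation by the current speed, then accelerate slightly
  animCounter += animSpeed;
  animSpeed *= 1.01;
  int xy = (int)animCounter % (n * n);
  // placeholder for the original drawing code, driven by xy instead of frameCount
  fill(51, 255, 0);
  rect((xy % n) * width / n, (xy / n) * height / n, width / n, height / n);
  saveFrame("frames/line-######.tga");
}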

Pixel reordering is wrong when trying to process and display image copy with lower res

I'm currently making an application in Processing intended to take an image and apply 8-bit style processing to it, that is, to make it look pixelated. To do this it has a method that takes a style and window size as parameters (style is the shape in which the window is to be displayed - rect, ellipse, cross, etc. - and window size is a number between 1-10, squared) - to produce results similar to the iPhone app pxl (http://itunes.apple.com/us/app/pxl./id499620829?mt=8). This method then counts through the image's pixels window by window, averages the colour of each window, and displays a rect (or whichever shape/style was chosen) at the equivalent spot on the other side of the sketch window (when run, the sketch is supposed to display the original image on the left and mirror it with the processed version on the right).
The problem I'm having is that when drawing the averaged-colour rects, the order in which they display becomes skewed.
Although the results are rather amusing, they are not what I want. Here's the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;

//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color[] c) {
  // RGB Variables
  float r = 0;
  float g = 0;
  float b = 0;
  // Iterator
  int i = 0;
  int sizeOfWindow = c.length;
  // Count through the window's pixels, store the
  // red, green and blue values in the RGB variables
  // and sum them into the average variables
  for (i = 0; i < c.length; i++) {
    r = red(c[i]);
    g = green(c[i]);
    b = blue(c[i]);
    avR += r;
    avG += g;
    avB += b;
  }
  // Divide the sum of the red, green and blue
  // values by the number of pixels in the window
  // to obtain the average
  avR = avR / sizeOfWindow;
  avG = avG / sizeOfWindow;
  avB = avB / sizeOfWindow;
  // Return the colour
  return new sRGB(avR, avG, avB);
}

public void eightBitIT(int style, int windowSize) {
  img.loadPixels();
  for (int wx = 0; wx < img.width; wx += (sqrt(windowSize))) {
    for (int wy = 0; wy < img.height; wy += (sqrt(windowSize))) {
      color[] tempCols = new color[windowSize];
      int i = 0;
      for (int x = 0; x < (sqrt(windowSize)); x++) {
        for (int y = 0; y < (sqrt(windowSize)); y++) {
          int loc = (wx+x) + (y+wy)*(img.width-windowSize);
          tempCols[i] = img.pixels[loc];
          // println("Window loc X: "+(wx+(img.width+5))+" Window loc Y: "+(wy+5)+" Window pix X: "+x+" Window Pix Y: "+y);
          i++;
        }
      }
      //this is meant to be in a switch test (0 = rect, 1 ellipse etc)
      styleColour = new sRGB(averageWindowColour(tempCols));
      //println("R: "+ red(styleColour.returnColourScaled())+" G: "+green(styleColour.returnColourScaled())+" B: "+blue(styleColour.returnColourScaled()));
      rectMode(CORNER);
      noStroke();
      fill(styleColour.returnColourScaled());
      //println("Rect Loc X: "+(wx+(img.width+5))+" Y: "+(wy+5));
      ellipse(wx+(img.width+5), wy+5, sqrt(windowSize), sqrt(windowSize));
    }
  }
}

public PImage load(String s) {
  PImage temp = loadImage(s);
  temp.resize(600, 470);
  return temp;
}

void setup() {
  background(0);
  // Load the image and set size of screen to its size*2 + the borders,
  // and display the image.
  img = loadImage("oscilloscope.jpg");
  size(img.width*2+15, (img.height+10));
  frameRate(25);
  image(img, 5, 5);
  // Draw the borders
  strokeWeight(5);
  stroke(255);
  rectMode(CORNERS);
  noFill();
  rect(2.5, 2.5, img.width+3, height-3);
  rect(img.width+2.5, 2.5, width-3, height-3);
  stroke(255, 0, 0);
  strokeWeight(1);
  rect(5, 5, 9, 9); //window example
  // process the image
  eightBitIT(BLOCKS, 16);
}

void draw() {
  //eightBitIT(BLOCKS, 4);
  //println("X: "+mouseX+" Y: "+mouseY);
}
This has been bugging me for a while now, as I can't see where in my code I'm offsetting the coordinates so they display like this. I know it's probably something very trivial, but I can't seem to work it out. If anyone can spot why this skewed reordering is happening, I would be much obliged, as I have quite a lot of other ideas I want to implement and this is holding me back...
Thanks,
