Extract Pixel Data in Processing

Using capture video in Processing, I want to understand how to set up a small section of the camera feed that the camera will constantly scan. Within that defined section, I want the camera to look for a change in brightness (i.e. the brightness becomes dark). If the brightness changes, I just want it to return 'shadow detected.' Can anyone help me get started? I am very new to this language.

You can easily get a small section of the camera feed (or any image) using PImage's get() method, to which you pass the coordinates describing your section rectangle (x, y, width, height).
This is also known as a region of interest (ROI) in computer vision.
Once you retrieve this region, you can process it.
Here's a minimal example showing how to get the ROI and process it (in this case simply applying a threshold based on the mouse position):
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;
void setup(){
size(w,h);
cam = new Capture(this,w,h);
cam.start();
}
void draw(){
image(cam,0,0);
if(roi != null){
//process ROI
// roi.filter(GRAY);
roi.filter(THRESHOLD,(float)mouseX/width);
//display output
image(roi,roiX,roiY);
}
}
void captureEvent(Capture c){
c.read();
roi = c.get(roiX,roiY,roiW,roiH);
}
You can get the brightness of a pixel using the brightness() function.
This means you can get the average brightness of your ROI by adding the brightness levels for each pixel, then dividing the result by the total number of pixels:
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;
void setup(){
size(w,h);fill(127);
cam = new Capture(this,w,h);
cam.start();
}
void draw(){
image(cam,0,0);
if(roi != null){
//process ROI
// roi.filter(GRAY);
roi.filter(THRESHOLD,(float)mouseX/width);
//display output
image(roi,roiX,roiY);
text("ROI brightness:"+brightness(roi),10,15);
}
}
void captureEvent(Capture c){
c.read();
roi = c.get(roiX,roiY,roiW,roiH);
}
float brightness(PImage in){
float brightness = 0.0;
int numPixels = in.pixels.length;
for(int i = 0 ; i < numPixels; i++) brightness += brightness(in.pixels[i]);
return brightness/numPixels;
}
If you've set your ROI to cover the bright area, you should see the average brightness go down as the shadow appears. Simply using a threshold value in a condition should allow you to act on it:
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;
float brightness = 0.0;
float shadowThresh = 127.0;
void setup(){
size(w,h);fill(127);
cam = new Capture(this,w,h);
cam.start();
}
void draw(){
image(cam,0,0);
if(roi != null){
//process ROI
// roi.filter(GRAY);
roi.filter(THRESHOLD,(float)mouseX/width);
brightness = brightness(roi);
if(brightness < shadowThresh) println("shadow detected");
//display output
image(roi,roiX,roiY);
text("ROI brightness:"+brightness,10,15);
}
}
void captureEvent(Capture c){
c.read();
roi = c.get(roiX,roiY,roiW,roiH);
}
float brightness(PImage in){
float brightness = 0.0;
int numPixels = in.pixels.length;
for(int i = 0 ; i < numPixels; i++) brightness += brightness(in.pixels[i]);
return brightness/numPixels;
}
Hopefully these examples are easy to read and understand.
Note that these examples aren't as fast as they could be.
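If speed becomes an issue, a slightly faster option is to read the pixels[] array directly and extract the channels with bit shifting instead of calling brightness() per pixel. Here's a rough sketch of that idea (note that averaging (r+g+b)/3 is only an approximation of Processing's brightness(), which is based on the maximum channel):
float averageBrightnessFast(PImage in){
  in.loadPixels();
  float sum = 0;
  for(int i = 0; i < in.pixels.length; i++){
    int c = in.pixels[i];
    // extract channels with bit shifts instead of red()/green()/blue()
    int r = (c >> 16) & 0xFF;
    int g = (c >> 8) & 0xFF;
    int b = c & 0xFF;
    sum += (r + g + b) / 3.0f;
  }
  return sum / in.pixels.length;
}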
Be sure to also check out the video examples that come with Processing (Examples > Libraries > video > Capture), especially these: BrightnessThresholding, BrightnessTracking
If you want to learn more about techniques like these you should look into computer vision and the OpenCV library. There is a very nice OpenCV Processing library which you can now easily install via Sketch > Import Library... > Add Library... and select OpenCV for Processing. It also comes with examples on using brightness.
This covers the pixel manipulation side, but another important aspect of doing this sort of development is setup. It's crucial to have a reliable setup: it will make your life easier. What I mean by that is, in your case:
having control over the camera: being able to control auto white balance/brightness/etc. as automatic adjustments may throw off your values.
having control over the scene: making sure you reduce the risk of accidental lights messing with your tracking, or of something bumping the camera or the object you're tracking.

Assuming the camera data you are analysing is a PImage, you can apply filters to the data to get it into a Black/White or grey scale form. The docs on PImage Filter modes: https://processing.org/reference/filter_.html should be useful.
You will probably have to do a pixel analysis - there may be a library to help here, but you can get an array of pixels from the filtered PImage, loop through it and check the values against your baseline values to see if they are brighter or darker. If they are greyscale values in the 0 - 255 range, a pixel is lighter if the number is higher than the baseline, and darker if the number is lower.
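A rough sketch of that idea, filtering to greyscale first and comparing the average level against a baseline brightness (0-255) you measured beforehand; the function name is just for illustration:
boolean isDarkerThanBaseline(PImage img, float baseline){
  img.filter(GRAY);               // greyscale: every channel holds the same value
  img.loadPixels();
  float sum = 0;
  for(int i = 0; i < img.pixels.length; i++){
    sum += img.pixels[i] & 0xFF;  // read one channel; equals the grey level after GRAY
  }
  float average = sum / img.pixels.length;
  return average < baseline;      // darker than the baseline suggests a shadow
}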

Related

Processing | Program is lagging

I'm new to Processing and I need to make a program that captures the main monitor, shows the average color on the second screen, and draws a spiral using another color (the perceptual dominant color) obtained from a function.
The problem is that the program is very slow (lag, 1 FPS). I think it's because it has too many things to do every time I take a screenshot, but I have no idea how to make it faster.
Also there could be many other problems, but the main one is that.
Thank you very much!
Here's the code:
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.color.ColorSpace;
PImage screenshot;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;
void setup() {
fullScreen(2); // 1920x1080
noStroke();
frame.removeNotify();
}
void draw() {
screenshot();
avg_c = extractColorFromImage(screenshot);
per_c = extractAverageColorFromImage(screenshot);
background(avg_c); // Average color
spiral();
}
void screenshot() {
try{
Robot robot_Screenshot = new Robot();
screenshot = new PImage(robot_Screenshot.createScreenCapture
(new Rectangle(0, 0, displayWidth, displayHeight)));
}
catch (AWTException e){ }
frame.setLocation(displayWidth/2, 0);
}
void spiral() {
fill (per_c);
for (int i = blockSize; i < width; i += blockSize*2)
{
ellipse(i, height/2+sin(a+i)*100, blockSize+cos(a+i)*5, blockSize+cos(a+i)*5);
a += 0.001;
}
}
color extractColorFromImage(PImage screenshot) { // Get average color
screenshot.loadPixels();
int r = 0, g = 0, b = 0;
for (int i = 0; i < screenshot.pixels.length; i++) {
color c = screenshot.pixels[i];
r += c>>16&0xFF;
g += c>>8&0xFF;
b += c&0xFF;
}
r /= screenshot.pixels.length; g /= screenshot.pixels.length; b /= screenshot.pixels.length;
return color(r, g, b);
}
color extractAverageColorFromImage(PImage screenshot) { // Get lab average color (perceptual)
float[] average = new float[3];
CIELab lab = new CIELab();
int numPixels = screenshot.pixels.length;
for (int i = 0; i < numPixels; i++) {
color rgb = screenshot.pixels[i];
float[] labValues = lab.fromRGB(new float[]{red(rgb),green(rgb),blue(rgb)});
average[0] += labValues[0];
average[1] += labValues[1];
average[2] += labValues[2];
}
average[0] /= numPixels;
average[1] /= numPixels;
average[2] /= numPixels;
float[] rgb = lab.toRGB(average);
return color(rgb[0] * 255,rgb[1] * 255, rgb[2] * 255);
}
public class CIELab extends ColorSpace {
@Override
public float[] fromCIEXYZ(float[] colorvalue) {
double l = f(colorvalue[1]);
double L = 116.0 * l - 16.0;
double a = 500.0 * (f(colorvalue[0]) - l);
double b = 200.0 * (l - f(colorvalue[2]));
return new float[] {(float) L, (float) a, (float) b};
}
@Override
public float[] fromRGB(float[] rgbvalue) {
float[] xyz = CIEXYZ.fromRGB(rgbvalue);
return fromCIEXYZ(xyz);
}
@Override
public float getMaxValue(int component) {
return 128f;
}
@Override
public float getMinValue(int component) {
return (component == 0)? 0f: -128f;
}
@Override
public String getName(int idx) {
return String.valueOf("Lab".charAt(idx));
}
@Override
public float[] toCIEXYZ(float[] colorvalue) {
double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
double Y = fInv(i);
double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
return new float[] {(float) X, (float) Y, (float) Z};
}
@Override
public float[] toRGB(float[] colorvalue) {
float[] xyz = toCIEXYZ(colorvalue);
return CIEXYZ.toRGB(xyz);
}
CIELab() {
super(ColorSpace.TYPE_Lab, 3);
}
private double f(double x) {
if (x > 216.0 / 24389.0) {
return Math.cbrt(x);
} else {
return (841.0 / 108.0) * x + N;
}
}
private double fInv(double x) {
if (x > 6.0 / 29.0) {
return x*x*x;
} else {
return (108.0 / 841.0) * (x - N);
}
}
private final ColorSpace CIEXYZ =
ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
private final double N = 4.0 / 29.0;
}
There's lots that can be done, even beyond what's already been mentioned.
Iteration & Threading
After taking the screenshot, immediately iterate over every 1/N pixels (perhaps every 4 or 8) of the buffered image. Then, during this iteration, calculate the LAB value for each pixel (as you have each pixel channel directly available), and meanwhile increment the running total of each RGB channel.
This saves us from iterating over the same pixels twice and avoids unnecessary conversions (BufferedImage → PImage; and composing then decomposing pixel channels from PImage pixels).
Likewise, we avoid Processing's expensive resize() call (as suggested in another answer), which is not something we want to call every frame (even though it does speed the program up, it's not an efficient method).
Now, on top of the iteration change, we can wrap the iteration in a Callable to easily run the workload across multiple system threads concurrently (after all, pixel iteration is embarrassingly parallel); the example below does this with 2 threads, each screenshotting and processing half of the display's pixels.
Optimise RGB→XYZ→LAB conversion
We're not so concerned about the backwards conversion since that's only done for one value per frame.
It looks like you've implemented XYZ→LAB yourself and are using the RGB→XYZ converter from java.awt.color.
As has been identified, the forward conversion XYZ→LAB uses a cbrt() call, which is a bottleneck. I also imagine that the RGB→XYZ implementation makes 3 calls to Math.pow(x, 2.4); 3 non-integer exponents per pixel add considerably to the computation. The solution is faster math...
Jafama
Jafama is a drop-in java.lang.Math replacement -- simply import the library and replace any Math.__() calls with FastMath.__() for a free speedup (you could go even further by trading Jafama's 1e-15 precision for less accurate but even faster dedicated LUT-based classes).
So at the very least, swap out Math.cbrt() for FastMath.cbrt(). Then consider implementing RGB→XYZ yourself (example), again using Jafama in place of java.lang.Math.
You may even find that for such a project, converting to XYZ only is a sufficient color space to work with to overcome the well known weaknesses with RGB (and therefore save yourself from the XYZ→LAB conversion).
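For reference, here's a hedged sketch of a standalone sRGB→XYZ conversion using the standard D65 matrix, with Jafama's FastMath.pow() handling the non-integer exponent (the net.jafama package name is an assumption, and the import belongs at the top of the sketch):
import net.jafama.FastMath;
// sRGB channels in 0..1 to CIE XYZ (D65), without going through AWT
float[] rgbToXYZ(float r, float g, float b){
  float rl = linearise(r);
  float gl = linearise(g);
  float bl = linearise(b);
  // standard sRGB -> XYZ (D65) matrix
  float X = 0.4124f*rl + 0.3576f*gl + 0.1805f*bl;
  float Y = 0.2126f*rl + 0.7152f*gl + 0.0722f*bl;
  float Z = 0.0193f*rl + 0.1192f*gl + 0.9505f*bl;
  return new float[]{X, Y, Z};
}
float linearise(float c){
  // gamma expansion; FastMath.pow in place of Math.pow
  return (c <= 0.04045f) ? c/12.92f : (float)FastMath.pow((c + 0.055)/1.055, 2.4);
}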
Cache LAB Calculation
Unless most pixels are changing every frame, consider caching the LAB value for every pixel, recalculating it only when the pixel has changed between the current and previous frames. The tradeoff here is the overhead from checking every pixel against its previous value, versus how much calculation positive checks will save. Given that the LAB calculation is much more expensive, it's very worthwhile here. The example below uses this technique.
Screen Capture
No matter how well optimised the rest of the program is, a considerable bottleneck is the AWT Robot's createScreenCapture(). It struggles to go past 30FPS on large enough displays. I can't offer any exact advice, but it's worth looking at other screen capture methods in Java.
Reworked code with iteration changes & threading
This code implements what has been discussed above, minus any changes to the LAB calculation.
// imports needed by the reworked sketch (Robot, Rectangle, BufferedImage, List);
// the CIELab class from the original sketch is assumed to be present
import java.util.List;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.image.BufferedImage;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;
java.util.concurrent.ExecutorService threadPool = java.util.concurrent.Executors.newFixedThreadPool(4);
List<java.util.concurrent.Callable<Boolean>> taskList;
float[] averageLAB;
int totalR = 0, totalG = 0, totalB = 0;
CIELab lab = new CIELab();
final int pixelStride = 8; // look at every 8th pixel
void setup() {
size(800, 800, FX2D);
noStroke();
frame.removeNotify();
taskList = new ArrayList<java.util.concurrent.Callable<Boolean>>();
Compute thread1 = new Compute(0, 0, width, height/2);
Compute thread2 = new Compute(0, height/2, width, height/2);
taskList.add(thread1);
taskList.add(thread2);
}
void draw() {
totalR = 0; // re init
totalG = 0; // re init
totalB = 0; // re init
averageLAB = new float[3]; // re init
final int numPixels = (width*height)/pixelStride;
try {
threadPool.invokeAll(taskList); // run threads now and block until completion of all
}
catch (Exception e) {
e.printStackTrace();
}
// calculate average LAB
averageLAB[0]/=numPixels;
averageLAB[1]/=numPixels;
averageLAB[2]/=numPixels;
final float[] rgb = lab.toRGB(averageLAB);
per_c = color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
// calculate average RGB
totalR/=numPixels;
totalG/=numPixels;
totalB/=numPixels;
avg_c = color(totalR, totalG, totalB);
background(avg_c); // Average color
spiral();
fill(255, 0, 0);
text(frameRate, 10, 20);
}
class Compute implements java.util.concurrent.Callable<Boolean> {
private final Rectangle screenRegion;
private Robot robot_Screenshot;
private final int[] previousRGB;
private float[][] previousLAB;
Compute(int x, int y, int w, int h) {
screenRegion = new Rectangle(x, y, w, h);
previousRGB = new int[w*h];
previousLAB = new float[w*h][3];
try {
robot_Screenshot = new Robot();
}
catch (AWTException e1) {
e1.printStackTrace();
}
}
@Override
public Boolean call() {
BufferedImage rawScreenshot = robot_Screenshot.createScreenCapture(screenRegion);
int[] ssPixels = new int[rawScreenshot.getWidth()*rawScreenshot.getHeight()]; // screenshot pixels
rawScreenshot.getRGB(0, 0, rawScreenshot.getWidth(), rawScreenshot.getHeight(), ssPixels, 0, rawScreenshot.getWidth()); // copy buffer to int[] array
for (int pixel = 0; pixel < ssPixels.length; pixel+=pixelStride) {
// get individual colour channels
final int pixelColor = ssPixels[pixel];
final int R = pixelColor >> 16 & 0xFF;
final int G = pixelColor >> 8 & 0xFF;
final int B = pixelColor & 0xFF;
if (pixelColor != previousRGB[pixel]) { // if pixel has changed recalculate LAB value
float[] labValues = lab.fromRGB(new float[]{R/255f, G/255f, B/255f}); // note that I've fixed this; beforehand you were missing the /255, so it was always white.
previousLAB[pixel] = labValues;
}
averageLAB[0] += previousLAB[pixel][0];
averageLAB[1] += previousLAB[pixel][1];
averageLAB[2] += previousLAB[pixel][2];
totalR+=R;
totalG+=G;
totalB+=B;
previousRGB[pixel] = pixelColor; // cache last result
}
return true;
}
}
800x800px; pixelStride = 4; fairly static screen background
Yeesh, about 1 FPS on my machine:
Optimizing code can be really hard, so instead of reading everything looking for stuff to improve, I started by testing where you were losing so much processing power. The answer was at this line:
per_c = extractAverageColorFromImage(screenshot);
The extractAverageColorFromImage method is well written, but it underestimates the amount of work it has to do. There is a quadratic relationship between the size of a screen and the number of pixels in that screen, so the bigger the screen, the worse the situation. And this method is processing every pixel of the screenshot all the time, several times per screenshot.
This is a lot of work for an average color. Now, if there was a way to cut some corners... maybe a smaller screen, or a smaller screenshot... oh! there is! Let's resize the screenshot. After all, we don't need to go into such details as individual pixels for an average. In the screenshot method, add this line:
void screenshot() {
try {
Robot robot_Screenshot = new Robot();
screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
// ADD THE NEXT LINE
screenshot.resize(width/4, height/4);
}
catch (AWTException e) {
}
frame.setLocation(displayWidth/2, 0);
}
I divided each dimension by 4, but I encourage you to tweak this number until you have the fastest satisfying result you can get. This is just a proof of concept:
As you can see, resizing the screenshot and making it 4x smaller gives me 10x more speed. That's not a miracle, but it's much better, and I can't see a difference in the end result - but about that part, you'll have to use your own judgement, as you are the one who knows what your project is about. Hope it'll help!
Have fun!
Unfortunately I can't provide a detailed answer like laancelot (+1), but hopefully I can provide a few tips:
Resizing the image is definitely a good direction. Bear in mind you can also skip a number of pixels instead of visiting every single pixel (if you handle the pixel indices correctly, you can get a similar effect to resize() without calling it, though that won't save you a lot of CPU time).
Don't create a new Robot instance multiple times a second. Create it once in setup and re-use it. (This is more of a good habit to get into)
Use a CPU profiler, such as the one in VisualVM to see what exactly is slow and aim to optimise the slowest stuff first.
point 1 example:
for (int i = 0; i < numPixels; i+= 100)
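Expanding on point 1, a hedged sketch of what handling the indices correctly could look like, skipping pixels in both x and y with a stride (the function name and stride value are just for illustration):
color averageColorStride(PImage img, int stride){
  img.loadPixels();
  int r = 0, g = 0, b = 0, count = 0;
  // visit every stride-th pixel in both axes instead of every pixel
  for(int y = 0; y < img.height; y += stride){
    for(int x = 0; x < img.width; x += stride){
      color c = img.pixels[x + y * img.width];
      r += c >> 16 & 0xFF;
      g += c >> 8 & 0xFF;
      b += c & 0xFF;
      count++;
    }
  }
  return color(r / count, g / count, b / count);
}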
point 2 example:
Robot robot_Screenshot;
...
void setup() {
fullScreen(2); // 1920x1080
noStroke();
frame.removeNotify();
try{
robot_Screenshot = new Robot();
}catch(AWTException e){
println("error setting up screenshot Robot instance");
e.printStackTrace();
}
}
...
void screenshot() {
screenshot = new PImage(robot_Screenshot.createScreenCapture
(new Rectangle(0, 0, displayWidth, displayHeight)));
frame.setLocation(displayWidth/2, 0);
}
point 3 example:
Notice that the slowest bits are actually AWT's fromRGB and Math.cbrt()
I'd suggest finding an alternative RGB -> XYZ -> L*a*b* conversion method that is simpler (mainly functions, fewer classes, without AWT or other dependencies) and hopefully faster.

Using Processing for image visualization: pixel color thresholds

Image to be manipulated, hoping to identify each white dot on each picture with a counter
PImage blk;
void setup() {
size(640, 480);
blk=loadImage("img.png");
}
void draw () {
loadPixels();
blk.loadPixels();
int i = 0;
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x+y*width;
pixels [loc] = blk.pixels[loc];
if (blk.pixels[loc] == 0) {
if (blk.pixels [loc]+1 != 0) {
i++;
}
}
float r = red(blk.pixels[loc]);
float g = green(blk.pixels[loc]);
float b = blue(blk.pixels[loc]);
pixels [loc] = color(r, g, b);
}
}
System.out.println (i);
updatePixels();
}
The main problem is within my if statement; I'm not sure how to approach it logically.
I'm unsure where this is exactly going, but I can help you find the white pixels. Here, I just counted 7457 "white" pixels (then I turned them red so you can see where they are and adjust the threshold if you want to get more or less of them):
Of course, this is just a proof of concept which you should be able to adapt to your needs.
PImage blk;
void setup() {
size(640, 480);
blk=loadImage("img.png");
blk.loadPixels();
int whitePixelsCount = 0;
// I'm doing this in the 'setup()' method because I don't need to do it 60 times per second
// Once it's done once I can just use the image as modified unless you want several
// different versions (which you can calculate once anyway then store in different PImages)
for (int i = 0; i < blk.width * blk.height; i++) {
float r = red(blk.pixels[i]);
float g = green(blk.pixels[i]);
float b = blue(blk.pixels[i]);
// In RGB, the brightness of each color is represented by its intensity
// So here I'm checking the "average intensity" of the color to see how bright it is
// And I compare it to 100 since 255 is the max and I wanted this simple, but you can
// play with this threshold as much as you like
if ((r+g+b)/3 > 100) {
whitePixelsCount++;
// Here I'm making those pixels red so you can see where they are.
// It's easier to adjust the threshold if you can see what you're doing
blk.pixels[i] = color(255, 0, 0);
}
}
println(whitePixelsCount);
blk.updatePixels(); // update the image's own pixels so the red markers show
}
void draw () {
image(blk, 0, 0);
}
In short (you'll read this in the comments too), we count the pixels according to a threshold we can adjust. To make things more obvious for you, I colored the "white" pixels red. You can lower or raise the threshold according to what you see this way, and once you know what you want you can get rid of the color.
There is a difficulty here, which is that the image isn't "black and white", but more greyscale - which is totally normal, but makes things harder for what you seem to be trying to do. You'll probably have to tinker a lot to get to the exact ratio which interests you. It could help a lot if you edited the original image in GIMP or another image editor which lets you adjust contrast and brightness. It's kinda cheating, but if it doesn't work right off the bat this strategy could save you some work.
Have fun!

Processing: Make only some random pixels change colour

I have created some white noise, which I would like to decrease over time (change starting after 2 secs, intensifying after 10 secs etc), slowly tending towards a black screen.
What I can't figure out is, how can I make only some (say, 50% of all pixels) random pixels change colour, while the rest is just black, within the same frame?
So far, I could only make ALL of them change randomly, or ALL of them stay black. Any help would be much appreciated, thank you!!
void setup() {
size(1000, 800);
}
void draw() {
if (millis() < 2000) {
loadPixels();
for ( int i=0; i<pixels.length; i++)
pixels[i] = color(random(255));
updatePixels();
}
if (millis() > 2000) {
loadPixels();
if (random(1) >= 0.5) {
for ( int i=0; i<pixels.length; i++)
pixels[i] = color(random(255));
updatePixels();
} else {
loadPixels();
for ( int i=0; i<pixels.length; i++)
pixels[i] = color(0);
updatePixels();
}
}
if (millis() > 10000) {
loadPixels();
for ( int i=0; i<pixels.length; i++)
pixels[i] = color(random(255));
updatePixels();
}
}
A simple way would be to take into account that random() returns a random value within a range. If you give it a low value, you'll have a low random value.
If you use that value as a colour, the lower the value, the closer to black you are which might work well in your case.
If you let the randomness go up to 255, you increase the chances of having bright pixels; otherwise (with low random values), pixels will be dark:
//noise image
PImage noise;
//amount of noise in the image: 0 = 0%, 255 = 100%
int noiseAmt = 255;
void setup(){
noise = createImage(width,height,RGB);
}
void draw(){
//decrease noise over time
noiseAmt--;
if(noiseAmt < 0) noiseAmt = 255;
//apply noise based on noise amount
noiseImage();
//render image
image(noise,0,0);
}
void noiseImage(){
int numPixels = noise.pixels.length;
for(int i = 0 ; i < numPixels; i++){
//random(noiseAmt) is the key - low values = darker pixels
noise.pixels[i] = color(random(noiseAmt));
}
noise.updatePixels();
}
To get the hang of this, here's a slightly modified version of the code which uses the UP/DOWN arrow keys to control noise:
//noise image
PImage noise;
//amount of noise in the image: 0 = 0%, 255 = 100%
int noiseAmt = 127;
void setup() {
noise = createImage(width, height, RGB);
}
void draw() {
//apply noise based on noise amount
noiseImage();
//render image
image(noise, 0, 0);
}
void noiseImage() {
int numPixels = noise.pixels.length;
for (int i = 0; i < numPixels; i++) {
//random(noiseAmt) is the key - low values = darker pixels
noise.pixels[i] = color(random(noiseAmt));
}
noise.updatePixels();
}
void keyPressed(){
if(keyCode == UP) noiseAmt += 5;
if(keyCode == DOWN) noiseAmt -= 5;
noiseAmt = constrain(noiseAmt,0,255);
println("noiseAmt: " + noiseAmt);
}
Back to the matter of time, you can have a look at this answer which covers tracking time using millis(). The only extra part is mapping the fade time to the noise amount, which would be some ratio. It might be easier if we map the time passed as a normalised value (from 0.0 to 1.0), which can easily scale to 0.0 to 255.0 simply by multiplying by 255:
//noise image
PImage noise;
//amount of noise in the image: 0 = 0%, 255 = 100%
int noiseAmt = 255;
int timestamp,fadeTime = 10000;//fade to black in 10s
void setup(){
noise = createImage(width,height,RGB);
timestamp = millis();
}
void draw(){
//decrease noise over time
int now = millis();
//if the difference between an initial timestamp and the current time is less than 10s
if(now - timestamp <= fadeTime){
//compute the ratio between the time difference and total fadeTime which will be from 0.0 to 1.0
//subtract this difference from 1.0 to flip the ratio direction from 0.0 -> 1.0 to 1.0 -> 0.0
float fadeRatio = 1.0 - ((float)(now-timestamp)/fadeTime);
//this ratio multiplied by 255 gives the noise amount
noiseAmt = (int)(fadeRatio * 255);
}
//apply noise based on noise amount
noiseImage();
//render image
image(noise,0,0);
}
void noiseImage(){
int numPixels = noise.pixels.length;
for(int i = 0 ; i < numPixels; i++){
//random(noiseAmt) is the key - low values = darker pixels
noise.pixels[i] = color(random(noiseAmt));
}
noise.updatePixels();
}
Processing has some nice functions for dealing with mapping and constraining number ranges:
//noise image
PImage noise;
//amount of noise in the image: 0 = 0%, 255 = 100%
int noiseAmt = 255;
int timestamp,fadeTime = 10000;//fade to black in 10s
void setup(){
noise = createImage(width,height,RGB);
timestamp = millis();
}
void draw(){
//decrease noise over time
int now = millis();
//if the difference between an initial timestamp and the current time is less than 10s
noiseAmt = (int)map(now - timestamp,0,fadeTime,255,0);
noiseAmt = constrain(noiseAmt,0,255);
//apply noise based on noise amount
int numPixels = noise.pixels.length;
for(int i = 0 ; i < numPixels; i++){
//random(noiseAmt) is the key - low values = darker pixels
noise.pixels[i] = color(random(noiseAmt));
}
noise.updatePixels();
//render image
image(noise,0,0);
}
Note the fadeTime is set to 10s (10000 milliseconds). Feel free to tinker with the fadeTime value.
Instead of posting all of your code on an external website, boil your problem down to an MCVE and include it directly in your question.
That being said, you have two options:
Option 1: Store all of your pixels in some kind of data structure. You might have a 2D array of MyPixel objects, where MyPixel is a class you create that contains all of the information you need to know which instances in that array to change the color of.
Option 2: Draw directly to a PImage. Then you can iterate through that PImage to find a non-black pixel and change it (see the sketch after these options).
Which approach you take is entirely up to you. I'd personally choose the first option, but that's just my personal preference. Try one of these approaches, and post an MCVE when you get stuck. Note that this should be as few lines as possible while still demonstrating the problem, not your whole sketch - we don't need to see your timing logic, for example.
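For what it's worth, here's a minimal sketch of the second option, under an assumption about the goal (roughly 50% of the pixels flicker as random greys each frame while the rest stay black):
PImage noise;
void setup(){
  size(1000, 800);
  noise = createImage(width, height, RGB);
}
void draw(){
  noise.loadPixels();
  for(int i = 0; i < noise.pixels.length; i++){
    // decide per pixel: about half get a random grey, the rest stay black
    if(random(1) < 0.5){
      noise.pixels[i] = color(random(255));
    } else {
      noise.pixels[i] = color(0);
    }
  }
  noise.updatePixels();
  image(noise, 0, 0);
}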

Video with alpha channel in Processing

I was wondering if anyone can be amazing and help me with something I'm working on in Processing. I need to play a video file with transparencies over a live feed so that the video isn't simply a rectangle. Here is the section of the code that I think I need to add something to or change. I'm extremely new to all of this and I'm extremely grateful to anyone that can help.
If your video has an alpha channel, that's great; otherwise, you should be able to blend() the other content.
Here's a basic proof of concept sketch. It overlays a grid of circles on top of a live feed. Use the space key to cycle though blend modes. Some will work better than others depending on your content and what you're trying to achieve:
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
PImage overlay;
int blendMode = 1;
int[] blendModes = {BLEND,ADD,SUBTRACT,DARKEST,LIGHTEST,DIFFERENCE,EXCLUSION,MULTIPLY,SCREEN,OVERLAY,HARD_LIGHT,SOFT_LIGHT,DODGE,BURN};
String[] blendModesNames = {"BLEND","ADD","SUBTRACT","DARKEST","LIGHTEST","DIFFERENCE","EXCLUSION","MULTIPLY","SCREEN","OVERLAY","HARD_LIGHT","SOFT_LIGHT","DODGE","BURN"};
void setup(){
size(w,h);
cam = new Capture(this,w,h);
cam.start();
//test content to overlay, a grid of circles
background(0);fill(255);
for(int y = 0 ; y < height; y += 30)
for(int x = 0 ; x < width; x+= 30)
ellipse(x,y,15,15);
overlay = get();
}
void draw(){
image(cam,0,0);
blend(overlay,0,0,width,height,0,0,width,height,blendModes[blendMode]);
}
void keyReleased(){
if(key == ' ') {
blendMode = (blendMode+1)%blendModes.length;
println("blendMode: " + blendModesNames[blendMode]);
}
}
void captureEvent(Capture c){
c.read();
}
I solved it (maybe it can be improved) by using 2 videos: the first footage is the colour map with white on the background; the second footage is the matte mask: white for the "important" part, and black for the rest. Then apply the mask() function; below is the important part of the code:
Movie mov1;
Movie mov2;
void setup() {
....code...
mov1 = new Movie(this, "matte.mov");
mov2 = new Movie(this, "alpha.mov");
mov1.play();
mov1.pause();
mov2.play();
mov2.pause();
}
void draw() {
...code...
mov1.play();
mov2.play();
loadPixels();
mov2.mask(mov1);
image(mov2, 0, 0);
}
The video used for the test was 256x256; I always use power-of-two sizes for better performance (float maths). Hope this helps someone!
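Since the snippet above elides parts of the sketch, here is a hedged, self-contained version of the same mask() idea, assuming two same-size clips named matte.mov and alpha.mov and using movieEvent() to read frames as they arrive:
import processing.video.*;
Movie matte;   // greyscale matte: white = keep, black = hide
Movie footage; // colour footage to overlay
void setup(){
  size(256, 256);
  matte = new Movie(this, "matte.mov");
  footage = new Movie(this, "alpha.mov");
  matte.loop();
  footage.loop();
}
void movieEvent(Movie m){
  m.read(); // read each movie's frames as they become available
}
void draw(){
  background(0);
  // wait until both movies have delivered at least one frame
  if(matte.width > 0 && footage.width > 0){
    footage.mask(matte); // use the matte as the alpha channel
    image(footage, 0, 0);
  }
}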

Pixel reordering is wrong when trying to process and display image copy with lower res

I'm currently making an application using Processing intended to take an image and apply 8-bit style processing to it, that is, to make it look pixelated. To do this it has a method that takes a style and window size as parameters (style is the shape in which the window is to be displayed - rect, ellipse, cross etc., and window size is a number between 1-10, squared) - to produce results similar to the iPhone app pxl ( http://itunes.apple.com/us/app/pxl./id499620829?mt=8 ). This method then counts through the image's pixels window by window, averages the colour of the window and displays a rect (or whichever shape/style was chosen) at the equivalent place on the other side of the sketch window (the sketch, when run, is supposed to display the original image on the left and mirror it with the processed version on the right).
The problem I'm having is that when drawing the averaged colour rects, the order in which they display becomes skewed.
Although the results are rather amusing, they are not what I want. Here the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;
//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color [] c){
// RGB Variables
float r = 0;
float g = 0;
float b = 0;
// Iterator
int i = 0;
int sizeOfWindow = c.length;
// Count through the window's pixels, store the
// red, green and blue values in the RGB variables
// and sum them into the average variables
for(i = 0; i < c.length; i++){
r = red (c[i]);
g = green(c[i]);
b = blue (c[i]);
avR += r;
avG += g;
avB += b;
}
// Divide the sum of the red, green and blue
// values by the number of pixels in the window
// to obtain the average
avR = avR / sizeOfWindow;
avG = avG / sizeOfWindow;
avB = avB / sizeOfWindow;
// Return the colour
return new sRGB(avR,avG,avB);
}
public void eightBitIT(int style, int windowSize){
img.loadPixels();
for(int wx = 0; wx < img.width; wx += (sqrt(windowSize))){
for(int wy = 0; wy < img.height; wy += (sqrt(windowSize))){
color [] tempCols = new color[windowSize];
int i = 0;
for(int x = 0; x < (sqrt(windowSize)); x ++){
for(int y = 0; y < (sqrt(windowSize)); y ++){
int loc = (wx+x) + (y+wy)*(img.width-windowSize);
tempCols[i] = img.pixels[loc];
// println("Window loc X: "+(wx+(img.width+5))+" Window loc Y: "+(wy+5)+" Window pix X: "+x+" Window Pix Y: "+y);
i++;
}
}
//this is ment to be in a switch test (0 = rect, 1 ellipse etc)
styleColour = new sRGB(averageWindowColour(tempCols));
//println("R: "+ red(styleColour.returnColourScaled())+" G: "+green(styleColour.returnColourScaled())+" B: "+blue(styleColour.returnColourScaled()));
rectMode(CORNER);
noStroke();
fill(styleColour.returnColourScaled());
//println("Rect Loc X: "+(wx+(img.width+5))+" Y: "+(wy+5));
ellipse(wx+(img.width+5),wy+5,sqrt(windowSize),sqrt(windowSize));
}
}
}
public PImage load(String s){
PImage temp = loadImage(s);
temp.resize(600,470);
return temp;
}
void setup(){
background(0);
// Load the image and set size of screen to its size*2 + the borders
// and display the image.
img = loadImage("oscilloscope.jpg");
size(img.width*2+15,(img.height+10));
frameRate(25);
image(img,5,5);
// Draw the borders
strokeWeight(5);
stroke(255);
rectMode(CORNERS);
noFill();
rect(2.5,2.5,img.width+3,height-3);
rect(img.width+2.5,2.5,width-3,height-3);
stroke(255,0,0);
strokeWeight(1);
rect(5,5,9,9); //window example
// process the image
eightBitIT(BLOCKS, 16);
}
void draw(){
//eightBitIT(BLOCKS, 4);
//println("X: "+mouseX+" Y: "+mouseY);
}
This has been bugging me for a while now as I can't see where in my code I'm offsetting the coordinates so they display like this. I know it's probably something very trivial but I can't seem to work it out. If anyone can spot why this skewed reordering is happening I would be much obliged, as I have quite a lot of other ideas I want to implement and this is holding me back...
Thanks,
