I've been trying a couple of ways to achieve this using Processing, but each time some of the frames I get aren't perfectly consecutive. Would anyone know the "right way" of doing this?
Thanks in advance!
In theory it should be a matter of listening to captureEvent() to get a new frame and keeping track of whether the first frame was already recorded; if so, the second one gets recorded right afterwards.
Here's a basic commented sketch to illustrate the point (press any key to grab another pair of frames):
import processing.video.*;

Capture camera;
PImage firstFrame;
PImage secondFrame;

void setup(){
  size(1920,480);
  camera = new Capture(this,640,480);
  camera.start();
}

void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
}

//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
  }
  //same for the second frame, but only on a later event, once the first frame exists
  //(note the else: without it both frames would be copied from the same capture event)
  else if(secondFrame == null){
    secondFrame = c.get();
  }
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}
In theory (as you can see in the Processing Video Library's source code), captureEvent is fired only when a new camera sample is ready.
In practice, you will find that two consecutive frames may look identical (even though they are a split second apart in time), down to the noise, as you pointed out in your comments.
It sounds like what you're after is a frame that is consecutive, but different enough from the previous one. If that's the case, you can have a play with the FrameDifferencing example (Processing > Examples > Libraries > Video > Capture > FrameDifferencing).
Here's a modified version of the above sketch, using Golan Levin's FrameDifferencing code to only grab a second frame if it's different by a little bit:
import processing.video.*;

Capture camera;
PImage firstFrame;
PImage secondFrame;
PImage diff;

void setup(){
  size(1920,960);
  camera = new Capture(this,640,480);
  camera.start();
  diff = createImage(640,480,RGB);
}

void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
  image(diff,0,480);
}

//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
    println("recorded first frame at",new java.util.Date());
  }
  //same for the second frame, but only once the first frame has been recorded
  else if(secondFrame == null){
    //if the difference between the first frame and the current frame is ever so slightly off, record the second frame
    if(difference(firstFrame,camera) > 100){
      secondFrame = c.get();
    }
  }
}

int difference(PImage first,PImage second){
  final int numPixels = 640*480;
  //make sure the pixels[] arrays are available before reading/writing them
  first.loadPixels();
  second.loadPixels();
  diff.loadPixels();
  int movementSum = 0; // Amount of movement in the frame
  for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
    color currColor = first.pixels[i];
    color prevColor = second.pixels[i];
    // Extract the red, green, and blue components from current pixel
    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Render the difference image to the screen
    diff.pixels[i] = color(diffR, diffG, diffB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;
  }
  diff.updatePixels();
  return movementSum;
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}
In the example above, 100 is an arbitrary value.
The max would be 255*3*640*480 (0-255 per channel * number of channels * width * height).
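To put that threshold in context, here's a quick sketch expressing it as a fraction of that maximum instead of a magic number; the 0.1% factor is just an illustrative assumption:

// maximum possible difference: 255 per channel * 3 channels * 640*480 pixels = 235,008,000
final int maxDiff = 255 * 3 * 640 * 480;
// hypothetical: only accept the second frame once the frames differ by 0.1% of the max
if (difference(firstFrame, camera) > maxDiff * 0.001) {
  secondFrame = c.get();
}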
I looked at the KetaiCamera sources and issue reports, especially this one. Unfortunately this code is not built to provide real-time camera frames.
You can try to take the AndroidCapture project as a starting point, and modify the native Android class to achieve your goal.
I am trying to rotate a shape in Processing using serial data from a gyroscope. The shape rotates fine from 0-90, but when the angle is greater than 90 it doesn't rotate until it reaches approximately 180. The serial data is correct, as I am printing it on the serial monitor. I should note that it does work rarely in the 90-180 range. What can I do?
void draw(){
  pushMatrix();
  translate(200,200);
  rotateX(radians(90));
  rotateY(myVal);
  scale(50);
  beginShape(QUADS);
  while (mySerial.available() > 0){
    myString = mySerial.readStringUntil(nl);
    if(myString != null){
      background(0.5);
      myVal = float(myString) ;
      myVal = radians(-1 * myVal);
The while loop is blocking the main animation/rendering thread.
An alternative is to pair bufferUntil() with serialEvent():
e.g.
void setup(){
  ...
  mySerial.bufferUntil(lf);
}

...

void serialEvent(Serial p){
  myString = mySerial.readString();
  myVal = float(myString.trim());
  myVal = radians(-1 * myVal);
}
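For reference, here's a minimal self-contained version of that idea; the port index, baud rate, and box() placeholder shape are assumptions to adapt to your setup:

import processing.serial.*;

Serial mySerial;
float myVal = 0;

void setup(){
  size(400, 400, P3D); // P3D is needed for rotateX()/rotateY()
  // assumption: the gyroscope is the first serial port; pick yours from Serial.list()
  mySerial = new Serial(this, Serial.list()[0], 9600);
  mySerial.bufferUntil('\n'); // buffer until newline, then serialEvent() fires
}

void draw(){
  background(0);
  translate(200, 200);
  rotateX(radians(90));
  rotateY(myVal);
  box(50); // placeholder shape instead of the original QUADS
}

// called off the animation thread whenever a full line has been buffered
void serialEvent(Serial p){
  String myString = p.readString();
  if(myString != null){
    myVal = radians(-1 * float(myString.trim()));
  }
}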
Sorry if this is a weird or stupid question, but I genuinely couldn't find an answer. I'm trying to make a visual representation of the multiplication tables, so I divide a circle into a certain number of "slices", for example 10. Then I join each point with its product: for the 2 times table I draw a line between 1 and 2, another one between 2 and 4, 3 and 6, and so on...
The thing is, if I go past a certain number of "slices" I can clearly see Processing drawing each one of the lines one by one. I wanted to progressively change the number of slices so you can see how the figure evolves, but for that the lines would have to appear or change instantaneously, since the "animation" makes no sense if you can see every line being drawn. Is there a way I can improve the speed of the program, or just make it show all the lines at once?
For reference, this is roughly how I want it to look:
YouTube video
This is the code I'm using (with the ControlP5 library, and soooo poorly optimized):
import controlP5.*;

ControlP5 cp5;
Knob myKnobA;
Knob myKnobB;

int ncosas = 30;
float sumangle = (2*PI)/ncosas;
float angle = HALF_PI + PI + sumangle;
int radius = 100;
int counter = 1;
int sumar = 15;
int tablade = 2;
int prueba = 30;

void setup(){
  size(400,400);
  background(255);
  textAlign(CENTER,CENTER);
  fill(0);
  stroke(0);
  textSize(8);
  cp5 = new ControlP5(this);
  myKnobA = cp5.addKnob("Servo")
    .setRange(1,120)
    .setValue(1)
    .setPosition(20,20)
    .setRadius(30)
    .setDragDirection(Knob.HORIZONTAL)
    .setCaptionLabel("N")
    .setColorCaptionLabel(0)
    ;
  myKnobB = cp5.addKnob("TablaD")
    .setRange(1,50)
    .setValue(1)
    .setPosition(20,120)
    .setRadius(30)
    .setDragDirection(Knob.HORIZONTAL)
    .setCaptionLabel("Tabla de")
    .setColorCaptionLabel(0)
    ;
  //translate(height/2,width/2);
  //line(0,0,radius*sin(radians(prueba)),radius*cos(radians(prueba)));
}

void draw(){
  if(counter <= ncosas){
    dibujar();
  }
}

void Servo(int theValue){
  background(255);
  counter = 1;
  ncosas = theValue;
  sumangle = (2*PI)/ncosas;
  angle = HALF_PI + PI + sumangle;
}

void TablaD(int theValue){
  background(255);
  counter = 1;
  tablade = theValue;
  angle = HALF_PI + PI + sumangle;
}

void dibujar(){
  pushMatrix();
  translate(width*2.5/4,height/2);
  circle(radius*sin(angle),radius*cos(angle),2);
  //if(counter*tablade<=ncosas){
  line(radius*sin(angle),radius*cos(angle),radius*sin((counter*tablade*sumangle)+(angle-counter*sumangle)),radius*cos((counter*tablade*sumangle)+(angle-counter*sumangle)));
  //}
  println(counter*tablade + " -> " + counter*tablade*degrees(sumangle));
  text(counter,(radius+sumar)*sin(angle),(radius+sumar)*cos(angle));
  angle += sumangle;
  counter++;
  popMatrix();
}

void keyPressed(){
  if (key == 'D' || key == 'd'){
    Servo(int(myKnobA.getValue())+1);
    myKnobA.setValue(int(myKnobA.getValue())+1);
  }
  if (key == 'A' || key == 'a'){
    Servo(int(myKnobA.getValue())-1);
    myKnobA.setValue(int(myKnobA.getValue())-1);
  }
  if (key == 'W' || key == 'w'){
    TablaD(int(myKnobB.getValue())+1);
    myKnobB.setValue(int(myKnobB.getValue())+1);
  }
  if (key == 'S' || key == 's'){
    TablaD(int(myKnobB.getValue())-1);
    myKnobB.setValue(int(myKnobB.getValue())-1);
  }
}
Thank you in advance
To expand on what John Coleman said, you need to call dibujar() multiple times in draw(). In Processing, the canvas is rendered at the end of the draw() loop, so if you draw multiple lines within one draw() call, they will all appear at the same time.
This will involve some kind of loop. If you want to draw the entire multiplication circle at once, you could replace the if with a while in the draw() loop:
void draw(){
  while (counter <= ncosas){
    dibujar();
  }
}
I believe this will draw the entire multiplication circle in a single frame. You can then adjust the knobs to change its parameters, and the circle will update as you do.
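For reference, here's a minimal variation (assuming the globals from the sketch above) that clears and redraws the whole figure from scratch every frame; it trades a little redundant drawing for always-current output:

void draw(){
  background(255);                 // clear the previous frame
  counter = 1;                     // restart the figure...
  angle = HALF_PI + PI + sumangle; // ...from the first slice
  while (counter <= ncosas){
    dibujar();                     // draw every line within this single frame
  }
}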
I'm new to Processing and I need to make a program that captures the main monitor, shows the average color on the second screen, and draws a spiral using another color (the perceptual dominant color) obtained by a function.
The problem is that the program is very slow (it lags, at 1 FPS). I think it's because it has too many things to do every time I take a screenshot, but I have no idea how to make it faster.
There could also be many other problems, but that's the main one.
Thank you very much!
Here's the code:
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.color.ColorSpace;

PImage screenshot;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;

void setup() {
  fullScreen(2); // 1920x1080
  noStroke();
  frame.removeNotify();
}

void draw() {
  screenshot();
  avg_c = extractColorFromImage(screenshot);
  per_c = extractAverageColorFromImage(screenshot);
  background(avg_c); // Average color
  spiral();
}

void screenshot() {
  try {
    Robot robot_Screenshot = new Robot();
    screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
  }
  catch (AWTException e) { }
  frame.setLocation(displayWidth/2, 0);
}

void spiral() {
  fill(per_c);
  for (int i = blockSize; i < width; i += blockSize*2) {
    ellipse(i, height/2+sin(a+i)*100, blockSize+cos(a+i)*5, blockSize+cos(a+i)*5);
    a += 0.001;
  }
}

color extractColorFromImage(PImage screenshot) { // Get average color
  screenshot.loadPixels();
  int r = 0, g = 0, b = 0;
  for (int i = 0; i < screenshot.pixels.length; i++) {
    color c = screenshot.pixels[i];
    r += c>>16&0xFF;
    g += c>>8&0xFF;
    b += c&0xFF;
  }
  r /= screenshot.pixels.length;
  g /= screenshot.pixels.length;
  b /= screenshot.pixels.length;
  return color(r, g, b);
}

color extractAverageColorFromImage(PImage screenshot) { // Get Lab average color (perceptual)
  float[] average = new float[3];
  CIELab lab = new CIELab();
  screenshot.loadPixels(); // make sure pixels[] is available
  int numPixels = screenshot.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    color rgb = screenshot.pixels[i];
    float[] labValues = lab.fromRGB(new float[]{red(rgb), green(rgb), blue(rgb)});
    average[0] += labValues[0];
    average[1] += labValues[1];
    average[2] += labValues[2];
  }
  average[0] /= numPixels;
  average[1] /= numPixels;
  average[2] /= numPixels;
  float[] rgb = lab.toRGB(average);
  return color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
}
public class CIELab extends ColorSpace {

  @Override
  public float[] fromCIEXYZ(float[] colorvalue) {
    double l = f(colorvalue[1]);
    double L = 116.0 * l - 16.0;
    double a = 500.0 * (f(colorvalue[0]) - l);
    double b = 200.0 * (l - f(colorvalue[2]));
    return new float[] {(float) L, (float) a, (float) b};
  }

  @Override
  public float[] fromRGB(float[] rgbvalue) {
    float[] xyz = CIEXYZ.fromRGB(rgbvalue);
    return fromCIEXYZ(xyz);
  }

  @Override
  public float getMaxValue(int component) {
    return 128f;
  }

  @Override
  public float getMinValue(int component) {
    return (component == 0) ? 0f : -128f;
  }

  @Override
  public String getName(int idx) {
    return String.valueOf("Lab".charAt(idx));
  }

  @Override
  public float[] toCIEXYZ(float[] colorvalue) {
    double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
    double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
    double Y = fInv(i);
    double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
    return new float[] {(float) X, (float) Y, (float) Z};
  }

  @Override
  public float[] toRGB(float[] colorvalue) {
    float[] xyz = toCIEXYZ(colorvalue);
    return CIEXYZ.toRGB(xyz);
  }

  CIELab() {
    super(ColorSpace.TYPE_Lab, 3);
  }

  private double f(double x) {
    if (x > 216.0 / 24389.0) {
      return Math.cbrt(x);
    } else {
      return (841.0 / 108.0) * x + N;
    }
  }

  private double fInv(double x) {
    if (x > 6.0 / 29.0) {
      return x*x*x;
    } else {
      return (108.0 / 841.0) * (x - N);
    }
  }

  private final ColorSpace CIEXYZ = ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
  private final double N = 4.0 / 29.0;
}
There's lots that can be done, even beyond what's already been mentioned.
Iteration & Threading
After taking the screenshot, immediately iterate over every Nth pixel (perhaps every 4th or 8th) of the buffered image. During this iteration, calculate the LAB value for each pixel (since you have each pixel's channels directly available), and meanwhile increment the running total of each RGB channel.
This saves us from iterating over the same pixels twice and avoids unnecessary conversions (BufferedImage → PImage; and composing then decomposing pixel channels from PImage pixels).
Likewise, we avoid Processing's expensive resize() call (as suggested in another answer), which is not something we want to call every frame (even though it does speed the program up, it's not an efficient method).
On top of the iteration change, we can wrap the iteration in a Callable to easily run the workload across multiple system threads concurrently (after all, pixel iteration is embarrassingly parallel); the example below does this with 2 threads, each screenshotting and processing half of the display's pixels.
Optimise RGB→XYZ→LAB conversion
We're not so concerned about the backwards conversion, since that's only done for one value per frame.
It looks like you've implemented XYZ→LAB yourself and are using the RGB→XYZ converter from java.awt.color.
As has been identified, the forward conversion XYZ→LAB uses cbrt(), which is a bottleneck. I also imagine that the RGB→XYZ implementation makes 3 calls to Math.pow(x, 2.4); 3 non-integer exponents per pixel add considerably to the computation. The solution is faster math...
Jafama
Jafama is a drop-in replacement for java.lang.Math: simply import the library and replace any Math.__() calls with FastMath.__() for a free speedup (you could go even further by trading Jafama's E-15 precision for the less accurate and even faster dedicated LUT-based classes).
So at the very least, swap out Math.cbrt() for FastMath.cbrt(). Then consider implementing RGB→XYZ yourself (example), again using Jafama in place of java.lang.Math.
You may even find that for such a project, XYZ alone is a sufficient color space to work with to overcome the well-known weaknesses of RGB (and thereby save yourself the XYZ→LAB conversion).
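For illustration, a hand-rolled sRGB→XYZ conversion might look like the sketch below; the matrix and gamma constants are the standard sRGB/D65 ones, but treat the structure (and the Jafama import) as an assumption to adapt:

import net.jafama.FastMath; // Jafama jar added to the sketch

// sRGB (0-255 per channel) -> CIE XYZ, with FastMath.pow() doing the gamma expansion
float[] rgbToXYZ(int r, int g, int b) {
  double R = linearize(r / 255.0);
  double G = linearize(g / 255.0);
  double B = linearize(b / 255.0);
  // standard sRGB D65 matrix
  double X = 0.4124 * R + 0.3576 * G + 0.1805 * B;
  double Y = 0.2126 * R + 0.7152 * G + 0.0722 * B;
  double Z = 0.0193 * R + 0.1192 * G + 0.9505 * B;
  return new float[] {(float) X, (float) Y, (float) Z};
}

double linearize(double c) {
  // piecewise sRGB gamma expansion (this is where the non-integer exponent lives)
  return (c <= 0.04045) ? c / 12.92 : FastMath.pow((c + 0.055) / 1.055, 2.4);
}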
Cache LAB Calculation
Unless most pixels are changing every frame, consider caching the LAB value for every pixel, recalculating it only when the pixel has changed between the current and previous frames. The tradeoff here is the overhead of checking every pixel against its previous value, versus how much calculation the positive checks will save. Given that the LAB calculation is much more expensive, it's very worthwhile here. The example below uses this technique.
Screen Capture
No matter how well optimised the rest of the program is, a considerable bottleneck is the AWT Robot's createScreenCapture(). It will struggle to go past 30 FPS on large enough displays. I can't offer any exact advice, but it's worth looking at other screen capture methods in Java.
Reworked code with iteration changes & threading
This code implements what has been discussed above, minus any changes to the LAB calculation.
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.util.List;

float a = 0;
int blockSize = 20;
int avg_c;
int per_c;

java.util.concurrent.ExecutorService threadPool = java.util.concurrent.Executors.newFixedThreadPool(4);
List<java.util.concurrent.Callable<Boolean>> taskList;

float[] averageLAB;
int totalR = 0, totalG = 0, totalB = 0;
CIELab lab = new CIELab();

final int pixelStride = 8; // look at every 8th pixel

void setup() {
  size(800, 800, FX2D);
  noStroke();
  frame.removeNotify();
  taskList = new ArrayList<java.util.concurrent.Callable<Boolean>>();
  Compute thread1 = new Compute(0, 0, width, height/2);
  Compute thread2 = new Compute(0, height/2, width, height/2);
  taskList.add(thread1);
  taskList.add(thread2);
}

void draw() {
  totalR = 0; // re-init
  totalG = 0; // re-init
  totalB = 0; // re-init
  averageLAB = new float[3]; // re-init
  final int numPixels = (width*height)/pixelStride;

  try {
    threadPool.invokeAll(taskList); // run threads now and block until completion of all
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  // calculate average LAB
  averageLAB[0] /= numPixels;
  averageLAB[1] /= numPixels;
  averageLAB[2] /= numPixels;
  final float[] rgb = lab.toRGB(averageLAB);
  per_c = color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);

  // calculate average RGB
  totalR /= numPixels;
  totalG /= numPixels;
  totalB /= numPixels;
  avg_c = color(totalR, totalG, totalB);

  background(avg_c); // Average color
  spiral();

  fill(255, 0, 0);
  text(frameRate, 10, 20);
}

class Compute implements java.util.concurrent.Callable<Boolean> {

  private final Rectangle screenRegion;
  private Robot robot_Screenshot;

  private final int[] previousRGB;
  private float[][] previousLAB;

  Compute(int x, int y, int w, int h) {
    screenRegion = new Rectangle(x, y, w, h);
    previousRGB = new int[w*h];
    previousLAB = new float[w*h][3];
    try {
      robot_Screenshot = new Robot();
    }
    catch (AWTException e1) {
      e1.printStackTrace();
    }
  }

  @Override
  public Boolean call() {
    BufferedImage rawScreenshot = robot_Screenshot.createScreenCapture(screenRegion);
    int[] ssPixels = new int[rawScreenshot.getWidth()*rawScreenshot.getHeight()]; // screenshot pixels
    rawScreenshot.getRGB(0, 0, rawScreenshot.getWidth(), rawScreenshot.getHeight(), ssPixels, 0, rawScreenshot.getWidth()); // copy buffer to int[] array

    for (int pixel = 0; pixel < ssPixels.length; pixel += pixelStride) {
      // get individual colour channels
      final int pixelColor = ssPixels[pixel];
      final int R = pixelColor >> 16 & 0xFF;
      final int G = pixelColor >> 8 & 0xFF;
      final int B = pixelColor & 0xFF;

      if (pixelColor != previousRGB[pixel]) { // if pixel has changed, recalculate LAB value
        float[] labValues = lab.fromRGB(new float[]{R/255f, G/255f, B/255f}); // note that I've fixed this; beforehand you were missing the /255, so it was always white.
        previousLAB[pixel] = labValues;
      }

      averageLAB[0] += previousLAB[pixel][0];
      averageLAB[1] += previousLAB[pixel][1];
      averageLAB[2] += previousLAB[pixel][2];

      totalR += R;
      totalG += G;
      totalB += B;

      previousRGB[pixel] = pixelColor; // cache last result
    }
    return true;
  }
}
800x800px; pixelStride = 4; fairly static screen background
Yeesh, about 1 FPS on my machine:
Optimizing code can be really hard, so instead of reading through everything looking for things to improve, I started by testing where you were losing so much processing power. The answer was at this line:
per_c = extractAverageColorFromImage(screenshot);
The extractAverageColorFromImage method is well written, but it underestimates the amount of work it has to do. The number of pixels grows quadratically with the screen's dimensions, so the bigger the screen, the worse the situation gets. And this method processes every pixel of the screenshot, every single frame.
This is a lot of work for an average color. Now, if only there was a way to cut some corners... maybe a smaller screen, or a smaller screenshot... oh, there is! Let's resize the screenshot. After all, we don't need detail down to the individual pixel for an average. In the screenshot method, add this line:
void screenshot() {
  try {
    Robot robot_Screenshot = new Robot();
    screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
    // ADD THE NEXT LINE
    screenshot.resize(width/4, height/4);
  }
  catch (AWTException e) {
  }
  frame.setLocation(displayWidth/2, 0);
}
Here I divided each dimension by 4 (so roughly 16x fewer pixels to process), but I encourage you to tweak this number until you have the fastest satisfying result you can get. This is just a proof of concept:
As you can see, resizing the screenshot this way gives me about 10x more speed. That's not a miracle, but it's much better, and I can't see a difference in the end result. About that part, though, you'll have to use your own judgment, as you are the one who knows what your project is about. Hope it'll help!
Have fun!
Unfortunately I can't provide a detailed answer like laancelot (+1), but hopefully I can provide a few tips:
1. Resizing the image is definitely a good direction. Bear in mind you can also skip a number of pixels instead of incrementing through every single pixel; if you handle the pixel indices correctly, you can get a similar effect to resize() without calling it (though that won't save you a lot of CPU time).
2. Don't create a new Robot instance multiple times a second. Create it once in setup() and re-use it. (This is more of a good habit to get into.)
3. Use a CPU profiler, such as the one in VisualVM, to see what exactly is slow and aim to optimise the slowest parts first.
point 1 example:

for (int i = 0; i < numPixels; i += 100)

point 2 example:

Robot robot_Screenshot;

...

void setup() {
  fullScreen(2); // 1920x1080
  noStroke();
  frame.removeNotify();
  try {
    robot_Screenshot = new Robot();
  }
  catch (AWTException e) {
    println("error setting up screenshot Robot instance");
    e.printStackTrace();
  }
}

...

void screenshot() {
  screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
  frame.setLocation(displayWidth/2, 0);
}
point 3 example:
Notice the slowest bits are actually AWT's fromRGB and Math.cbrt().
I'd suggest finding an alternative RGB -> XYZ -> L*a*b* conversion method that is simpler (mainly functions, fewer classes, no AWT or other dependencies) and hopefully faster.
Using video capture in Processing, I want to understand how to set up a small section of the camera feed that the camera will constantly scan. Within that defined section, I want the camera to look for a change in brightness (i.e. the area becomes dark). If the brightness changes, I just want it to return 'shadow detected'. Can anyone help me get started? I am very new to this language.
You can easily get a small section of the camera (or any image) using PImage's get() method, passing the coordinates describing your section rectangle (x, y, width, height).
This is also known as a region of interest (ROI) in computer vision.
Once you retrieve this region, you can process it.
Here's a minimal example showing how to get the ROI and process it (in this case simply applying a threshold based on the mouse position):
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;

void setup(){
  size(w,h);
  cam = new Capture(this,w,h);
  cam.start();
}

void draw(){
  image(cam,0,0);
  if(roi != null){
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD,(float)mouseX/width);
    //display output
    image(roi,roiX,roiY);
  }
}

void captureEvent(Capture c){
  c.read();
  roi = c.get(roiX,roiY,roiW,roiH);
}
You can get the brightness of a pixel using the brightness() function.
This means you can get the average brightness of your ROI by adding up the brightness levels of each pixel, then dividing the result by the total number of pixels:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;

void setup(){
  size(w,h);
  fill(127);
  cam = new Capture(this,w,h);
  cam.start();
}

void draw(){
  image(cam,0,0);
  if(roi != null){
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD,(float)mouseX/width);
    //display output
    image(roi,roiX,roiY);
    text("ROI brightness:"+brightness(roi),10,15);
  }
}

void captureEvent(Capture c){
  c.read();
  roi = c.get(roiX,roiY,roiW,roiH);
}

float brightness(PImage in){
  float brightness = 0.0;
  in.loadPixels(); //make sure the pixels[] array is available
  int numPixels = in.pixels.length;
  for(int i = 0 ; i < numPixels; i++) brightness += brightness(in.pixels[i]);
  return brightness/numPixels;
}
If you've set your ROI to cover the bright area, you should see the average brightness go down as the shadow appears. Simply using a threshold value in a condition should allow you to act on it:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;

float brightness = 0.0;
float shadowThresh = 127.0;

void setup(){
  size(w,h);
  fill(127);
  cam = new Capture(this,w,h);
  cam.start();
}

void draw(){
  image(cam,0,0);
  if(roi != null){
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD,(float)mouseX/width);
    brightness = brightness(roi);
    if(brightness < shadowThresh) println("shadow detected");
    //display output
    image(roi,roiX,roiY);
    text("ROI brightness:"+brightness,10,15);
  }
}

void captureEvent(Capture c){
  c.read();
  roi = c.get(roiX,roiY,roiW,roiH);
}

float brightness(PImage in){
  float brightness = 0.0;
  in.loadPixels(); //make sure the pixels[] array is available
  int numPixels = in.pixels.length;
  for(int i = 0 ; i < numPixels; i++) brightness += brightness(in.pixels[i]);
  return brightness/numPixels;
}
Hopefully these examples are easy to read and understand.
Note that they aren't as fast as they could be.
Be sure to also check out the video examples that come with Processing (Examples > Libraries > video > Capture), especially these: BrightnessThresholding, BrightnessTracking.
If you want to learn more about techniques like these, you should look into computer vision and the OpenCV library. There is a very nice OpenCV for Processing library which you can now easily install via Sketch > Import Library... > Add Library... and selecting OpenCV for Processing. It also comes with examples on using brightness.
This covers the pixel manipulation side, but another important aspect of doing this sort of development is the setup. It's crucial to have a reliable setup: it will make your life easier. What I mean by that, in your case, is:
- having control over the camera: being able to control auto white balance/brightness/etc., as automatic adjustments may throw off your values.
- having control over the scene: making sure you reduce the risk of accidental lights messing with your tracking, or of something bumping the camera or the object you're tracking.
Assuming the camera data you are analysing is a PImage, you can apply filters to get it into a black/white or grayscale form. The docs on PImage filter modes should be useful: https://processing.org/reference/filter_.html
You will probably have to do a pixel analysis. There may be a library to help here, but you can get the array of pixels from the filtered PImage, loop through it, and check the values against your baseline values to see if they are brighter or darker. If they are grayscale values in the 0-255 range, a pixel is lighter if the number is higher than the baseline, and darker if the number is lower. A minimal sketch of that idea follows (the baseline value and the filter step are assumptions you'd tune for your own scene):
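float baseline = 200; // hypothetical reference brightness (0-255), measured with no shadow present

// true if the average gray value of the image falls below the baseline
boolean isDarker(PImage img) {
  img.filter(GRAY); // reduce to grayscale first
  img.loadPixels();
  float sum = 0;
  for (int i = 0; i < img.pixels.length; i++) {
    sum += brightness(img.pixels[i]); // built-in brightness(), 0-255 per pixel
  }
  return (sum / img.pixels.length) < baseline;
}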
I was wondering if anyone can be amazing and help me with something I'm working on in Processing. I need to play a video file with transparency over a live feed so that the video isn't simply a rectangle. Here is the section of the code that I think I need to add something to or change. I'm extremely new to all of this and extremely grateful to anyone who can help.
If your video has an alpha channel, that's great;
otherwise, you should be able to blend() the other content.
Here's a basic proof of concept sketch. It overlays a grid of circles on top of a live feed. Use the space key to cycle through blend modes. Some will work better than others depending on your content and what you're trying to achieve:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

PImage overlay;
int blendMode = 1;
int[] blendModes = {BLEND,ADD,SUBTRACT,DARKEST,LIGHTEST,DIFFERENCE,EXCLUSION,MULTIPLY,SCREEN,OVERLAY,HARD_LIGHT,SOFT_LIGHT,DODGE,BURN};
String[] blendModesNames = {"BLEND","ADD","SUBTRACT","DARKEST","LIGHTEST","DIFFERENCE","EXCLUSION","MULTIPLY","SCREEN","OVERLAY","HARD_LIGHT","SOFT_LIGHT","DODGE","BURN"};

void setup(){
  size(w,h);
  cam = new Capture(this,w,h);
  cam.start();
  //test content to overlay: a grid of circles
  background(0);
  fill(255);
  for(int y = 0 ; y < height; y += 30)
    for(int x = 0 ; x < width; x += 30)
      ellipse(x,y,15,15);
  overlay = get();
}

void draw(){
  image(cam,0,0);
  blend(overlay,0,0,width,height,0,0,width,height,blendModes[blendMode]);
}

void keyReleased(){
  if(key == ' ') {
    blendMode = (blendMode+1)%blendModes.length;
    println("blendMode: " + blendModesNames[blendMode]);
  }
}

void captureEvent(Capture c){
  c.read();
}
I solved it (this can probably be improved) by using 2 videos: the first footage is the color map with white on the background; the second footage is the matte mask, white for the "important" part and black for the rest. Then I apply the mask() function; below is the important part of the code:
Movie mov1;
Movie mov2;

void setup() {
  ....code...
  mov1 = new Movie(this, "matte.mov");
  mov2 = new Movie(this, "alpha.mov");
  mov1.play();
  mov1.pause();
  mov2.play();
  mov2.pause();
}

void draw() {
  ...code...
  mov1.play();
  mov2.play();
  loadPixels();
  mov2.mask(mov1);
  image(mov2, 0, 0);
}
The video used for the test was 256x256; I always use power-of-two dimensions for better performance (float maths). Hope this helps someone!