My shape will not rotate smoothly in Processing

I am trying to rotate a shape in Processing using serial data from a gyroscope. The shape rotates fine from 0-90, but when the angle is greater than 90 it doesn't rotate until it reaches approximately 180. The serial data is correct, as I am printing it to the serial monitor. I should note that it does occasionally work in the 90-180 range. What can I do?
void draw(){
  pushMatrix();
  translate(200,200);
  rotateX(radians(90));
  rotateY(myVal);
  scale(50);
  beginShape(QUADS);
  // ... (quad vertices omitted in the original post)
  while (mySerial.available() > 0){
    myString = mySerial.readStringUntil(nl);
    if(myString != null){
      background(0.5);
      myVal = float(myString);
      myVal = radians(-1 * myVal);
    }
  }
  // ... (rest of draw() omitted in the original post)
}

The while loop is blocking the main animation/rendering thread.
An alternative is to pair bufferUntil() with serialEvent():
e.g.
void setup(){
  ...
  // lf here is the linefeed character, e.g. final char lf = '\n';
  mySerial.bufferUntil(lf);
}
...
void serialEvent(Serial p){
  myString = mySerial.readString();
  myVal = float(myString.trim());
  myVal = radians(-1 * myVal);
}
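Put together, a minimal non-blocking version could look like the sketch below. Treat it as a sketch rather than a drop-in fix: the port index, the baud rate, and the box() stand-in for your QUADS shape are all assumptions to adapt to your setup:
import processing.serial.*;

Serial mySerial;
float myVal = 0;

void setup(){
  size(400, 400, P3D);
  // assumption: first listed serial port at 9600 baud; match your device
  mySerial = new Serial(this, Serial.list()[0], 9600);
  mySerial.bufferUntil('\n'); // fire serialEvent() once per full line
}

void draw(){
  background(0);
  translate(width/2, height/2);
  rotateX(radians(90));
  rotateY(myVal);
  box(100); // stand-in for the original QUADS shape
}

// runs whenever a complete line has arrived,
// so draw() is never blocked waiting for serial data
void serialEvent(Serial p){
  String myString = p.readString();
  if (myString != null){
    myVal = radians(-1 * float(myString.trim()));
  }
}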

Related

How to make something "instantaneously" in Processing

I'm sorry if this is a weird or stupid question, but I genuinely couldn't find an answer. The thing is, I'm trying to do a visual representation of the multiplication tables, so I divide a circle into a certain number of "slices", for example 10. Then I join each point with its product; for example, for the 2 times table I make a line between 1 and 2, another one between 2 and 4, 3 and 6, and so on...
The thing is, if I surpass a certain number of "slices" I can clearly see Processing drawing each one of the lines one by one. I wanted to progressively change the number of slices so you can see how it evolves, but then the lines would just have to "appear" or change instantaneously, since the "animation" makes no sense if you see it drawing every line. Is there a way I can improve the speed of the program, or just make it show all lines at once?
For reference, this is roughly how I want it to look:
YouTube video
This is the code I'm using (with the ControlP5 library, and soooo poorly optimized):
import controlP5.*;

ControlP5 cp5;
Knob myKnobA;
Knob myKnobB;
int ncosas = 30;
float sumangle = (2*PI)/ncosas;
float angle = HALF_PI + PI + sumangle;
int radius = 100;
int counter = 1;
int sumar = 15;
int tablade = 2;
int prueba = 30;

void setup(){
  size(400,400);
  background(255);
  textAlign(CENTER,CENTER);
  fill(0);
  stroke(0);
  textSize(8);
  cp5 = new ControlP5(this);
  myKnobA = cp5.addKnob("Servo")
    .setRange(1,120)
    .setValue(1)
    .setPosition(20,20)
    .setRadius(30)
    .setDragDirection(Knob.HORIZONTAL)
    .setCaptionLabel("N")
    .setColorCaptionLabel(0)
    ;
  myKnobB = cp5.addKnob("TablaD")
    .setRange(1,50)
    .setValue(1)
    .setPosition(20,120)
    .setRadius(30)
    .setDragDirection(Knob.HORIZONTAL)
    .setCaptionLabel("Tabla de")
    .setColorCaptionLabel(0)
    ;
  //translate(height/2,width/2);
  //line(0,0,radius*sin(radians(prueba)),radius*cos(radians(prueba)));
}

void draw(){
  if(counter <= ncosas){
    dibujar();
  }
}

void Servo(int theValue){
  background(255);
  counter = 1;
  ncosas = theValue;
  sumangle = (2*PI)/ncosas;
  angle = HALF_PI + PI + sumangle;
}

void TablaD(int theValue){
  background(255);
  counter = 1;
  tablade = theValue;
  angle = HALF_PI + PI + sumangle;
}

void dibujar(){
  pushMatrix();
  translate(width*2.5/4,height/2);
  circle(radius*sin(angle),radius*cos(angle),2);
  //if(counter*tablade<=ncosas){
  line(radius*sin(angle), radius*cos(angle),
       radius*sin((counter*tablade*sumangle)+(angle-counter*sumangle)),
       radius*cos((counter*tablade*sumangle)+(angle-counter*sumangle)));
  //}
  println(counter*tablade + " -> " + counter*tablade*degrees(sumangle));
  text(counter,(radius+sumar)*sin(angle),(radius+sumar)*cos(angle));
  angle += sumangle;
  counter++;
  popMatrix();
}

void keyPressed(){
  if (key == 'D' || key == 'd'){
    Servo(int(myKnobA.getValue())+1);
    myKnobA.setValue(int(myKnobA.getValue())+1);
  }
  if (key == 'A' || key == 'a'){
    Servo(int(myKnobA.getValue())-1);
    myKnobA.setValue(int(myKnobA.getValue())-1);
  }
  if (key == 'W' || key == 'w'){
    TablaD(int(myKnobB.getValue())+1);
    myKnobB.setValue(int(myKnobB.getValue())+1);
  }
  if (key == 'S' || key == 's'){
    TablaD(int(myKnobB.getValue())-1);
    myKnobB.setValue(int(myKnobB.getValue())-1);
  }
}
Thank you in advance
To expand on what John Coleman said, you need to execute the dibujar() command multiple times in draw(). In Processing, the canvas is rendered at the end of the draw() loop, so if you draw multiple lines in draw(), they will all appear at the same time.
This will involve some kind of loop. If you want to draw the entire multiplication circle at once, you could replace if with while in the draw() loop:
void draw(){
  while (counter <= ncosas){
    dibujar();
  }
}
I believe this will draw the entire multiplication circle in a single frame. You can then adjust the knobs to change the parameters of the multiplication circle, and the multiplication circle will change as you adjust the knobs.
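If you also want the evolving-slices animation, the usual Processing pattern is to clear and redraw the whole figure every frame. Here's a minimal standalone sketch of that idea (my own sketch, not the poster's code, without ControlP5; variable names mirror the question's):
int ncosas = 100;  // number of slices
int tablade = 2;   // which times table
int radius = 150;

void setup(){
  size(400, 400);
}

void draw(){
  background(255);
  stroke(0);
  translate(width/2, height/2);
  float sumangle = TWO_PI / ncosas;
  // draw every line of the current figure in a single frame
  for (int i = 0; i < ncosas; i++){
    float a = i * sumangle;
    float b = (i * tablade % ncosas) * sumangle;
    line(radius*sin(a), radius*cos(a), radius*sin(b), radius*cos(b));
  }
  // slowly grow the number of slices to show the evolution
  if (frameCount % 30 == 0 && ncosas < 300){
    ncosas++;
  }
}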

Processing | Program is lagging

I'm new to Processing and I need to make a program that captures the main monitor, shows its average color on the second screen, and draws a spiral using another color (a perceptual dominant color) obtained by a function.
The problem is that the program is very slow (it lags, around 1 FPS). I think it's because it has too many things to do for every screenshot, but I have no idea how to make it faster.
There could be many other problems too, but that's the main one.
Thank you very much!
Here's the code:
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.color.ColorSpace;

PImage screenshot;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;

void setup() {
  fullScreen(2); // 1920x1080
  noStroke();
  frame.removeNotify();
}

void draw() {
  screenshot();
  avg_c = extractColorFromImage(screenshot);
  per_c = extractAverageColorFromImage(screenshot);
  background(avg_c); // Average color
  spiral();
}

void screenshot() {
  try{
    Robot robot_Screenshot = new Robot();
    screenshot = new PImage(robot_Screenshot.createScreenCapture
      (new Rectangle(0, 0, displayWidth, displayHeight)));
  }
  catch (AWTException e){ }
  frame.setLocation(displayWidth/2, 0);
}

void spiral() {
  fill (per_c);
  for (int i = blockSize; i < width; i += blockSize*2)
  {
    ellipse(i, height/2+sin(a+i)*100, blockSize+cos(a+i)*5, blockSize+cos(a+i)*5);
    a += 0.001;
  }
}

color extractColorFromImage(PImage screenshot) { // Get average color
  screenshot.loadPixels();
  int r = 0, g = 0, b = 0;
  for (int i = 0; i < screenshot.pixels.length; i++) {
    color c = screenshot.pixels[i];
    r += c>>16&0xFF;
    g += c>>8&0xFF;
    b += c&0xFF;
  }
  r /= screenshot.pixels.length;
  g /= screenshot.pixels.length;
  b /= screenshot.pixels.length;
  return color(r, g, b);
}

color extractAverageColorFromImage(PImage screenshot) { // Get Lab average color (perceptual)
  float[] average = new float[3];
  CIELab lab = new CIELab();
  int numPixels = screenshot.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    color rgb = screenshot.pixels[i];
    float[] labValues = lab.fromRGB(new float[]{red(rgb), green(rgb), blue(rgb)});
    average[0] += labValues[0];
    average[1] += labValues[1];
    average[2] += labValues[2];
  }
  average[0] /= numPixels;
  average[1] /= numPixels;
  average[2] /= numPixels;
  float[] rgb = lab.toRGB(average);
  return color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
}

public class CIELab extends ColorSpace {

  @Override
  public float[] fromCIEXYZ(float[] colorvalue) {
    double l = f(colorvalue[1]);
    double L = 116.0 * l - 16.0;
    double a = 500.0 * (f(colorvalue[0]) - l);
    double b = 200.0 * (l - f(colorvalue[2]));
    return new float[] {(float) L, (float) a, (float) b};
  }

  @Override
  public float[] fromRGB(float[] rgbvalue) {
    float[] xyz = CIEXYZ.fromRGB(rgbvalue);
    return fromCIEXYZ(xyz);
  }

  @Override
  public float getMaxValue(int component) {
    return 128f;
  }

  @Override
  public float getMinValue(int component) {
    return (component == 0)? 0f: -128f;
  }

  @Override
  public String getName(int idx) {
    return String.valueOf("Lab".charAt(idx));
  }

  @Override
  public float[] toCIEXYZ(float[] colorvalue) {
    double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
    double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
    double Y = fInv(i);
    double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
    return new float[] {(float) X, (float) Y, (float) Z};
  }

  @Override
  public float[] toRGB(float[] colorvalue) {
    float[] xyz = toCIEXYZ(colorvalue);
    return CIEXYZ.toRGB(xyz);
  }

  CIELab() {
    super(ColorSpace.TYPE_Lab, 3);
  }

  private double f(double x) {
    if (x > 216.0 / 24389.0) {
      return Math.cbrt(x);
    } else {
      return (841.0 / 108.0) * x + N;
    }
  }

  private double fInv(double x) {
    if (x > 6.0 / 29.0) {
      return x*x*x;
    } else {
      return (108.0 / 841.0) * (x - N);
    }
  }

  private final ColorSpace CIEXYZ =
    ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
  private final double N = 4.0 / 29.0;
}
There's lots that can be done, even beyond what's already been mentioned.
Iteration & Threading
After taking the screenshot, immediately iterate over every Nth pixel (perhaps every 4th or 8th) of the buffered image. During this iteration, calculate the LAB value for each pixel (as you have each pixel channel directly available), and meanwhile increment the running total of each RGB channel.
This saves us from iterating over the same pixels twice and avoids unnecessary conversions (BufferedImage → PImage; and composing then decomposing pixel channels from PImage pixels).
Likewise, we avoid Processing's expensive resize() call (as suggested in another answer), which is not something we want to call every frame (even though it does speed the program up, it's not an efficient method).
Now, on top of the iteration change, we can wrap the iteration in a Callable to easily run the workload across multiple system threads concurrently (after all, pixel iteration is embarrassingly parallel); the example below does this with 2 threads, each screenshotting and processing half of the display's pixels.
Optimise RGB→XYZ→LAB conversion
We're not so concerned about the backwards conversion, since that's only done for one value per frame.
It looks like you've implemented XYZ→LAB yourself and are using the RGB→XYZ converter from java.awt.color.
As has been identified, the forward conversion XYZ→LAB uses cbrt(), which is a bottleneck. I also imagine that the RGB→XYZ implementation makes 3 calls to Math.pow(x, 2.4); 3 non-integer exponents per pixel add considerably to the computation. The solution is faster math...
Jafama
Jafama is a drop-in replacement for java.lang.Math: simply import the library and replace any Math.__() calls with FastMath.__() for a free speedup (you could go even further by trading Jafama's 1e-15 precision for its even faster, less accurate, dedicated LUT-based classes).
So at the very least, swap out Math.cbrt() for FastMath.cbrt(). Then consider implementing RGB→XYZ yourself (example), again using Jafama in place of java.lang.Math.
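For illustration, here's one way that could look: a hand-rolled sRGB→XYZ using the standard D65 matrix (my own sketch, not the linked example; swap Math.pow for FastMath.pow once Jafama is imported):
// Convert one sRGB pixel (channels in 0..1) to CIE XYZ (D65).
float[] rgbToXYZ(float r, float g, float b) {
  // undo the sRGB gamma curve for each channel
  float rl = (r <= 0.04045f) ? r / 12.92f : (float) Math.pow((r + 0.055) / 1.055, 2.4);
  float gl = (g <= 0.04045f) ? g / 12.92f : (float) Math.pow((g + 0.055) / 1.055, 2.4);
  float bl = (b <= 0.04045f) ? b / 12.92f : (float) Math.pow((b + 0.055) / 1.055, 2.4);
  // standard sRGB -> XYZ (D65) matrix
  return new float[] {
    0.4124f*rl + 0.3576f*gl + 0.1805f*bl,
    0.2126f*rl + 0.7152f*gl + 0.0722f*bl,
    0.0193f*rl + 0.1192f*gl + 0.9505f*bl
  };
}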
You may even find that for such a project, converting to XYZ only gives you a sufficient color space to work with to overcome the well-known weaknesses of RGB (and thereby save yourself the XYZ→LAB conversion).
Cache LAB Calculation
Unless most pixels are changing every frame, consider caching the LAB value for every pixel, recalculating it only when the pixel has changed between the current and previous frames. The tradeoff here is the overhead of checking every pixel against its previous value versus how much calculation positive checks will save. Given that the LAB calculation is much more expensive, it's very worthwhile here. The example below uses this technique.
Screen Capture
No matter how well optimised the rest of the program is, a considerable bottleneck is the AWT Robot's createScreenCapture(). It struggles to go past 30FPS on large enough displays. I can't offer any exact advice, but it's worth looking at other screen capture methods in Java.
Reworked code with iteration changes & threading
This code implements what has been discussed above, minus any changes to the LAB calculation.
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
// assumes the CIELab class from the question is present in the sketch

float a = 0;
int blockSize = 20;
int avg_c;
int per_c;

java.util.concurrent.ExecutorService threadPool = java.util.concurrent.Executors.newFixedThreadPool(4);
List<java.util.concurrent.Callable<Boolean>> taskList;

float[] averageLAB;
int totalR = 0, totalG = 0, totalB = 0;
CIELab lab = new CIELab();

final int pixelStride = 8; // look at every 8th pixel

void setup() {
  size(800, 800, FX2D);
  noStroke();
  frame.removeNotify();
  taskList = new ArrayList<java.util.concurrent.Callable<Boolean>>();
  Compute thread1 = new Compute(0, 0, width, height/2);
  Compute thread2 = new Compute(0, height/2, width, height/2);
  taskList.add(thread1);
  taskList.add(thread2);
}

void draw() {
  totalR = 0; // re-init
  totalG = 0; // re-init
  totalB = 0; // re-init
  averageLAB = new float[3]; // re-init
  final int numPixels = (width*height)/pixelStride;

  try {
    threadPool.invokeAll(taskList); // run threads now and block until completion of all
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  // calculate average LAB
  averageLAB[0] /= numPixels;
  averageLAB[1] /= numPixels;
  averageLAB[2] /= numPixels;
  final float[] rgb = lab.toRGB(averageLAB);
  per_c = color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);

  // calculate average RGB
  totalR /= numPixels;
  totalG /= numPixels;
  totalB /= numPixels;
  avg_c = color(totalR, totalG, totalB);

  background(avg_c); // Average color
  spiral();
  fill(255, 0, 0);
  text(frameRate, 10, 20);
}

class Compute implements java.util.concurrent.Callable<Boolean> {

  private final Rectangle screenRegion;
  private Robot robot_Screenshot;

  private final int[] previousRGB;
  private float[][] previousLAB;

  Compute(int x, int y, int w, int h) {
    screenRegion = new Rectangle(x, y, w, h);
    previousRGB = new int[w*h];
    previousLAB = new float[w*h][3];
    try {
      robot_Screenshot = new Robot();
    }
    catch (AWTException e1) {
      e1.printStackTrace();
    }
  }

  @Override
  public Boolean call() {
    BufferedImage rawScreenshot = robot_Screenshot.createScreenCapture(screenRegion);
    int[] ssPixels = new int[rawScreenshot.getWidth()*rawScreenshot.getHeight()]; // screenshot pixels
    rawScreenshot.getRGB(0, 0, rawScreenshot.getWidth(), rawScreenshot.getHeight(), ssPixels, 0, rawScreenshot.getWidth()); // copy buffer to int[] array

    for (int pixel = 0; pixel < ssPixels.length; pixel += pixelStride) {
      // get individual colour channels
      final int pixelColor = ssPixels[pixel];
      final int R = pixelColor >> 16 & 0xFF;
      final int G = pixelColor >> 8 & 0xFF;
      final int B = pixelColor & 0xFF;

      if (pixelColor != previousRGB[pixel]) { // if pixel has changed, recalculate LAB value
        float[] labValues = lab.fromRGB(new float[]{R/255f, G/255f, B/255f}); // note that I've fixed this; beforehand you were missing the /255, so it was always white.
        previousLAB[pixel] = labValues;
      }

      averageLAB[0] += previousLAB[pixel][0];
      averageLAB[1] += previousLAB[pixel][1];
      averageLAB[2] += previousLAB[pixel][2];

      totalR += R;
      totalG += G;
      totalB += B;

      previousRGB[pixel] = pixelColor; // cache last result
    }
    return true;
  }
}
800x800px; pixelStride = 4; fairly static screen background
Yeesh, about 1 FPS on my machine:
Optimizing code can be really hard, so instead of reading everything looking for things to improve, I started by testing where you were losing so much processing power. The answer was at this line:
per_c = extractAverageColorFromImage(screenshot);
The extractAverageColorFromImage method is well written, but it underestimates the amount of work it has to do. There is a quadratic relationship between the dimensions of a screen and the number of pixels in it, so the bigger the screen, the worse the situation. And the sketch is processing every pixel of the screenshot all the time, several times per screenshot.
This is a lot of work for an average color. Now, if there were a way to cut some corners... maybe a smaller screen, or a smaller screenshot... oh! There is! Let's resize the screenshot. After all, we don't need to go into as much detail as individual pixels for an average. In the screenshot method, add this line:
void screenshot() {
  try {
    Robot robot_Screenshot = new Robot();
    screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
    // ADD THE NEXT LINE
    screenshot.resize(width/4, height/4);
  }
  catch (AWTException e) {
  }
  frame.setLocation(displayWidth/2, 0);
}
I divided each dimension by 4 (so roughly 16x fewer pixels to process), but I encourage you to tweak this number until you have the fastest satisfying result you can. This is just a proof of concept:
As you can see, resizing the screenshot to a quarter of its size per dimension gives me 10x more speed. That's not a miracle, but it's much better, and I can't see a difference in the end result - though on that part you'll have to use your own judgement, as you are the one who knows what your project is about. Hope it'll help!
Have fun!
Unfortunately I can't provide a detailed answer like laancelot (+1), but hopefully I can provide a few tips:
Resizing the image is definitely a good direction. Bear in mind you can also skip a number of pixels instead of incrementing through every single pixel. (If you handle the pixel indices correctly, you can get a similar effect to resize() without calling resize(), though that won't save you a lot of CPU time.)
Don't create a new Robot instance multiple times a second. Create it once in setup and re-use it. (This is more of a good habit to get into)
Use a CPU profiler, such as the one in VisualVM to see what exactly is slow and aim to optimise the slowest stuff first.
point 1 example:
for (int i = 0; i < numPixels; i += 100) // sample every 100th pixel rather than all of them
point 2 example:
Robot robot_Screenshot;
...
void setup() {
  fullScreen(2); // 1920x1080
  noStroke();
  frame.removeNotify();
  try{
    robot_Screenshot = new Robot();
  }catch(AWTException e){
    println("error setting up screenshot Robot instance");
    e.printStackTrace();
  }
}
...
void screenshot() {
  screenshot = new PImage(robot_Screenshot.createScreenCapture
    (new Rectangle(0, 0, displayWidth, displayHeight)));
  frame.setLocation(displayWidth/2, 0);
}
point 3 example:
Notice the slowest bits are actually AWT's fromRGB and Math.cbrt()
I'd suggest finding an alternative RGB -> XYZ -> L*a*b* conversion method that is simpler (plain functions, fewer classes, no AWT or other dependencies) and hopefully faster.
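As a sketch of what "simpler and dependency-free" could look like for the forward step (my own sketch, using the same CIE constants as the f() in the question's CIELab class; like the original, it skips white-point normalization):
// XYZ -> L*a*b* with no AWT ColorSpace involved.
float[] xyzToLab(float x, float y, float z) {
  float fx = labF(x), fy = labF(y), fz = labF(z);
  return new float[] { 116*fy - 16, 500*(fx - fy), 200*(fy - fz) };
}

float labF(float t) {
  return (t > 216.0f / 24389.0f)
    ? (float) Math.cbrt(t)                  // the expensive call; FastMath.cbrt() is faster
    : (841.0f / 108.0f) * t + 4.0f / 29.0f; // linear segment near black
}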

Getting two consecutive frames from camera in Processing

I've been trying a couple of ways to achieve this using Processing, but each time some of the frames I get aren't perfectly consecutive. Would anyone know the "right way" of doing this?
Thanks in advance!
In theory it should be a matter of listening to captureEvent() to get a new frame and keeping track of whether the first frame was already recorded; if so, record the second one right after.
Here's a basic commented sketch to illustrate the point (press any key to grab another pair of frames):
import processing.video.*;

Capture camera;
PImage firstFrame;
PImage secondFrame;

void setup(){
  size(1920,480);
  camera = new Capture(this,640,480);
  camera.start();
}

void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
}

//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
  }
  //otherwise record the second frame on the next available event
  //(the else keeps both frames from being copied from the same event)
  else if(secondFrame == null){
    secondFrame = c.get();
  }
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}
In theory (as you can see in the Processing Video Library's source code), captureEvent is fired only when a new camera sample is ready.
In practice, you will find that two consecutive frames may look identical (even though they could be a split second apart in time), differing only by noise, as you pointed out in your comments.
It feels like what you're after is a frame that is consecutive but different enough from the previous one. If that's the case, you can have a play with the FrameDifferencing example (Processing > Examples > Libraries > Video > Capture > FrameDifferencing)
Here's a modified version of the above sketch, using Golan Levin's FrameDifferencing code to only grab a second frame if it's different by a little bit:
import processing.video.*;

Capture camera;
PImage firstFrame;
PImage secondFrame;
PImage diff;

void setup(){
  size(1920,960);
  camera = new Capture(this,640,480);
  camera.start();
  diff = createImage(640,480,RGB);
}

void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
  image(diff,0,480);
}

//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
    println("recorded first frame at",new java.util.Date());
  }
  //same for the second frame, but check if the first frame has been recorded first
  if(firstFrame != null && secondFrame == null){
    //if the current frame differs from the first frame even slightly, record it as the second frame
    if(difference(firstFrame,camera) > 100){
      secondFrame = c.get();
    }
  }
}

int difference(PImage first,PImage second){
  final int numPixels = 640*480;
  camera.loadPixels();
  int movementSum = 0; // Amount of movement in the frame
  for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
    color currColor = first.pixels[i];
    color prevColor = second.pixels[i];
    // Extract the red, green, and blue components from current pixel
    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Render the difference image to the screen
    diff.pixels[i] = color(diffR, diffG, diffB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;
  }
  diff.updatePixels();
  return movementSum;
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}
In the example above 100 is an arbitrary value.
The max would be 255*3*640*480 (0-255 per channel * number of channels * width * height)
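If an absolute number feels too magic, you could normalise the sum against that maximum so the threshold reads as a fraction (a small sketch of mine; the 0.01% cutoff is an arbitrary placeholder to tune):
// Express movement as a fraction of the maximum possible difference,
// so the threshold is independent of resolution.
boolean framesDiffer(int movementSum) {
  final float maxSum = 255.0f * 3 * 640 * 480; // max per channel * channels * width * height
  return movementSum / maxSum > 0.0001f;       // more than 0.01% of the maximum change
}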
I looked at the KetaiCamera sources and issue reports, especially this one. Unfortunately this code is not built to provide real-time camera frames.
You can try to take the AndroidCapture project as a starting point, and modify the native Android class to achieve your goal.

Video with alpha channel in Processing

I was wondering if anyone can be amazing and help me with something I'm working on in Processing. I need to play a video file with transparencies over a live feed so that the video isn't simply a rectangle. Here is the section of the code that I think I need to add something to or change. I'm extremely new to all of this and I'm extremely grateful to anyone that can help.
If your video has an alpha channel, that's great; otherwise, you should be able to blend() the other content.
Here's a basic proof-of-concept sketch. It overlays a grid of circles on top of a live feed. Use the space key to cycle through blend modes. Some will work better than others depending on your content and what you're trying to achieve:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;
PImage overlay;
int blendMode = 1;
int[] blendModes = {BLEND,ADD,SUBTRACT,DARKEST,LIGHTEST,DIFFERENCE,EXCLUSION,MULTIPLY,SCREEN,OVERLAY,HARD_LIGHT,SOFT_LIGHT,DODGE,BURN};
String[] blendModesNames = {"BLEND","ADD","SUBTRACT","DARKEST","LIGHTEST","DIFFERENCE","EXCLUSION","MULTIPLY","SCREEN","OVERLAY","HARD_LIGHT","SOFT_LIGHT","DODGE","BURN"};

void setup(){
  size(w,h);
  cam = new Capture(this,w,h);
  cam.start();
  //test content to overlay: a grid of circles
  background(0);
  fill(255);
  for(int y = 0 ; y < height; y += 30)
    for(int x = 0 ; x < width; x += 30)
      ellipse(x,y,15,15);
  overlay = get();
}

void draw(){
  image(cam,0,0);
  blend(overlay,0,0,width,height,0,0,width,height,blendModes[blendMode]);
}

void keyReleased(){
  if(key == ' ') {
    blendMode = (blendMode+1)%blendModes.length;
    println("blendMode: " + blendModesNames[blendMode]);
  }
}

void captureEvent(Capture c){
  c.read();
}
I solved it (this can probably be improved) by using two videos: the first footage is the color map with white on the background; the second footage is the matte mask: white for the "important" part, and black for the rest. Then apply the mask() function. Below is the important part of the code:
Movie mov1;
Movie mov2;

void setup() {
  ....code...
  mov1 = new Movie(this, "matte.mov");
  mov2 = new Movie(this, "alpha.mov");
  mov1.play();
  mov1.pause();
  mov2.play();
  mov2.pause();
}

void draw() {
  ...code...
  mov1.play();
  mov2.play();
  loadPixels();
  mov2.mask(mov1);
  image(mov2, 0, 0);
}
The video used for the test was 256x256; I always use power-of-two sizes for better performance (float maths). Hope this helps someone!
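One caveat with mask(): the mask image must have the same dimensions as the image it is applied to, or mask() will complain. A small guard along these lines (a sketch assuming the mov1/mov2 above) avoids that during the first frames, before the movies report their size:
void draw() {
  mov1.play();
  mov2.play();
  // only apply the mask once both movies deliver same-sized frames
  if (mov1.width > 0 && mov1.width == mov2.width && mov1.height == mov2.height) {
    mov2.mask(mov1);
    image(mov2, 0, 0);
  }
}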

Processing/ Rhythm record

I'm doing a student project. I'm trying to record a rhythmic composition and draw a grid of vertical lines based on it. It will look like knocking a tram-pam-pam on a wooden box (Arduino; standard Firmata). Processing then needs to map the duration of this recording to the width of the screen and draw vertical lines at the times of the knocks.
Please help: where should I look to record these times and then map them to the screen?
So far I have this code, but it only draws lines on the knocks while there is screen space left, and saves to PDF.
import processing.serial.*;
import cc.arduino.*;
import processing.pdf.*;

Arduino arduino;
Serial myPort;
int x = 0;

void setup() {
  size(500, 500);
  background(#ffffff);
  println(Arduino.list());
  arduino = new Arduino(this, "/dev/tty.usbmodem1411", 57600);
  //Set the Arduino digital pins as inputs.
  arduino.pinMode(0, Arduino.INPUT);
  beginRecord(PDF, "everything.pdf");
}

void draw() {
  stroke(0);
  for (int i = 0; i <= 0; i++) {
    if (arduino.analogRead(i)>0) {
      line(x, 0, x, height);
    }
    else {
      x += 1;
    }
  }
}

void keyPressed() {
  endRecord();
  exit();
}
I got this done, finally. I suppose it can be done better and more cleanly, but the fact is that it works.
Here's a short video of the outcome:
https://www.dropbox.com/s/1dp5tqqx16zp4l7/Abramova-5FCC0022-video.mov?dl=0
And the code:
import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
PrintWriter output;
// The serial port:
Serial myPort;
int t = millis();
int[] time;

void setup() {
  size(500, 500);
  background(#ffffff);
  // Prints out the available serial ports.
  println(Arduino.list());
  // Modify this line, by changing the "0" to the index of the serial
  // port corresponding to your Arduino board (as it appears in the list
  // printed by the line above).
  arduino = new Arduino(this, "/dev/tty.usbmodem1411", 57600);
  // Alternatively, use the name of the serial port corresponding to your
  // Arduino (in double-quotes), as in the following line.
  //arduino = new Arduino(this, "/dev/tty.usbmodem621", 57600);
  // Set the Arduino digital pins as inputs.
  arduino.pinMode(0, Arduino.INPUT);
  // Creates the output for the time, dedicated to the beats.
  output = createWriter("time.txt");
}

void draw() {
  // when the Arduino sends a signal, store the current
  // time in milliseconds since the program started
  if (arduino.analogRead(0) > 30) {
    // grabs the time passed between the start and the beat
    String numbers = "millis()";
    delay(100);
    output.print(millis() + ",");
  }
}

void keyPressed() {
  output.flush(); // Writes the remaining data to the file
  output.close(); // Finishes the file
}

void keyReleased() {
  // Interprets the string from the saved beat sequence
  String[] numbers = loadStrings("time.txt");
  time = int(split(numbers[0], ','));
  stroke(0);
  strokeWeight(5);
  // Draws lines, based on the string
  for (int i = 1; i < time.length; i++) {
    int c = time[i]-time[0];
    int d = time[time.length - 2];
    int e = time[0];
    int f = d-e;
    // Vertical lines
    line(c*500/f, 0, c*500/f, height);
    // Horizontal lines
    line(0, c*500/f, width, c*500/f);
    // Drawing rects
    // Yellow
    fill(255, 255, 0);
    int m = (time[1] - time[0])*500/f;
    int n = (time[2] - time[0])*500/f;
    rect(m, m, n-m, n-m);
    // Blue
    fill(0, 0, 255);
    int o = (time[8] - time[0])*500/f;
    int p = (time[4] - time[0])*500/f;
    rect(o, m, p-o, p-o);
    // Red
    fill(255, 0, 0);
    int v = (time[3] - time[0])*500/f;
    int k = (time[6] - time[0])*500/f;
    int z = (time[8] - time[0])*500/f;
    rect(v, p, z-v, k-v);
  }
}
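For what it's worth, the same record-then-map idea fits in a few lines without the file round-trip. A hedged sketch of mine (no Arduino here; a key press stands in for the knock sensor):
IntList knocks = new IntList(); // millis() timestamps of each "knock"

void setup() {
  size(500, 500);
}

void draw() {
  background(255);
  stroke(0);
  if (knocks.size() > 1) {
    int first = knocks.get(0);
    int last = knocks.get(knocks.size() - 1);
    for (int i = 0; i < knocks.size(); i++) {
      // map each timestamp from the recorded time range to the screen width
      float x = map(knocks.get(i), first, last, 0, width);
      line(x, 0, x, height);
    }
  }
}

void keyPressed() {
  knocks.append(millis()); // a key press stands in for the knock sensor
}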
