Saving point cloud data into a file - processing

I have the following code working on Processing 2, with Kinect:
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
// Kinect Library object
Kinect kinect;
// Angle for rotation
float a = 0;
// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];
void setup() {
// Rendering in P3D
size(1200, 800, P3D);
kinect = new Kinect(this);
kinect.initDepth();
// Lookup table for all possible depth values (0 - 2047)
for (int i = 0; i < depthLookUp.length; i++) {
depthLookUp[i] = rawDepthToMeters(i);
}
}
void draw() {
background(0);
// Get the raw depth as array of integers
int[] depth = kinect.getRawDepth();
// We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
int skip = 4;
// Translate and rotate
translate(width/2, height/2, -50);
rotateY(a);
for (int x = 0; x < kinect.width; x += skip) {
for (int y = 0; y < kinect.height; y += skip) {
int offset = x + y*kinect.width;
// Convert kinect data to world xyz coordinate
int rawDepth = depth[offset];
PVector v = depthToWorld(x, y, rawDepth);
stroke(255);
pushMatrix();
// Scale up by 200
float factor = 200;
translate(v.x*factor, v.y*factor, factor-v.z*factor);
// Draw a point
point(0, 0);
popMatrix();
}
}
// Rotate
a += 0.015f;
}
// These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
float rawDepthToMeters(int depthValue) {
if (depthValue < 2047) {
return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
}
return 0.0f;
}
PVector depthToWorld(int x, int y, int depthValue) {
final double fx_d = 1.0 / 5.9421434211923247e+02;
final double fy_d = 1.0 / 5.9104053696870778e+02;
final double cx_d = 3.3930780975300314e+02;
final double cy_d = 2.4273913761751615e+02;
PVector result = new PVector();
double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue);
result.x = (float)((x - cx_d) * depth * fx_d);
result.y = (float)((y - cy_d) * depth * fy_d);
result.z = (float)(depth);
return result;
}
I would like to save the point cloud data to a file so I can import it later into another program, such as Cinema 4D.
How do I create this file?

Processing has several functions for saving data to file, the simplest of which is saveStrings().
To use the saveStrings() function, you would simply store whatever you wanted to save into a String array, and then pass that into the function along with a filename.
You can then use the loadStrings() function to read the data from a file back into a String array.
How you format the data into a String is entirely up to you. You might store it as comma-separated values.
More info can be found in the reference.
If you want to store the data into a file that another program can read, you have to first look up exactly what format that file needs to be in. I'd start by opening up some example files in a basic text editor.
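For example, hooked into the sketch above, a minimal version that dumps one frame of points as plain-text "X Y Z" lines on a key press could look like this (the .xyz-style layout and the filename are just examples; check what Cinema 4D actually expects before settling on a format):
void keyPressed() {
  if (key == 's') {
    int[] depth = kinect.getRawDepth();
    ArrayList<String> lines = new ArrayList<String>();
    for (int x = 0; x < kinect.width; x += 4) {
      for (int y = 0; y < kinect.height; y += 4) {
        int rawDepth = depth[x + y*kinect.width];
        if (rawDepth < 2047) {  // skip invalid depth readings
          PVector v = depthToWorld(x, y, rawDepth);
          lines.add(v.x + " " + v.y + " " + v.z);
        }
      }
    }
    // saveStrings() writes one array element per line into the sketch folder
    saveStrings("pointcloud.xyz", lines.toArray(new String[0]));
    println("Saved " + lines.size() + " points");
  }
}
A point per line like this is easy to read back with loadStrings(), or to massage into whichever ASCII point-cloud format the target program imports.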

Related

How can I get the RGB/Greyscale value for a single pixel

I'm writing code in Processing that applies a filter to a photo by going over each pixel, extracting the RGB/grayscale value and modifying the RGB values. The program takes the grayscale value and runs it through a few if/else statements to determine how much to modify the RGB values. So far I have the code below, but I'm stumped on how to extract the RGB/gray values of a pixel.
PImage method(PImage image) {
loadPixels();
image.filter(GRAY);
for (int i = 0; i < image.width; i++) {
for (int j = 0; j < image.height; j++) {
//This part here is to store the RGB values
float R;
float G;
float B;
//Convert the RGB to Gray
float coordCol = (0.2989*R) + (0.5870*G) + (0.1140*B);
if (coordCol < 60) {
float rDark = R * 0.9;
float gDark = G * 0.9;
float bDark = B * 0.9;
} else if(60 <= coordCol && coordCol <= 190) {
float bTintBro = B * 0.7;
} else {
float bTintYel = B * 0.9;
}
}
}
return image; // change this to return a new PImage object
}
I've tried many methods (get(), pixels[], filter(GRAY), etc.), but so far I still can't get the RGB values for a pixel.
It's a question many will ask themselves, because Processing encodes its colors in a non-intuitive manner. But you're in luck: the helpful folks who coded Processing made a couple of methods that will get you exactly what you want. Here's the documentation for red(), the function that gets the R value; you should be able to track down the others from there.
Also, here's a short proof of concept demonstrating how to get the ARGB values from your sketch:
int rr, gg, bb, aa;
PImage bg;
void setup() {
size(600, 400);
// now setting up random colors for a test background
bg = createImage(width, height, RGB);
bg.loadPixels();
for (int i=0; i<width*height; i++) {
bg.pixels[i] = color(random(200), random(200), random(200), random(200));
}
bg.updatePixels();
}
void draw() {
background(bg);
// giving visual feedback
fill(255);
textSize(15);
text("R: " + rr, 10, 20);
text("G: " + gg, 10, 40);
text("B: " + bb, 10, 60);
text("A: " + aa, 10, 80);
}
// THIS IS WHERE THE INFO YOU WANT IS
void mouseClicked() {
loadPixels();
int index = mouseX + mouseY*width; // row-major index of the clicked pixel
rr = (int)red(pixels[index]);
gg = (int)green(pixels[index]);
bb = (int)blue(pixels[index]);
aa = (int)alpha(pixels[index]);
}
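Applied to the method from the question, reading and writing each pixel's RGB might look something like this (only a sketch: it keeps the question's thresholds and scaling factors, and assumes image is the PImage being filtered):
PImage method(PImage image) {
  image.loadPixels();
  for (int i = 0; i < image.width; i++) {
    for (int j = 0; j < image.height; j++) {
      int index = i + j*image.width;   // row-major index into pixels[]
      color c = image.pixels[index];
      float R = red(c);
      float G = green(c);
      float B = blue(c);
      // convert the RGB to gray
      float coordCol = (0.2989*R) + (0.5870*G) + (0.1140*B);
      if (coordCol < 60) {
        image.pixels[index] = color(R*0.9, G*0.9, B*0.9);
      } else if (coordCol <= 190) {
        image.pixels[index] = color(R, G, B*0.7);
      } else {
        image.pixels[index] = color(R, G, B*0.9);
      }
    }
  }
  image.updatePixels();
  return image;
}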
I hope it helps. Have fun!

Sierpinski carpet in processing

So I made the Sierpinski carpet fractal in Processing using a Square data type which draws a square and has a function generate() that splits it into 9 equal squares and returns an ArrayList of (9-1)=8 of them, leaving out the middle one (it is not added to the returned ArrayList), in order to generate the Sierpinski carpet.
Here is the class Square -
class Square {
PVector pos;
float r;
Square(float x, float y, float r) {
pos = new PVector(x, y);
this.r = r;
}
void display() {
noStroke();
fill(120,80,220);
rect(pos.x, pos.y, r, r);
}
ArrayList<Square> generate() {
ArrayList<Square> rects = new ArrayList<Square>();
float newR = r/3;
for (int i=0; i<3; i++) {
for (int j=0; j<3; j++) {
if (!(i==1 && j==1)) {
Square sq = new Square(pos.x+i*newR, pos.y+j*newR, newR);
rects.add(sq);
}
}
}
return rects;
}
}
This is the main sketch, which advances the generation on mouse click -
ArrayList<Square> current;
void setup() {
size(600, 600);
current = new ArrayList<Square>();
current.add(new Square(0, 0, width));
}
void draw() {
background(255);
for (Square sq : current) {
sq.display();
}
}
void mousePressed() {
ArrayList<Square> next = new ArrayList<Square>();
for(Square sq: current) {
ArrayList<Square> rects = sq.generate();
next.addAll(rects);
}
current = next;
}
The problem :
The output that I am getting has very thin white lines which are not supposed to be there :
First generation -
Second generation -
Third generation -
My guess is that these lines are just the white background showing through because the calculations in generate() are off by a pixel or two. However, I am not sure how to get rid of them. Any help would be appreciated!
Here's a smaller example that demonstrates your problem:
size(1000, 100);
noStroke();
background(0);
float squareWidth = 9.9;
for(float squareX = 0; squareX < width; squareX += squareWidth){
rect(squareX, 0, squareWidth, height);
}
Notice that the black background is showing through the squares. Please try to post this kind of minimal example instead of your whole sketch in the future.
Anyway, there are three ways to fix this:
Option 1: Call the noSmooth() function.
By default, Processing uses anti-aliasing to make your drawings look smoother. Usually this is a good thing, but it can also add some fuzziness to the edges of shapes. If you disable anti-aliasing, your shapes will be more clear and you won't see the artifacts.
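Applied to the small example above, that's just one extra call (same snippet, only noSmooth() added):
size(1000, 100);
noSmooth();   // turn off anti-aliasing so shape edges aren't blended with the background
noStroke();
background(0);
float squareWidth = 9.9;
for (float squareX = 0; squareX < width; squareX += squareWidth) {
  rect(squareX, 0, squareWidth, height);
}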
Option 2: Use a stroke with the same color as the fill.
As you've already discovered, this draws an outline around the shape.
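In the Square class from the question, that would mean giving display() a stroke in the same colour as the fill (a small sketch of the change):
void display() {
  stroke(120, 80, 220);   // same colour as the fill, so the outline covers any hairline gaps
  fill(120, 80, 220);
  rect(pos.x, pos.y, r, r);
}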
Option 3: Use int values instead of float values.
You're storing your coordinates and sizes in float values, which can contain decimal places. The problem is that the screen (the actual pixels on your monitor) doesn't have decimal places (there is no such thing as half a pixel), so pixel positions are represented by int values. When a float value is converted to an int, the decimal part is dropped, which can cause small gaps in your shapes.
If you just switch to using int values, the problem goes away:
size(1000, 100);
noStroke();
background(0);
int squareWidth = 10;
for(int squareX = 0; squareX < width; squareX += squareWidth){
rect(squareX, 0, squareWidth, height);
}

Kinect Depth Histogram in Processing

I'm trying to create a histogram displaying the distances scanned by a Kinect vs. their occurrences. I've adapted the Histogram example code to create a depth histogram, but it's currently displaying the depth at each pixel (from left to right) multiple times across the depth image width.
What I'm looking to do is reorder the depth information so that it ranges from the lowest value (that isn't 0) to the highest on the x axis, with their occurrences on the y. I'm using Processing, so I'm unsure if this is the right site to be posting on, but I've tried the Processing forum and not gotten any help. If anyone can show me where I'm going wrong, that'd be awesome. My current code is below, and a screenshot of my current output can be found here.
import SimpleOpenNI.*;
SimpleOpenNI kinect;
void setup() {
size(1200, 580);
kinect = new SimpleOpenNI(this);
kinect.enableDepth();
}
void draw () {
kinect.update();
PImage depthImage = kinect.depthImage();
image (depthImage, 11, 0);
int[] depthValues = kinect.depthMap();
int[] hist = new int[716800];
for (int x = 11; x < depthImage.width; x++) {
for (int y = 0; y < depthImage.height; y++) {
int i = x + y * 640;
hist[i] = depthValues[i];
}
}
int histMax = max(hist);
stroke(20);
for (int i = 0; i < depthImage.width; i += 2) {
int which = int(map(i, 0, depthImage.width, 0, histMax));
int y = int(map(hist[which], 0, histMax, depthImage.height, 0));
line(i, depthImage.height, i, y);
}
}
I think you're asking two questions here.
How to get the histogram to go from 0-N:
Use Processing's sort() function to sort the array.
hist = sort(hist); // sorts your array numerically
How to get the histogram to fill the screen:
I'm not entirely sure why it's drawing twice, but I think you can clean up your code quite a bit.
// how far apart are the bars - set based on screen dimensions
// (use a float so the spacing doesn't round down to 0 with a large array)
float barSpacing = (float) width / hist.length;
for (int i=0; i<hist.length; i++) {
// get value and map into usable range (note 10 not 0 for min)
int h = int(map(hist[i], 0,histMax, 10,height));
// set x position onscreen
float x = i * barSpacing;
// draw the bar
line(x,height, x,height-h);
}
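If the goal is really occurrences per distance (rather than a sorted copy of the depth map), one way is to count how many pixels fall at each depth and plot those counts. A rough sketch, assuming depthMap() returns millimetre values and using an assumed 8000 mm upper bound:
int[] depthValues = kinect.depthMap();
int maxDepth = 8000;               // assumed upper bound in mm
int[] hist = new int[maxDepth];
for (int i = 0; i < depthValues.length; i++) {
  int d = depthValues[i];
  if (d > 0 && d < maxDepth) {
    hist[d]++;                      // count occurrences of this distance
  }
}
int histMax = max(hist);
stroke(20);
for (int x = 0; x < width; x++) {
  // map each screen column to a depth bin, and its count to a bar height
  int bin = int(map(x, 0, width, 0, maxDepth));
  int y = int(map(hist[bin], 0, histMax, height, 0));
  line(x, height, x, y);
}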

Processing: Simultaneous drawing of random particle trajectories

class loc {
float[] x;
float[] y;
float v_o_x, v_o_y;
float[] locationx = new float[0];
float[] locationy = new float[0];
loc(float x_o, float y_o, float v_o, float theta, int t_end) {
theta = radians(theta);
v_o_x = v_o_x = v_o * cos(theta);
v_o_y = abs(v_o) * sin(theta);
for (int i=0; i<t_end; i++) {
locationx = append(locationx, (v_o_x * i + x_o));
locationy = append(locationy, (0.5*10*pow(i, 2) - v_o_y*i + y_o));
}
this.x = locationx;
this.y = locationy;
}
}
loc locations;
int wait = 75; // change delay between animation
int i = 0;
int j = 0;
float randV = random(-70, 70);
float randAng = random(30, 50);
int len = 17;
void setup() {
size(1500, 800);
background(255);
}
void draw() {
fill(0);
int d = 20; // diameter
float[] xx, yy;
if (i < len) {
locations = new loc(width/2, height/3.5, randV, randAng, len);
xx = locations.x;
yy = locations.y;
//background(255);
rect(width/2-d, height/3.5+d, d*2, d*2);
float s = 255/locations.x.length;
fill((0+i*s));
ellipse(xx[i], yy[i], d, d);
i += 1;
delay(wait);
} else {
randV = random(-70, 70);
randAng = random(30, 50);
i = 0;
}
}
I have some simple code written that animates the trajectory of a ball for a random initial angle and velocity. As it currently runs, it will send one ball out, wait for it to land, and then send another random ball out. My hope is to get it to simultaneously send out multiple random balls, to create a sort of fountain effect. I have had a lot of trouble getting it to do that; any suggestions?
Right now you've got some variables that represent the position (and past positions) of a single ball. For the sake of the question, I'll ignore for a second that you don't ever seem to use some of those variables.
You could copy all of those variables and repeat them for every ball you want. You would have ballOneLocations, ballTwoLocations, etc.
But that's pretty horrible, so you should wrap all of those variables up into a Ball class. Each instance of Ball would represent a separate ball and its past locations.
Then all you'd need to do is create an array or an ArrayList of Ball instances, and loop through them to update and draw them.
Here is a tutorial on how to use OOP in Processing to create multiple balls bouncing around the screen.
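A rough sketch of that structure, using illustrative names (Ball, balls) and simplified projectile physics rather than the loc class's precomputed arrays:
ArrayList<Ball> balls = new ArrayList<Ball>();

void setup() {
  size(1500, 800);
}

void draw() {
  background(255);
  // launch a new random ball every 20 frames to build up a fountain
  if (frameCount % 20 == 0) {
    balls.add(new Ball(width/2, height/3.5, random(-70, 70), random(30, 50)));
  }
  for (Ball b : balls) {
    b.update();
    b.display();
  }
}

class Ball {
  float x, y, vx, vy;
  Ball(float x, float y, float v, float theta) {
    this.x = x;
    this.y = y;
    vx = v * cos(radians(theta));
    vy = -abs(v) * sin(radians(theta));
  }
  void update() {
    x += vx * 0.1;
    y += vy * 0.1;
    vy += 10 * 0.1;   // gravity pulls the ball back down
  }
  void display() {
    fill(0);
    ellipse(x, y, 20, 20);
  }
}
Each Ball keeps only its own state, so any number of them can be in flight at the same time.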
Agreed with Kevin Workman, classes are the way to go here.
One of the best resources for this stuff is Daniel Shiffman, particularly his book The Nature of Code. Your question is dealt with in the Particle Systems chapter (Chapter 4).

Pixel reordering is wrong when trying to process and display image copy with lower res

I'm currently making an application using Processing intended to take an image and apply 8-bit style processing to it: that is, to make it look pixelated. To do this it has a method that takes a style and a window size as parameters (style is the shape in which the window is to be displayed - rect, ellipse, cross, etc. - and window size is a number between 1-10, squared) to produce results similar to the iPhone app pxl ( http://itunes.apple.com/us/app/pxl./id499620829?mt=8 ). This method then counts through the image's pixels window by window, averages the colour of the window and displays a rect (or whichever shape/style was chosen) at the equivalent space on the other side of the sketch window (when run, the sketch is supposed to display the original image on the left and mirror it with the processed version on the right).
The problem I'm having is that when drawing the averaged-colour rects, the order in which they display becomes skewed.
Although the results are rather amusing, they are not what I want. Here's the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;
//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color [] c){
// RGB Variables
float r = 0;
float g = 0;
float b = 0;
// Iterator
int i = 0;
int sizeOfWindow = c.length;
// Count through the window's pixels, store the
// red, green and blue values in the RGB variables
// and sum them into the average variables
for(i = 0; i < c.length; i++){
r = red (c[i]);
g = green(c[i]);
b = blue (c[i]);
avR += r;
avG += g;
avB += b;
}
// Divide the sum of the red, green and blue
// values by the number of pixels in the window
// to obtain the average
avR = avR / sizeOfWindow;
avG = avG / sizeOfWindow;
avB = avB / sizeOfWindow;
// Return the colour
return new sRGB(avR,avG,avB);
}
public void eightBitIT(int style, int windowSize){
img.loadPixels();
for(int wx = 0; wx < img.width; wx += (sqrt(windowSize))){
for(int wy = 0; wy < img.height; wy += (sqrt(windowSize))){
color [] tempCols = new color[windowSize];
int i = 0;
for(int x = 0; x < (sqrt(windowSize)); x ++){
for(int y = 0; y < (sqrt(windowSize)); y ++){
int loc = (wx+x) + (y+wy)*(img.width-windowSize);
tempCols[i] = img.pixels[loc];
// println("Window loc X: "+(wx+(img.width+5))+" Window loc Y: "+(wy+5)+" Window pix X: "+x+" Window Pix Y: "+y);
i++;
}
}
//this is ment to be in a switch test (0 = rect, 1 ellipse etc)
styleColour = new sRGB(averageWindowColour(tempCols));
//println("R: "+ red(styleColour.returnColourScaled())+" G: "+green(styleColour.returnColourScaled())+" B: "+blue(styleColour.returnColourScaled()));
rectMode(CORNER);
noStroke();
fill(styleColour.returnColourScaled());
//println("Rect Loc X: "+(wx+(img.width+5))+" Y: "+(wy+5));
ellipse(wx+(img.width+5),wy+5,sqrt(windowSize),sqrt(windowSize));
}
}
}
public PImage load(String s){
PImage temp = loadImage(s);
temp.resize(600,470);
return temp;
}
void setup(){
background(0);
// Load the image and set size of screen to its size*2 + the borders
// and display the image.
img = loadImage("oscilloscope.jpg");
size(img.width*2+15,(img.height+10));
frameRate(25);
image(img,5,5);
// Draw the borders
strokeWeight(5);
stroke(255);
rectMode(CORNERS);
noFill();
rect(2.5,2.5,img.width+3,height-3);
rect(img.width+2.5,2.5,width-3,height-3);
stroke(255,0,0);
strokeWeight(1);
rect(5,5,9,9); //window example
// process the image
eightBitIT(BLOCKS, 16);
}
void draw(){
//eightBitIT(BLOCKS, 4);
//println("X: "+mouseX+" Y: "+mouseY);
}
This has been bugging me for a while now, as I can't see where in my code I'm offsetting the coordinates so they display like this. I know it's probably something very trivial, but I can't seem to work it out. If anyone can spot why this skewed reordering is happening I would be much obliged, as I have quite a lot of other ideas I want to implement and this is holding me back...
Thanks,
