I wrote a waveform renderer that takes an audio file and creates something like this:
The logic is pretty simple. I calculate the number of audio samples required for each pixel, read those samples, average them and draw a column of pixels according to the resulting value.
Typically, I render a whole song across around 600-800 pixels, so the wave is heavily compressed. Unfortunately, this usually results in unappealing visuals, as almost the entire song is rendered at nearly the same height. There is no variation.
Interestingly, if you look at the waveforms on SoundCloud almost none of them are as boring as my results. They all have some variation. What could be the trick here? I don't think they just add random noise.
I don't think SoundCloud is doing anything particularly special. There are plenty of songs I see on their front page that are very flat. It has more to do with the way detail is perceived and what the overall dynamics of the song are like. The main difference is that SoundCloud is drawing absolute value. (The negative side of the image is just a mirror.)
For demonstration, here is a basic white noise plot with straight lines:
Now, typically a fill is used to make the overall outline easier to see. This already does a lot for the appearance:
Larger waveforms ("zoomed out" in particular) typically use a mirror effect because the dynamics become more pronounced:
Bars are another way to visualize and can give an illusion of detail:
A pseudo routine for a typical waveform graphic (average of abs and mirror) might look like this:
for (each pixel in width of image) {
var sum = 0
for (each sample in subset contained within pixel) {
sum = sum + abs(sample)
}
var avg = sum / length of subset
draw line(avg to -avg)
}
This effectively compresses the time axis, similar to taking the RMS of each window. (True RMS could be used instead of the average of absolute values, but the results are almost the same.) Now the waveform shows the overall dynamics.
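For comparison, here is a sketch of the RMS variant in the same pseudocode style; only the accumulation and a final square root change:
for (each pixel in width of image) {
    var sum = 0
    for (each sample in subset contained within pixel) {
        sum = sum + (sample * sample)
    }
    var rms = sqrt(sum / length of subset)
    draw line(rms to -rms)
}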
That is not too different from what you are already doing, just abs, mirror and fill. For boxes like SoundCloud uses, you would be drawing rectangles.
Just as a bonus, here is an MCVE written in Java to generate a waveform with boxes as described. (Sorry if Java is not your language.) The actual drawing code is near the top. This program also normalizes, i.e., the waveform is "stretched" to the height of the image.
This simple output is the same as the above pseudo routine:
This output with boxes is very similar to SoundCloud:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
import java.io.*;
import javax.sound.sampled.*;
public class BoxWaveform {
static int boxWidth = 4;
static Dimension size = new Dimension(boxWidth == 1 ? 512 : 513, 97);
static BufferedImage img;
static JPanel view;
// draw the image
static void drawImage(float[] samples) {
Graphics2D g2d = img.createGraphics();
int numSubsets = size.width / boxWidth;
int subsetLength = samples.length / numSubsets;
float[] subsets = new float[numSubsets];
// find average(abs) of each box subset
int s = 0;
for(int i = 0; i < subsets.length; i++) {
double sum = 0;
for(int k = 0; k < subsetLength; k++) {
sum += Math.abs(samples[s++]);
}
subsets[i] = (float)(sum / subsetLength);
}
// find the peak so the waveform can be normalized
// to the height of the image
float normal = 0;
for(float sample : subsets) {
if(sample > normal)
normal = sample;
}
// normalize and scale
normal = 32768.0f / normal;
for(int i = 0; i < subsets.length; i++) {
subsets[i] *= normal;
subsets[i] = (subsets[i] / 32768.0f) * (size.height / 2);
}
g2d.setColor(Color.GRAY);
// convert to image coords and do actual drawing
for(int i = 0; i < subsets.length; i++) {
int sample = (int)subsets[i];
int posY = (size.height / 2) - sample;
int negY = (size.height / 2) + sample;
int x = i * boxWidth;
if(boxWidth == 1) {
g2d.drawLine(x, posY, x, negY);
} else {
g2d.setColor(Color.GRAY);
g2d.fillRect(x + 1, posY + 1, boxWidth - 1, negY - posY - 1);
g2d.setColor(Color.DARK_GRAY);
g2d.drawRect(x, posY, boxWidth, negY - posY);
}
}
g2d.dispose();
view.repaint();
view.requestFocus();
}
// handle most WAV and AIFF files
static void loadImage() {
JFileChooser chooser = new JFileChooser();
int val = chooser.showOpenDialog(null);
if(val != JFileChooser.APPROVE_OPTION) {
return;
}
File file = chooser.getSelectedFile();
float[] samples;
try {
AudioInputStream in = AudioSystem.getAudioInputStream(file);
AudioFormat fmt = in.getFormat();
if(fmt.getEncoding() != AudioFormat.Encoding.PCM_SIGNED) {
throw new UnsupportedAudioFileException("only signed PCM is supported");
}
boolean big = fmt.isBigEndian();
int chans = fmt.getChannels();
int bits = fmt.getSampleSizeInBits();
int bytes = (bits + 7) >> 3;
int frameLength = (int)in.getFrameLength();
int bufferLength = chans * bytes * 1024;
samples = new float[frameLength];
byte[] buf = new byte[bufferLength];
int i = 0;
int bRead;
while((bRead = in.read(buf)) > -1) {
for(int b = 0; b < bRead;) {
double sum = 0;
// (sums to mono if multiple channels)
for(int c = 0; c < chans; c++) {
if(bytes == 1) {
sum += buf[b++] << 8;
} else {
int sample = 0;
// (quantizes to 16-bit)
if(big) {
sample |= (buf[b++] & 0xFF) << 8;
sample |= (buf[b++] & 0xFF);
b += bytes - 2;
} else {
b += bytes - 2;
sample |= (buf[b++] & 0xFF);
sample |= (buf[b++] & 0xFF) << 8;
}
final int sign = 1 << 15;
final int mask = -1 << 16;
if((sample & sign) == sign) {
sample |= mask;
}
sum += sample;
}
}
samples[i++] = (float)(sum / chans);
}
}
} catch(Exception e) {
problem(e);
return;
}
if(img == null) {
img = new BufferedImage(size.width, size.height, BufferedImage.TYPE_INT_ARGB);
}
drawImage(samples);
}
static void problem(Object msg) {
JOptionPane.showMessageDialog(null, String.valueOf(msg));
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
JFrame frame = new JFrame("Box Waveform");
JPanel content = new JPanel(new BorderLayout());
frame.setContentPane(content);
JButton load = new JButton("Load");
load.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent ae) {
loadImage();
}
});
view = new JPanel() {
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
if(img != null) {
g.drawImage(img, 1, 1, img.getWidth(), img.getHeight(), null);
}
}
};
view.setBackground(Color.WHITE);
view.setPreferredSize(new Dimension(size.width + 2, size.height + 2));
content.add(view, BorderLayout.CENTER);
content.add(load, BorderLayout.SOUTH);
frame.pack();
frame.setResizable(false);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
}
Note: for the sake of simplicity, this program loads the entire audio file into memory, so some JVMs may throw an OutOfMemoryError for long files. To correct this, run with an increased heap size via the JVM's -Xmx option.
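For example, assuming the class has been compiled and that a 256 MB heap is enough for the file in question:
java -Xmx256m BoxWaveform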
Related
I have tried many methods, but I can't figure out how to extract values from my array of strings to generate my desired number of buildings at my desired heights. Please help; here is my example.
Edit: I saw your feedback and posted my code below; hopefully it helps convey the overall idea. As much as it is just creating rects, it's more complicated than that, since I need to involve arrays and string splitting along with loops. I have that more or less covered, but as I said above, I cannot extract the values from my array of strings to create my facades at my desired number and height.
String buffer = " ";
String bh = "9,4,6,8,12,2";
int[] b = int(split(bh, ","));
int buildingheight = b.length;
void setup () {
size(1200, 800);
background(0);
}
void draw () {
}
void Textbox() {
textSize(30);
text(buffer, 5, height-10);
}
void keyTyped() {
if (key == BACKSPACE) {
if (buffer.length() > 0) {
buffer = buffer.substring(0, buffer.length() - 1);
}
} else if (key == ENTER) {
background(0);
stroke(0);
GenerateFacade();
println(buffer);
}
else {
buffer = buffer + key;
Textbox();
}
}
void GenerateFacade() {
fill(128);
for (int i = 0; i < b.length; i++) {
for (int j = 0; j < b.length; j++) {
if (int(b[j]) > buildingheight) {
buildingheight = int(b[j]);
}
}
rect(i*width/b.length, height - (int(b[i])*height/buildingheight), width/b.length, int(b[i])*height/buildingheight);
}
}
Next time, it would be great if you provided some code with your question, so we know what you have tried and can point you to the problem.
You just need the keyPressed() function and some variables:
int x = 0;
int buildingWidth = 50;
int buildingHeight = height / 6;
void setup(){
size(1000, 500);
background(255);
}
void draw(){
}
void keyPressed(){
if(key >= '0' && key <= '9'){
int numberPressed = key - '0' ;
fill(0);
rect(x, height - buildingHeight * numberPressed,
buildingWidth, buildingHeight * numberPressed);
x += buildingWidth;
}
}
This is my result
I am trying to replicate a Kinect project from this music video, but the code is seriously outdated.
After weeks of searching, I have not found anything about this.
I would be very grateful to anyone who could point out what is deprecated in the following code:
(I'm using Processing 3)
import org.openkinect.*;
import org.openkinect.processing.*;
import java.io.*;
// Kinect Library object
Kinect kinect;
float a = 0;
// Size of kinect image
int w = 640;
int h = 480;
// writing state indicator
boolean write = false;
// treshold filter initial value
int fltValue = 950;
// "recording" object. each vector element holds a coordinate map vector
Vector <Object> recording = new Vector<Object>();
// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];
void setup() {
size(800,600,P3D);
kinect = new Kinect(this);
kinect.start();
kinect.enableDepth(true);
// We don't need the grayscale image in this example
// so this makes it more efficient
kinect.processDepthImage(false);
// Lookup table for all possible depth values (0 - 2047)
for (int i = 0; i < depthLookUp.length; i++) {
depthLookUp[i] = rawDepthToMeters(i);
}
}
void draw() {
background(0);
fill(255);
textMode(SCREEN);
text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16);
// Get the raw depth as array of integers
int[] depth = kinect.getRawDepth();
// We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
int skip = 4;
// Translate and rotate
translate(width/2,height/2,-50);
rotateY(a);
//noStroke();
//lights();
int index = 0;
PVector[] frame = new PVector[19200];
for(int x=0; x<w; x+=skip) {
for(int y=0; y<h; y+=skip) {
int offset = x+y*w;
// Convert kinect data to world xyz coordinate
int rawDepth = depth[offset];
boolean flt = true;
PVector v = depthToWorld(x,y,rawDepth);
if (flt && rawDepth > fltValue)
{
v = depthToWorld(x,y,2047);
}
frame[index] = v;
index++;
stroke(map(rawDepth,0,2048,0,256));
pushMatrix();
// Scale up by 200
float factor = 400;
translate(v.x*factor,v.y*factor,factor-v.z*factor);
//sphere(1);
point(0,0);
//line (0,0,1,1);
popMatrix();
}
}
if (write == true) {
recording.add(frame);
}
// Rotate
//a += 0.015f;
}
// These functions come from:http://graphics.stanford.edu/~mdfisher/Kinect.html
float rawDepthToMeters(int depthValue) {
if (depthValue < 2047) {
return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
}
return 0.0f;
}
PVector depthToWorld(int x, int y, int depthValue) {
final double fx_d = 1.0 / 5.9421434211923247e+02;
final double fy_d = 1.0 / 5.9104053696870778e+02;
final double cx_d = 3.3930780975300314e+02;
final double cy_d = 2.4273913761751615e+02;
PVector result = new PVector();
double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue);
result.x = (float)((x - cx_d) * depth * fx_d);
result.y = (float)((y - cy_d) * depth * fy_d);
result.z = (float)(depth);
return result;
}
void stop() {
kinect.quit();
super.stop();
}
int currentFile = 0;
void saveFile() {
}
void keyPressed() { // Press a key to save the data
if (key == '1')
{
fltValue += 50;
println("fltValue: " + fltValue);
}
else if (key == '2')
{
fltValue -= 50;
println("fltValue: " + fltValue);
}
else if (key=='4'){
if (write == true) {
write = false;
println( "recorded " + recording.size() + " frames.");
// saveFile();
// save
Enumeration e = recording.elements();
println("Stopped Recording " + currentFile);
int i = 0;
while (e.hasMoreElements()) {
// Create one directory
boolean success = (new File("out"+currentFile)).mkdir();
PrintWriter output = createWriter("out"+currentFile+"/frame" + i++ +".txt");
PVector [] frame = (PVector []) e.nextElement();
for (int j = 0; j < frame.length; j++) {
output.println(j + ", " + frame[j].x + ", " + frame[j].y + ", " + frame[j].z );
}
output.flush(); // Write the remaining data
output.close();
}
currentFile++;
}
}
else if (key == '3') {
println("Started Recording "+currentFile);
recording.clear();
write = true;
}
}
If the code works, then I wouldn't worry too much about it. Deprecated can just mean that a newer version is available, not that the older version stopped working.
However, if the code does not work, then updating to a newer library is probably a good idea anyway. Check out the library section of the Processing homepage, which lists several Kinect libraries.
In fact, one of those libraries is the updated version of the old library you're using: Open Kinect for Processing.
Edit: It looks like both of the errors you mentioned are due to missing import statements. You need to import both Vector and Enumeration to use them:
import java.util.Vector;
import java.util.Enumeration;
I'm trying to save an image after a certain time. The problem is that the image is bigger than the display, so when I use the save() or saveFrame() function, it only saves the part of the image that I can see on the display. Is there any other way to save the whole image?
This is my code:
PImage picture, pictureFilter, img;
int total, cont, current;
ArrayList<ArrayList<Position>> columns;
String[] fontList;
public class Position {
public int x;
public int y;
}
void setup() {
fontList = PFont.list();
picture = loadImage("DSC05920b.JPG");
pictureFilter = loadImage("filtrePort2.jpg");
frame.setResizable(true);
size(picture.width, picture.height);
columns = new ArrayList<ArrayList<Position>>();
for(int i = 0; i < picture.width; i++) {
ArrayList<Position> row = new ArrayList<Position>();
for(int j = 0; j < picture.height; j++){
Position p = new Position();
p.x = i;
p.y = j;
row.add(p);
}
columns.add(row);
}
total = picture.width * picture.height;
cont = total;
current = 0;
img = createImage(picture.width, picture.height, RGB);
}
float randomLetter() {
float value = 23;
boolean found = false;
while(!found) {
value = random(48, 122);
if(value >48 && value <58) found = true;
if(value >65 && value <91) found = true;
if(value >97 && value <123) found = true;
}
return value;
}
void draw() {
int x = int(random(0, columns.size()));
ArrayList<Position> rows = columns.get(x);
int y = int(random(0, rows.size()));
Position p = rows.get(y);
color c = pictureFilter.get(p.x, p.y);
int r = (c >> 16) & 0xFF; // Faster way of getting red(argb)
if(r < 240) {
PFont f = createFont(fontList[int(random(0,fontList.length))],random(5, 24),true);
textFont(f);
fill(picture.get(p.x,p.y));
char letter = (char) int(randomLetter());
text(letter, p.x, p.y);
}
if(rows.size() == 1) {
if(columns.size() == 1) {
saveFrame("lol.jpg");
columns.remove(x);
} else {
columns.remove(x);
}
} else {
println(rows.size());
rows.remove(y);
}
--cont;
float percent = float(total-cont)/float(total)*100;
if(int(percent) != current) {
current = int(percent);
save("image_" + current + ".jpg");
}
println("DONE: " + (total-cont) + "/" + total + " Progress: " + percent + "%");
}
The code does a lot of stuff, but the part that is not working well is at the end, where I check whether the percentage has increased in order to save the image.
You can write this into a PGraphics context - aka a graphics buffer.
The buffer can be as big as you need it to be, and you can choose whether or not to draw it on the screen.
// Create the buffer at the size you need, and choose the renderer
PGraphics pg = createGraphics(myImage.width, myImage.height, P2D);
// Wrap all your drawing functions in the pg context - e.g.
PFont f = createFont(fontList[int(random(0,fontList.length))],random(5, 24),true);
pg.beginDraw();
pg.textFont(f);
pg.fill(picture.get(p.x,p.y));
char letter = (char) int(randomLetter());
pg.text(letter, p.x, p.y);
pg.endDraw();
// Draw your PG to the screen and resize the representation of it to the screen bounds
image(pg, 0, 0, width, height); // <-- this won't actually clip/resize the image
// Save it
pg.save("image_" + current + ".jpg");
The PImage class contains a save() function that exports to file. The API should be your first stop for questions like this.
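For example, a minimal sketch, assuming pg is the full-size PGraphics buffer from the previous answer:
PImage big = pg.get(); // grab the buffer's contents as a PImage
big.save("whole_image.png"); // save() infers the file format from the extension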
I'm having a ton of trouble making a simple video delay in Processing. I looked around on the internet, and I keep finding the same bit of code, which I can't get to work at all. When I first tried it, it did nothing (at all). Here's my modified version, which at least seems to load frames into the buffer. I really have no idea why it doesn't work, and I'm getting really tired of pulling out my hair. Please... please, for the love of god, somebody point out the stupid mistake I'm making here.
And now, without further delay (hah, get it?), the code:
import processing.video.*;
VideoBuffer vb;
Movie myMovie;
Capture cam;
float seconds = 1;
void setup() {
size(320,240, P3D);
frameRate(30);
String[] cameras = Capture.list();
if (cameras.length == 0) {
println("There are no cameras available for capture.");
exit();
} else {
println("Available cameras:");
for (int i = 0; i < cameras.length; i++) {
println(cameras[i]);
}
cam = new Capture(this, cameras[3]);
cam.start();
}
vb = new VideoBuffer(90, width, height);
}
void draw() {
if (cam.available() == true) {
cam.read();
vb.addFrame(cam);
}
image(cam, 0, 0);
image( vb.getFrame(), 150, 0 );
}
class VideoBuffer
{
PImage[] buffer;
int inputFrame = 0;
int outputFrame = 0;
int frameWidth = 0;
int frameHeight = 0;
VideoBuffer( int frames, int vWidth, int vHeight )
{
buffer = new PImage[frames];
for(int i = 0; i < frames; i++)
{
this.buffer[i] = new PImage(vWidth, vHeight);
}
this.inputFrame = 0;
this.outputFrame = 1;
this.frameWidth = vWidth;
this.frameHeight = vHeight;
}
// return the current "playback" frame.
PImage getFrame()
{
return this.buffer[this.outputFrame];
}
// Add a new frame to the buffer.
void addFrame( PImage frame )
{
// copy the new frame into the buffer.
this.buffer[this.inputFrame] = frame;
// advance the input and output indexes
this.inputFrame++;
this.outputFrame++;
println(this.inputFrame + " " + this.outputFrame);
// wrap the values..
if(this.inputFrame >= this.buffer.length)
{
this.inputFrame = 0;
}
if(this.outputFrame >= this.buffer.length)
{
this.outputFrame = 0;
}
}
}
This works in Processing 2.0.1.
import processing.video.*;
Capture cam;
PImage[] buffer;
int w = 640;
int h = 360;
int nFrames = 60;
int iWrite = 0, iRead = 1;
void setup(){
size(w, h);
cam = new Capture(this, w, h);
cam.start();
buffer = new PImage[nFrames];
}
void draw() {
if(cam.available()) {
cam.read();
buffer[iWrite] = cam.get();
if(buffer[iRead] != null){
image(buffer[iRead], 0, 0);
}
iWrite++;
iRead++;
if(iRead >= nFrames-1){
iRead = 0;
}
if(iWrite >= nFrames-1){
iWrite = 0;
}
}
}
There is a problem inside your addFrame() method: you store just a reference to the PImage object, whose pixels get overwritten all the time. You have to use buffer[inputFrame] = frame.get() instead of buffer[inputFrame] = frame; the get() method returns a copy of the image.
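Applied to the VideoBuffer class above, the corrected method would look like this (the modulo arithmetic is just a compact form of the increment-and-wrap logic already in the class; the only functional change is the frame.get() call):
// Add a new frame to the buffer.
void addFrame( PImage frame )
{
    // store a copy of the pixels; keeping only the reference would alias
    // the camera's buffer, which is overwritten on every read()
    this.buffer[this.inputFrame] = frame.get();
    // advance the input and output indexes and wrap them
    this.inputFrame = (this.inputFrame + 1) % this.buffer.length;
    this.outputFrame = (this.outputFrame + 1) % this.buffer.length;
}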
I can't get around a peculiar problem with SimpleOpenNI for Processing, so I'm asking for your help.
I'd like to store snapshots of pixel depth data (returned by the .depthMapRealWorld() method as PVector arrays) at discrete time intervals, then process them further for a presentation. I tried adding them to an ArrayList, but it seems that depthMapRealWorld() returns only a reference to the current depth data, not a real array. I tried the following, in sequence:
Just getting the data and adding it to an ArrayList. On every call of the update() method, the whole ArrayList contained the same PVector array, even if the array at position zero had been added many iterations earlier!
Then I made the PVector array, along with its creation time, part of a class. I rewrote the sketch a little, but it didn't help; all of the arrays in the ArrayList were still the same.
Finally, in the constructor of the class, I "manually" copied the xyz coordinates of every vector from the PVector array into a float array. That seemed to solve the problem - the arrays in the ArrayList are now different from each other - but it introduced serious performance problems.
The question is: is there a more efficient way of storing these PVector arrays and retaining their values?
code:
import processing.opengl.*;
import SimpleOpenNI.*;
SimpleOpenNI kinect;
float rotation = 0;
int time = 0;
ArrayList dissolver;
ArrayList<Integer> timer;
int pSize = 10;
Past past;
void setup() {
dissolver = new ArrayList();
timer = new ArrayList();
size(1024, 768, OPENGL);
kinect = new SimpleOpenNI(this);
kinect.enableDepth();
translate(width/2, height/2, -100);
rotateX(radians(180));
stroke(255);
}
void draw() {
background(0);
translate(width/2, height/2, 500);
rotateX(radians(180));
kinect.update();
stroke (255, 255, 255);
past = new Past (kinect.depthMapRealWorld(), time);
if (dissolver.size() == pSize) { //remove the oldest arraylist element if when list gets full
dissolver.remove(0); //
}
if (time % 20 == 0) {
dissolver.add (past);
Past p1 = (Past) dissolver.get (0);
float [][] o2 = p1.getVector();
println ("x coord of a random point at arraylist position 0: " + o2[50000][0]); //for testing
}
if (dissolver.size() == pSize-1) {
//dissolve ();
}
time ++;
}
void dissolve () { //from the previous nonworking version; ignore
for (int offset = 0; offset < pSize-1; offset ++) {
PVector[] offPoints = (PVector[]) dissolver.get (offset);
int offTime = timer.get(offset);
for (int i = 0; i < offPoints.length; i+=10) {
int col = (time-offTime)*2; //why??
stroke (255, 0, col);
PVector currentPoint = offPoints[i];
if (currentPoint.z <1500) {
point(currentPoint.x, currentPoint.y, currentPoint.z); // - 2*(time-offTime) + random(0, 100)
}
}
}
}
class Past {
private PVector [] depth; //should contain this, not int
private float [][] depth1;
private int time;
Past (PVector [] now, int t) {
//should be like this: depth = now;
//clumsy and performancewise catastrophic solution below
depth1 = new float [now.length][3];
for (int i = 0; i< now.length; i+=10) {
PVector temp = now[i];
depth1 [i][0] = temp.x;
depth1 [i][1] = temp.y;
depth1 [i][2] = temp.z;
}
//arrayCopy(now, depth); this didn't work either
time = t;
}
float [][] getVector () {
return depth1;
}
int getTime () {
return time;
}
}
If I understood correctly, you want to store the 3D positions (an ArrayList of PVectors) for each frame, right?
If so, you should be able to simply store PVectors and reference them later.
Here's a basic sketch to illustrate this:
import processing.opengl.*;
import SimpleOpenNI.*;
SimpleOpenNI kinect;
ArrayList<ArrayList<PVector>> frames = new ArrayList<ArrayList<PVector>>();
ArrayList<PVector> frame;
boolean isRecording = true;
boolean isRecFrame;
void setup() {
size(1024, 768, OPENGL);
kinect = new SimpleOpenNI(this);
kinect.enableDepth();
stroke(255);
}
void draw() {
background(0);
translate(width/2, height/2, 500);
rotateX(PI);
translate(0,0,-1000);
kinect.update();
if(isRecording){
isRecFrame = (frameCount % 20 == 0);//record every 20 frames
int[] depthMap = kinect.depthMap();
int steps = 5; // to speed up the drawing, draw every N point
int index;
PVector realWorldPoint;
if(isRecFrame) frame = new ArrayList<PVector>();
for(int y=0;y < kinect.depthHeight();y+=steps)
{
for(int x=0;x < kinect.depthWidth();x+=steps)
{
index = x + y * kinect.depthWidth();
if(depthMap[index] > 0)
{
realWorldPoint = kinect.depthMapRealWorld()[index];
point(realWorldPoint.x,realWorldPoint.y,realWorldPoint.z);
if(isRecFrame) frame.add(realWorldPoint.get());
}
}
}
if(isRecFrame) frames.add(frame);
}else{//playback
ArrayList<PVector> currentFrame = frames.get(frameCount%frames.size());//playback is faster than recording now for testing purposes - add a decent frame counter here at some point
for(PVector p : currentFrame) point(p.x,p.y,p.z);
}
}
void keyPressed(){
if(key == ' ') isRecording = !isRecording;
}
Use the SPACE key to toggle between recording and playback.
The main thing to note is that I'm storing a copy of the real-world position for each depth pixel (frame.add(realWorldPoint.get());). Another thing to keep in mind is that you're currently storing these coordinates in memory, which will fill up at some point. If you only store a limited number of frames, that should be fine; if not, you might want to save the points to disk. That way you can reuse recordings with other sketches. A basic way would be to store them in a CSV file:
void saveCSV(ArrayList<PVector> pts){
String csv = "x,y,z\n";
for(PVector p : pts) csv += p.x + "," + p.y + "," + p.z + "\n";
saveStrings("frame_"+frameCount+".csv",csv.split("\n"));
}
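For completeness, a minimal sketch of reading one of those files back into a frame, assuming the frame_N.csv naming used above:
ArrayList<PVector> loadCSV(String filename){
    ArrayList<PVector> pts = new ArrayList<PVector>();
    String[] lines = loadStrings(filename);
    for(int i = 1; i < lines.length; i++){ // skip the x,y,z header row
        float[] xyz = float(split(lines[i], ','));
        pts.add(new PVector(xyz[0], xyz[1], xyz[2]));
    }
    return pts;
}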
Another would be to use a more suitable format for point clouds, like PLY.
Saving an ASCII PLY is fairly straight forward:
void savePLY(ArrayList<PVector> pts){
String ply = "ply\n";
ply += "format ascii 1.0\n";
ply += "element vertex " + pts.size() + "\n";
ply += "property float x\n";
ply += "property float y\n";
ply += "property float z\n";
ply += "end_header\n";
for(PVector p : pts)ply += p.x + " " + p.y + " " + p.z + "\n";
saveStrings("frame_"+frameCount+".ply",ply.split("\n"));
}
You can later open/explore/process these files with tools like MeshLab.