Bicubic Image Interpolation Algorithm - Glitches

I am trying to implement bicubic image interpolation. I have only pasted the relevant sections of the code; I have skipped the code dealing with loading the image into a buffer, reading pixels from it, and so on. I am reasonably sure my math is correct. However, I seem to be getting terrible artifacts in the output.
All the action happens in the resize method.
I am hoping that experienced graphics programmers may be able to share their hunches as to what I could be doing wrong.
The following are the input and output images that I get when resizing the input to twice its width and height.
double Interpolator::interpolate(const double p0, const double p1, const double p2, const double p3, const double x){
return (-0.5f*p0+1.5f*p1-1.5f*p2+0.5*p3)*pow(x,3)+
(p0-2.5f*p1+2.f*p2-0.5f*p3)*pow(x,2)+
(-0.5f*p0+0.5f*p2)*x+
p1;
}
bool Image::equals(double a, double b, double threshold){
if(fabs(a-b)<=threshold)
return true;
return false;
}
void Image::interpolate(const Pixel p[], double offset, Pixel& result){
result.r = Interpolator::interpolate(p[0].r,p[1].r,p[2].r,p[3].r,offset);
result.g = Interpolator::interpolate(p[0].g,p[1].g,p[2].g,p[3].g,offset);
result.b = Interpolator::interpolate(p[0].b,p[1].b,p[2].b,p[3].b,offset);
result.a = Interpolator::interpolate(p[0].a,p[1].a,p[2].a,p[3].a,offset);
}
void Image::getSamplingCoords(const int nearest,
const int max,
int coords[]){
coords[0] = nearest-1;
if(coords[0]<0)
coords[0] = nearest;
coords[1] = nearest;
coords[2] = nearest+1;
if(coords[2]>=max)
coords[2] = nearest;
coords[3] = nearest+2;
//The following check should not be necessary
//since this is never expected to occur. Nevertheless...
if(coords[3]>=max)
coords[3] = nearest;
}
void Image::interpolateAlongY(int x, int y, int yMax, double yOffset, Pixel& result){
if(equals(yOffset,0.f,ERROR_THRESHOLD)){
//No interpolation required
getPixel(x,y,result);
return;
}
int yCoords[4];
getSamplingCoords(y, yMax, yCoords);
Pixel interpolants[4];
for(int i=0; i<4; ++i){
getPixel(x, yCoords[i], interpolants[i]);
}
interpolate(interpolants, y, result);
}
void Image::resize(const int newWidth, const int newHeight){
//Ensure that we have a valid buffer already
if(buffer==NULL){
printf("ERROR: Must load an image before resizing it!");
assert(false);
}
//We first need to create a new buffer with the new dimensions
unsigned char* newBuffer = new unsigned char[newWidth*newHeight*channelCount];
for(int j=0; j<newHeight; ++j){
for(int i=0; i<newWidth; ++i){
size_t newIndexOffset = (j*newWidth+i)*channelCount;
//For this pixel in the target image we
//a) Find the nearest pixel in the source image
//b) Find the offset from the aforementioned nearest pixel
int xNear,yNear;
double xOffset,yOffset;
double x = ((double)width/(double)newWidth)*i;
double y = ((double)height/(double)newHeight)*j;
xNear = floor(x);
yNear = floor(y);
xOffset = x-xNear;
yOffset = y-yNear;
//If offset is 0, we don't need any interpolation
//we simply need to sample the source pixel and proceed
// if(equals(xOffset,0.f,ERROR_THRESHOLD) && equals(yOffset,0.f,ERROR_THRESHOLD)){
// Pixel result;
// getPixel(xNear, yNear, result);
// *(newBuffer+newIndexOffset) = result.r;
// *(newBuffer+newIndexOffset+1) = result.g;
// *(newBuffer+newIndexOffset+2) = result.b;
// if(channelCount==4)
// *(buffer+newIndexOffset+3) = result.a;
// continue;
// }
//We make a check that xNear and yNear obtained above
//are always smaller than the edge pixels at the extremities
if(xNear>=width || yNear>=height){
printf("ERROR: Nearest pixel computation error!");
assert(false);
}
//Next we find four pixels along the x direction around this
//nearest pixel
int xCoords[4];
getSamplingCoords(xNear,width,xCoords);
//For each of these sampling xCoords, we interpolate 4 nearest points
//along Y direction
Pixel yInterps[4];
for(int k=0; k<4; k++){
interpolateAlongY(xCoords[k], yNear, height, yOffset, yInterps[k]);
}
//Finally, the resultant pixel is a cubic interpolation
//on the 4 obtained pixels above
Pixel result;
if(equals(xOffset,0.f,ERROR_THRESHOLD)){
result.r = yInterps[0].r;
result.g = yInterps[0].g;
result.b = yInterps[0].b;
result.a = yInterps[0].a;
}else{
interpolate(yInterps, xOffset, result);
}
*(newBuffer+newIndexOffset) = result.r;
*(newBuffer+newIndexOffset+1) = result.g;
*(newBuffer+newIndexOffset+2) = result.b;
if(channelCount==4)
*(newBuffer+newIndexOffset+3) = result.a;
}
}
//Now we can deallocate the memory of our current buffer
delete [] buffer;
//Reassign our newly sampled buffer to our own
buffer = newBuffer;
//Reset our image dimensions
height = newHeight;
width = newWidth;
}

interpolateAlongY is wrong: the last line is
interpolate(interpolants, y, result);
and should be
interpolate(interpolants, yOffset, result);
Additionally, the comment in getSamplingCoords for coords[3] is wrong.
Edit:
After I looked through JansonD's answer I noticed two more problems: the exceptions with if(equals(xOffset,0.f,ERROR_THRESHOLD)) are not even necessary, because the interpolation function does the right thing when the value of the offset is close to 0.
The other thing is that your interpolation function is probably acting as a lowpass filter, since you add information from the neighboring pixels, so you are probably losing detail in high-frequency data such as edges.
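For reference, here is a minimal sketch of the corrected method with both changes applied (the fractional yOffset passed through to interpolate, and the near-zero early-out dropped, per the edit above); it assumes the same Image/Pixel types as in the question:
void Image::interpolateAlongY(int x, int y, int yMax, double yOffset, Pixel& result){
    int yCoords[4];
    getSamplingCoords(y, yMax, yCoords);
    Pixel interpolants[4];
    for(int i = 0; i < 4; ++i){
        getPixel(x, yCoords[i], interpolants[i]);
    }
    //Pass the fractional offset, not the integer row index
    interpolate(interpolants, yOffset, result);
}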

In addition to Josef's answer regarding this line:
interpolate(interpolants, y, result);
which needs to reference yOffset, there is also:
if(equals(xOffset,0.f,ERROR_THRESHOLD)){
result.r = yInterps[0].r;
result.g = yInterps[0].g;
result.b = yInterps[0].b;
result.a = yInterps[0].a;
which should use yInterps[1].
Also, none of the output values are clamped, so there may be underflow and overflow around sharp edges.
And the comment for:
if(coords[3]>=max)
coords[3] = nearest;
isn't the only thing wrong: the clamp should indeed be to max-1, and not nearest, unless you mean to reflect the edge pixels rather than repeat them.
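A minimal sketch of those last two fixes, again against the types shown in the question (the clampChannel helper is my own addition, not part of the original code; every out-of-range tap is clamped to the nearest valid pixel, i.e. the edge is repeated rather than reflected):
static double clampChannel(double v){
    return v < 0.0 ? 0.0 : (v > 255.0 ? 255.0 : v);
}
void Image::getSamplingCoords(const int nearest, const int max, int coords[]){
    coords[0] = nearest-1;
    if(coords[0]<0) coords[0] = 0;          //repeat the first pixel
    coords[1] = nearest;
    coords[2] = nearest+1;
    if(coords[2]>=max) coords[2] = max-1;   //clamp to the last pixel
    coords[3] = nearest+2;
    if(coords[3]>=max) coords[3] = max-1;
}
//When writing the result back into the byte buffer:
//*(newBuffer+newIndexOffset)   = (unsigned char)clampChannel(result.r);
//*(newBuffer+newIndexOffset+1) = (unsigned char)clampChannel(result.g);
//*(newBuffer+newIndexOffset+2) = (unsigned char)clampChannel(result.b);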

Related

Processing | Program is lagging

I'm new to Processing and I need to make a program that captures the main monitor and, on a second screen, shows the average color and draws a spiral using another color (the perceptual dominant color) obtained by a function.
The problem is that the program is very slow (it lags at about 1 FPS). I think it's because it has too many things to do every time it takes a screenshot, but I have no idea how to make it faster.
There could be many other problems too, but that is the main one.
Thank you very much!
Here's the code:
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.color.ColorSpace;
PImage screenshot;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;
void setup() {
fullScreen(2); // 1920x1080
noStroke();
frame.removeNotify();
}
void draw() {
screenshot();
avg_c = extractColorFromImage(screenshot);
per_c = extractAverageColorFromImage(screenshot);
background(avg_c); // Average color
spiral();
}
void screenshot() {
try{
Robot robot_Screenshot = new Robot();
screenshot = new PImage(robot_Screenshot.createScreenCapture
(new Rectangle(0, 0, displayWidth, displayHeight)));
}
catch (AWTException e){ }
frame.setLocation(displayWidth/2, 0);
}
void spiral() {
fill (per_c);
for (int i = blockSize; i < width; i += blockSize*2)
{
ellipse(i, height/2+sin(a+i)*100, blockSize+cos(a+i)*5, blockSize+cos(a+i)*5);
a += 0.001;
}
}
color extractColorFromImage(PImage screenshot) { // Get average color
screenshot.loadPixels();
int r = 0, g = 0, b = 0;
for (int i = 0; i < screenshot.pixels.length; i++) {
color c = screenshot.pixels[i];
r += c>>16&0xFF;
g += c>>8&0xFF;
b += c&0xFF;
}
r /= screenshot.pixels.length; g /= screenshot.pixels.length; b /= screenshot.pixels.length;
return color(r, g, b);
}
color extractAverageColorFromImage(PImage screenshot) { // Get lab average color (perceptual)
float[] average = new float[3];
CIELab lab = new CIELab();
int numPixels = screenshot.pixels.length;
for (int i = 0; i < numPixels; i++) {
color rgb = screenshot.pixels[i];
float[] labValues = lab.fromRGB(new float[]{red(rgb),green(rgb),blue(rgb)});
average[0] += labValues[0];
average[1] += labValues[1];
average[2] += labValues[2];
}
average[0] /= numPixels;
average[1] /= numPixels;
average[2] /= numPixels;
float[] rgb = lab.toRGB(average);
return color(rgb[0] * 255,rgb[1] * 255, rgb[2] * 255);
}
public class CIELab extends ColorSpace {
@Override
public float[] fromCIEXYZ(float[] colorvalue) {
double l = f(colorvalue[1]);
double L = 116.0 * l - 16.0;
double a = 500.0 * (f(colorvalue[0]) - l);
double b = 200.0 * (l - f(colorvalue[2]));
return new float[] {(float) L, (float) a, (float) b};
}
@Override
public float[] fromRGB(float[] rgbvalue) {
float[] xyz = CIEXYZ.fromRGB(rgbvalue);
return fromCIEXYZ(xyz);
}
@Override
public float getMaxValue(int component) {
return 128f;
}
@Override
public float getMinValue(int component) {
return (component == 0)? 0f: -128f;
}
@Override
public String getName(int idx) {
return String.valueOf("Lab".charAt(idx));
}
@Override
public float[] toCIEXYZ(float[] colorvalue) {
double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
double Y = fInv(i);
double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
return new float[] {(float) X, (float) Y, (float) Z};
}
@Override
public float[] toRGB(float[] colorvalue) {
float[] xyz = toCIEXYZ(colorvalue);
return CIEXYZ.toRGB(xyz);
}
CIELab() {
super(ColorSpace.TYPE_Lab, 3);
}
private double f(double x) {
if (x > 216.0 / 24389.0) {
return Math.cbrt(x);
} else {
return (841.0 / 108.0) * x + N;
}
}
private double fInv(double x) {
if (x > 6.0 / 29.0) {
return x*x*x;
} else {
return (108.0 / 841.0) * (x - N);
}
}
private final ColorSpace CIEXYZ =
ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
private final double N = 4.0 / 29.0;
}
There's lots that can be done, even beyond what's already been mentioned.
Iteration & Threading
After taking the screenshot, immediately iterate over every Nth pixel (perhaps every 4th or 8th) of the buffered image. Then, during this iteration, calculate the LAB value for each pixel (as you have each pixel's channels directly available), and meanwhile increment the running totals of each RGB channel.
This saves us from iterating over the same pixels twice and avoids unnecessary conversions (BufferedImage → PImage; and composing then decomposing pixel channels from PImage pixels).
Likewise, we avoid Processing's expensive resize() call (as suggested in another answer), which is not something we want to call every frame (even though it does speed the program up, it's not an efficient method).
Now, on top of the iteration change, we can wrap the iteration in a Callable to easily run the workload across multiple system threads concurrently (after all, pixel iteration is embarrassingly parallel); the example below does this with 2 threads, each screenshotting and processing half of the display's pixels.
Optimise RGB→XYZ→LAB conversion
(We're not so concerned about the backwards conversion, since that's only done for one value per frame.)
It looks like you've implemented XYZ→LAB yourself and are using the RGB→XYZ converter from java.awt.color.
As has been identified, the forward conversion XYZ→LAB uses a cbrt(), which is a bottleneck. I also imagine that the RGB→XYZ implementation makes 3 calls to Math.pow(x, 2.4); 3 non-integer exponents per pixel add considerably to the computation. The solution is faster math...
Jafama
Jafama is a drop-in replacement for java.lang.Math: simply import the library and replace any Math.__() calls with FastMath.__() for a free speedup (you could go even further by trading Jafama's 1e-15 precision for its even faster but less accurate dedicated LUT-based classes).
So at the very least, swap out Math.cbrt() for FastMath.cbrt(). Then consider implementing RGB→XYZ yourself (example), again using Jafama in place of java.lang.Math.
You may even find that for such a project, XYZ alone is a sufficient color space to work in and already overcomes the well-known weaknesses of RGB (and therefore saves you the XYZ→LAB conversion).
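As a small illustration of the swap (a sketch only: it assumes the Jafama jar has been added to the sketch, for example by dropping it into the sketch's code folder, and that the class is net.jafama.FastMath):
import net.jafama.FastMath;

//Same piecewise f() as in the CIELab class above, with the cube root
//delegated to Jafama's faster implementation
double fFast(double x) {
  final double N = 4.0 / 29.0;
  if (x > 216.0 / 24389.0) {
    return FastMath.cbrt(x); //was Math.cbrt(x)
  }
  return (841.0 / 108.0) * x + N;
}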
Cache LAB Calculation
Unless most pixels are changing every frame, consider caching the LAB value for every pixel and recalculating it only when the pixel has changed between the current and previous frames. The tradeoff here is the overhead of checking every pixel against its previous value, versus how much calculation the positive checks save. Given that the LAB calculation is much more expensive, it's very worthwhile here. The example below uses this technique.
Screen Capture
No matter how well optimised the rest of the program is, a considerable bottleneck is the AWT Robot's createScreenCapture(). It struggles to go past 30 FPS on large enough displays. I can't offer any exact advice, but it's worth looking at other screen capture methods in Java.
Reworked code with iteration changes & threading
This code implements what has been discussed above, minus any changes to the LAB calculation.
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;
java.util.concurrent.ExecutorService threadPool = java.util.concurrent.Executors.newFixedThreadPool(4);
List<java.util.concurrent.Callable<Boolean>> taskList;
float[] averageLAB;
int totalR = 0, totalG = 0, totalB = 0;
CIELab lab = new CIELab();
final int pixelStride = 8; // look at every 8th pixel
void setup() {
size(800, 800, FX2D);
noStroke();
frame.removeNotify();
taskList = new ArrayList<java.util.concurrent.Callable<Boolean>>();
Compute thread1 = new Compute(0, 0, width, height/2);
Compute thread2 = new Compute(0, height/2, width, height/2);
taskList.add(thread1);
taskList.add(thread2);
}
void draw() {
totalR = 0; // re init
totalG = 0; // re init
totalB = 0; // re init
averageLAB = new float[3]; // re init
final int numPixels = (width*height)/pixelStride;
try {
threadPool.invokeAll(taskList); // run threads now and block until completion of all
}
catch (Exception e) {
e.printStackTrace();
}
// calculate average LAB
averageLAB[0]/=numPixels;
averageLAB[1]/=numPixels;
averageLAB[2]/=numPixels;
final float[] rgb = lab.toRGB(averageLAB);
per_c = color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
// calculate average RGB
totalR/=numPixels;
totalG/=numPixels;
totalB/=numPixels;
avg_c = color(totalR, totalG, totalB);
background(avg_c); // Average color
spiral();
fill(255, 0, 0);
text(frameRate, 10, 20);
}
class Compute implements java.util.concurrent.Callable<Boolean> {
private final Rectangle screenRegion;
private Robot robot_Screenshot;
private final int[] previousRGB;
private float[][] previousLAB;
Compute(int x, int y, int w, int h) {
screenRegion = new Rectangle(x, y, w, h);
previousRGB = new int[w*h];
previousLAB = new float[w*h][3];
try {
robot_Screenshot = new Robot();
}
catch (AWTException e1) {
e1.printStackTrace();
}
}
@Override
public Boolean call() {
BufferedImage rawScreenshot = robot_Screenshot.createScreenCapture(screenRegion);
int[] ssPixels = new int[rawScreenshot.getWidth()*rawScreenshot.getHeight()]; // screenshot pixels
rawScreenshot.getRGB(0, 0, rawScreenshot.getWidth(), rawScreenshot.getHeight(), ssPixels, 0, rawScreenshot.getWidth()); // copy buffer to int[] array
for (int pixel = 0; pixel < ssPixels.length; pixel+=pixelStride) {
// get individual colour channels
final int pixelColor = ssPixels[pixel];
final int R = pixelColor >> 16 & 0xFF;
final int G = pixelColor >> 8 & 0xFF;
final int B = pixelColor & 0xFF;
if (pixelColor != previousRGB[pixel]) { // if pixel has changed recalculate LAB value
float[] labValues = lab.fromRGB(new float[]{R/255f, G/255f, B/255f}); // note that I've fixed this; beforehand you were missing the /255, so it was always white.
previousLAB[pixel] = labValues;
}
averageLAB[0] += previousLAB[pixel][0];
averageLAB[1] += previousLAB[pixel][1];
averageLAB[2] += previousLAB[pixel][2];
totalR+=R;
totalG+=G;
totalB+=B;
previousRGB[pixel] = pixelColor; // cache last result
}
return true;
}
}
800x800px; pixelStride = 4; fairly static screen background
Yeesh, about 1 FPS on my machine:
Optimizing code can be really hard, so instead of reading everything looking for things to improve, I started by testing where you were losing so much processing power. The answer was at this line:
per_c = extractAverageColorFromImage(screenshot);
The extractAverageColorFromImage method is well written, but it underestimates the amount of work it has to do. The number of pixels grows quadratically with the size of the screen, so the bigger the screen, the worse the situation. And this method processes every pixel of the screenshot all the time, several times per screenshot.
This is a lot of work for an average color. Now, if there were a way to cut some corners... maybe a smaller screen, or a smaller screenshot... oh! there is! Let's resize the screenshot. After all, we don't need to go into such detail as individual pixels for an average. In the screenshot method, add this line:
void screenshot() {
try {
Robot robot_Screenshot = new Robot();
screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
// ADD THE NEXT LINE
screenshot.resize(width/4, height/4);
}
catch (AWTException e) {
}
frame.setLocation(displayWidth/2, 0);
}
I divided each dimension by 4, but I encourage you to tweak this number until you get the fastest satisfying result you can. This is just a proof of concept:
As you can see, resizing the screenshot and making it 4x smaller gives me 10x more speed. That's not a miracle, but it's much better, and I can't see a difference in the end result. About that part, though, you'll have to use your own judgment, as you are the one who knows what your project is about. Hope it'll help!
Have fun!
Unfortunately I can't provide a detailed answer like laancelot (+1), but hopefully I can provide a few tips:
Resizing the image is definitely a good direction. Bear in mind you can also skip a number of pixels instead of visiting every single one (if you handle the pixel indices correctly, you can get a similar effect to resize() without calling it, though that won't save you a lot of CPU time).
Don't create a new Robot instance multiple times a second. Create it once in setup() and re-use it. (This is more of a good habit to get into.)
Use a CPU profiler, such as the one in VisualVM to see what exactly is slow and aim to optimise the slowest stuff first.
point 1 example:
for (int i = 0; i < numPixels; i+= 100)
point 2 example:
Robot robot_Screenshot;
...
void setup() {
fullScreen(2); // 1920x1080
noStroke();
frame.removeNotify();
try{
robot_Screenshot = new Robot();
}catch(AWTException e){
println("error setting up screenshot Robot instance");
e.printStackTrace();
}
}
...
void screenshot() {
screenshot = new PImage(robot_Screenshot.createScreenCapture
(new Rectangle(0, 0, displayWidth, displayHeight)));
frame.setLocation(displayWidth/2, 0);
}
point 3 example:
Notice the slowest bits are actually AWT's fromRGB and Math.cbrt().
I'd suggest finding an alternative RGB -> XYZ -> L*a*b* conversion method that is simpler (mainly functions, fewer classes, without AWT or other dependencies) and hopefully faster.
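As a starting point, here is one way such a conversion could look as plain Processing functions (a sketch under my own assumptions: a D65 white point, standard sRGB primaries, input channels in the 0-255 range, and still using Math.cbrt(), which could in turn be swapped for a faster approximation):
// Self-contained sRGB -> XYZ -> L*a*b* conversion, no AWT ColorSpace involved
float[] rgbToLab(int r, int g, int b) {
  // 1) sRGB gamma decode to linear light
  float rl = srgbToLinear(r / 255.0f);
  float gl = srgbToLinear(g / 255.0f);
  float bl = srgbToLinear(b / 255.0f);
  // 2) linear RGB -> XYZ (sRGB/D65 matrix)
  float X = 0.4124f*rl + 0.3576f*gl + 0.1805f*bl;
  float Y = 0.2126f*rl + 0.7152f*gl + 0.0722f*bl;
  float Z = 0.0193f*rl + 0.1192f*gl + 0.9505f*bl;
  // 3) XYZ -> L*a*b*, normalised by the D65 reference white
  float fx = labF(X / 0.95047f);
  float fy = labF(Y / 1.0f);
  float fz = labF(Z / 1.08883f);
  return new float[] { 116*fy - 16, 500*(fx - fy), 200*(fy - fz) };
}
float srgbToLinear(float c) {
  return (c <= 0.04045f) ? c / 12.92f : pow((c + 0.055f) / 1.055f, 2.4f);
}
float labF(float t) {
  return (t > 216.0f/24389.0f) ? (float) Math.cbrt(t) : (841.0f/108.0f)*t + 4.0f/29.0f;
}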

Sierpinski carpet in processing

So I made the Sierpinski carpet fractal in Processing using a Square data type, which draws a square and has a generate() function that splits it into 9 equal squares and returns an ArrayList of (9-1)=8 squares, leaving out the middle one (it is not added to the returned ArrayList) in order to generate the Sierpinski carpet.
Here is the class Square -
class Square {
PVector pos;
float r;
Square(float x, float y, float r) {
pos = new PVector(x, y);
this.r = r;
}
void display() {
noStroke();
fill(120,80,220);
rect(pos.x, pos.y, r, r);
}
ArrayList<Square> generate() {
ArrayList<Square> rects = new ArrayList<Square>();
float newR = r/3;
for (int i=0; i<3; i++) {
for (int j=0; j<3; j++) {
if (!(i==1 && j==1)) {
Square sq = new Square(pos.x+i*newR, pos.y+j*newR, newR);
rects.add(sq);
}
}
}
return rects;
}
}
This is the main sketch, which advances the generation on mouse click -
ArrayList<Square> current;
void setup() {
size(600, 600);
current = new ArrayList<Square>();
current.add(new Square(0, 0, width));
}
void draw() {
background(255);
for (Square sq : current) {
sq.display();
}
}
void mousePressed() {
ArrayList<Square> next = new ArrayList<Square>();
for(Square sq: current) {
ArrayList<Square> rects = sq.generate();
next.addAll(rects);
}
current = next;
}
The problem:
The output that I am getting has very thin white lines which are not supposed to be there:
First generation -
Second generation -
Third generation -
My guess is that these lines are just the white background showing through because the calculations in generate() are off by a pixel or two. However, I am not sure how to get rid of them. Any help would be appreciated!
Here's a smaller example that demonstrates your problem:
size(1000, 100);
noStroke();
background(0);
float squareWidth = 9.9;
for(float squareX = 0; squareX < width; squareX += squareWidth){
rect(squareX, 0, squareWidth, height);
}
Notice that the black background is showing through the squares. Please try to post this kind of minimal example instead of your whole sketch in the future.
Anyway, there are three ways to fix this:
Option 1: Call the noSmooth() function.
By default, Processing uses anti-aliasing to make your drawings look smoother. Usually this is a good thing, but it can also add some fuzziness to the edges of shapes. If you disable anti-aliasing, your shapes will be more clear and you won't see the artifacts.
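Applied to the smaller example above, that is a one-line change made right after size():
size(1000, 100);
noSmooth(); // disable anti-aliasing so square edges are not blended with the background
noStroke();
background(0);
float squareWidth = 9.9;
for (float squareX = 0; squareX < width; squareX += squareWidth) {
  rect(squareX, 0, squareWidth, height);
}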
Option 2: Use a stroke with the same color as the fill.
As you've already discovered, this draws an outline around the shape.
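For example, giving the squares in the smaller sketch a stroke of the same color covers the seams between them:
size(1000, 100);
background(0);
fill(255);
stroke(255); // same color as the fill, so the outline hides any background showing through
float squareWidth = 9.9;
for (float squareX = 0; squareX < width; squareX += squareWidth) {
  rect(squareX, 0, squareWidth, height);
}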
Option 3: Use int values instead of float values.
You're storing your coordinates and sizes in float values, which can contain decimal places. The problem is, the screen (the actual pixels on your monitor) doesn't have decimal places (there is no such thing as half a pixel), so pixel positions are represented by int values. When a float value is converted to an int, the decimal part is dropped, which can cause small gaps in your shapes.
If you just switch to using int values, the problem goes away:
size(1000, 100);
noStroke();
background(0);
int squareWidth = 10;
for(int squareX = 0; squareX < width; squareX += squareWidth){
rect(squareX, 0, squareWidth, height);
}

Saving point cloud data into a file

I have the following code working on Processing 2, with Kinect:
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
// Kinect Library object
Kinect kinect;
// Angle for rotation
float a = 0;
// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];
void setup() {
// Rendering in P3D
size(1200, 800, P3D);
kinect = new Kinect(this);
kinect.initDepth();
// Lookup table for all possible depth values (0 - 2047)
for (int i = 0; i < depthLookUp.length; i++) {
depthLookUp[i] = rawDepthToMeters(i);
}
}
void draw() {
background(0);
// Get the raw depth as array of integers
int[] depth = kinect.getRawDepth();
// We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
int skip = 4;
// Translate and rotate
translate(width/2, height/2, -50);
rotateY(a);
for (int x = 0; x < kinect.width; x += skip) {
for (int y = 0; y < kinect.height; y += skip) {
int offset = x + y*kinect.width;
// Convert kinect data to world xyz coordinate
int rawDepth = depth[offset];
PVector v = depthToWorld(x, y, rawDepth);
stroke(255);
pushMatrix();
// Scale up by 200
float factor = 200;
translate(v.x*factor, v.y*factor, factor-v.z*factor);
// Draw a point
point(0, 0);
popMatrix();
}
}
// Rotate
a += 0.015f;
}
// These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
float rawDepthToMeters(int depthValue) {
if (depthValue < 2047) {
return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
}
return 0.0f;
}
PVector depthToWorld(int x, int y, int depthValue) {
final double fx_d = 1.0 / 5.9421434211923247e+02;
final double fy_d = 1.0 / 5.9104053696870778e+02;
final double cx_d = 3.3930780975300314e+02;
final double cy_d = 2.4273913761751615e+02;
PVector result = new PVector();
double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue);
result.x = (float)((x - cx_d) * depth * fx_d);
result.y = (float)((y - cy_d) * depth * fy_d);
result.z = (float)(depth);
return result;
}
I would like to save the point cloud data to a file, so I can import it later into another program, such as Cinema 4D.
How do I create this file?
Processing has several functions for saving data to file, the simplest of which is saveStrings().
To use the saveStrings() function, you would simply store whatever you wanted to save into a String array, and then pass that into the function along with a filename.
You can then use the loadStrings() function to read the data from a file back into a String array.
How you format the data into a String is entirely up to you. You might store it as comma separated values.
More info can be found in the reference.
If you want to store the data into a file that another program can read, you have to first look up exactly what format that file needs to be in. I'd start by opening up some example files in a basic text editor.
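As a concrete illustration (a sketch only: the comma-separated layout, the pointStrings list and the keyPressed() trigger are my own choices, and you would still need to check what formats your target program actually imports), you could collect each world-space point in draw() and write one line per point with saveStrings():
ArrayList<String> pointStrings = new ArrayList<String>();

// Inside the nested x/y loops in draw(), after computing v:
//   pointStrings.add(v.x + "," + v.y + "," + v.z);

void keyPressed() {
  if (key == 's') {
    // Write one "x,y,z" line per point next to the sketch
    saveStrings("pointcloud.csv", pointStrings.toArray(new String[0]));
    println("Saved " + pointStrings.size() + " points");
    pointStrings.clear();
  }
}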

XNA Vector2 path contained inside rectangle

Hello, I am new to XNA and trying to develop a game prototype where the character moves from one location to another using mouse clicks.
I have a Rectangle representing the current position. I get the target location as a Vector2 using player mouse input. I extract the direction vector from the source to the target by Vector2 subtraction.
//the cursor's coordinates should be the center of the target position
float x = mouseState.X - this.position.Width / 2;
float y = mouseState.Y - this.position.Height / 2;
Vector2 targetVector = new Vector2(x, y);
Vector2 dir = (targetVector - this.Center); //vector from source center to target center
I represent the world using a tile map; every cell is 32x32 pixels.
int tileMap[,];
What I want to do is check whether the direction vector above passes through any blue tiles on the map. A blue tile is equal 1 on the map.
I am not sure how to do this. I thought about using the linear line equation and trigonometric formulas, but I'm finding it hard to implement. I've tried normalizing the vector and multiplying by 32 to get 32-pixel-length intervals along the path of the vector, but it doesn't seem to work. Can anyone tell me if there's anything wrong with it, or suggest another way to solve this problem? Thanks
//collision with blue wall. Returns point of impact
private bool CheckCollisionWithBlue(Vector2 dir)
{
int num = Worldmap.size; //32
int i = 0;
int intervals = (int)(dir.Length() / num + 1); //the number of 32-pixel length
//intervals on the vector, with an edge
Vector2 unit = Vector2.Normalize(dir) * num; //a vector of length 32 in the same
//direction as dir.
Vector2 v = unit;
while (i <= intervals & false)
{
int x = (int)(v.X / num);
int y = (int)(v.Y / num);
int type = Worldmap.getType(y, x);
if (type == 1) //blue tile
{
return true;
}
else
{
i++;
v = unit * i;
}
}
return false;
}
You need the initial position too, not only the direction.
Maybe you need more resolution.
What? Remove the "false" evaluation: with & false in the condition, the loop body never runs.
The calculations for the next position are a bit complicated.
private bool CheckCollisionWithBlue(Vector2 source, Vector2 dir)
{
int num = 8; // pixel blocks of 8
int i = 0;
int intervals = (int)(dir.Length() / num);
Vector2 step = Vector2.Normalize(dir)*num;
while (i <= intervals)
{
int x = (int)(source.X);
int y = (int)(source.Y);
int type = Worldmap.getType(y, x);
if (type == 1) //blue tile
{
return true;
}
else
{
i++;
source+=step;
}
}
return false;
}
This will improve your code somewhat, but it may still be inaccurate... it depends on what you are trying to do...
You may find Bresenham's line algorithm interesting: http://en.wikipedia.org/wiki/Bresenham's_line_algorithm
You should realize that you are not doing a volume collision but a line collision; if there is a ship or character or whatever at the source position, you may have to add more calculations.

Pixel reordering is wrong when trying to process and display image copy with lower res

I'm currently making an application using Processing intended to take an image and apply 8-bit style processing to it: that is, to make it look pixelated. To do this it has a method that takes a style and window size as parameters (style is the shape in which the window is to be displayed: rect, ellipse, cross etc., and window size is a number between 1 and 10, squared), to produce results similar to the iPhone app pxl ( http://itunes.apple.com/us/app/pxl./id499620829?mt=8 ). This method then counts through the image's pixels window by window, averages the colour of each window, and displays a rect (or whichever shape/style was chosen) at the equivalent spot on the other side of the sketch window (when run, the sketch is supposed to display the original image on the left and mirror it with the processed version on the right).
The problem I'm having is that when drawing the averaged colour rects, the order in which they are displayed becomes skewed.
Although the results are rather amusing, they are not what I want. Here is the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;
//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color [] c){
// RGB Variables
float r = 0;
float g = 0;
float b = 0;
// Iterator
int i = 0;
int sizeOfWindow = c.length;
// Count through the window's pixels, store the
// red, green and blue values in the RGB variables
// and sum them into the average variables
for(i = 0; i < c.length; i++){
r = red (c[i]);
g = green(c[i]);
b = blue (c[i]);
avR += r;
avG += g;
avB += b;
}
// Divide the sum of the red, green and blue
// values by the number of pixels in the window
// to obtain the average
avR = avR / sizeOfWindow;
avG = avG / sizeOfWindow;
avB = avB / sizeOfWindow;
// Return the colour
return new sRGB(avR,avG,avB);
}
public void eightBitIT(int style, int windowSize){
img.loadPixels();
for(int wx = 0; wx < img.width; wx += (sqrt(windowSize))){
for(int wy = 0; wy < img.height; wy += (sqrt(windowSize))){
color [] tempCols = new color[windowSize];
int i = 0;
for(int x = 0; x < (sqrt(windowSize)); x ++){
for(int y = 0; y < (sqrt(windowSize)); y ++){
int loc = (wx+x) + (y+wy)*(img.width-windowSize);
tempCols[i] = img.pixels[loc];
// println("Window loc X: "+(wx+(img.width+5))+" Window loc Y: "+(wy+5)+" Window pix X: "+x+" Window Pix Y: "+y);
i++;
}
}
//this is meant to be in a switch test (0 = rect, 1 = ellipse, etc)
styleColour = new sRGB(averageWindowColour(tempCols));
//println("R: "+ red(styleColour.returnColourScaled())+" G: "+green(styleColour.returnColourScaled())+" B: "+blue(styleColour.returnColourScaled()));
rectMode(CORNER);
noStroke();
fill(styleColour.returnColourScaled());
//println("Rect Loc X: "+(wx+(img.width+5))+" Y: "+(wy+5));
ellipse(wx+(img.width+5),wy+5,sqrt(windowSize),sqrt(windowSize));
}
}
}
public PImage load(String s){
PImage temp = loadImage(s);
temp.resize(600,470);
return temp;
}
void setup(){
background(0);
// Load the image and set size of screen to its size*2 + the borders
// and display the image.
img = loadImage("oscilloscope.jpg");
size(img.width*2+15,(img.height+10));
frameRate(25);
image(img,5,5);
// Draw the borders
strokeWeight(5);
stroke(255);
rectMode(CORNERS);
noFill();
rect(2.5,2.5,img.width+3,height-3);
rect(img.width+2.5,2.5,width-3,height-3);
stroke(255,0,0);
strokeWeight(1);
rect(5,5,9,9); //window example
// process the image
eightBitIT(BLOCKS, 16);
}
void draw(){
//eightBitIT(BLOCKS, 4);
//println("X: "+mouseX+" Y: "+mouseY);
}
This has been bugging me for a while now, as I can't see where in my code I'm offsetting the coordinates so that they display like this. I know it's probably something very trivial but I can't seem to work it out. If anyone can spot why this skewed reordering is happening I would be much obliged, as I have quite a lot of other ideas I want to implement and this is holding me back...
Thanks,
