Node.js / CoffeeScript performance on a math-intensive algorithm

I am experimenting with node.js to build some server-side logic, and have implemented a version of the diamond-square algorithm described here in CoffeeScript and Java. Given all the praise I have heard for node.js and V8 performance, I was hoping that node.js would not lag too far behind the Java version.
However, on a 4096x4096 map, Java finishes in under 1 s while node.js/CoffeeScript takes over 20 s on my machine...
These are my full results (x-axis is grid size; the log and linear charts are omitted here).
Is this because there is something wrong with my CoffeeScript implementation, or is this just the nature of node.js at the moment?
CoffeeScript
genHeightField = (sz) ->
  timeStart = new Date()
  DATA_SIZE = sz
  SEED = 1000.0
  data = new Array()
  iters = 0
  # warm up the arrays to tell the js engine these are dense arrays
  # seems to have negligible effect when running on node.js though
  for rows in [0...DATA_SIZE]
    data[rows] = new Array()
    for cols in [0...DATA_SIZE]
      data[rows][cols] = 0
  data[0][0] = data[0][DATA_SIZE-1] = data[DATA_SIZE-1][0] =
    data[DATA_SIZE-1][DATA_SIZE-1] = SEED
  h = 500.0
  sideLength = DATA_SIZE-1
  while sideLength >= 2
    halfSide = sideLength / 2
    for x in [0...DATA_SIZE-1] by sideLength
      for y in [0...DATA_SIZE-1] by sideLength
        avg = data[x][y] +
          data[x + sideLength][y] +
          data[x][y + sideLength] +
          data[x + sideLength][y + sideLength]
        avg /= 4.0
        data[x + halfSide][y + halfSide] =
          avg + Math.random() * (2 * h) - h
        iters++
        #console.log "A:" + x + "," + y
    for x in [0...DATA_SIZE-1] by halfSide
      y = (x + halfSide) % sideLength
      while y < DATA_SIZE-1
        avg =
          data[(x-halfSide+DATA_SIZE-1)%(DATA_SIZE-1)][y] +
          data[(x+halfSide)%(DATA_SIZE-1)][y] +
          data[x][(y+halfSide)%(DATA_SIZE-1)] +
          data[x][(y-halfSide+DATA_SIZE-1)%(DATA_SIZE-1)]
        avg /= 4.0
        avg = avg + Math.random() * (2 * h) - h
        data[x][y] = avg
        if x is 0
          data[DATA_SIZE-1][y] = avg
        if y is 0
          data[x][DATA_SIZE-1] = avg
        #console.log "B: " + x + "," + y
        y += sideLength
        iters++
    sideLength /= 2
    h /= 2.0
  #console.log iters
  console.log (new Date() - timeStart)

genHeightField(256+1)
genHeightField(512+1)
genHeightField(1024+1)
genHeightField(2048+1)
genHeightField(4096+1)
Java
import java.util.Random;

class Gen {
    public static void main(String args[]) {
        genHeight(256+1);
        genHeight(512+1);
        genHeight(1024+1);
        genHeight(2048+1);
        genHeight(4096+1);
    }

    public static void genHeight(int sz) {
        long timeStart = System.currentTimeMillis();
        int iters = 0;
        final int DATA_SIZE = sz;
        final double SEED = 1000.0;
        double[][] data = new double[DATA_SIZE][DATA_SIZE];
        data[0][0] = data[0][DATA_SIZE-1] = data[DATA_SIZE-1][0] =
            data[DATA_SIZE-1][DATA_SIZE-1] = SEED;
        double h = 500.0;
        Random r = new Random();
        for(int sideLength = DATA_SIZE-1;
            sideLength >= 2;
            sideLength /= 2, h /= 2.0){
            int halfSide = sideLength/2;
            for(int x = 0; x < DATA_SIZE-1; x += sideLength){
                for(int y = 0; y < DATA_SIZE-1; y += sideLength){
                    double avg = data[x][y] +
                        data[x+sideLength][y] +
                        data[x][y+sideLength] +
                        data[x+sideLength][y+sideLength];
                    avg /= 4.0;
                    data[x+halfSide][y+halfSide] =
                        avg + (r.nextDouble()*2*h) - h;
                    iters++;
                    //System.out.println("A:" + x + "," + y);
                }
            }
            for(int x = 0; x < DATA_SIZE-1; x += halfSide){
                for(int y = (x+halfSide)%sideLength; y < DATA_SIZE-1; y += sideLength){
                    double avg =
                        data[(x-halfSide+DATA_SIZE-1)%(DATA_SIZE-1)][y] +
                        data[(x+halfSide)%(DATA_SIZE-1)][y] +
                        data[x][(y+halfSide)%(DATA_SIZE-1)] +
                        data[x][(y-halfSide+DATA_SIZE-1)%(DATA_SIZE-1)];
                    avg /= 4.0;
                    avg = avg + (r.nextDouble()*2*h) - h;
                    data[x][y] = avg;
                    if(x == 0) data[DATA_SIZE-1][y] = avg;
                    if(y == 0) data[x][DATA_SIZE-1] = avg;
                    iters++;
                    //System.out.println("B:" + x + "," + y);
                }
            }
        }
        //System.out.print(iters + " ");
        System.out.println(System.currentTimeMillis() - timeStart);
    }
}

As other answerers have pointed out, JavaScript's arrays are a major performance bottleneck for the type of operations you're doing. Because they're dynamic, it's naturally much slower to access elements than it is with Java's static arrays.
The good news is that there is an emerging standard for statically typed arrays in JavaScript, already supported in some browsers. Though not yet supported in Node proper, you can easily add them with a library: https://github.com/tlrobinson/v8-typed-array
After installing typed-array via npm, here's my modified version of your code:
{Float32Array} = require 'typed-array'

genHeightField = (sz) ->
  timeStart = new Date()
  DATA_SIZE = sz
  SEED = 1000.0
  iters = 0
  # Initialize 2D array of floats
  data = new Array(DATA_SIZE)
  for rows in [0...DATA_SIZE]
    data[rows] = new Float32Array(DATA_SIZE)
    for cols in [0...DATA_SIZE]
      data[rows][cols] = 0
  # The rest is the same...
The key line in there is the declaration of data[rows].
With the line data[rows] = new Array(DATA_SIZE) (essentially equivalent to the original), I get the benchmark numbers:
17
75
417
1376
5461
And with the line data[rows] = new Float32Array(DATA_SIZE), I get
19
47
215
855
3452
So that one small change cuts the running time down by about 1/3, i.e. a 50% speed increase!
It's still not Java, but it's a pretty substantial improvement. Expect future versions of Node/V8 to narrow the performance gap further.
Caveat: it has to be mentioned that normal JS numbers are double-precision, i.e. 64-bit floats. Using Float32Array will thus reduce precision, making this a bit of an apples-and-oranges comparison: I don't know how much of the performance improvement is from using 32-bit math, and how much is from faster array access. (A Float64Array is part of the typed-array spec, but isn't yet implemented in the v8-typed-array library.)

If you're looking for performance in algorithms like this, both coffee/js and Java are the wrong languages to be using. JavaScript is especially poor for problems like this because it does not have an array type: arrays are just hash maps whose keys must be integers, which obviously will not be as quick as a real array. What you want is to write this algorithm in C and call that from node (see http://nodejs.org/docs/v0.4.10/api/addons.html). Unless you're really good at hand-optimizing machine code, good C will easily outstrip any other language.
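To give an idea of what that would look like, here is a minimal sketch of the diamond-square core ported to plain C. The node addon binding glue from the linked docs is omitted; the flat row-major layout and the use of rand() in place of Java's Random are choices made for this sketch, while the 1000/500 seed and height constants follow the question's code.

#include <stdio.h>
#include <stdlib.h>

/* Diamond-square core in plain C; sz must be (2^n)+1.
   The caller owns the buffer, stored row-major as data[x*sz + y]. */
static void gen_height(double *data, int sz)
{
    double h = 500.0;
    int n = sz - 1;
    data[0] = data[n] = data[n * sz] = data[n * sz + n] = 1000.0; /* seed corners */

    for (int side = n; side >= 2; side /= 2, h /= 2.0) {
        int half = side / 2;
        /* square step */
        for (int x = 0; x < n; x += side)
            for (int y = 0; y < n; y += side) {
                double avg = (data[x * sz + y] +
                              data[(x + side) * sz + y] +
                              data[x * sz + y + side] +
                              data[(x + side) * sz + y + side]) / 4.0;
                data[(x + half) * sz + y + half] =
                    avg + ((double)rand() / RAND_MAX) * 2.0 * h - h;
            }
        /* diamond step */
        for (int x = 0; x < n; x += half)
            for (int y = (x + half) % side; y < n; y += side) {
                double avg = (data[((x - half + n) % n) * sz + y] +
                              data[((x + half) % n) * sz + y] +
                              data[x * sz + (y + half) % n] +
                              data[x * sz + (y - half + n) % n]) / 4.0;
                avg += ((double)rand() / RAND_MAX) * 2.0 * h - h;
                data[x * sz + y] = avg;
                if (x == 0) data[n * sz + y] = avg;
                if (y == 0) data[x * sz + n] = avg;
            }
    }
}

int main(void)
{
    int sz = 4096 + 1;
    double *data = calloc((size_t)sz * sz, sizeof *data);
    if (!data) return 1;
    gen_height(data, sz);
    printf("%f\n", data[1]); /* keep the result observable */
    free(data);
    return 0;
}

From there the function could be wrapped as an addon or simply invoked as a separate process, whichever is easier to maintain.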

Forget about CoffeeScript for a minute, because that's not the root of the problem. That code just gets compiled to regular old JavaScript anyway when node runs it.
Just like any other JavaScript environment, node is single-threaded. The V8 engine is bloody fast, but for certain types of applications you might not be able to exceed the speed of the JVM.
I would first suggest trying to write out your diamond algorithm directly in JS before moving to CS. See what kinds of speed optimizations you can make.
Actually, I'm kind of interested in this problem now too and am going to take a look at doing this.
Edit #2: This is my second rewrite with some optimizations, such as pre-populating the data array. It's not significantly faster, but the code is a bit cleaner.
var makegrid = function(size){
    size++; // grid is (2^n)+1 on a side
    var grid = [];
    grid.length = size;
    var gsize = size - 1; // frequently used value in later calculations
    var corner_vals = 1000;  // corner seed value (matches the original SEED)
    var height_range = 500;  // initial random displacement range (matches the original h)
    // set up grid array
    var len = size;
    while (len--) {
        grid[len] = new Array(size + 1).join(0).split(''); // array of length "size" whose entries coerce to 0
    }
    // populate the four corners of the grid
    grid[0][0] = grid[gsize][0] = grid[0][gsize] = grid[gsize][gsize] = corner_vals;
    var side_length = gsize;
    while (side_length >= 2) {
        var half_side = Math.floor(side_length / 2);
        // generate new square values
        for (var x = 0; x < gsize; x += side_length) {
            for (var y = 0; y < gsize; y += side_length) {
                // average of the four existing corners plus a random offset
                var avg = ((grid[x][y] + grid[x+side_length][y] + grid[x][y+side_length] + grid[x+side_length][y+side_length]) / 4) + (Math.random() * (2 * height_range) - height_range);
                grid[x+half_side][y+half_side] = Math.floor(avg);
            }
        }
        // generate diamond values
        for (var x = 0; x < gsize; x += half_side) {
            for (var y = (x+half_side) % side_length; y < gsize; y += side_length) {
                var avg = Math.floor(((grid[(x-half_side+gsize)%gsize][y] + grid[(x+half_side)%gsize][y] + grid[x][(y+half_side)%gsize] + grid[x][(y-half_side+gsize)%gsize]) / 4) + (Math.random() * (2 * height_range) - height_range));
                grid[x][y] = avg;
                if (x === 0) grid[gsize][y] = avg;
                if (y === 0) grid[x][gsize] = avg;
            }
        }
        side_length /= 2;
        height_range /= 2;
    }
    return grid;
}

makegrid(256)
makegrid(512)
makegrid(1024)
makegrid(2048)
makegrid(4096)

I have always assumed that when people describe JavaScript runtimes as 'fast' they mean relative to other interpreted, dynamic languages. A comparison to Ruby, Python or Smalltalk would be interesting. Comparing JavaScript to Java is not a fair comparison.
To answer your question, I believe that the results you are seeing are indicative of what you can expect comparing these two vastly different languages.

Related

Is it possible to know the brightness of a picture in Flutter?

I am building an application which has a camera inside.
After I take a photo, I want to analyze it to determine the brightness of the picture; if it is bad I have to take the photo again.
This is my code right now; it's a JavaScript function that I found and rewrote in Dart (thanks to @Abion47):
EDIT 1
for (int i = 0; i < pixels.length; i++) {
  int pixel = pixels[i];
  int b = (pixel & 0x00FF0000) >> 16;
  int g = (pixel & 0x0000FF00) >> 8;
  int r = (pixel & 0x000000FF);
  avg = ((r + g + b) / 3).floor();
  colorSum += avg;
}
brightness = (colorSum / (width * height)).round();
// I also tried with this other line
// brightness = (colorSum / pixels.length).round();
return brightness;
But I get lower brightness values for white pictures than for black ones; the numbers are a bit weird.
Do you know a better way to determine the brightness?
SOLUTION:
After further investigation we found the solution: we had an error in the image decoding, which we fixed by using a function from the image package.
Here is our final code:
// uses the 'image' package: import 'package:image/image.dart';
Image image = decodeImage(file.readAsBytesSync());
var data = image.getBytes();
var colorSum = 0;
for (var x = 0; x < data.length; x += 4) {
  int r = data[x];
  int g = data[x + 1];
  int b = data[x + 2];
  int avg = ((r + g + b) / 3).floor();
  colorSum += avg;
}
var brightness = (colorSum / (image.width * image.height)).floor();
return brightness;
Hope it helps you.
There are several things wrong with your code.
First, you are getting a range error because you are attempting to access a pixel that doesn't exist. This is probably due to width and/or height being greater than the image's actual width or height. There are a lot of ways to try and get these values, but for this application it doesn't actually matter since the end result is to get an average value across all pixels in the image, and you don't need the width or height of the image for that.
Second, you are fetching the color values by serializing the color value into a hex string and then parsing the individual channel substrings. Your substring is going to result in incorrect values because:
foo.substring(a, b) takes the substring of foo from a to b, exclusive. That means that a and b are indices, not lengths, and the resulting string will not include the character at b. So assuming hex is "01234567", when you do hex.substring(0, 2), you get "01", and then you do hex.substring(3, 5) you get "34" while hex.substring(6, 8) gets you "67". You need to do hex.substring(0, 2) followed by hex.substring(2, 4) and hex.substring(4, 6) to get the first three channels.
That being said, you are fetching the wrong channels. The image package stores its pixel values in ABGR format, meaning the first two characters in the hex string are going to be the alpha channel, which is unimportant when calculating image brightness. Instead, you want the second, third, and fourth channels for the blue, green, and red values respectively.
And having said all that, this is an extremely inefficient way to do this anyway when the preferred way to retrieve channel data from an integer color value is with bitwise operations on the integer itself. (Never convert a number to a string or vice versa unless you absolutely have to.)
So in summary, what you want will likely be something akin to the following:
final pixels = image.data;
double colorSum = 0;
for (int i = 0; i < pixels.length; i++) {
  int pixel = pixels[i];
  int b = (pixel & 0x00FF0000) >> 16;
  int g = (pixel & 0x0000FF00) >> 8;
  int r = (pixel & 0x000000FF);
  double avg = (r + g + b) / 3;
  colorSum += avg;
}
return colorSum / pixels.length;

How to find trend (growth/decrease/stationarity) of a data series

I am trying to extract the OEE trend of a manufacturing machine. I already have a dataset of OEE calculated more or less every 30 seconds for each manufacturing machine and stored in a database.
What I want to do is to extract a subset of the dataset (say, the last 30 minutes) and state whether the OEE has grown, decreased or been stable (within a certain threshold). My task is NOT to forecast the next value of OEE, but just to know whether it has decreased (desired return value: -1), grown (desired return value: +1) or been stable (desired return value: 0) based on the dataset. I am using Java 8 in my project.
Here is an example of dataset:
71.37
71.37
70.91
70.30
70.30
70.42
70.42
69.77
69.77
69.29
68.92
68.92
68.61
68.61
68.91
68.91
68.50
68.71
69.27
69.26
69.89
69.85
69.98
69.93
69.39
68.97
69.03
From this dataset it is possible to state that the OEE has been decreasing (of course, based on a threshold), so the algorithm would return -1.
I have been searching the web unsuccessfully. I have found this, or this github project, or this stackoverflow question. However, all of those are (more or less) complex forecasting algorithms. I am searching for a much simpler solution. Any help is appreciated.
You could go for a sliding average of the last n values, or a sliding median of the last n values.
It highly depends on your application which of the two is appropriate, but both are very simple to implement and in a lot of cases more than good enough.
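For illustration, a minimal sliding-average sketch (written in C here; the same few lines translate directly to Java 8). It compares the mean of the most recent window with the mean of the window before it and maps the difference to -1/0/+1; the window size and threshold are placeholders you would tune against your OEE data.

#include <stddef.h>

/* Compares the mean of the last `window` samples with the mean of the
   `window` samples before them. Returns +1 (growth), -1 (decrease) or
   0 (stable within `threshold`). `window` and `threshold` are tuning
   knobs, e.g. window = 30 and threshold = 0.5 for 30 minutes of data. */
int sliding_trend(const double *oee, size_t len, size_t window, double threshold)
{
    if (len < 2 * window) return 0; /* not enough data to decide */

    double recent = 0.0, previous = 0.0;
    for (size_t i = len - window; i < len; ++i)
        recent += oee[i];
    for (size_t i = len - 2 * window; i < len - window; ++i)
        previous += oee[i];
    recent /= (double)window;
    previous /= (double)window;

    double diff = recent - previous;
    if (diff > threshold)  return 1;
    if (diff < -threshold) return -1;
    return 0;
}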
As you know from math, one would use d/dt, which here more or less means using the step differences.
A trend should also carry some weight:
import java.util.stream.DoubleStream;
import java.util.stream.IntStream;

class Trend {
    int direction;
    double probability;
}

Trend trend(double[] lastData) {
    // step differences: deltas[i] = lastData[i + 1] - lastData[i]
    double[] deltas = new double[lastData.length - 1];
    for (int i = 0; i < deltas.length; ++i) {
        deltas[i] = lastData[i + 1] - lastData[i];
    }
    // Trend based on two parts of the series:
    int parts = 2;
    int splitN = (deltas.length + 1) / parts;
    int i = 0;
    int[] trends = new int[parts];
    for (int j = 0; j < parts; ++j) {
        int n = Math.min(splitN, deltas.length - i);
        double partAvg = DoubleStream.of(deltas).skip(i).limit(n).sum() / n;
        trends[j] = tendency(partAvg);
        i += n;
    }
    Trend result = new Trend();
    result.direction = trends[parts - 1];
    double avg = IntStream.of(trends).average().orElse((double) result.direction);
    // agreement between the parts, mapped onto [0, 1]
    result.probability = ((result.direction * avg) + 1) / 2;
    return result;
}

int tendency(double avgDelta) {
    final double EPS = 0.0001;
    return avgDelta < -EPS ? -1 : avgDelta > EPS ? 1 : 0;
}
This is not very sophisticated. For more elaborate treatment a math forum might be useful.

Mandelbrot optimization in openmp

Well, I have to parallelize the Mandelbrot program in C. I think I have done it well and I can't get better times. My question is whether someone has an idea to improve the code; I have been thinking about perhaps using nested parallel regions between the outer and inner for loops...
I also have doubts about whether it is more elegant or recommended to put all the pragmas in a single line or to write separate pragmas (one for omp parallel with the shared and private variables and a conditional, and another pragma with omp for and schedule dynamic).
I also wonder whether constants can be used as private variables, because I think it is cleaner to have constants instead of defined variables.
Also I have written a conditional (if numcpu > 1): it makes no sense to open a parallel region for what is effectively a sequential execution.
Finally, as I have read, the dynamic chunk depends on the hardware and your system configuration, so I have left it as a constant so it can be easily changed.
Also I adapt the number of threads to the number of processors available.
/* IMAGEWIDTH, IMAGEHEIGHT, ITERATIONS, CHUNK and pixel_t are defined elsewhere in the program. */
int main(int argc, char *argv[])
{
    omp_set_dynamic(1);
    int xactual, yactual;
    //each iteration, it calculates: newz = oldz*oldz + p, where p is the current pixel, and oldz starts at the origin
    double pr, pi; //real and imaginary part of the pixel p
    double newRe, newIm, oldRe, oldIm; //real and imaginary parts of new and old z
    double zoom = 1, moveX = -0.5, moveY = 0; //you can change these to zoom and change position
    pixel_t *pixels = malloc(sizeof(pixel_t)*IMAGEHEIGHT*IMAGEWIDTH);
    clock_t begin, end;
    double time_spent;
    begin = clock();
    int numcpu;
    numcpu = omp_get_num_procs();
    //FILE * fp;
    printf("The number of processors we will use is: %d", numcpu);
    omp_set_num_threads(numcpu);
    #pragma omp parallel shared(pixels, moveX, moveY, zoom) private(xactual, yactual, pr, pi, newRe, newIm, oldRe, oldIm) if(numcpu > 1)
    {
        //int xactual=0;
        //int yactual=0;
        #pragma omp for schedule(dynamic, CHUNK)
        //loop through every pixel
        for(yactual = 0; yactual < IMAGEHEIGHT; yactual++)
            for(xactual = 0; xactual < IMAGEWIDTH; xactual++)
            {
                //calculate the initial real and imaginary part of z, based on the pixel location and zoom and position values
                pr = 1.5 * (xactual - IMAGEWIDTH / 2) / (0.5 * zoom * IMAGEWIDTH) + moveX;
                pi = (yactual - IMAGEHEIGHT / 2) / (0.5 * zoom * IMAGEHEIGHT) + moveY;
                newRe = newIm = oldRe = oldIm = 0; //these should start at 0,0
                //"i" will represent the number of iterations
                int i;
                //start the iteration process
                for(i = 0; i < ITERATIONS; i++)
                {
                    //remember value of previous iteration
                    oldRe = newRe;
                    oldIm = newIm;
                    //the actual iteration, the real and imaginary part are calculated
                    newRe = oldRe * oldRe - oldIm * oldIm + pr;
                    newIm = 2 * oldRe * oldIm + pi;
                    //if the point is outside the circle with radius 2: stop
                    if((newRe * newRe + newIm * newIm) > 4) break;
                }
                // color(i % 256, 255, 255 * (i < maxIterations));
                if(i == ITERATIONS)
                {
                    //color(0, 0, 0); // black
                    pixels[yactual*IMAGEWIDTH+xactual][0] = 0;
                    pixels[yactual*IMAGEWIDTH+xactual][1] = 0;
                    pixels[yactual*IMAGEWIDTH+xactual][2] = 0;
                }
                else
                {
                    double z = sqrt(newRe * newRe + newIm * newIm);
                    int brightness = 256 * log2(1.75 + i - log2(log2(z))) / log2((double)ITERATIONS);
                    //color(brightness, brightness, 255)
                    pixels[yactual*IMAGEWIDTH+xactual][0] = brightness;
                    pixels[yactual*IMAGEWIDTH+xactual][1] = brightness;
                    pixels[yactual*IMAGEWIDTH+xactual][2] = 255;
                }
            }
    } //end of parallel region
    end = clock();
    time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
    fprintf(stderr, "Elapsed time: %.2lf seconds.\n", time_spent);
    return 0;
}
You could extend the implementation to leverage SIMD extensions. As far as I know the latest OpenMP standard includes vector constructs. Check out this article that describes the new capabilities.
This whitepaper explains how SSE3 can be used when calculating the Mandelbrot set.
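As a rough illustration of those vector constructs, the sketch below computes escape counts for one row of pixels with OpenMP's simd directive (OpenMP 4.0 or later). The data-dependent break is replaced by a mask-style count so every lane does the same amount of work; ROW_WIDTH and MAX_ITER are placeholders rather than the question's IMAGEWIDTH and ITERATIONS, and whether this beats the scalar loop depends on the compiler, so measure both variants.

#define ROW_WIDTH 1024   /* placeholder for IMAGEWIDTH */
#define MAX_ITER  256    /* placeholder for ITERATIONS */

/* Computes escape-iteration counts for one image row. The early break is
   removed so the x loop can be vectorized: every pixel runs MAX_ITER
   iterations and simply stops counting once it has escaped. */
void mandel_row(double pi, double x0, double dx, int counts[ROW_WIDTH])
{
    #pragma omp simd
    for (int x = 0; x < ROW_WIDTH; ++x) {
        double pr = x0 + x * dx;
        double re = 0.0, im = 0.0;
        int count = 0;
        for (int i = 0; i < MAX_ITER; ++i) {
            double re2 = re * re, im2 = im * im;
            count += (re2 + im2 <= 4.0);   /* count only while still inside */
            double new_im = 2.0 * re * im + pi;
            re = re2 - im2 + pr;
            im = new_im;
        }
        counts[x] = count;
    }
}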

Is this part of a real IFFT process really optimal?

When calculating an (I)FFT it is possible to calculate "N*2 real" data points using an ordinary complex (I)FFT of N data points.
Not sure about my terminology here, but this is how I've read it described.
There are several posts about this on stackoverflow already.
This can speed things up a bit when only dealing with such "real" data which is often the case when dealing with for example sound (re-)synthesis.
This increase in speed is offset by the need for a pre-processing step that somehow... uhh... fidaddles? the data to achieve this. Look I'm not even going to try to convince anyone I fully understand this but thanks to previously mentioned threads, I came up with the following routine, which does the job nicely (thank you!).
However, on my microcontroller this costs a bit more than I'd like even though trigonometric functions are already optimized with LUTs.
But the routine itself just looks like it should be possible to optimize mathematically to minimize processing. To me it seems similar to a plain 2D rotation. I just can't quite wrap my head around it, but it feels like this could be done with fewer trigonometric calls and fewer arithmetic operations.
I was hoping perhaps someone else might easily see what I don't and provide some insight into how this math may be simplified.
This particular routine is for use with IFFT, before the bit-reversal stage.
pseudo-version:

INPUT
    MAG_A/B = 0 TO 1
    PHA_A/B = 0 TO 2PI
    INDEX   = 0 TO PI/2

r = MAG_A * sin(PHA_A)
i = MAG_B * sin(PHA_B)
rsum = r + i
rdif = r - i
r = MAG_A * cos(PHA_A)
i = MAG_B * cos(PHA_B)
isum = r + i
idif = r - i
r = -cos(INDEX)
i = -sin(INDEX)
rtmp = r * isum + i * rdif
itmp = i * isum - r * rdif

OUTPUT rsum + rtmp
OUTPUT itmp + idif
OUTPUT rsum - rtmp
OUTPUT itmp - idif
original working code, if that's your poison:
void fft_nz_set(fft_complex_t complex[], unsigned bits, unsigned index, int32_t mag_lo, int32_t pha_lo, int32_t mag_hi, int32_t pha_hi) {
unsigned size = 1 << bits;
unsigned shift = SINE_TABLE_BITS - (bits - 1);
unsigned n = index; // index for mag_lo, pha_lo
unsigned z = size - index; // index for mag_hi, pha_hi
int32_t rsum, rdif, isum, idif, r, i;
r = smmulr(mag_lo, sine(pha_lo)); // mag_lo * sin(pha_lo)
i = smmulr(mag_hi, sine(pha_hi)); // mag_hi * sin(pha_hi)
rsum = r + i; rdif = r - i;
r = smmulr(mag_lo, cosine(pha_lo)); // mag_lo * cos(pha_lo)
i = smmulr(mag_hi, cosine(pha_hi)); // mag_hi * cos(pha_hi)
isum = r + i; idif = r - i;
r = -sinetable[(1 << SINE_BITS) - (index << shift)]; // cos(pi_c * (index / size) / 2)
i = -sinetable[index << shift]; // sin(pi_c * (index / size) / 2)
int32_t rtmp = smmlar(r, isum, smmulr(i, rdif)) << 1; // r * isum + i * rdif
int32_t itmp = smmlsr(i, isum, smmulr(r, rdif)) << 1; // i * isum - r * rdif
complex[n].r = rsum + rtmp;
complex[n].i = itmp + idif;
complex[z].r = rsum - rtmp;
complex[z].i = itmp - idif;
}
// For reference, this would be used as follows to generate a sawtooth (after IFFT)
void synth_sawtooth(fft_complex_t *complex, unsigned fft_bits) {
unsigned fft_size = 1 << fft_bits;
fft_sym_dc(complex, 0, 0); // sets dc bin [0]
for(unsigned n = 1, z = fft_size - 1; n <= fft_size >> 1; n++, z--) {
// calculation of amplitude/index (sawtooth) for both n and z
fft_sym_magnitude(complex, fft_bits, n, 0x4000000 / n, 0x4000000 / z);
}
}

Data structures and algorithms for adaptive "uniform" mesh?

I need a data structure for storing float values on a uniformly sampled 3D mesh:
x = x0 + ix*dx where 0 <= ix < nx
y = y0 + iy*dy where 0 <= iy < ny
z = z0 + iz*dz where 0 <= iz < nz
Up to now I have used my Array class:
Array3D<float> A(nx, ny,nz);
A(0,0,0) = 0.0f; // ix = iy = iz = 0
Internally it stores the float values as a 1D array with nx * ny * nz elements.
However, now I need to represent a mesh with more values than I have RAM,
e.g. nx = ny = nz = 2000.
I think many neighbouring nodes in such a mesh may have similar values, so I was wondering if there is some simple way to "coarsen" the mesh adaptively.
For instance, if the 8 (ix,iy,iz) nodes of a cell in this mesh have values that are less than 5% apart, they are "removed" and replaced by just one value: the mean of the 8 values.
How could I implement such a data structure in a simple and efficient way?
EDIT:
Thanks Ante for suggesting lossy compression. I think it could work the following way:
#define BLOCK_SIZE 64

struct CompressedArray3D {
    CompressedArray3D(int ni, int nj, int nk) {
        NI = ni/BLOCK_SIZE + 1;
        NJ = nj/BLOCK_SIZE + 1;
        NK = nk/BLOCK_SIZE + 1;
        blocks = new float*[NI*NJ*NK];
        compressedSize = new unsigned int[NI*NJ*NK];
    }

    void setBlock(int I, int J, int K, float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE]) {
        unsigned int csize;
        blocks[I*NJ*NK + J*NK + K] = compress(values, csize);
        compressedSize[I*NJ*NK + J*NK + K] = csize;
    }

    float getValue(int i, int j, int k) {
        int I = i/BLOCK_SIZE;
        int J = j/BLOCK_SIZE;
        int K = k/BLOCK_SIZE;
        int ii = i - I*BLOCK_SIZE;
        int jj = j - J*BLOCK_SIZE;
        int kk = k - K*BLOCK_SIZE;
        float *compressedBlock = blocks[I*NJ*NK + J*NK + K];
        unsigned int csize = compressedSize[I*NJ*NK + J*NK + K];
        float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE];
        decompress(compressedBlock, csize, values);
        return values[ii][jj][kk];
    }

    // number of blocks:
    int NI, NJ, NK;
    // number of samples:
    int ni, nj, nk;

    float** blocks;
    unsigned int* compressedSize;
};
For this to be useful I need a lossy compression that is:
extremely fast, also on small datasets (e.g. 64x64x64)
compresses quite hard (> 3x); never mind if it loses quite a bit of info.
Any good candidates?
It sounds like you're looking for a LOD (level of detail) adaptive mesh. It's a recurring theme in video games and terrain simulation.
For terrain, see here: http://vterrain.org/LOD/Papers/ -- look for the ROAM video which is IIRC not only adaptive by distance, but also by view direction.
For non-terrain entities, there is a huge body of work (here's one example: Generic Adaptive Mesh Refinement).
I would suggest using OctoMap to handle large 3D data,
and extending it as shown here to handle geometrical properties.
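To make the octree idea concrete, below is a minimal sketch (plain C, not the OctoMap API) of the coarsening rule described in the question: a node whose eight children are leaves lying within a relative tolerance of their mean collapses into a single leaf holding that mean. The node layout and the tolerance handling are assumptions for illustration only.

#include <math.h>
#include <stdlib.h>

/* A node either stores a single value (leaf) or eight children. */
typedef struct OctNode {
    struct OctNode *child[8]; /* all NULL for a leaf */
    float value;              /* valid when the node is a leaf */
} OctNode;

static int is_leaf(const OctNode *n)
{
    return n->child[0] == NULL;
}

/* Collapses n into a leaf if all eight children end up as leaves whose
   values lie within `tol` (relative, e.g. 0.05 for 5%) of their mean.
   Returns 1 if n is a leaf after the call. */
static int coarsen(OctNode *n, float tol)
{
    if (is_leaf(n)) return 1;

    int all_leaves = 1;
    for (int i = 0; i < 8; ++i)
        all_leaves &= coarsen(n->child[i], tol); /* bottom-up */
    if (!all_leaves) return 0;

    float mean = 0.0f;
    for (int i = 0; i < 8; ++i)
        mean += n->child[i]->value;
    mean /= 8.0f;

    for (int i = 0; i < 8; ++i)
        if (fabsf(n->child[i]->value - mean) > tol * fabsf(mean) + 1e-12f)
            return 0; /* children disagree: keep this subdivision */

    for (int i = 0; i < 8; ++i) {
        free(n->child[i]);
        n->child[i] = NULL;
    }
    n->value = mean;
    return 1;
}

A lookup then walks down from the root and stops at the first leaf it meets, so coarse regions cost a single node while fine regions keep full resolution.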
