I am trying to build a function grapher.
The user enters xmin, xmax, ymin, ymax and a function.
I have the (x, y) values for all the points.
Now I want to map this initial coordinate system onto a canvas running from (0, 0) up to
(250, 250).
Is there a short way to do this, or should I just check cases, e.g.
if x < 0
    new_x = (x - xmin) * (250 / (xmax - xmin))
etc.?
Also, this basic approach does not optimise sampling.
For example, if my function is f(x) = 5, I don't need to sample the x range at 500 points;
I only need two points. I could do some heuristic checks.
But for a function like sin(2/x) I need denser sampling around x in (-1, 1). How would you approach such a thing?
Thanks
Instead of iterating over x in the original coordinates, iterate over the canvas and then transform back to the original coordinates:
for (int xcanvas = 0; xcanvas <= 250; xcanvas++) {
    double x = ((xmax - xmin) * xcanvas / 250.0) + xmin;
    double y = f(x);
    int ycanvas = (int)(250 * (y - ymin) / (ymax - ymin) + .5); // the +.5 rounds to the nearest pixel
    // Plot (xcanvas, ycanvas)
}
This gives you exactly one function evaluation for each column of the canvas.
You can estimate the derivative (if you have one) to see where the function changes quickly.
You can also use a dichotomic (bisection) approach: estimate the difference between adjacent samples and split the segment if it is too large.
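For instance, here is a minimal Java sketch of that dichotomic idea (the function f, the tolerance and the depth cap are placeholders of mine, not part of the answer above): sample the two endpoints, compare the function's value at the midpoint against the midpoint of the chord, and split while they differ too much.

public class AdaptiveSampler {
    // Example function; sin(2/x) needs dense sampling near x = 0.
    static double f(double x) { return Math.sin(2.0 / x); }

    // Stand-in for a real canvas call.
    static void drawLine(double x0, double y0, double x1, double y1) {
        System.out.printf("segment (%.4f, %.4f) -> (%.4f, %.4f)%n", x0, y0, x1, y1);
    }

    static void recursePlot(double x0, double x1, double tol, int depth) {
        double xm = 0.5 * (x0 + x1);
        double chordMid = 0.5 * (f(x0) + f(x1)); // midpoint of the straight chord
        double funcMid = f(xm);                  // actual function value there
        if (depth > 0 && Math.abs(funcMid - chordMid) > tol) {
            recursePlot(x0, xm, tol, depth - 1); // curve bends too much: split
            recursePlot(xm, x1, tol, depth - 1);
        } else {
            drawLine(x0, f(x0), x1, f(x1));      // flat enough: draw one segment
        }
    }

    public static void main(String[] args) {
        recursePlot(0.05, 2.0, 0.01, 12);        // emits many segments near 0, few near 2
    }
}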
I think I would start by reasoning about this in terms of transformations between the canvas and maths coordinate systems.
(canvas_x, canvas_y) -> (maths_x, maths_y)
(maths_x, maths_y) -> (canvas_x, canvas_y)
maths_x -> maths_y
You iterate over the points that are displayable, looping over canvas_x.
This would translate to some simple functions:
maths_x = maths_x_from_canvas_x(canvas_x, min_maths_x, max_maths_x)
maths_y = maths_y_from_maths_x(maths_x) # this is the function to be plotted.
canvas_y = canvas_y_from_maths_y(maths_y, min_maths_y, max_maths_y)
if (canvas_y not out of bounds) plot(canvas_x, canvas_y)
Once you get here, it's relatively simple to write these simple functions into code.
Optimize from here.
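As a hedged illustration (the 250-pixel canvas size matches the other answer; f and plot are stand-ins of mine), those simple functions might look like this in Java:

public class Grapher {
    static double f(double x) { return Math.sin(x); } // the function to be plotted
    static void plot(int cx, int cy) { /* draw a pixel on the canvas */ }

    static double mathsXFromCanvasX(int canvasX, double minX, double maxX) {
        return minX + (maxX - minX) * canvasX / 250.0;
    }

    static int canvasYFromMathsY(double mathsY, double minY, double maxY) {
        return (int) Math.round(250.0 * (mathsY - minY) / (maxY - minY));
    }

    public static void main(String[] args) {
        double minX = -5, maxX = 5, minY = -2, maxY = 2;
        for (int canvasX = 0; canvasX <= 250; canvasX++) {
            double mathsY = f(mathsXFromCanvasX(canvasX, minX, maxX));
            int canvasY = canvasYFromMathsY(mathsY, minY, maxY);
            if (canvasY >= 0 && canvasY <= 250) plot(canvasX, canvasY); // the bounds check from the pseudocode
        }
    }
}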
I think that for this approach you won't need to know much about sample frequencies, because you sample at a rate appropriate for the display. It wouldn't be optimal (your y = 5 case is a good example), but you'd be guaranteed never to sample more than you can display.
The goal is to find coordinates in a figure with an unknown shape. What IS known is a list of coordinates of the boundary of that figure, for example:
boundary = [(0,0),(1,0),(2,0),(3,0),(3,1),(3,2),(3,3),(2,3),(2,2),(1,2),(1,3),(0,3),(0,2),(0,1)]
which would look something like this:
[image: a square with a gap]
This is a very basic example, and I'd like to do it with very large lists of very different kinds of figures.
The question is how to get a random coordinate that lies within the figure WITHOUT hardcoding anything about the shape of the figure, because this will be unknown at the beginning. Is there a way to know for certain, or is making an estimate the best option? How would I implement an estimate like that?
Here is a tentative answer. You sample numbers in two steps.
Beforehand, do the preparation work: split your figure into simple elementary objects. In your case you split it into rectangles; often people triangulate and split it into triangles.
So you have some number N of simple objects, each with area A_i, and total area A = Sum(A_i).
First sampling step: select which rectangle to pick the point from.
In some pseudocode:
r = randomU01(); // random value in [0...1) range
for (i in N) {
    r = r - A_i / A;
    if (r <= 0) {
        k = i;
        break;
    }
}
So you have picked one rectangle, with index k; now just sample a point uniformly in that rectangle:
x = A_k.dim.x * randomU01();
y = A_k.dim.y * randomU01();
return (x + A_k.lower_left_corner.x, y + A_k.lower_left_corner.y);
And that is it. A very similar technique works for a triangulated figure.
Rectangle selection could be optimized by doing a binary search over the cumulative areas, or with the even more sophisticated alias method; a sketch of the binary-search variant follows.
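As a hedged sketch of that optimization (the class and method names are mine): precompute the cumulative areas once, then each sample costs O(log N) via java.util.Arrays.binarySearch.

import java.util.Arrays;
import java.util.Random;

// Sketch: O(log N) selection of a rectangle with probability proportional to its area.
// The cumulative-area idea follows the answer above; the rest is an assumption.
public class AreaPicker {
    final double[] cumulative; // cumulative[i] = A_0 + ... + A_i
    final Random rng = new Random();

    AreaPicker(double[] areas) {
        cumulative = new double[areas.length];
        double sum = 0;
        for (int i = 0; i < areas.length; i++) {
            sum += areas[i];
            cumulative[i] = sum;
        }
    }

    int pick() {
        double r = rng.nextDouble() * cumulative[cumulative.length - 1];
        int k = Arrays.binarySearch(cumulative, r);
        return k >= 0 ? k : -k - 1; // binarySearch returns -(insertion point) - 1 on a miss
    }
}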
UPDATE
If your boundary is generic, then the only good way to go is to triangulate your polygon using any good library out there (e.g., Triangle), then select one of the triangles based on area (step 1), and then sample a point uniformly in the triangle using two random U01 numbers r1 and r2:
P = (1 - sqrt(r1)) * A + (sqrt(r1)*(1 - r2)) * B + (r2*sqrt(r1)) * C
i.e., in pseudocode
r1 = randomU01();
s1 = sqrt(r1);
r2 = randomU01();
x = (1.0-s1)*A.x + s1*(1.0-r2)*B.x + r2*s1*C.x;
y = (1.0-s1)*A.y + s1*(1.0-r2)*B.y + r2*s1*C.y;
return (x,y);
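For reference, a self-contained Java version of this triangle-sampling step (the Point record and the use of ThreadLocalRandom are my assumptions):

import java.util.concurrent.ThreadLocalRandom;

// Sketch: uniform sampling inside triangle (A, B, C) using the formula above.
public class TriangleSampler {
    record Point(double x, double y) {}

    static Point samplePoint(Point a, Point b, Point c) {
        ThreadLocalRandom rng = ThreadLocalRandom.current();
        double s1 = Math.sqrt(rng.nextDouble()); // sqrt(r1)
        double r2 = rng.nextDouble();
        double x = (1.0 - s1) * a.x() + s1 * (1.0 - r2) * b.x() + r2 * s1 * c.x();
        double y = (1.0 - s1) * a.y() + s1 * (1.0 - r2) * b.y() + r2 * s1 * c.y();
        return new Point(x, y);
    }
}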
Is there any algorithm/method to find the smallest regular hexagon around a set of points (x, y)?
And by smallest I mean smallest area.
My current idea was to find the smallest circle enclosing the points, then create a hexagon from there and check whether all the points are inside, but that is starting to sound like a never-ending problem.
Requirements
First of all, let's define a hexagon as a quadruple [x0, y0, t0, s], where (x0, y0), t0 and s are its center, rotation and side-length, respectively.
Next, we need to find whether an arbitrary point is inside the hexagon. The following functions do this:
function getHexAlpha(t, hex)
    t = t - hex.t0;
    t = t - 2*pi * floor(t / (2*pi));
    return pi/2 - abs(rem(t, pi/3) - (pi/6));
end

function getHexRadious( P, hex )
    x = P.x - hex.x0;
    y = P.y - hex.y0;
    t = atan2(y, x);
    return hex.s * cos(pi/6) / sin(getHexAlpha(t, hex));
end

function isInHex(P, hex)
    r = getHexRadious(P, hex);
    d = sqrt((P.x - hex.x0)^2 + (P.y - hex.y0)^2);
    return r >= d;
end
Long story short, the getHexRadious function formulates the hexagon in polar form and returns the distance from the center of the hexagon to its boundary at each angle. Read this post for more details about the getHexAlpha and getHexRadious functions. This is how these work for a set of random points and an arbitrary hexagon (figure omitted).
The Algorithm
I suggest a two-stepped algorithm:
1- Guess an initial hexagon that covers most of the points :)
2- Tune s to cover all points
Chapter 1: (2) Following Tarantino in Kill Bill Vol.1
For now, let's assume that our arbitrary hexagon is a good guess. The following functions keep x0, y0 and t0 fixed and tune s to cover all points:
function getHexSide( P, hex )
    x = P.x - hex.x0;
    y = P.y - hex.y0;
    r = sqrt(x^2 + y^2);
    t = atan2(y, x);
    return r / (cos(pi/6) / sin(getHexAlpha(t, hex)));
end

function findMinSide( P[], hex )
    for all P[i] in P
        S[i] = getHexSide(P[i], hex);
    end
    return max(S[]);
end
The getHexSide function is the reverse of getHexRadious. It returns the minimum required side-length for a hexagon with x0, y0, t0 to cover the point P. This is the outcome for the previous test case (figure omitted).
Chapter 2: (1)
As a guess, we can find the two points furthest away from each other and fit one of the hexagon's diameters on them:
function guessHex( P[] )
    D[,] = pairwiseDistance(P[]);
    [i, j] = indexOf(max(D[,]));  % indices of the two points furthest apart
    hex.x0 = (P[i].x + P[j].x) / 2;
    hex.y0 = (P[i].y + P[j].y) / 2;
    hex.s  = D[i, j] / 2;
    hex.t0 = atan2(P[i].y - hex.y0, P[i].x - hex.x0);
    return hex;
end
Although this method can find a relatively small hexagon, as a greedy approach it never guarantees finding the optimal solution.
Chapter 3: A Better Guess
Well, this problem is definitely an optimization problem, with its objective being to minimize the area of the hexagon (or the s variable). I don't know whether it has an analytical solution, and SO is not the right place to discuss that. But any optimization algorithm can be used to provide a better initial guess. I used a genetic algorithm (GA) to solve this, with findMinSide as its cost function. In effect, the GA generates many guesses for x0, y0 and t0, and the best one is selected. It finds better results but is more time-consuming. Still no guarantee of finding the optimum!
Optimization of Optimization
When it comes to optimization algorithms, performance is always an issue. Keep in mind that the hexagon only needs to enclose the convex hull of the points. If you are dealing with large sets of points, it's better to compute the convex hull first and get rid of the rest of the points, as in the sketch below.
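A hedged sketch of that pre-processing step, using Andrew's monotone chain algorithm (my choice of hull algorithm and the Point record are assumptions, not part of the answer above):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class Hull {
    record Point(double x, double y) {}

    // Cross product of (a - o) and (b - o); > 0 means a counter-clockwise turn.
    static double cross(Point o, Point a, Point b) {
        return (a.x() - o.x()) * (b.y() - o.y()) - (a.y() - o.y()) * (b.x() - o.x());
    }

    // Andrew's monotone chain, O(n log n): returns the hull counter-clockwise.
    static List<Point> convexHull(Point[] pts) {
        Point[] p = pts.clone();
        if (p.length < 3) return new ArrayList<>(Arrays.asList(p));
        Arrays.sort(p, Comparator.comparingDouble(Point::x).thenComparingDouble(Point::y));
        List<Point> hull = new ArrayList<>();
        for (Point pt : p) {                        // build the lower hull
            while (hull.size() >= 2 && cross(hull.get(hull.size() - 2), hull.get(hull.size() - 1), pt) <= 0)
                hull.remove(hull.size() - 1);
            hull.add(pt);
        }
        int lower = hull.size() + 1;
        for (int i = p.length - 2; i >= 0; i--) {   // build the upper hull
            while (hull.size() >= lower && cross(hull.get(hull.size() - 2), hull.get(hull.size() - 1), p[i]) <= 0)
                hull.remove(hull.size() - 1);
            hull.add(p[i]);
        }
        hull.remove(hull.size() - 1);               // last point repeats the first
        return hull;
    }
}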
So I'm currently working on a Java Processing program where I want to simulate high numbers of particles interacting with collision and gravity. This obviously causes some performance issues when the particle count gets high, so I try my best to optimize and avoid expensive operations such as square root, otherwise used in finding the distance between two points.
However, now I'm wondering how I could write the algorithm that figures out the direction a particle should move, given that it only knows the distance squared and the difference between the particles' x and y (dx, dy).
Here's a snippet of the code (yes, I know I should use vectors instead of separate x/y pairs; yes, I know I should eventually handle particles with grids and clusters for further optimization). Anyways:
void applyParticleGravity(){
    int limit = 2*particleRadius + 1; // Gravity no longer applied if particles are within collision reach of each other.
    float ax, ay, bx, by, dx, dy;
    float distanceSquared, f;
    float gpp = GPP; // Constant is used, since the simulation currently assumes all particles have equal mass: GPP = gravity constant * particle mass * particle mass
    Vector direction = new Vector();
    Particle a, b;
    int nParticles = particles.size(); // "particles" is an ArrayList of Particle objects, each storing an x/y coordinate and velocity.
    for (int i = 0; i < nParticles - 1; i++){
        a = particles.get(i);
        ax = a.x;
        ay = a.y;
        for (int j = i + 1; j < nParticles; j++){
            b = particles.get(j);
            bx = b.x;
            by = b.y;
            dx = ax - bx;
            dy = ay - by;
            if (Math.abs(dx) > limit || Math.abs(dy) > limit){ // Not within collision reach of each other
                distanceSquared = dx*dx + dy*dy; // Avoiding square roots
                f = gpp / distanceSquared; // Gravity formula: Force = G*(m1*m2)/d^2
                // Perform some trigonometric magic to decide direction.x and direction.y as a number between -1 and 1.
                a.fx += f*direction.x; // Adds force to particle. At the end of the main iteration, x-position is increased by fx/mass and so forth.
                a.fy += f*direction.y;
                b.fx -= f*direction.x; // Apply inverse force to the other particle (Newton's 3rd law)
                b.fy -= f*direction.y;
            }
        }
    }
}
Is there a more accurate way of deciding the x and y pull strengths with some trigonometric magic, without killing performance when there are several hundred particles? Something I thought about was doing some sort of (int) dx/dy lookup with the % operator, to get an index into a pre-calculated array of values.
Anyone have a clue? Thanks!
hehe, I think we're working on the same kind of thing, except I'm using HTML5 canvas. I came across this while trying to figure out the same thing. I didn't find anything, but I figured out what I was going for, and I think it will work for you too.
You want a unit vector that points from one particle to the other. Its length will be 1, and its x and y will be between -1 and 1. Then you take this unit vector and multiply it by your force scalar, which you're already calculating.
To "point at" one particle from another, without using square root, first get the heading (in radians) from particle1 to particle2:
heading = Math.atan2(dy, dx)
Note that y comes first: the signature is Math.atan2(y, x) in both Java and JavaScript.
Get the x and y components of this heading using cos/sin:
direction.x = Math.cos(heading)
direction.y = Math.sin(heading)
You can see an example here:
https://github.com/nijotz/triforces/blob/c7b85d06cf8a65713d9b84ae314d5a4a015876df/src/cljs/triforces/core.cljs#L41
It's Clojurescript, but it may help.
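Putting it together with the question's loop, here is a hedged sketch (the Particle fields and the variables dx, dy, f mirror the question's snippet; everything else is an assumption). Note the sign: dx = a.x - b.x points from b toward a, so the attractive pull on a uses the opposite heading.

// Hedged sketch; Particle mirrors the fields used in the question's snippet.
class Particle { double x, y, fx, fy; }

class GravityDirection {
    // dx = a.x - b.x, dy = a.y - b.y, f = gpp / distanceSquared, as in the question.
    static void applyPull(Particle a, Particle b, double f, double dx, double dy) {
        double heading = Math.atan2(-dy, -dx); // angle pointing from a toward b
        double dirX = Math.cos(heading);       // unit-vector components in [-1, 1]
        double dirY = Math.sin(heading);
        a.fx += f * dirX;                      // pull a toward b
        a.fy += f * dirY;
        b.fx -= f * dirX;                      // equal and opposite (Newton's 3rd law)
        b.fy -= f * dirY;
    }
}

Bear in mind that atan2, cos and sin are typically no cheaper than the single Math.sqrt they replace, so it's worth profiling both variants.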
I have a particle distribution, i.e. a set of 3D arrays x, y and z that give the positions of N particles. I divide my domain into cells, and I would like to program an algorithm which tells me how many particles there are in each cell.
I am looking for something that doesn't use too much memory. If the distribution of particles were one-dimensional, a smart idea would be to sort the particles by decreasing x.
That way we only need to save, for every cell, the particle with the smallest x within the cell. For example, if I know that the 7th particle is the one with the smallest x belonging to cell i, then cell i contains particles 0 to 7.
My question is: how can I extend this to 3D? Or, how can I build a chaining mesh?
This is not a trivial problem. You might want to look at R-trees and indeed Spatial databases in general.
I think your problem can be solved much easier.
Make a 3D array of cells. Loop through your particles and increment the value of the cell the current particle belongs to.
Sample code:
cells = int[X][Y][Z]
for p in particles:
    cx = cast_to_int((p.x / maxX) * X)
    cy = cast_to_int((p.y / maxY) * Y)
    cz = cast_to_int((p.z / maxZ) * Z)
    cells[cx][cy][cz]++
UPD: this works only if all cells have the same size along each axis (i.e. x1 = x2 = ... = xn, y1 = y2 = ... = yn, ...).
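A minimal Java sketch of the same counting loop (the array layout and the clamp for points lying exactly on the upper boundary are my additions, not from the answer above):

// Sketch: histogram particles into X*Y*Z equally sized cells.
// xs, ys, zs hold particle coordinates in [0, maxX] x [0, maxY] x [0, maxZ].
public class CellCounter {
    static int[][][] countParticles(double[] xs, double[] ys, double[] zs,
                                    double maxX, double maxY, double maxZ,
                                    int X, int Y, int Z) {
        int[][][] cells = new int[X][Y][Z];
        for (int p = 0; p < xs.length; p++) {
            int cx = Math.min((int) (xs[p] / maxX * X), X - 1); // clamp p.x == maxX
            int cy = Math.min((int) (ys[p] / maxY * Y), Y - 1);
            int cz = Math.min((int) (zs[p] / maxZ * Z), Z - 1);
            cells[cx][cy][cz]++;
        }
        return cells;
    }
}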
I am using the diamond-square algorithm to generate random terrain.
It works fine except I get these large cone shapes either sticking out of or into the terrain.
The problem seems to be that every now and then a point gets set either way too high or way too low.
Here is a picture of the problem
And it can be better seen when I set the smoothness really high
And here is my code -
private void CreateHeights()
{
    if (cbUseLand.Checked == false)
        return;

    int
        Size = Convert.ToInt32(System.Math.Pow(2, int.Parse(tbDetail.Text)) + 1),
        SideLength = Size - 1,
        d = 1025 / (Size - 1),
        HalfSide;

    Heights = new Point3D[Size, Size];

    float
        r = float.Parse(tbHeight.Text),
        Roughness = float.Parse(RoughnessBox.Text);

    //seeding all the points
    for (int x = 0; x < Size; x++)
        for (int y = 0; y < Size; y++)
            Heights[x, y] = Make3DPoint(x * d, 740, y * d);

    while (SideLength >= 2)
    {
        HalfSide = SideLength / 2;

        for (int x = 0; x < Size - 1; x = x + SideLength)
        {
            for (int y = 0; y < Size - 1; y = y + SideLength)
            {
                Heights[x + HalfSide, y + HalfSide].y =
                    (Heights[x, y].y +
                     Heights[x + SideLength, y].y +
                     Heights[x, y + SideLength].y +
                     Heights[x + SideLength, y + SideLength].y) / 4 - r + ((float)(random.NextDouble() * r) * 2);
            }
        }

        for (int x = 0; x < Size - 1; x = x + SideLength)
        {
            for (int y = 0; y < Size - 1; y = y + SideLength)
            {
                if (y != 0)
                    Heights[x + HalfSide, y].y = (Heights[x, y].y + Heights[x + SideLength, y].y + Heights[x + HalfSide, y + HalfSide].y + Heights[x + HalfSide, y - HalfSide].y) / 4 - r + ((float)(random.NextDouble() * r) * 2);
                if (x != 0)
                    Heights[x, y + HalfSide].y = (Heights[x, y].y + Heights[x, y + SideLength].y + Heights[x + HalfSide, y + HalfSide].y + Heights[x - HalfSide, y + HalfSide].y) / 4 - r + ((float)(random.NextDouble() * r) * 2);
            }
        }

        SideLength = SideLength / 2;
        r = r / Roughness;
    }
}
Gavin S. P. Miller gave a SIGGRAPH '86 talk about how Fournier, Fussell & Carpenter's original algorithm was fundamentally flawed, so what you're seeing is normal for any naive implementation of the diamond-square algorithm. You will require a separate approach for smoothing, either after each diamond-square compound step, or as a post-process to all diamond-square iterations (or both). Miller addressed this. Weighting with box or Gaussian filtering is one option; another is seeding the initial array to a greater degree than just the initial 4 points (i.e., replicating the result sets of the first few diamond-square steps, either manually or with some built-in intelligence, but supplying unbiased values). The more initial information you give the array before increasing the detail using diamond-square, the better your results will be.
The reason appears to be in how the Square step is performed. In the Diamond step, we've taken the average of the four corners of a square to produce that square's centre. Then, in the subsequent Square step, we take the average of four orthogonally-adjacent neighbours, one of which is the square's centre point we just produced. Can you see the problem? Those original corner height values are contributing too much to the subsequent diamond-square iteration, because they are contributing both through their own influence AND through the midpoint that they created. This causes the spires (extrusive and intrusive), because locally-derived points tend more strongly toward those early points... and because (typically 3) other points do as well, this creates "circular" influences around those points, as you iterate to higher depths using Diamond-Square. So these kinds of "aliasing" issues only appear when the initial state of the array is underspecified; in fact, the artifacting that occurs can be seen as a direct geometric consequence of using only 4 points to start with.
You can do one of the following:
Do local filtering -- generally expensive.
Pre-seed the initial array more thoroughly -- requires some intelligence.
Never smooth too many steps down from a given set of initial points. This applies even if you do seed the initial array; it's all just a matter of relative depths, in conjunction with your own maximum displacement parameters.
I believe the size of the displacement r in each iteration should be proportional to the size of the current rectangle. The logic behind this is that a fractal surface is scale invariant, so the variation in height in any rectangle should be proportional to the size of that rectangle.
In your code, the variation in height is proportional to r, so you should keep r proportional to your current grid size. In other words: multiply r by the roughness before the loop and divide r by 2 in each iteration.
So, instead of
r = r / Roughness;
you should write
r = r / 2;
The actual flaw in the above algorithm is an error in conceptualization and implementation. Diamond-square as an algorithm has some artifacting, but these are range-based artifacts: the technical maximum for some pixels is higher than for others. Some pixels are given their values directly by the randomness, while others acquire theirs through the diamond and square midpoint-interpolation steps.
The error here is that you started from zero and repeatedly added the displacement to the current value. This causes the range of the diamond-square output to start at zero and extend only upwards. It must actually start at zero and go both up and down, depending on the randomness; then the top of the range won't matter. But if you don't realize this and naively implement everything as additions to the value, rather than starting at zero and fluctuating from there, you will expose the hidden artifacts.
Miller's notes were right, but the flaw is generally hidden within the noise. This implementation shows those problems. That is NOT normal, and it can be fixed in a few different ways. This was one of the reasons why, after I extended this algorithm to remove all the memory and size restrictions and made it infinite and deterministic, I still switched away from the core idea here (the problems extending it to 3D and optimizing for GPUs also played a role).
Instead of just smoothing with an average, you can use a 2-D median filter to take out the extremes. It is simple to implement, and usually achieves the desired effect when there is a lot of noise. For example:
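A hedged sketch of such a filter over a plain height array (a float[][] here rather than the question's Point3D[,]; border cells are simply copied unchanged):

import java.util.Arrays;

// Sketch: 3x3 median filter over a height map to remove spike artifacts.
public class MedianFilter {
    static float[][] medianFilter3x3(float[][] h) {
        int rows = h.length, cols = h[0].length;
        float[][] out = new float[rows][cols];
        for (int i = 0; i < rows; i++)
            out[i] = h[i].clone();                  // borders keep their values
        float[] window = new float[9];
        for (int i = 1; i < rows - 1; i++) {
            for (int j = 1; j < cols - 1; j++) {
                int k = 0;
                for (int di = -1; di <= 1; di++)    // gather the 3x3 neighbourhood
                    for (int dj = -1; dj <= 1; dj++)
                        window[k++] = h[i + di][j + dj];
                Arrays.sort(window);
                out[i][j] = window[4];              // median of 9 values
            }
        }
        return out;
    }
}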