I'm looking to adapt the 3D Perlin noise algorithm to lower dimensions, but I'm having trouble with the gradient function, since I don't fully understand the reasoning.
The original Perlin gradient function takes four arguments: a hash and a three-dimensional coordinate (x, y, z). The return value is selected based on the value of hash mod 16, as listed below.
0: x + y
1: -x + y
2: x - y
3: -x - y
4: x + z
5: -x + z
6: x - z
7: -x - z
8: y + z
9: -y + z
10: y - z
11: -y - z
12: y + x
13: -y + z
14: y - x
15: -y - z
The return values for 0 to 11 form a clear pattern, since every sign combination of two coordinates is represented exactly once. The last four, however, are duplicates. Why were these particular expressions chosen for the last four cases? And what would be the analogous choices for two dimensions (x, y) and for one dimension (x)?
... is a late answer better than none? ;-)
The grad function in the "improved noise" implementation calculates a dot product between the vector (x, y, z) and a pseudo-random gradient vector.
In this implementation, the gradient vector is selected from 12 options.
Uniformity of the selection is dropped and cases 12 to 15 are added as duplicates, because hash & 15 is faster to compute than hash % 12.
For 2D Perlin noise I have used only 4 gradient vectors, without any visible problems, like this:
return ((hash & 1) ? x : -x) + ((hash & 2) ? y : -y);
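In Python, the 2D and 1D analogues of that trick could look something like this (a sketch of the idea only; grad2 and grad1 are illustrative names, not part of Perlin's reference code):

def grad2(h, x, y):
    # 2D analogue: the two low hash bits pick one of the 4 diagonal gradients
    # (1,1), (1,-1), (-1,1), (-1,-1); the result is the dot product with (x, y)
    return (x if h & 1 else -x) + (y if h & 2 else -y)

def grad1(h, x):
    # 1D analogue: the "gradient" degenerates to a sign, +1 or -1
    return x if h & 1 else -x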
I have a y = sin(x) curve, and x is between 0 and pi (so there are no negative values).
I want to divide the area under the curve into n equal pieces and get the (largest) x value of each piece.
Any ideas for an algorithm would be appreciated.
The area under the curve is its integral. The integral of sin(x) from 0 to t is 1-cos(t), so the integral from 0 to π is 2. Inverting that formula gives the point t at which the accumulated area reaches a given value u: t = acos(1-u). So we're looking for the values t = acos(1-u) for the values of u that divide [0, 2] into n equal parts.
In code:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-0.2, 3.3, 500)
y = np.sin(x)
plt.plot(x, y)
n = 7
u = np.linspace(0, 2, n + 1, endpoint=True)
t = np.arccos(1 - u)
print("The limits of the areas are:", list(t))
colors = plt.cm.Set2.colors
for i in range(n):
    filter = (x > t[i]) & (x <= t[i + 1])
    plt.fill_between(x[filter], 0, y[filter], color=colors[i])
plt.xticks(t)
plt.gca().spines['bottom'].set_position('zero')
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.tight_layout()
plt.show()
Though this is not an entirely rigorous answer, maybe it will be interesting.
Dividing the area in question into three equal parts:
On the base OA, taken as the hypotenuse, build a right-angled triangle with leg ratio OC:AC = 4:5. Raise a vertical from the point C, then mirror that vertical symmetrically to the right side. The division is complete, with an error of about 1%.
Now, on the merits. One must use a recurrence: X(0) = 0, X(i+1) = X(i) + Δ(i+1), where Δ(i+1) = arccos( sqrt(p(i)^2 - q(i)) - p(i) ), p(i) = cos(X(i)) * (2/n - cos(X(i))), q(i) = cos(2*X(i)) + (4/n) * (1/n - cos(X(i))).
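As a quick sanity check (my own sketch, not part of the answer), the recurrence reproduces the same division points as the closed form t = arccos(1 - 2*i/n) used in the other answer:

import math

n = 7
X = [0.0]
for i in range(n):
    c = math.cos(X[-1])
    p = c * (2 / n - c)
    q = math.cos(2 * X[-1]) + (4 / n) * (1 / n - c)
    root = math.sqrt(max(p * p - q, 0.0))             # clamp tiny negative rounding errors
    delta = math.acos(max(-1.0, min(1.0, root - p)))  # step to the next division point
    X.append(X[-1] + delta)

# compare with the direct formula t_i = arccos(1 - 2*i/n)
direct = [math.acos(1 - 2 * i / n) for i in range(n + 1)]
print(max(abs(a - b) for a, b in zip(X, direct)))     # tiny (rounding error), i.e. the same points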
Given a line with equation y = mx + c, where m is the gradient and c is the y-intercept, how would I determine the "edge-points" on a graph?
To clarify what I mean by "edge-points", I've added an example below.
The edge-points are circled in red. To determine the edge-points here, they would simply be (0, c) & (maximum x-value, m * maximum x-value + c). However, the problem arises when I consider lines with a different m value. For example:
I can't apply the same logic here; instead the edge-points would be ((maximum y-value - c) / m, maximum y-value) & (-c / m, 0), which follows from rearranging y = mx + c. So my question is: how would I determine these 2 edge-points given any m or c? Is there a certain pattern I'm not seeing here?
You need to solve 4 simple linear equations (not a system of equations!):
y = m * 0 + c = c
y = m * maxX + c
0 = m * x + c
maxY = m * x + c
and get the points of intersection with the axes and with the max lines. Then filter out points with negative coordinates and too-large ones, because you want only the first-quadrant rectangle.
The first equation is already solved: y = c.
The second gives the point of intersection with the right vertical line.
The third gives the point of intersection with the OX axis.
The fourth gives the point of intersection with the top horizontal line.
Example:
maxX = 5
maxY = 5
line y = 2 * x - 1
x0, y0 = 0, -1
x1, y1 = 5, 9
x2, y2 = 1/2, 0
x3, y3 = 3, 5
The first pair contains a negative coordinate, y = -1.
The second pair contains y = 9 > maxY.
The third and fourth pairs fulfill your constraints.
So this line gives the segment (1/2, 0)-(3, 5) (like the near-vertical segment in your second picture).
This algorithm might be considered a simple kind of line clipping by a rectangle.
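A small Python sketch of this approach (my own illustration; clip_line is a hypothetical helper name): compute the four candidate intersection points and keep the ones inside the rectangle.

def clip_line(m, c, max_x, max_y, eps=1e-9):
    """Return the end points of y = m*x + c inside the rectangle [0, max_x] x [0, max_y]."""
    candidates = [
        (0.0, c),                       # intersection with the left edge  (x = 0)
        (max_x, m * max_x + c),         # intersection with the right edge (x = max_x)
    ]
    if m != 0:                          # horizontal lines never cross y = 0 or y = max_y
        candidates.append((-c / m, 0.0))             # intersection with the bottom edge
        candidates.append(((max_y - c) / m, max_y))  # intersection with the top edge
    inside = [(x, y) for x, y in candidates
              if -eps <= x <= max_x + eps and -eps <= y <= max_y + eps]
    return sorted(set(inside))          # usually two distinct points (or none)

print(clip_line(2, -1, 5, 5))           # [(0.5, 0.0), (3.0, 5.0)] for the example above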
For the line to be in the given rectangle there is a constraint given by the x values and a constraint given by the y values.
The x constraint trivially leads to an interval in which the x values must be.
The y constraint also gives you such an interval for the x values but only after some easy calculation.
Now determine the intersection of the two intervals (which may also be empty).
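In Python that interval view could look like this (again my own sketch, assuming m > 0 so the interval endpoints stay ordered):

def x_interval(m, c, max_x, max_y):
    """x range where the line y = m*x + c stays inside [0, max_x] x [0, max_y]; assumes m > 0."""
    lo_x, hi_x = 0.0, max_x                       # constraint from the x values
    lo_y, hi_y = -c / m, (max_y - c) / m          # constraint from 0 <= y <= max_y
    lo, hi = max(lo_x, lo_y), min(hi_x, hi_y)     # intersection of the two intervals
    return (lo, hi) if lo <= hi else None         # None means the line misses the rectangle

print(x_interval(2, -1, 5, 5))                    # (0.5, 3.0) for the example above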
This question is for learning purposes. I am writing my own function to plot an equation. For example:
function e(x) { return sin(x); }
plot(e);
I wrote a plot function that takes a function as a parameter. The plotting code is simple: x runs from some value to another, increasing by a small step. This is the plot that plot() manages to produce:
But there is a problem: it cannot express an equation like the circle x^2 + y^2 = 1. So the question is how the plot and equation functions should look to be able to handle two variables.
Note that I am not only interested in the circle equation; I am after a more general way of plotting a relation in two variables.
Well, to plot a non-function 1D equation (in the variables x, y) you have 3 choices:
convert to parametric form
so for example x^2 + y^2 = 1 will become:
x = cos(t);
y = sin(t);
t = <0,2*PI>
So plot each function as a 1D function plot, with t used as the parameter. But for this you need to exploit mathematical identities and substitute ... That is not easily done programmatically.
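For example, a minimal matplotlib sketch of this parametric route for the unit circle (my own illustration, not from the answer):

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)   # parameter t = <0, 2*PI>
plt.plot(np.cos(t), np.sin(t))       # x = cos(t), y = sin(t)
plt.gca().set_aspect('equal')
plt.show()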
convert to 1D functions
Non-function means you get more than one y value for some x values. If you split your equation into intervals covering all cases of the whole plot, then you can plot each derived function instead.
So you derive y algebraically (let's assume the unit circle again):
x^2 + y^2 = 1
y^2 = 1 - x^2
y = +/- sqrt (1 - x^2)
----------------------
y1 = +sqrt (1 - x^2)
y2 = -sqrt (1 - x^2)
x = <-1,+1>
This is also not easily done programmatically, but it is an order of magnitude easier than #1.
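A minimal matplotlib sketch of plotting the two derived branches (my own illustration):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 400)          # x = <-1, +1>
y1 = +np.sqrt(1 - x**2)              # upper branch
y2 = -np.sqrt(1 - x**2)              # lower branch
plt.plot(x, y1)
plt.plot(x, y2)
plt.gca().set_aspect('equal')
plt.show()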
do a 2D plot using the equation as a predicate
Simply loop your view through all pixels and render only those for which the equation holds. So again the unit circle:
for (x=-1.0;x<=+1.0;x+=0.001)
  for (y=-1.0;y<=+1.0;y+=0.001)
    if (fabs((x*x)+(y*y)-1.0)<=1e-6)
      plot_pixel(x,y,some_color); // x,y should be rescaled and offset to the actual plot view
So you just convert your equation to implicit form:
x^2 + y^2 = 1
-----------------
x^2 + y^2 - 1 = 0
and compare to zero with some threshold (to avoid FPU accuracy problems):
| x^2 + y^2 - 1 | <= threshold_near_zero
The threshold is half the plot line width, so this way you can easily change the plot width to any pixel size... As you can see, this is easily done programmatically, but the plot is slower because you need to loop through all the pixels of the plot view. The step of the x, y loops should match the pixel size at the view scale.
Also, while using the equation as a predicate you should handle math singularities, since with blind probing you will most likely hit some, like division by zero or domain errors for asin, acos, sqrt, etc.
So for an arbitrary 1D non-function use #3, unless you have some mighty symbolic math engine for #1 or #2.
Definition of a function: a function f takes an input x and returns a single output f(x).
This means that for any input there is one and only one output, like y = sin(x); this is a function of x, and y denotes that function's value.
For an equation like (x*x) + (y*y) = 1 there are two possible values of y for a single value of x, hence it cannot be termed a function.
If you need to draw it, one possible solution is to plot two points for each value of x, i.e. sqrt(1-(x*x)) and -1*sqrt(1-(x*x)). Plot both values (one will be positive, the other negative, with the same absolute value).
I would like to convert point coordinates to a newly generated coordinate system.
The original system starts at the top-left corner of the image, (0,0).
The information that I have about the new system is:
1- I have the location of the new origin (x0,y0) somewhere in the image;
2- I also have 2 points on each of the new axes (4 points in total, 2 on each line);
using this I can calculate the line equation for the 2 axis lines, (y=a1x+b1) and (y=a2x+b2);
3- I have a direction vector for each line (Vx, Vy).
Note: sometimes the new axes are rotated (the lines are not exactly horizontal or vertical).
How can I convert point coordinates to this new system?
Any help will be much appreciated.
here is the image
First express your lines as a1*(x-x0)+b1*(y-y0)=0 and a2*(x-x0)+b2*(y-y0)=0, so that their intersection (x0, y0) is already accounted for in the equations.
The transformation from x,y to z,w is
z = -sqrt(a1^2+b1^2)*(a2*(x-x0)+b2*(y-y0))/(a2*b1-a1*b2)
w = sqrt(a2^2+b2^2)*(a1*(x-x0)+b1*(y-y0))/(a1*b2-a2*b1)
and the inverse
x = x0 - b1*z/sqrt(a1^2+b1^2) + b2*w/sqrt(a2^2+b2^2)
y = y0 + a1*z/sqrt(a1^2+b1^2) - a2*w/sqrt(a2^2+b2^2)
It would be helpful to scale the coefficients such that sqrt(a1^2+b1^2)=1 and sqrt(a2^2+b2^2)=1.
Note that this also works for non-orthogonal lines. As long as they are not parallel, i.e. a2*b1-a1*b2 != 0, it is going to work.
Example
The z line (-2)*(x-3) + (1)*(y-1) = 0 and w line (-1)*(x-3) + (-4)*(y-1) = 0 meet at (3,1). The coefficients are thus a1=-2, b1=1, a2=-1, b2=-4.
The coordinates (x,y)=(2,1) transform to
z = -sqrt((-2)^2+1^2) * ((-1)*(x-3) + (-4)*(y-1)) / ((-1)*1 - (-2)*(-4)) = 0.2484
w = sqrt((-1)^2+(-4)^2) * ((-2)*(x-3) + 1*(y-1)) / ((-2)*(-4) - (-1)*1) = 0.9162
With the inverse
x = -1*z/sqrt((-2)^2+1^2) + (-4)*w/sqrt((-1)^2+(-4)^2) + 3 = 2
y = (-2)*z/sqrt((-2)^2+1^2) - (-1)*w/sqrt((-1)^2+(-4)^2) + 1 = 1
Development
For a line a1*(x-x0)+b1*(y-y0)=0 the direction vector along the line is e1 = [e1x,e1y]= [-b1/sqrt(a1^2+b1^2),a1/sqrt(a1^2+b1^2)]. Similarly for the other line.
The screen coordinates of a local point [z,w] are found by starting at the origin x0, y0 and moving by z along the first line and then by w along the second line. So
x = x0 + e1x*z + e2x*w = x0 -b1/sqrt(a1^2+b1^2)*z - b2/sqrt(a2^2+b2^2)*w
y = y0 + e1y*z + e2y*w = y0 +a1/sqrt(a1^2+b1^2)*z + a2/sqrt(a2^2+b2^2)*w
Now I need to flip the direction of the second line to make it work per the original posting visualization, by reversing the sign of w.
To find z, w from x, y invert the above two equations.
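Here is a small Python transcription of these formulas (my own sketch; make_transform is an illustrative name), checked against the example above:

import math

def make_transform(a1, b1, a2, b2, x0, y0):
    n1 = math.hypot(a1, b1)
    n2 = math.hypot(a2, b2)
    det = a2 * b1 - a1 * b2                      # must be non-zero (lines not parallel)

    def to_local(x, y):
        z = -n1 * (a2 * (x - x0) + b2 * (y - y0)) / det
        w =  n2 * (a1 * (x - x0) + b1 * (y - y0)) / (-det)
        return z, w

    def to_screen(z, w):
        x = x0 - b1 * z / n1 + b2 * w / n2
        y = y0 + a1 * z / n1 - a2 * w / n2
        return x, y

    return to_local, to_screen

# the example: a1=-2, b1=1, a2=-1, b2=-4, origin (3, 1)
to_local, to_screen = make_transform(-2, 1, -1, -4, 3, 1)
print(to_local(2, 1))               # ~(0.2485, 0.9162)
print(to_screen(*to_local(2, 1)))   # back to ~(2.0, 1.0)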
Trilinear interpolation approximates the value of a point (x, y, z) inside a cube using the values at the cube vertices. I'm trying to do an "inverse" trilinear interpolation. Knowing the values at the cube vertices and the value attached to a point, how can I find (x, y, z)? Any help would be highly appreciated. Thank you!
You are solving for 3 unknowns given 1 piece of data, and as you are using a linear interpolation your answer will typically be a plane (2 free variables). Depending on the cube there may be no solutions or a 3D solution space.
I would do the following. Let v be the initial value. For each of the 12 "edges" (pairs of adjacent vertices) of the cube, look to see whether one vertex is >= v and the other <= v; call this an edge that crosses v.
If no edges cross v, then there are no possible solutions.
Otherwise, for each edge that crosses v, if both vertices for the edge equal v, then the whole edge is a solution. Otherwise, linearly interpolate on the edge to find the point that has a value of v. So suppose the edge is (x1, y1, z1)->v1 <= v <= (x2, y2, z2)->v2.
s = (v-v1)/(v2-v1)
(x,y,z) = (s*(x2-x1)+x1, s*(y2-y1)+y1, s*(z2-z1)+z1)
This will give you all edge points that are equal to v. This is a solution, but possibly you want an internal solution - be aware that if there is an internal solution there will always be an edge solution.
If you want an internal solution then just take any point linearly between the edge solutions - as you are linearly interpolating then the result will also be v.
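A short Python sketch of the edge-crossing step described above (my own illustration; the unit-cube vertex layout and the edge_solutions name are assumptions, and the degenerate case where both vertex values equal v is skipped):

def edge_solutions(corners, v):
    """corners: dict mapping each cube vertex (x, y, z) with 0/1 coordinates to its value.
    Returns points on cube edges whose interpolated value equals v."""
    verts = list(corners)
    # the 12 edges are the pairs of vertices that differ in exactly one coordinate
    edges = [(p, q) for i, p in enumerate(verts) for q in verts[i + 1:]
             if sum(a != b for a, b in zip(p, q)) == 1]
    points = []
    for p, q in edges:
        v1, v2 = corners[p], corners[q]
        if min(v1, v2) <= v <= max(v1, v2) and v1 != v2:   # edge crosses v (v1 == v2 == v skipped)
            s = (v - v1) / (v2 - v1)
            points.append(tuple(a + s * (b - a) for a, b in zip(p, q)))
    return points

cube = {(x, y, z): x + 2 * y + 4 * z for x in (0, 1) for y in (0, 1) for z in (0, 1)}
print(edge_solutions(cube, 3.5))   # points on the edges where the value is 3.5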
I'm not sure you can in all cases. For example, using tri-linear filtering for colours where the colour (C) at each vertex is identical means that wherever you interpolate you will still get the colour C back. In this situation ANY x,y,z could be valid, so it would be impossible to say for certain what the original interpolation point was.
I'm sure for some cases you can reverse the maths but, I imagine, there are far too many cases where this is impossible to do without knowing more of the input information.
Good luck, I hope someone will prove me wrong :)
The Wikipedia page for trilinear interpolation has a link to a NASA page which allegedly describes the inversion process - have you had a look at that?
The problem as you're describing it is somewhat ill-defined.
What you're asking for basically translates to this: I have a 3D function and I know its values in 8 known points. I'd like to know what is the point in which the function received value V.
The trouble is that in all likelihood there is an infinite number of such points, which form a set of surfaces, lines or points, depending on the data.
One way to find this set is to use an iso-surfacing algorithm like Marching cubes.
Let's start with 2d: think of a bilinear hill over a square km,
with heights say 0 10 20 30 at the 4 corners
and a horizontal plane cutting the hill at height z.
Draw a line from the 0 corner to the 30 corner (whether adjacent or diagonal).
The plane must cut this line, for any z,
so all points x,y,z fall on this one line, right ? Hmm.
OK, there are many solutions -- any z plane cuts the hill in a contour curve.
Say we want solutions to be spread out over the whole hill,
i.e. minimize two things at once:
vertical distance z - bilin(x,y),
distance from x,y to some point in the square.
Scipy.optimize.leastsq is one way of doing this, sample code below;
trilinear is similar.
(Optimizing any two things at once requires an arbitrary tradeoff or weighting:
food vs. money, work vs. play ...
Cf. Bounded rationality
)
""" find x,y so bilin(x,y) ~ z and x,y near the middle """
from __future__ import division
import numpy as np
from scipy.optimize import leastsq

zmax = 30
corners = [ 0, 10, 20, zmax ]
midweight = 10

def bilin( x, y ):
    """ bilinear interpolate
        in: corners at 0 0  0 1  1 0  1 1 in that order (binary)
        see wikipedia Bilinear_interpolation ff.
    """
    z00, z01, z10, z11 = corners  # 0 .. 1
    return (z00 * (1-x) * (1-y)
          + z01 * (1-x) * y
          + z10 * x * (1-y)
          + z11 * x * y)

vecs = np.array([ (x, y) for x in (.25, .5, .75) for y in (.25, .5, .75) ])

def nearvec( x, vecs ):
    """ -> (min, nearest vec) """
    t = (np.inf,)
    for v in vecs:
        n = np.linalg.norm( x - v )
        if n < t[0]:
            t = (n, v)
    return t

def lsqmin( xy ):  # z, corners
    x, y = xy
    near = nearvec( np.array(xy), vecs )[0] * midweight
    return (z - bilin( x, y ), near)
    # i.e. find x,y so both bilin(x,y) ~ z and x,y near a point in vecs

#...............................................................................
if __name__ == "__main__":
    import sys
    ftol = .1
    maxfev = 10
    exec( "\n".join( sys.argv[1:] ) )  # ftol= ...

    x0 = np.array(( .5, .5 ))
    sumdiff = 0
    for z in range(zmax + 1):
        xetc = leastsq( lsqmin, x0, ftol=ftol, maxfev=maxfev, full_output=1 )
        # (x, {cov_x, infodict, mesg}, ier)
        x, y = xetc[0]  # may be < 0 or > 1
        diff = bilin( x, y ) - z
        sumdiff += abs(diff)
        print( "%.2g %8.2g %5.2g %5.2g" % (z, diff, x, y) )

    print( "ftol %.2g maxfev %d midweight %.2g => av diff %.2g" % (
        ftol, maxfev, midweight, sumdiff / zmax) )