I wrote a user-defined function (UDF) that returns true iff the input point (x,y,z) lies on plane P.
The function IsOnPlane takes the x, y, z coordinates of a point and the id of a plane P as input.
For example, if P is the plane spanned by {(0,0,1),(0,1,0),(0,1,1)} then:
IsOnPlane(0,700,2, P)=TRUE and IsOnPlane(1,2,3, P)=FALSE
The planes are represented by the coefficients of the plane equation:
Ax+By+Cz+D=0
In order to implement the IsOnPlane function, I am using a different representation of the plane (spanning vectors).
Is there a way to do the conversion once and store it in memory, then use it to evaluate the return value each time the UDF is called?
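Yes, provided the converted coefficients can be cached between calls. A minimal MATLAB sketch, assuming a hypothetical helper planePoints(planeId) that returns the three spanning points as a 3x3 matrix: compute (A,B,C,D) once with a cross product, cache the result in a persistent map, and evaluate the cached plane equation on every call.

function tf = IsOnPlane(x, y, z, planeId)
    persistent coeffs                         % plane-id -> [A B C D], cached across calls
    if isempty(coeffs)
        coeffs = containers.Map('KeyType', 'double', 'ValueType', 'any');
    end
    if ~isKey(coeffs, planeId)
        P = planePoints(planeId);             % hypothetical: 3x3 matrix, one spanning point per row
        n = cross(P(2,:) - P(1,:), P(3,:) - P(1,:));   % normal vector (A, B, C)
        coeffs(planeId) = [n, -dot(n, P(1,:))];        % append D = -n . P1
    end
    c = coeffs(planeId);
    tf = abs(c * [x; y; z; 1]) < 1e-9;        % Ax + By + Cz + D == 0, up to float tolerance
end

For the example above, the three spanning points give the normal (1,0,0) and D = 0, i.e. the plane x = 0, so (0,700,2) tests true and (1,2,3) tests false.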
I want to do scattered interpolation in Matlab, but scatteredInterpolant does not do quite what I want.
scatteredInterpolant allows me to provide a set of input sampling positions and the corresponding sample values. Then I can query the interpolated values by supplying a set of positions:
F = scatteredInterpolant(xpos, ypos, samplevals)
interpvals = F(xgrid, ygrid)
This is sort of the opposite of what I want. I already have a fixed set of sample positions, xpos/ypos, and output grid, xgrid/ygrid, and then I want to vary the sample values. The use case is that I have many quantities sampled at the same sampling positions, that should all be interpolated to the same output grid.
I have an idea how to do this for nearest neighbor and linear interpolation, but not for more general cases, in particular for natural neighbor interpolation.
This is what I want, in mock code:
G = myScatteredInterpolant(xpos, ypos, xgrid, ygrid, interp_method)
interpvals = G(samplevals)
In terms of what this means, I suppose G holds a (presumably sparse) matrix of weights, W, and then G(samplevals) basically does W * samplevals, where the weights in W depend on the input and output grids as well as the interpolation method (nearest neighbor, linear, natural neighbor). Calculating the matrix W is probably much more expensive than evaluating the product W * samplevals, which is why I want it to be reused.
Is there any code in Matlab, or in a similar language that I could adapt, that does this? Can it somehow be extracted from scatteredInterpolant in reasonable processing time?
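Nearest, linear, and natural neighbor interpolation are all linear in the sample values, so one way to extract W from scatteredInterpolant itself is to interpolate the n unit basis vectors, reusing the triangulation by reassigning the Values property. A minimal sketch (column-vector inputs assumed; this costs n interpolations up front, so it only pays off when W is reused many times):

n = numel(xpos);
F = scatteredInterpolant(xpos(:), ypos(:), zeros(n, 1), 'natural');
cols = cell(1, n);
for i = 1:n
    e = zeros(n, 1);
    e(i) = 1;                              % i-th unit sample vector
    F.Values = e;                          % swap values; the triangulation is reused
    cols{i} = sparse(F(xgrid(:), ygrid(:)));
end
W = [cols{:}];                             % m-by-n sparse weight matrix
interpvals = reshape(W * samplevals(:), size(xgrid));   % this is G(samplevals)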
Assume we have a 3D grid that spans some 3D space. This grid is made out of cubes, the cubes need not have integer length, they can have any possible floating point length.
Our goal is, given a point and a direction, to visit each cube along our path exactly once, in order.
So if this was just a regular 3D array and the direction is say in the X direction, starting at position (1,2,0) the algorithm would be:
for (i in number of cubes)
{
    grid[1+i][2][0]
}
But of course the origin and the direction are arbitrary and floating point numbers, so it's not as easy as iterating through only one dimension of a 3D array. And the fact the side lengths of the cubes are also arbitrary floats makes it slightly harder as well.
Assume that your cube side lengths are s = (sx, sy, sz), your ray direction is d = (dx, dy, dz), and your starting point is p = (px, py, pz). Then, the ray that you want to traverse is r(t) = p + t * d, where t is an arbitrary positive number.
Let's focus on a single dimension. If you are currently at the lower boundary of a cube, then the step length dt that you need to make on your ray in order to get to the upper boundary of the cube is: dt = s / d. And we can calculate this step length for each of the three dimensions, i.e. dt is also a 3D vector.
Now, the idea is as follows: Find the cell where the ray's starting point lies in and find the parameter values t where the first intersection with the grid occurs per dimension. Then, you can incrementally find the parameter values where you switch from one cube to the next for each dimension. Sort the changes by the respective t value and just iterate.
Some more details:
cell = floor((p - gridLowerBound) / s) <-- the / is component-wise division
I will only cover the case where the direction is positive. There are some minor changes if you go in the negative direction but I am sure that you can do these.
Find the first intersections per dimension (nextIntersection is a 3D vector):
nextIntersection = (gridLowerBound + (cell + (1, 1, 1)) * s - p) / d <-- * and / are component-wise
And calculate the step length:
dt = s / d
Now, just iterate:
if (nextIntersection.x < nextIntersection.y && nextIntersection.x < nextIntersection.z)
    cell.x++
    nextIntersection.x += dt.x
else if (nextIntersection.y < nextIntersection.z)
    cell.y++
    nextIntersection.y += dt.y
else
    cell.z++
    nextIntersection.z += dt.z
end if
if cell is outside of grid
    terminate
I have omitted the case where two or three cell coordinates change at the same time (the ray hits an edge or a corner exactly). The above code will only change one at a time. If you need this, feel free to adapt the code accordingly.
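For completeness, a MATLAB sketch of the full traversal, including negative direction components (traverseGrid is a hypothetical name; p, d, s, gridLowerBound, and gridSize are 1x3 vectors, and d is assumed to have at least one nonzero component):

function cells = traverseGrid(p, d, s, gridLowerBound, gridSize)
    ijk = floor((p - gridLowerBound) ./ s) + 1;        % 1-based cube indices
    step = sign(d);                                    % -1, 0, or +1 per axis
    nextFace = gridLowerBound + (ijk - (d < 0)) .* s;  % next face crossed, per axis
    nextT = (nextFace - p) ./ d;                       % ray parameter of that crossing
    nextT(d == 0) = Inf;                               % axis never crossed if d(k) == 0
    dt = abs(s ./ d);                                  % t between successive crossings
    cells = ijk;                                       % record the starting cube
    while true
        [~, k] = min(nextT);                           % axis whose boundary comes first
        ijk(k) = ijk(k) + step(k);
        if ijk(k) < 1 || ijk(k) > gridSize(k)
            break;                                     % ray has left the grid
        end
        cells(end+1, :) = ijk;                         %#ok<AGROW>
        nextT(k) = nextT(k) + dt(k);
    end
end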
Well, if you are working with floats, you can write the equation of the line in the specified direction, parameterized by t. Because between any two floats there is only a finite number of representable values, you can simply check, for each of these points, which cube it is in: you have a point (x,y,z) whose components should each lie in a respective interval defining a cube.
The issue gets a little harder if you consider intervals that are dense.
The key here is that even with floats this is a discrete search problem. The fact that the line between any two points passes through a discrete set of representable points means you merely need to check them all against the cube intervals. Better still, there is a symmetry (a line) that lets you enumerate each point with a simple arithmetic expression, one after another, for checking.
Also, perhaps consider the integer case first, as it is the same but slightly simpler in determining the discrete points, being a line over a bounded integer lattice (e.g. 8-bit coordinates)?
I would like to fit a sphere to MR binary data in a 281×398×104 matrix; the object is not a perfect sphere, and I want to find the center and radius of the sphere, and the fitting error as well. I know least squares (LMS) or SVD is a good choice for sphere fitting.
I have tried sphereFit from the MATLAB File Exchange but got an error:
>> sphereFit(data)
Warning: Matrix is singular to working precision.
> In sphereFit at 33
ans =
NaN NaN NaN
Could you let me know where the problem is, or suggest any other solution?
If you want to use sphere fitting algorithm you should first extract the boundary points of the object you assume to be a sphere. The result should be represented by a N-by-3 array containing coordinates of the points. Then you can apply sphereFit function.
In order to obtain the boundary points of a binary object, there are several methods. One method is to apply morphological erosion (you need the "imerode" function from the Image Processing Toolbox) with a small structuring element, then compute the set difference between the two images, and finally use the "find" function to transform the binary image into a coordinate array.
The idea is as follows:
dataIn = imerode(data, ones([3 3 3]));   % erode with a 3x3x3 structuring element
bnd = data & ~dataIn;                    % boundary = original minus eroded
inds = find(bnd);                        % linear indices of boundary voxels
[y, x, z] = ind2sub(size(data), inds);   % be careful about x-y order
points = [x y z];
sphere = sphereFit(points);
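If sphereFit still misbehaves, the algebraic least-squares fit mentioned in the question is short enough to write directly; a sketch, assuming points is the N-by-3 boundary array built above:

A = [2*points, ones(size(points, 1), 1)];        % from |p - c|^2 = r^2:
b = sum(points.^2, 2);                           % 2*p.c + (r^2 - |c|^2) = |p|^2
sol = A \ b;                                     % least-squares solution
center = sol(1:3)';
radius = sqrt(sol(4) + center * center');
residuals = sqrt(sum((points - center).^2, 2)) - radius;   % signed radial errors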
By the way, the link you gave refers to circle fitting, I suppose you wanted to point to a sphere fitting submission?
I have 3D Cartesian grid data that needs to be used to create a 3D regular mesh for an interpolation method. x, y & z are 3 vectors of data points that are used to form this grid. My question is, how can I efficiently give two indices to these points, say,
where c000 is indexed as 1 for point (1,1,1), and c100 is indexed as 2 for (2,1,1), in (x,y,z)
coordinates, and another index to identify the 8 points forming a cube. Say I have a point C; I must retrieve the nearest 8 points for interpolation, so for points c000, c100, c110, c010, c001, c101, c111, c011 a point index and a cube index. Since the data available is huge, the focus is on a fast implementation. Please give me some hints on how to proceed.
About the maths:
Identifying the cube that surrounds a point p requires a mapping
U ⊂ ℝ+**3 -> ℕ:
p' (= p - O_) -> hash_r(p');
"O_" being located at (min_x(G),min_y(G),min_z(G)) of the Grid G.
Along each axis, the cube numbering is trivial.
Given a compound cube number
n_ = (s,t,u)
and N_x, N_y, N_z being the size of your X_, Y_, Z_, a suitable hash would be
hash_n(n_) = s
| t * 2**(floor(log_2(N_x))+1)
| u * 2**(floor(log_2(N_x)) + floor(log_2(N_y)) + 2).
To calculate e.g. "s" for a point C, take
s = floor((C[0] - O_[0]) / a)
"a" being the edge length of the cubes.
About taking that to C++
Given you have enough space to allocate
(2**(floor(log_2(max(N_x, N_y, N_z))) + 1))**3
buckets, a std::unordered_map<hash_t,Cube> using that (perfect) hash would offer O(1) for finding the cube for a point p.
A less fancy std::map<hash_t,Cube>, ordered by that hash, would offer O(log(N)) find complexity.
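To make the bit-packing concrete, a small MATLAB sketch (cubeKey and the argument names are hypothetical; a is the cube edge length, O_ the grid origin, Nx and Ny the grid sizes along x and y):

function key = cubeKey(C, O_, a, Nx, Ny)
    stu = floor((C - O_) / a);              % per-axis cube numbers (s, t, u)
    bx = floor(log2(Nx)) + 1;               % bits reserved for s
    by = floor(log2(Ny)) + 1;               % bits reserved for t
    key = uint64(stu(1)) ...
        + bitshift(uint64(stu(2)), bx) ...
        + bitshift(uint64(stu(3)), bx + by);
end

In C++, the resulting key can serve directly as the (perfect) hash for the std::unordered_map<hash_t,Cube> described above.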
Why does uniformly scaling an object down cause the object to become lighter in OpenGL ES 1.x?
It would make more sense for it to become darker: aren't the normals being scaled down as well, which should make the object darker? But for some reason the object becomes lighter, and when I scale up, the object becomes darker. In my opinion it should be the other way round.
Please do not suggest using GL_NORMALIZE etc. I am just curious why OpenGL implementation works like that.
Simple question, complex answer. This is the relevant extract from the redbook:
Transforming Normals

Normal vectors don't transform in the same way as vertices, or position vectors. Mathematically, it's better to think of normal vectors not as vectors, but as planes perpendicular to those vectors. Then, the transformation rules for normal vectors are described by the transformation rules for perpendicular planes. A homogeneous plane is denoted by the row vector (a, b, c, d), where at least one of a, b, c, or d is nonzero. If q is a nonzero real number, then (a, b, c, d) and (qa, qb, qc, qd) represent the same plane. A point (x, y, z, w)ᵀ is on the plane (a, b, c, d) if ax + by + cz + dw = 0. (If w = 1, this is the standard description of a Euclidean plane.) In order for (a, b, c, d) to represent a Euclidean plane, at least one of a, b, or c must be nonzero. If they're all zero, then (0, 0, 0, d) represents the "plane at infinity," which contains all the "points at infinity."

If p is a homogeneous plane and v is a homogeneous vertex, then the statement "v lies on plane p" is written mathematically as pv = 0, where pv is normal matrix multiplication. If M is a nonsingular vertex transformation (that is, a 4×4 matrix that has an inverse M⁻¹), then pv = 0 is equivalent to pM⁻¹Mv = 0, so Mv lies on the plane pM⁻¹. Thus, pM⁻¹ is the image of the plane under the vertex transformation M.

If you like to think of normal vectors as vectors instead of as the planes perpendicular to them, let v and n be vectors such that v is perpendicular to n. Then, nᵀv = 0. Thus, for an arbitrary nonsingular transformation M, nᵀM⁻¹Mv = 0, which means that nᵀM⁻¹ is the transpose of the transformed normal vector. Thus, the transformed normal vector is (M⁻¹)ᵀn. In other words, normal vectors are transformed by the inverse transpose of the transformation that transforms points. Whew!
In short, positions and normals do not transform the same way. As explained in the text above, the normal transformation matrix is (M⁻¹)ᵀ. Scaling M to sM would result in (M⁻¹)ᵀ/s: the smaller the scale factor, the bigger the transformed normal... Here we go!
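A quick numerical illustration of that last sentence (the scale factor and light direction are chosen arbitrarily):

s = 0.5;                       % scale the object down by half
M = s * eye(3);                % upper-left 3x3 of the modelview matrix
n = [0; 0; 1];                 % unit surface normal
n2 = inv(M)' * n;              % transformed normal: length 1/s = 2
L = [0; 0.6; 0.8];             % unit vector towards the light
diffuse = max(dot(n2, L), 0);  % = 1.6 instead of the correct 0.8 -> lighter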
It would seem that the normals are not being rescaled along with the object. This means the normals of the full-size object keep their original length on a much smaller surface: the angles between the light sources and the normals stay exactly the same, but the surface they belong to is much smaller.