I have a list of points that I want to draw a smooth line between. I am using the RVG library for drawing, so if I could get an SVG string from my points I would be happy. I searched around and found that Catmull-Rom is probably the algorithm to use.
Found some implementations in the Kamelopard and Rubyvis libraries, but couldn't understand how to use them from my list of points.
So, the question is, how can I take my array of (x,y) points and get a Catmull-Rom interpolated SVG curve from them?
Catmull-Rom is probably a good place to start. I recently re-implemented the Kamelopard version, and found this helpful: http://www.cs.cmu.edu/~462/projects/assn2/assn2/catmullRom.pdf
It's fairly straightforward, provided you understand the matrix multiplication. You'll end up with a matrix equation you'll need to evaluate a bunch of times, once per point on the path you're drawing. If you have control points A, B, C, and D, and you want to draw the curve between B and C, make a matrix where A, B, C, and D are the rows, and plug it into the equation at the top of the paper I linked to. It will be the last matrix in the list. The other values you'll need to know are "u", which ranges from 0 to 1, and "T", the "tension" of the spline. You'll evaluate the equation multiple times, incrementing u across its domain each time. You can set the tension to whatever you want, between 0 and 1, and it will affect how sharply the spline curves. 0.5 is a common value.
If you're trying to evaluate the curve between, for instance, the first two control points on your list, or the last two, you'll find you have problems building your matrix, because you need a control point on either side of the segment you're evaluating. In these cases, just duplicate the first or last control point, as necessary.
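If it helps, here is a minimal Python sketch of that evaluation (not the Kamelopard code; the tangent/Hermite form below is equivalent to the matrix form in the paper), plus a helper that strings the sampled points into an SVG path. The function names, the sample count, and the line-to output are my own choices; translating it to Ruby for RVG should be mechanical.

```python
def catmull_rom_point(a, b, c, d, u, tension=0.5):
    """Point on the segment B..C at parameter u in [0, 1]."""
    # Tangents at B and C, scaled by the tension.
    m0 = (tension * (c[0] - a[0]), tension * (c[1] - a[1]))
    m1 = (tension * (d[0] - b[0]), tension * (d[1] - b[1]))
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return (h00*b[0] + h10*m0[0] + h01*c[0] + h11*m1[0],
            h00*b[1] + h10*m0[1] + h01*c[1] + h11*m1[1])

def catmull_rom_svg_path(points, samples_per_segment=16, tension=0.5):
    """Build an SVG path string ('M x,y L x,y ...') through the given points."""
    # Duplicate the first and last points so every segment has 4 control points.
    pts = [points[0]] + list(points) + [points[-1]]
    path = ["M {:.3f},{:.3f}".format(*points[0])]
    for i in range(len(pts) - 3):
        a, b, c, d = pts[i], pts[i + 1], pts[i + 2], pts[i + 3]
        for s in range(1, samples_per_segment + 1):
            x, y = catmull_rom_point(a, b, c, d, s / samples_per_segment, tension)
            path.append("L {:.3f},{:.3f}".format(x, y))
    return " ".join(path)

# Use the result as the 'd' attribute of an SVG <path> element.
print(catmull_rom_svg_path([(10, 80), (60, 10), (110, 80), (160, 20)]))
```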
Related
I'm interested in writing my own function that subtracts one 2D triangle from another, returning the remainder as an array of triangles (not using an existing geometry library).
Two examples of input & output, triangles are numbered, order isn't important.
While I'm familiar with these kinds of algorithms, this seems like a general enough problem that there may be a known robust solution already written (if not, I may look into writing one as an answer to this question).
I recently worked on this problem and solved it.
I tried to enumerate all the possible overlap configurations, and it worked out.
There are 16 cases, classified by the number of vertices of the minuend triangle that lie inside the subtrahend triangle and the number of vertices of the subtrahend that lie inside the minuend.
For reference, in the subtraction a - b = c, a is the minuend, b is the subtrahend, and c is the difference.
In the subtraction table, the minuend triangle is drawn in blue and the subtrahend in red. At the bottom right of each box are the counts of each triangle's vertices contained in the other; the number of triangles in the difference is at the top left, and the number of edge intersections, in red, is at the bottom left.
My method consists of counting edge intersections and counting vertices that lie inside the triangles. In some cases the candidate triangles must also be validated, either by intersection detection or by checking that they contain no vertices. In the end, I believe I managed to solve this problem.
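For the counting part, the two primitives you need are a point-in-triangle test and a segment-segment intersection test. Here is a rough sketch of both in Python (not the code from the thread linked below, just the usual cross-product sign approach):

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, tri):
    """True if p lies inside (or on the border of) triangle tri = (a, b, c)."""
    a, b, c = tri
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)     # all on the same side => inside

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (touching excluded)."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0
```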
Take a look at this forum thread for the source code and more information:
https://gb32.proboards.com/post/2112/thread
I have a turtle-graphics-based algorithm for generating a space-filling Hilbert curve in two dimensions. It is recursive and goes like this:
We want to draw a curve of order n, in direction x (where x ∈ {L, R}), and let y be the direction opposite to x. We do as follows:
1. turn in the direction y
2. draw a Hilbert curve of order n-1, direction y
3. move one step forward
4. turn in the direction x
5. draw a Hilbert curve of order n-1, direction x
6. move one step forward
7. draw a Hilbert curve of order n-1, direction x
8. turn in the direction x
9. move one step forward
10. draw a Hilbert curve of order n-1, direction y
I understand this and was able to implement a working solution. However, I'm now trying to "upgrade" this to 3D, and here's where I basically hit a wall; in 3D, when we reach a vertex, we can turn not in two, but four directions (going straight or backing up is obviously not an option, hence four and not six). Intuitively, I think I should store the plane on which the turtle is "walking" and its general direction in the world, represented by an enum with six values:
Up
Down
Left
Right
In (from the camera's perspective, it goes "inside" the world)
Out (same as above, outside)
The turtle, like in 2D, has a state containing the information outlined above, and when it reaches a vertex (which can be thought of as a "crossing") it has to make a decision about where to go next, based on that state. Whereas in two dimensions this is rather simple, in three, I'm stumped.
Is my approach correct? (i.e., is this what I should store in the turtle's state?)
If it is, how can I use that information to make a decision where to go next?
Because there are many variants of 3D space filling Hilbert curves, I should specify that this is what I'm using as reference and to aid my imagination:
I'm aware that a similar question has already been asked, but the accepted answer links to a website where this problem is solved using a different approach (i.e., not turtle graphics).
Your 2d algorithm can be summarized as “LRFL” or “RLFR” (with “F” being “forward”). Each letter means “turn that direction, draw a (n-1)-curve in that direction, and take a step forward”. (This assumes the x in step 8 should be a y.)
In 3d, you can summarize the algorithm as the 7 turns you would need to go along your reference. This will depend on how you visualize the turtle starting. If it starts at the empty circle, facing the filled circle, and being right-side-up (with its back facing up), then your reference would be “DLLUULL”.
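One way to carry the turtle's state around (my own sketch, not necessarily how you have to structure it): instead of the six-value enum, keep a heading vector and an up vector with integer components, and implement the four turns as exact cross products. Whether the letters here line up with the reference picture depends on the starting frame you choose.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def neg(v):
    return (-v[0], -v[1], -v[2])

def turn(heading, up, letter):
    """New (heading, up) after a 90-degree turn; 'left' is derived as up x heading."""
    left = cross(up, heading)
    if letter == "L":
        return left, up
    if letter == "R":
        return neg(left), up
    if letter == "U":               # pitch up: the nose goes where 'up' was
        return up, neg(heading)
    if letter == "D":               # pitch down
        return neg(up), heading
    raise ValueError(letter)

# Walk the order-1 sequence "DLLUULL": turn, then take one step forward.
heading, up, pos = (0, 1, 0), (0, 0, 1), (0, 0, 0)
cells = [pos]
for letter in "DLLUULL":
    heading, up = turn(heading, up, letter)
    pos = tuple(p + h for p, h in zip(pos, heading))
    cells.append(pos)
print(cells)    # the 8 cells of a 2x2x2 block, each visited once
```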
In my project, I represent geometry using splines. For physics and rendering I preprocess the splines and convert them into lines, and later polygons, by sampling the splines at a regular interval. However, I want to reduce the number of vertices/lines by ignoring samples that are already well enough represented by a line.
Coming up short when searching, I was wondering if there are any traditional techniques to convert a curve to a set of vertices while reducing the resulting error.
EDIT: To clarify, the result I want to end up with is a set of vertices/line segments that represents the spline as faithfully as possible with as few vertices/line segments as possible. I'm not sure how to define what "best represents the spline" really means, but the goal is to make it as hard as possible to distinguish the spline from the approximation.
It can be done by recursively refining any part that is not close enough to the segment between that part's endpoints.
Say we have a curve (spline) C: [0,1] -> R^n. The first approximation is the segment S between the curve's endpoints, [C(0), C(1)]. Take the point C(0.5) and check how far it is from the segment S. If it is far, we have to include it in the discretization; if not, S is a good approximation. If C(0.5) is far, the next approximation is the polyline [C(0), C(0.5), C(1)], and we apply the same procedure to the parts [C(0), C(0.5)] and [C(0.5), C(1)].
If you are using a polynomial spline of order >= 3 (e.g. a cubic spline), it can have inflection point(s). In that case it is possible for the curve's midpoint to 'fall' right on the segment even though the curve around it is far from the segment. When that happens, it is good to check one more level of sub-parts.
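A minimal sketch of that refinement, assuming C is any callable mapping t in [0, 1] to a point (x, y); tol is the allowed distance from the chord, and the forced first levels of subdivision are a crude guard against the inflection-point case mentioned above:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def flatten(C, t0=0.0, t1=1.0, tol=0.25, depth=0, max_depth=16):
    """Parameter values whose points approximate C within tol."""
    tm = 0.5 * (t0 + t1)
    mid_is_far = point_segment_distance(C(tm), C(t0), C(t1)) > tol
    if depth < max_depth and (mid_is_far or depth < 2):
        return flatten(C, t0, tm, tol, depth + 1, max_depth)[:-1] + \
               flatten(C, tm, t1, tol, depth + 1, max_depth)
    return [t0, t1]

# Example: a parabola-like curve sampled adaptively.
curve = lambda t: (100 * t, 320 * t * (1 - t))
polyline = [curve(t) for t in flatten(curve, tol=0.5)]
```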
This is entirely based on my own intuition, so I'm not sure if it coincides AT ALL with best practices. I do have a mathematics degree, so hopefully it's not too far off. I'll have you note that the computation involved may outstrip performance gains granted by not using as many vertices if the spline needs to be recalculated frequently.
Let's say the vertices are in an array like [v(0), v(1), v(2),..., v(n)] where each v(i) is something like (x, y). By iterating over the vertices starting at v(1) and ending at v(n-1), we can compare a point with its neighbors in order to tell whether or not to discard it. Note that we ignore v(0) and v(n) for two reasons: (I assume) we don't want to remove our endpoints, and also v(0) and v(n) are missing a neighbor that we would need in order to set up our calculation. I can think of a couple possibilities here that might warrant examination, but one in particular seems (in my head) to be the best answer...
Consider the case where we're deciding whether or not to remove v(i) from the vertex array. We could examine the Cartesian distance between v(i) and its neighbors, and remove the point if both distances are below some threshold value T. For example, if v(i-1) = (x1, y1), v(i) = (x2, y2), and v(i+1) = (x3, y3), then we evaluate sqrt((x2-x1)^2 + (y2-y1)^2) < T && sqrt((x3-x2)^2 + (y3-y2)^2) < T, removing v(i) if the evaluation returns true.
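In code, that check looks roughly like this (vertices as (x, y) tuples; math.dist needs Python 3.8+):

```python
import math

def thin_vertices(verts, T):
    """Drop v(i) when both of its original neighbours are closer than T."""
    kept = [verts[0]]
    for i in range(1, len(verts) - 1):
        d_prev = math.dist(verts[i], verts[i - 1])
        d_next = math.dist(verts[i], verts[i + 1])
        if not (d_prev < T and d_next < T):
            kept.append(verts[i])
    kept.append(verts[-1])
    return kept
```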
In 3+ dimensions, this would become more complicated - the calculation would be similar, but you would require a method of determining a point's neighbors since they might not lie directly next to the examined point in the vertex array.
I'm working with a really slow renderer, and I need to approximate polygons so that they look almost the same when confined to a screen area containing very few pixels. That is, I'd need an algorithm that goes through a polygon and removes/moves vertices until the resulting polygon has a good combination of shape preservation and economy of vertex usage.
I don't know if there's a formal name for these kind of problems, but if anyone knows what it is it would help me get started with my research.
My untested plan is to remove the vertices that change the polygon area the least, and protect the vertices that touch the bounding box from removal, until the difference in area from the original polygon to the proposed approximate one exceeds a tolerance I specify.
This would all be done only once, not in real time.
Any other ideas?
Thanks!
You're thinking about the problem in a slightly off way. If your goal is to reduce the number of vertices with a minimum of distortion, you should be defining your distortion in terms of those same vertices, which define the shape. There's a very simple solution here, which I believe would solve your problem:
Calculate distance between adjacent vertices
Choose a tolerance between vertices, below which the vertices are resolved into a single vertex
Replace all pairs of vertices with distances lower than your cutoff with a single vertex halfway between the two.
Repeat until no vertices are removed.
Since your area is ultimately decided by the vertex placement, this method preserves shape and minimizes shape distortion. The one drawback is that distance between vertices might be slightly less intuitive than polygon area, but the two are proportional. If you really wish, you could run through the change in area that would result from vertex removal, but that's a lot more work for questionable benefit imo.
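A sketch of that merge pass (vertices as (x, y) tuples; for a closed polygon you would also want to check the last-first pair, e.g. by rotating the list):

```python
import math

def merge_close_vertices(poly, cutoff):
    """Repeatedly replace adjacent vertex pairs closer than cutoff with their midpoint."""
    merged = True
    while merged and len(poly) > 3:
        merged = False
        out, i = [], 0
        while i < len(poly):
            if i + 1 < len(poly) and math.dist(poly[i], poly[i + 1]) < cutoff:
                out.append(((poly[i][0] + poly[i + 1][0]) / 2,
                            (poly[i][1] + poly[i + 1][1]) / 2))
                i += 2
                merged = True
            else:
                out.append(poly[i])
                i += 1
        poly = out
    return poly
```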
As mentioned by Angus, if you want a direct solution for the change in area, it's not actually super difficult. I was originally going to leave this as an exercise for the reader, but it's totally possible to solve this exactly, though you need to include the vertices on either side.
1. Assume you're looking at a window of vertices [A, B, C, D] that are connected in that order. In this example we're determining the "cost" of combining B and C.
2. Calculate the angle offset from collinearity from A toward C. Basically you just want to see how far from collinear the two points are. This is |sin(|arctan(B - A)| - |arctan(C - A)|)|, where the pipes are absolute value, arctan of a vector means its heading angle, and the differences are the usual notion of difference.
3. Calculate the total distance over which the angle change will effectively be applied; this is just the Euclidean distance from A to B times the Euclidean distance from B to C.
4. Multiply the terms from steps 2 and 3 to get your first term.
5. To get your second term, repeat steps 2-4, replacing A with D, B with C, and C with B (just going in the opposite direction).
6. Calculate the geometric mean of the two terms obtained.
The number that results from step 6 gives the full picture, minus a couple of constants.
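For concreteness, a literal transcription of those six steps, reading "arctan(B - A)" as the heading angle atan2 of the vector B - A:

```python
import math

def heading(p, q):
    """Heading angle of the vector q - p."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def half_cost(a, b, c):
    # step 2: angle offset from collinearity, looking from a toward c
    offset = abs(math.sin(abs(heading(a, b)) - abs(heading(a, c))))
    # step 3: distance over which the angle change is applied
    span = math.dist(a, b) * math.dist(b, c)
    return offset * span                       # step 4

def merge_cost(a, b, c, d):
    """Cost of combining B and C in the window [A, B, C, D] (steps 5 and 6)."""
    return math.sqrt(half_cost(a, b, c) * half_cost(d, c, b))
```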
I tried my own plan first: protect the vertices touching the bounding box, then remove the rest in the order that changes the resultant area the least, until you can't find a vertex to remove that keeps the new polygon area within X% of the original one. This is the result with X = 5%:
When the user zooms out really far these shapes fit the bill well enough for me. I haven't tried any of the other suggestions. The savings are quite astonishing, sometimes from 80-100 vertices down to 4 or 5.
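For anyone curious, the plan above looks roughly like this in code (a sketch, not the exact code behind those results):

```python
def signed_area(poly):
    """Shoelace area of a closed polygon given as a list of (x, y) tuples."""
    n = len(poly)
    return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1]
                     - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))

def simplify(poly, max_area_change=0.05):
    """Greedily remove non-bounding-box vertices while the area stays within tolerance."""
    original = abs(signed_area(poly))
    xs, ys = [p[0] for p in poly], [p[1] for p in poly]
    protected = {p for p in poly
                 if p[0] in (min(xs), max(xs)) or p[1] in (min(ys), max(ys))}
    poly = list(poly)
    while len(poly) > 3:
        best_drift, best_poly = None, None
        for i, p in enumerate(poly):
            if p in protected:
                continue
            candidate = poly[:i] + poly[i + 1:]
            drift = abs(abs(signed_area(candidate)) - original) / original
            if best_drift is None or drift < best_drift:
                best_drift, best_poly = drift, candidate
        if best_drift is None or best_drift > max_area_change:
            break
        poly = best_poly
    return poly
```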
Given two 3D objects, how can I find whether one fits inside the other (and find the placement of the object in the container)?
The object should be translated and rotated to fit the container - but not modified otherwise.
Additional complications:
The same situation - but look for the best fit solution, even if it's not a proper match (minimize the volume of the object that doesn't fit in the container)
Support for elastic objects - find the best fit while minimizing the "distortion" in the objects
This is a pretty general question - and I don't expect a complete solution.
Any pointers to relevant papers / articles / libraries / tools would be useful.
Here is one perhaps less than ideal method.
You could try fixing the position (in 3D space) of one shape and placing the other shape on top of it. Then create links that connect a point on one shape to a point on the other shape, and simulate what happens when the links are pulled equally tight, causing the shape that isn't fixed to rotate and translate until it's stable.
If the fit is loose enough, you could use only 3 links (the bare minimum number of links for 3D) and try every possible combination. However, for tighter fits, you'll need more links, perhaps enough to place them on every point of the shape with fewer points. That means you'll need some method to determine how to place the links, which is not trivial.
This seems like quite a hard problem. A probable approach is to have some heuristic suggest a transformation and then check whether it is a good one. If the transformation moves the object only slightly outside the interior (e.g. in one part), make a slight adjustment to the transformation and test it again. If the object is far outside (e.g. out on the same/all axes on both sides), make a new heuristic guess.
Just a general idea for a heuristic: rasterise both objects with the same voxel size (this can be an octree of the object's volume), build a connectivity graph between the voxels, and check for subgraph isomorphism between the two graphs. If a subgraph is found, that position is a candidate for testing.
This approach also supports 90° rotation(s).
Some tests can even be done on the graphs themselves: if all volume neighbours of the subgraph are in the larger graph, then the object fits inside.
In general this is a 'refined' bounding-box approach.
Another solution is to project an equal number of points onto both objects and do a least-squares best fit on the point sets. The point sets probably will not be ordered the same way, so you iterate between the least-squares best fit and a reordering of the points so that the points on both objects end up in roughly the same order. The equation development for this is a lot of algebra but not conceptually complicated.
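A sketch of that iteration with numpy: re-pair the points by nearest neighbour, solve the least-squares rigid fit with an SVD (the Kabsch solution), and repeat. Sampling an equal number of points on the two objects is assumed to have been done already, and no convergence test is shown.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Rigid (R, t) minimising the least-squares error of R @ p + t against Q, row-wise."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def fit(P, Q, iterations=30):
    """Alternate nearest-neighbour reordering and least-squares fitting."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = P @ R.T + t
        pairing = np.argmin(((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_fit_transform(P, Q[pairing])
    return R, t
```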
Consider one polygon (triangle) in the target object. For this polygon, find the equivalent polygon in the other geometry (the source), i.e. the lengths of the sides, the angles between the edges, and the area should all be the same. If there's just one match, find the rigid transform matrix that maps the vertices: X' = M*X. Since X' and X are known for all the points on the matched polygons, this should be doable with linear algebra.
If you want a one-to-one mapping between the vertices of the polygons, traverse the edges of the polygons in the same order, and build a lookup table that maps each vertex on one poly to a vertex on the other. If you have a half-edge data structure for your 3D object, that'll simplify this process a great deal.
If you find more than one matching polygon, traverse the source geometry from each of the candidate matches, and keep matching their neighbouring polygons against the target's polygons. Continue until one of them breaks (fails to match), after which you can do the same steps as the one-match version.
There are more rigorous solutions listed here, but I think the method above will work as well.
What a juicy problem! As is typical in computational geometry, this problem can become very complicated with a mismatched geometric abstraction, with all kinds of if-else cases, etc. But pick the right abstraction and the solution becomes trivial, with few sub-cases.
Compute the Distance Transform of your shapes and Voilà! Your solution is trivial.
Allow me to elaborate.
The distance map of a shape on a grid (pixels) encodes, for each pixel, the distance to the closest point on the shape's border. It can be computed in both directions, outwards from the shape or inwards into it. In this problem, the outward distance map suffices.
Step 1: Compute the distance map of both shapes D_S1, D_S2
Step 2: Subtract the distance maps. Diff = D_S1-D_S2
Step 3: If Diff has values of only one sign, then one shape can be contained in the other (+ve => S1 bigger than S2; -ve => S2 bigger than S1).
If Diff has both positive and negative values, the shapes intersect.
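A sketch of those three steps with scipy, assuming both shapes are boolean masks (2D or 3D) rasterised onto the same grid and already in the pose being tested; searching over translations/rotations is a separate matter:

```python
from scipy.ndimage import distance_transform_edt

def diff_signs(mask1, mask2):
    """Steps 1-3: outward distance maps, their difference, and its sign pattern."""
    d1 = distance_transform_edt(~mask1)   # distance from each pixel to shape 1
    d2 = distance_transform_edt(~mask2)   # distance from each pixel to shape 2
    diff = d1 - d2
    # One sign everywhere => one shape can be contained in the other;
    # mixed signs => the shapes intersect.
    return bool((diff >= 0).all()), bool((diff <= 0).all())
```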
There you have it. Enjoy!