Creating a curve envelope (non-signal application) - algorithm

I'm working on developing a transportation engineering tool called a swept path analyzer. Briefly, as a vehicle turns at a roadway intersection, it may steer in a simple arc, but the body moves in a more complex fashion, as described in Ackermann Steering theory.
The tool I'm using implements Ackermann Steering and generates the 'tracks' a given vehicle creates as it moves through a curve. These tracks correlate to specific points on the vehicle: the path the center of the vehicle follows, the paths the wheels themselves follow, and the paths the outermost points on the vehicle follow (usually the outside corners).
As I work on my own implementation, I'm faced with the issue of generating a total vehicle 'envelope' which describes the path taken by the outermost point of the vehicle as it makes its turn, at every point along its turning path. This generates a complex shape made up of several arcs and lines.
I'm familiar with signal envelope theory and the mathematical approach. However, I feel that would be unnecessarily complex, and a better approach would be generating the envelope from points on the vehicle sampled as the vehicle is driven along its turning path.
This seems like a boundary problem that I've seen solutions for in game development, and I could pursue that avenue, but as I look at this, it makes me think there has to be a simpler approach - perhaps I'm not looking at the problem correctly.
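For concreteness, the sampling approach I have in mind looks roughly like this (a minimal sketch in Python, assuming Shapely is available for the polygon union; pose_at and path_length are placeholders for my path model, and the vehicle dimensions are just example values):

    import numpy as np
    from shapely.geometry import Polygon
    from shapely.ops import unary_union

    def footprint(x, y, heading, length=12.0, width=2.5):
        # Vehicle body corners in the vehicle frame (origin at the body centre).
        corners = np.array([[ length / 2,  width / 2],
                            [ length / 2, -width / 2],
                            [-length / 2, -width / 2],
                            [-length / 2,  width / 2]])
        c, s = np.cos(heading), np.sin(heading)
        rot = np.array([[c, -s], [s, c]])
        return Polygon(corners @ rot.T + [x, y])   # corners rotated into world coordinates

    # pose_at(s) and path_length stand in for whatever produces (x, y, heading) along the path.
    polys = [footprint(*pose_at(s)) for s in np.linspace(0.0, path_length, 200)]
    envelope = unary_union(polys)                  # merged footprint; one polygon if samples are dense
    boundary = list(envelope.exterior.coords)      # the swept-path envelope outline

The open question is whether sampling densely and unioning like this is 'good enough', or whether there is a cleaner way to get the arcs and lines directly.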
The images below provide a little visual context:

Related

Is there some generic algorithm to calculate the dimensions of a piece of fabric needed to cover a 3D shape

I hope that this is the correct place to ask this kind of question. I am developing a web app to design garden ponds and I need to calculate the shape and size of the foil needed to cover that pond. The pond will be provided as a 3D model (threeJS). The shape of the pond will be relatively simple (think one or more rectangular boxes, potentially with some stairs).
I am considering folding out the surface of the 3D model into a flat shape, but I do not know how to do that in a generic way. And even if I could do that, it would not be the complete solution (but potentially it would be a starting point). I have been searching for a generic algorithm to do this, but so far have not found anything. Does anyone know of an algorithm that I could use for this, or at least something that I could start with?
Some additional information:
this will be a browser-based solution which should show the pool; one option would be ThreeJS since I am somewhat familiar with it
the foil that should cover the pond needs to be watertight, so it needs to be one piece. That means that when you put it in the pool, it will form wrinkles, especially in the corners.
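For the simple rectangular-box case I can already estimate the flat sheet size directly, since the sheet has to span the bottom plus both walls in each direction (rough sketch in Python; the overlap margin is just an assumption):

    def liner_size(length, width, depth, overlap=0.5):
        # One-piece liner for a rectangular pond: bottom plus both walls in each
        # direction, plus some overlap at the rim. The excess folds into wrinkles
        # in the corners. All units in metres.
        sheet_length = length + 2 * depth + 2 * overlap
        sheet_width = width + 2 * depth + 2 * overlap
        return sheet_length, sheet_width

    print(liner_size(4.0, 3.0, 1.2))   # 4 m x 3 m pond, 1.2 m deep -> (7.4, 6.4)

What I am missing is the generic version of this for arbitrary box combinations and stairs.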

Bidirectional path tracing, algorithm explanation

I'm trying to understand path tracing. So far, I have only dealt with the very basics: a ray is launched from each intersection point in a random direction within the hemisphere, then again, and so on recursively, until the ray hits a light source. With small light sources, this approach produces an extremely noisy image.
The following images show the noise level depending on the number of samples (rays) per pixel.
I am also not sure that I am doing everything correctly, because the "Monte Carlo" method, as far as I understand, implies that several rays are launched from each intersection point and their results are summed and averaged. But that approach makes the number of rays grow exponentially, reaching unreasonable values after 6 bounces, so I decided it is better to launch several rays per pixel initially (slightly shifted from the center of the pixel in a random direction) and generate only 1 ray at each intersection. I do not know whether this approach corresponds to "Monte Carlo" or not, but at least this way the rendering does not last forever.
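In pseudocode, the scheme I ended up with looks roughly like this (Python-flavoured sketch; scene, camera, Ray, BLACK, dot and random_hemisphere_direction are placeholders for my own code):

    import math, random

    def radiance(ray, depth, max_depth=6):
        # One random continuation per bounce; the averaging happens over the
        # many jittered samples taken per pixel, so it is still Monte Carlo.
        hit = scene.intersect(ray)
        if hit is None:
            return BLACK
        if hit.is_light:
            return hit.emission
        if depth >= max_depth:
            return BLACK
        new_dir = random_hemisphere_direction(hit.normal)
        cos_term = max(0.0, dot(new_dir, hit.normal))
        # Uniform hemisphere sampling: pdf = 1 / (2*pi), hence the 2*pi factor.
        return hit.brdf * radiance(Ray(hit.point, new_dir), depth + 1) * cos_term * 2.0 * math.pi

    def render_pixel(x, y, samples_per_pixel=64):
        total = BLACK
        for _ in range(samples_per_pixel):
            # Jitter the sample position inside the pixel, as described above.
            ray = camera.generate_ray(x + random.random(), y + random.random())
            total += radiance(ray, depth=0)
        return total / samples_per_pixel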
Bidirectional path tracing
I started looking for ways to reduce the amount of noise and came across bidirectional path tracing. But unfortunately, I couldn't find a detailed explanation of this algorithm in simple words. All I understood is that rays are generated from both the camera and the light sources, and then there is a check on whether the endpoints of these paths can be connected.
As you can see, if the intersection points of the blue ray from the camera and the white ray from the light source can be freely connected (there are no obstacles in the connection path), then we can assume that the ray from the camera can pass through the points y1, y0 directly to the light source.
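In pseudocode, my current understanding of a single sample looks roughly like this (trace_subpath, visible, throughput and connection_term are just placeholders for the parts I haven't figured out yet):

    def bdpt_sample(pixel):
        # Build one sub-path from the camera and one from the light,
        # then try to join their endpoints with a visibility test.
        cam_path = trace_subpath(camera.generate_ray(pixel), max_bounces=3)     # x0, x1, ...
        light_path = trace_subpath(light.emit_random_ray(), max_bounces=3)      # y0, y1, ...

        x_end, y_end = cam_path[-1], light_path[-1]
        if visible(x_end.point, y_end.point):   # no obstacle on the connecting segment
            # Treat the camera path as if it continued through the light path to the source.
            return throughput(cam_path) * connection_term(x_end, y_end) * throughput(light_path)
        return BLACK                            # connection failed - is the sample just wasted?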
But there are a lot of questions:
If the light source is not a point but has some shape, should the point from which the ray is launched be randomly selected on the surface of that shape? If you take only the center, then there will be no difference from a point light source, right?
Do I need to build a path from the light source for each path from the camera, or should there be only one path from the light source while several paths (samples) are built from the camera for one pixel?
Should the number of bounces/re-reflections/refractions be the same for the path from the camera and the path from the light source, or not?
But the questions don't end there. I have heard that bidirectional path tracing models caustics well (compared with regular path tracing). But I completely fail to see how bidirectional path tracing can help with this.
Example 1
Here the path will eventually be built, but the number of bounces will be extremely large, so no caustics will work here, despite the fact that the ray from the camera is directed almost to the same point where the path of the ray from the light source ends.
Example 2
Here the path will not be built, because there is an obstacle between the endpoints of the paths, although it could be built if point x3 was connected to point y1, but according to the algorithm (if I understand everything correctly), only the last points of the paths are connected.
Question:
What is the use of such an algorithm if, in a significant number of cases, the paths either cannot be built or are unnecessarily long? Maybe I misunderstand something? I came across many articles and documents where this algorithm was described, but mostly it was described mathematically (using all sorts of magical terms like biased/unbiased, PDF, BSDF, and others) rather than algorithmically. I am not that strong in mathematics and mathematical notation; I would just like to understand WHAT TO DO, how to implement it correctly in code, how these paths are connected, in what order, and so on. This can be explained in simple words and pseudocode, right? I would be extremely grateful if someone would finally shed some light on all this.
Some references that helped me to understand the Path tracing right :
https://www.scratchapixel.com/ (every rendering student should begin with this)
https://en.wikipedia.org/wiki/Path_tracing
If you're looking for more references, path tracing is used for "global illumination", which is the opposite of "direct illumination", which relies only on a straight line from the point to the light.
What's more, caustics are well known to be a hard problem, so don't begin with them! The Monte Carlo method is a good, straightforward method to begin with, but it has its limitations (i.e. caustics and tiny lights).
Some advice for rendering newbies
Mathematical notation is surely not the coolest thing, and everyone will of course prefer ready-to-go code. But maths is the most rigorous way to describe the world. It also lets you model a whole physical interaction in a small formula instead of plenty of lines of code that don't quite fit the real problem. I suggest you try reading the maths anyway: a good mathematical formula is always fully detailed. If some variables are not specified, don't lose your time and search for another reference.

Data structure for circular sector in robot vision

I'm trying to build a model of a 360-degree view of the surrounding environment from a continuously rotating distance sensor (radar). I require a data structure that supports a quickly computable strategy to bring a robot to the first point that is clear of obstacles (or where the obstacles are far away).
I thought of an array of 360 numerical elements in which each element represents the detected distance at that degree of the circumference.
Do you know a name for this data structure (used in this way)?
Are there better representations for the situation I described?
The main language for the controller is Java.
It sounds to me like you are aware that your range data is effectively in polar co-ordinates.
The unique aspect of working with such 360° data is its circular, “wrap-around” nature.
Many people end up writing their own custom implementation around this data. There is lots of theory in the robotics literature based on it for smoothing, segmenting, finding features, etc. (for example: “Line Extraction in 2D Range Images for Mobile Robotics”.)
Practically speaking, you might want to then consider checking out some robotics libraries. Something like ARIA. Another very good place to start is to use WeBots to emulate/model things - including range data - before transferring to a physical robotics platform.
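As a concrete illustration of that wrap-around indexing, here is a minimal sketch (written in Python for brevity; the same structure ports directly to a Java class wrapping a double[360]):

    class RangeScan:
        # 360 distance readings, one per degree, with wrap-around indexing.

        def __init__(self, max_range=10.0):
            self.ranges = [max_range] * 360           # metres; max_range means "nothing detected"

        def set(self, angle_deg, distance):
            self.ranges[int(angle_deg) % 360] = distance

        def get(self, angle_deg):
            return self.ranges[int(angle_deg) % 360]  # works for -10 or 370 alike

        def clearest_heading(self, window=15):
            # Heading whose +/- window degree neighbourhood has the largest minimum clearance.
            def clearance(h):
                return min(self.get(h + off) for off in range(-window, window + 1))
            return max(range(360), key=clearance)

The clearest_heading query is one simple way to get the "first point clear of obstacles" behaviour you describe; anything smarter (segmentation, line extraction) would sit on top of the same array.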

Indefinitely move objects around randomly without collision

I have an application where I need to move a number of objects around on the screen in a random fashion and they can not bump into each other. I'm looking for an algorithm that will allow me to generate the paths that don't create collisions and can continue for an indefinite time (i.e.: the objects keep moving around until a user driven event removes them from the program).
I'm not a game programmer but I think this looks like an AI problem and you guys probably solve it with your eyes closed. From what I've read A* seems to be the recommended 'basic idea' but I don't really want to invest a lot of time into it without some confirmation.
Can anyone shed some light on an approach? Anti-gravity movement maybe?
This is to be implemented on iOS, if that is important
New paths need to be generated at the end of each path
There is no visible 'grid'. Movement is completely free in 2D space
The objects are insects that walk around the screen until they are killed
A* is an algorithm to find the shortest path between a start and a goal configuration (in terms of whatever you define as short: common measures are e.g. euclidean distance, cost or time, angular distance...). Your insects seem not to have a specific goal; they don't even need a shortest path. I would certainly not go for A*. (By the way, since you have a dynamic environment, D* would have been an idea - still, it's meant to find a path from A to B.)
I would tackle the problem as follows:
Random Paths and following them
For the random paths I see two methods. The first would be a simple random walk (click here to see a nice 2D animation with explanations), which can suffer from jittering and doesn't look too nice. The second one needs a little bit more detailed explanations.
For each insect generate four random points around them, maybe starting on a sinusoid. With a spline interpolation generate a smooth curve between those points. Take care of having C1 (in 2D) or C2 (in 3D) continuity. (Suggestion: Hermite splines)
With Catmull-Rom splines you can find your configurations while moving along the curve.
An application of a similar approach can be found in this blog post about procedural racetracks, also a more technical (but still not too technical) explanation can be found in these old slides (pdf) from a computer animations course.
When an insect starts moving, it can constantly move between the second and third point: you always remove the first point and append a new one when the insect reaches the third point, which then becomes the second point.
If third point is reached
Remove first
Append new point
Recalculate spline
End if
For a smoother curve add more points in total and move somewhere in the middle, the principle stays the same. (Personally I only used this in fixed environments, it should work in dynamic ones as well though.)
This can, if your random point generation is good (maybe you can use an approach similar to the one provided in the above linked blog post, or have a look at algorithms on the PCG Wiki), lead to smooth paths all over the screen.
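To make the pseudocode above concrete, here is a small sketch of the uniform Catmull-Rom evaluation plus the point-shifting step (Python; new_random_point, insect_alive and speed_dt are placeholders for your own point generator and game loop):

    def catmull_rom(p0, p1, p2, p3, t):
        # Point on the uniform Catmull-Rom segment between p1 and p2, t in [0, 1].
        t2, t3 = t * t, t * t * t
        return tuple(
            0.5 * ((2.0 * b) + (-a + c) * t
                   + (2.0 * a - 5.0 * b + 4.0 * c - d) * t2
                   + (-a + 3.0 * b - 3.0 * c + d) * t3)
            for a, b, c, d in zip(p0, p1, p2, p3))

    points = [new_random_point() for _ in range(4)]   # four control points per insect
    t = 0.0
    while insect_alive:
        position = catmull_rom(*points, t)            # insect walks the middle segment
        t += speed_dt
        if t >= 1.0:                                  # third point reached
            points.pop(0)                             # remove first
            points.append(new_random_point())         # append new point
            t = 0.0                                   # the recalculated segment starts here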
Avoid other insects
To avoid other insects, three different methods come to my mind.
Bug algorithms
Braitenberg vehicles
An application of potential fields
For the potential fields I recommend reading this paper about dynamic motion planning (pdf). It's from robotics, but fairly easy to apply to your problem as well. You can just use the robot's next spline point as the goal and set its velocity to 0 to apply this approach. However, it might be a bit too much for your simple game.
A discussion of Braitenberg vehicles can be found here (pdf). The original idea was more of a technical method (drive towards or away from a light source depending on how your motor is coupled with the photo receptor) and is often used to show how we apply emotional concepts like fear and attraction to other objects. The "fear" behaviour is an approach used for obstacle avoidance in robotics as well.
The third and probably simplest method is the bug algorithms (pdf). I always have problems with the boundary following, which is a bit tricky. But to avoid another insect, these algorithms - no matter which one you use (I suggest Bug 1 or Tangent Bug) - should do the trick. They are very simple: move towards your goal (in this application, along the Catmull-Rom splines) until you have an obstacle in front. If the obstacle is close, change the insect's state to "obstacle avoidance" and run your bug algorithm. If you give both "colliding" insects the same turn direction, they will automatically go around each other and follow their original path.
As a variation you could just let them turn and recalculate a new spline from that point on.
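A sketch of that state switch (Python; dist, path_is_clear, step_forward and follow_spline are placeholders for code you would already have):

    AVOID_DISTANCE = 30.0    # pixels; tune to your insect size
    TURN_RATE = 2.0          # radians per second while avoiding

    def update(insect, others, dt):
        nearest = min(others, key=lambda o: dist(insect.pos, o.pos))
        if dist(insect.pos, nearest.pos) < AVOID_DISTANCE:
            insect.state = "avoiding"
        elif insect.state == "avoiding" and path_is_clear(insect, others):
            insect.state = "following"

        if insect.state == "avoiding":
            # Same turn direction for everyone, so two colliding insects
            # circle around each other instead of mirroring forever.
            insect.heading += TURN_RATE * dt
            insect.pos = step_forward(insect.pos, insect.heading, insect.speed * dt)
        else:
            follow_spline(insect, dt)    # the Catmull-Rom movement from above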
Conclusion
Path finding and random path generation are different things. You have to experiment around what looks best for your insects. A* is definitely meant for finding shortest paths, not for creating random paths and following them.
You cannot plan the trajectories ahead of time for an indefinite duration!
I suggest a simpler approach where you just predict the next collision (knowing the positions and speeds of the objects allows you to tell if they will collide and when), and resolve it by changing the speed or direction of either object (bounce before the objects touch).
Make sure to redo a check for collisions in case you created an even earlier collision!
The real challenge in your case is to efficiently predict collisions among numerous objects, a priori an O(N²) task. You can accelerate that by superimposing a coarse grid on the play field and looking at objects in neighboring cells only.
It may also be possible to maintain a list of object pairs that "might interfere in some future" (i.e. considering their distance and relative speed) and keep it updated. Checking that a pair may leave the list is relatively easy; efficiently checking for new pairs needing to enter the list is not.
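A sketch of the pairwise prediction, treating each object as a circle moving at constant velocity (Python):

    import math

    def time_to_collision(p1, v1, r1, p2, v2, r2):
        # Earliest t >= 0 at which the two circles touch, or None if they never do.
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]       # relative position
        dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]     # relative velocity
        r = r1 + r2
        # Solve |d + dv*t| = r  ->  a*t^2 + b*t + c = 0
        a = dvx * dvx + dvy * dvy
        b = 2.0 * (dx * dvx + dy * dvy)
        c = dx * dx + dy * dy - r * r
        if a == 0.0:
            return 0.0 if c <= 0.0 else None        # same velocity: overlapping now or never
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                             # trajectories never come within r
        t = (-b - math.sqrt(disc)) / (2.0 * a)      # earlier root
        if t >= 0.0:
            return t
        return 0.0 if c <= 0.0 else None            # already overlapping, or collided in the past

Run this only for pairs that share a grid cell (or neighboring cells) to avoid the O(N²) blow-up mentioned above.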
Look at this and this, which describe an AI program that plays the Mario game automatically.
So in this link, what the author did was use an A* algorithm to guide Mario to the right border of the screen as fast as possible while avoiding getting hurt.
So the idea is that for each time frame he has an Environment which describes the current positions of the other objects in the scene, and for each action (up, down, left, right and do nothing) he calculates its cost function and decides the next movement based on this.
Source: http://www.quora.com/What-are-the-coolest-algorithms
For A* you would need a 2D grid, even if it is not visible. If I get your idea right you could do the following.
Implement a pathfinding (e.g. A*) then just generate random destination points on the screen and calculate the path. Once your insect reaches the destination, generate another destination point/grid-cell and proceed until the insect dies.
As I see it, A* would only make sense if you have obstacles on the screen the insect should navigate around; otherwise it would be enough to just calculate a straight vector path and maybe handle collisions with other insects/objects.
Note: I implemented A* once; later I found out that Lee's algorithm pretty much does the same thing but was easier to implement.
Consider a Hamiltonian cycle - the idea is a route that visits all the positions on a grid once (and only once). If you construct the cycle in advance (i.e. precalculate it), and set your insects off with some offset between them, they will never collide, simply because the path never intersects itself.
Also, for bonus points, Hamiltonian paths tend to 'wiggle about', and because it's a loop you can predict (and precalculate) the path into the indefinite future.
You can always use the nodes of the grid as knot points for a spline to smooth the movement, or even randomly shift all the points away from their strict 2d grid positions, until you have the desired motion.
Example Hamiltonian cycle from Wikimedia:
On a side note, if you want to generate such a path, consider constructing a loop through many points and just moving the points around in such a manner that they never intersect an existing edge. With some encouragement to move into gaps and away from each other, they should settle into some long, never-intersecting path. Store the result and use for your loop.
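One easy way to precalculate such a cycle on a grid with an even number of rows is a serpentine over all columns but the first, closed by walking back up column 0 (Python sketch):

    def hamiltonian_cycle(rows, cols):
        # Closed tour visiting every cell of a rows x cols grid exactly once.
        assert rows % 2 == 0 and cols >= 2
        path = []
        for r in range(rows):                        # serpentine over columns 1..cols-1
            cs = range(1, cols) if r % 2 == 0 else range(cols - 1, 0, -1)
            path.extend((r, c) for c in cs)
        path.extend((r, 0) for r in range(rows - 1, -1, -1))   # back up column 0
        return path    # the last cell (0, 0) is adjacent to the first (0, 1), closing the loop

Start each insect at a different offset along the returned list and step them forward at the same rate; they can never collide because the path never crosses itself.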

Simulating contraction of a muscle in a skeleton

Using spherical nodes, cylindrical bones, and cone-twist constraints, I've managed to create a simple skeleton in 3 dimensions. I'm using an offshoot of the bullet physics library (physijs by #chandlerprall, along with threejs).
Now I'd like to add muscles. I've been trying for the last two days to get some sort of sliding constraint or generic 6-DOF constraint to get the muscle to be able to contract and pull its two nodes towards one another.
I'm getting all sorts of crazy results, and I'm beginning to think that I'm going about this in the wrong way. I don't think I can simply use two cone twist constraints and then scale the muscle along its length-wise axis, because scaling collision meshes is apparently fairly expensive.
All I need is a 'muscle' which can attach to two nodes and 'contract' to pull in both its nodes.
Can anyone provide some advice on how I might best approach this using the bullet engine (or really, any physics engine)?
EDIT: What if I don't need collisions to occur for the muscle? Say I just need a visual muscle which is constrained to 2 nodes:
The two nodes are linearly constrained to the muscle collision mesh, which instead of being a large mesh, is just a small one that is only there to keep the visual muscle geometry in place, and provide an axis for the nodes to be constrained to.
I could then use the linear motor that comes with the sliding constraint to move the nodes along the axis. Can anyone see any problems with this? My initial problem is that the smaller collision mesh is a bit volatile and seems to move around all over the place...
I don't have any experience with Bullet. However, there is a large academic community that simulates human motion by modeling the human as a system of rigid bodies. In these simulations, the human is actuated by muscles.
The muscles used in such simulations are modeled to generate force in a physiological way. The amount of force a muscle can produce at any given instant depends on its length and the rate at which its length is changing. Here is a paper that describes a fairly complex muscle model that biomechanists might use: http://nmbl.stanford.edu/publications/pdf/Millard2013.pdf.
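To make that concrete, here is a toy sketch of a length- and velocity-dependent contractile force you could apply equally and oppositely to the two attachment nodes each step (a simplification for illustration, not the Millard model from the paper; the constants and the 0.3 shortening factor are made up):

    import numpy as np

    def muscle_force(p_a, p_b, v_a, v_b, rest_length, activation, k=200.0, c=20.0):
        # Crude spring-damper whose target length shrinks with activation:
        # force depends on the muscle's current length and its rate of length change.
        p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
        v_a, v_b = np.asarray(v_a, float), np.asarray(v_b, float)
        d = p_b - p_a
        length = np.linalg.norm(d)
        direction = d / length                            # unit vector from node A towards node B
        target = rest_length * (1.0 - 0.3 * activation)   # activation shortens the muscle
        lengthening_rate = np.dot(v_b - v_a, direction)   # d(length)/dt
        magnitude = k * (length - target) + c * lengthening_rate
        return magnitude * direction                      # force on A; apply the negative to B

Applying something like this as an external force on the two nodes each step sidesteps the need to scale any collision mesh for the muscle at all.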
Another complication with modeling muscles that comes up in biomechanical simulations is that the path of a muscle must be able to wrap around joints (such as the knee). This is what you are trying to get at when you mention collisions along a muscle. This is called muscle wrapping. See http://www.baylor.edu/content/services/document.php/41153.pdf.
I'm a graduate student in a lab that does simulations of humans involving many muscles. We use the multibody dynamics library (physics engine) Simbody (http://github.com/simbody/simbody), which allows one to define force elements that act along a path. Such paths can be defined in pretty complex ways: they could wrap around many different surfaces. To simulate muscle-driven human motion, we use OpenSim (http://opensim.stanford.edu), which in turn uses Simbody to simulate the physics.

Resources