Data structure for circular sector in robot vision - data-structures

I'm trying to build a model of the 360-degree surrounding environment from a continuously rotating distance sensor (like a radar). I need a data structure that lets me quickly compute a strategy to bring the robot to the first obstacle-free point (or to the point where the nearest obstacle is farthest away).
I thought of an array of 360 numerical elements, in which each element represents the detected distance at that degree of the circumference.
Is there a name for this data structure (used in this way)?
Are there better representations for the situation I described?
The main language for the controller is Java.

It sounds like you are aware that your range data is effectively in polar coordinates.
What makes such 360° data distinctive is its circular, “wrap-around” nature.
Many people end up writing their own custom implementation around this data. There is a lot of theory in the robotics literature built on it for smoothing, segmenting, finding features, etc. (for example: “Line Extraction in 2D Range Images for Mobile Robotics”).
Practically speaking, you might then want to consider checking out some robotics libraries, such as ARIA. Another very good place to start is to use Webots to simulate things - including range data - before transferring to a physical robotics platform.
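For the array-of-ranges idea, the wrap-around indexing is the only subtle part; everything else is a plain Java array. Here is a minimal sketch (the class and method names are my own invention, not from any library):

```java
// Sketch of a circular range buffer, assuming one reading per degree.
// Class and method names here are hypothetical, not from any library.
public class RangeScan {
    private final double[] distances = new double[360];

    // Wrap-around indexing: bearings -10 and 350 refer to the same cell.
    public double get(int bearingDegrees) {
        int i = ((bearingDegrees % 360) + 360) % 360;
        return distances[i];
    }

    public void set(int bearingDegrees, double distance) {
        int i = ((bearingDegrees % 360) + 360) % 360;
        distances[i] = distance;
    }

    // Return the bearing whose reading is farthest (the "clearest" direction).
    public int clearestBearing() {
        int best = 0;
        for (int d = 1; d < 360; d++) {
            if (distances[d] > distances[best]) best = d;
        }
        return best;
    }
}
```

The double-modulo trick handles negative bearings, which Java's `%` operator would otherwise leave negative.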

Related

Simulating contraction of a muscle in a skeleton

Using spherical nodes, cylindrical bones, and cone-twist constraints, I've managed to create a simple skeleton in three dimensions. I'm using an offshoot of the Bullet physics library (Physijs by @chandlerprall, along with three.js).
Now I'd like to add muscles. I've been trying for the last two days to get some sort of sliding constraint or generic 6-DOF constraint to get the muscle to be able to contract and pull its two nodes towards one another.
I'm getting all sorts of crazy results, and I'm beginning to think that I'm going about this in the wrong way. I don't think I can simply use two cone twist constraints and then scale the muscle along its length-wise axis, because scaling collision meshes is apparently fairly expensive.
All I need is a 'muscle' which can attach to two nodes and 'contract' to pull in both its nodes.
Can anyone provide some advice on how I might best approach this using the bullet engine (or really, any physics engine)?
EDIT: What if I don't need collisions to occur for the muscle? Say I just need a visual muscle which is constrained to 2 nodes:
The two nodes are linearly constrained to the muscle collision mesh, which instead of being a large mesh, is just a small one that is only there to keep the visual muscle geometry in place, and provide an axis for the nodes to be constrained to.
I could then use the linear motor that comes with the sliding constraint to move the nodes along the axis. Can anyone see any problems with this? My initial problem is that the smaller collision mesh is a bit volatile and seems to move around all over the place...
I don't have any experience with Bullet. However, there is a large academic community that simulates human motion by modeling the human as a system of rigid bodies. In these simulations, the human is actuated by muscles.
The muscles used in such simulations are modeled to generate force in a physiological way: the amount of force a muscle can produce at any given instant depends on its length and on the rate at which its length is changing. Here is a paper that describes a fairly complex muscle model that biomechanists might use: http://nmbl.stanford.edu/publications/pdf/Millard2013.pdf.
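To make the length- and velocity-dependence concrete, here is a toy sketch of the idea; the curve shapes and constants are crude illustrative assumptions of my own, not the Millard model:

```java
// Toy sketch: muscle force depends on activation, length, and
// contraction velocity. The specific curves below are illustrative
// assumptions, NOT a physiological (e.g. Millard) muscle model.
public class ToyMuscle {
    final double maxForce;       // peak isometric force, N
    final double optimalLength;  // length at which force peaks, m
    final double maxVelocity;    // max shortening velocity, m/s

    ToyMuscle(double maxForce, double optimalLength, double maxVelocity) {
        this.maxForce = maxForce;
        this.optimalLength = optimalLength;
        this.maxVelocity = maxVelocity;
    }

    // Bell-shaped force-length factor: peaks at the optimal length.
    double forceLength(double length) {
        double x = (length - optimalLength) / optimalLength;
        return Math.exp(-x * x / 0.1);
    }

    // Force-velocity factor: shortening faster produces less force.
    double forceVelocity(double shorteningVelocity) {
        return Math.max(0.0, 1.0 - shorteningVelocity / maxVelocity);
    }

    // activation in [0, 1] comes from the controller
    // (excitation/activation dynamics are omitted here).
    double force(double activation, double length, double shorteningVelocity) {
        return activation * maxForce
             * forceLength(length) * forceVelocity(shorteningVelocity);
    }
}
```

In a physics engine, you would apply this force as an equal and opposite pair along the line between the muscle's two attachment nodes at every simulation step.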
Another complication with modeling muscles that comes up in biomechanical simulations is that the path of a muscle must be able to wrap around joints (such as the knee). This is what you are trying to get at when you mention collisions along a muscle. This is called muscle wrapping. See http://www.baylor.edu/content/services/document.php/41153.pdf.
I'm a graduate student in a lab that does simulations of humans involving many muscles. We use the multibody dynamics library (physics engine) Simbody (http://github.com/simbody/simbody), which allows one to define force elements that act along a path. Such paths can be defined in pretty complex ways: they could wrap around many different surfaces. To simulate muscle-driven human motion, we use OpenSim (http://opensim.stanford.edu), which in turn uses Simbody to simulate the physics.

Algorithm, tool or technique to represent 3D probability density functions on space

I'm working on a computer vision project (OpenCV 2.4 in C++). In this project I'm trying to detect certain features to build a map (an internal representation) of the surrounding world.
The information I have available is the camera pose (a 6D vector with 3 position and 3 angular values), calibration values (focal length, distortion, etc.), and the features detected on the object being tracked (these features are basically the contour of the object, but that doesn't really matter).
Since the camera pose, the positions of the features, and other variables are subject to errors, I want to model the object as a 3D probability density function (giving the probability of finding the "object" at a given 3D point in space). This is important because each contour has an associated probability of being an actual object contour rather than a noise contour (bear with me).
Example:
If the object were a sphere, I would detect a circle (its contour). Since I know the camera pose but have no depth information, the internal representation of that object should be a fuzzy cylinder (or a cone, if the camera's perspective is included, but that's not relevant). If new information becomes available (new images from a different location), a new contour would be detected and its own fuzzy cylinder merged with the previous data. We should then have a region where the probability of finding the object is greater in some areas and weaker elsewhere. As new information arrives, the model should converge to the original object's shape.
I hope the idea is clear now.
This model should be able to:
Grow dynamically if needed.
Update efficiently as new observations are made (strengthening the probability in areas observed multiple times and weakening it otherwise). Ideally the system should be able to update in real time.
Now the question:
How can I computationally represent this kind of fuzzy information in such a way that I can perform these tasks on it?
Any suitable algorithm, data structure, c++ library or tool would help.
I'll answer with the computer vision equivalent of Monty Python: "SLAM, SLAM, SLAM, SLAM!" :-) I'd suggest starting with Sebastian Thrun's tome.
However, there's older work on the Bayesian side of active computer vision that's directly relevant to your question of geometry estimation, e.g. Whaite and Ferrie's seminal IEEE paper on uncertainty modeling (Whaite, P. and Ferrie, F. (1991). From uncertainty to visual exploration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(10):1038–1049). For a more general (and perhaps mathematically neater) view of this subject, see also chapter 4 of D.J.C. MacKay's Ph.D. thesis.
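As a concrete starting point for the "fuzzy region that strengthens with repeated observations" requirement, the SLAM literature's standard tool is an occupancy grid updated in log-odds. A minimal sketch (in Java for illustration; the grid layout and the idea of a per-voxel sensor probability are my assumptions, not tied to any particular library):

```java
// Minimal sketch of a 3D occupancy grid with log-odds updates, the
// standard technique for fusing uncertain observations of space.
// Grid dimensions and sensor-model values are placeholder assumptions.
public class OccupancyGrid3D {
    private final double[] logOdds;  // 0.0 == probability 0.5 (unknown)
    private final int nx, ny, nz;

    public OccupancyGrid3D(int nx, int ny, int nz) {
        this.nx = nx; this.ny = ny; this.nz = nz;
        this.logOdds = new double[nx * ny * nz];
    }

    private int index(int x, int y, int z) {
        return (z * ny + y) * nx + x;
    }

    // Fuse one observation: p is the sensor's belief that this voxel is
    // occupied. Adding log-odds means repeated hits accumulate belief
    // and misses (p < 0.5) decay it -- exactly the "stronger where
    // observed multiple times" behaviour asked for.
    public void update(int x, int y, int z, double p) {
        logOdds[index(x, y, z)] += Math.log(p / (1.0 - p));
    }

    // Recover the posterior occupancy probability of a voxel.
    public double probability(int x, int y, int z) {
        double l = logOdds[index(x, y, z)];
        return 1.0 - 1.0 / (1.0 + Math.exp(l));
    }
}
```

A dense array won't "grow dynamically", but the same log-odds update drops straight into a sparse structure such as an octree, which is how libraries like OctoMap handle large volumes.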

What's the best depth map generation algorithm?

I'm working on a 2D-to-3D application project and I'm looking for a method to produce the depth map of a single input image, without any other external information. I know this is a sort of "artificial intelligence" matter, but maybe an efficient algorithm exists.
At the moment I've found this one: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109.7959&rep=rep1&type=pdf but I'm wondering if there is a better method before I start implementing. Suggestions? Thanks!
I've written quite a few automatic depth map generators. I don't think there's one that's better than all others in all cases; it all depends on the stereo pair you're starting with. Personally, I think a depth map generator based on a local method (window- or block-based) with an edge-preserving smoother is probably the best all-around choice.
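To illustrate what a local (window-based) method does, here is a bare-bones sum-of-absolute-differences block matcher; real generators add sub-pixel refinement and the edge-preserving smoothing pass, both omitted in this sketch:

```java
// Sketch of a local (window-based) disparity search: for each pixel in
// the left image, slide a block along the same scanline of the right
// image and keep the offset with the smallest sum of absolute
// differences (SAD). Images are grayscale 2D int arrays; rectified
// input is assumed, and no smoothing step is included.
public class BlockMatcher {
    public static int[][] disparityMap(int[][] left, int[][] right,
                                       int window, int maxDisparity) {
        int h = left.length, w = left[0].length;
        int[][] disp = new int[h][w];
        int r = window / 2;
        for (int y = r; y < h - r; y++) {
            for (int x = r + maxDisparity; x < w - r; x++) {
                int bestD = 0;
                long bestCost = Long.MAX_VALUE;
                for (int d = 0; d <= maxDisparity; d++) {
                    long cost = 0;
                    for (int dy = -r; dy <= r; dy++)
                        for (int dx = -r; dx <= r; dx++)
                            cost += Math.abs(left[y + dy][x + dx]
                                           - right[y + dy][x + dx - d]);
                    if (cost < bestCost) { bestCost = cost; bestD = d; }
                }
                disp[y][x] = bestD; // larger disparity == closer to camera
            }
        }
        return disp;
    }
}
```

Note this needs a stereo pair; for the single-image case in the question you would still need something like the learning-based approach in the linked paper to hallucinate the second view or the depths directly.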
In any case, on this page:
depth map generation software
you can find depth map generator software based on optical flow, weight-based windows, graph cuts, and many other things that relate to depth map generation and lenticular creation. The best part is that it's all free.
For 2d-to-3d conversion (which is closer to what you are asking), there's a piece of software called DMAG4 that uses a sparsely populated depth map (typically done in Gimp with the paint brush) to indicate the main depths, and then fills the unfilled areas using interpolation while maintaining the edges of the objects (edge-preserving).
DMAG4 can be found here (it's free to use):
2d to 3d conversion software DMAG4
Another approach to 2d-to-3d conversion is to use a sculpting program like Gimpel3d or Blender, both free. Clearly, this goes beyond depth maps, since you're essentially creating a 3d scene in which you can then move around (using the camera movement in Blender). This is often referred to as "camera mapping".
Well, I have recently come upon this:
http://make3d.cs.cornell.edu/code.html
which comes with code, although the license might be too restrictive
("Noncommercial — You may not use this work for commercial purposes").
The gallery is impressive:
http://make3d.stanford.edu/images/showall

Convert polygons into mesh

I have a lot of polygons. Ideally, the polygons must not overlap one another, but they can be located adjacent to one another.
In practice, though, I have to allow for slight polygon overlap (defined by a certain tolerance), because all these polygons are obtained from user hand-drawn input, which is not as machine-precise as I would like.
My question is: are there any software library components that:
Allows one to input a range of polygons
Check whether the polygons overlap by more than a prespecified tolerance
If so, stop; otherwise, continue
Create a mesh, in terms of coordinates and elements, for the polygons by grouping common vertices and edges together?
More importantly, link the mesh edges back to the original polygons' edges?
Or has anyone tackled this issue before?
This issue is the daily "bread" of GIS applications - it is exactly what is done there. We also learned about it in a GIS course. Look into how GIS systems address this issue. E.g. ArcGIS defines so-called topology rules and has functions to check whether the edited features are topologically correct. See http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Topology_rules
This is pretty long, only because the question is so big. I've tried to group my comments based on your bullet points.
Components to draw polygons
My guess is that you'll have limited success without providing more information - a component to draw polygons will be very much coupled to the language and UI paradigm you are using for the rest of your project, i.e. code for a web component will look very different from a native component.
Perhaps an alternative is to separate this element of the process out from the rest of what you're trying to do. There are some absolutely fantastic pre-existing editors that you can use to create 2d and 3d polygons.
Inkscape is an example of a vector graphics editor that makes it easy to enter 2d polygons, and has the advantage of producing output SVG, which is reasonably easy to parse.
In three dimensions Blender is an open source editor that can be used to produce arbitrary geometries that can be exported to a number of formats.
If you can use a Google Maps API (possibly in a native HTML rendering control), and you are interested in adding spatial points on a map overlay, you may be interested in a related click-to-draw polygon question on Stack Overflow. From past experience, other map APIs like OpenLayers support similar approaches.
Check whether polygons are overlapped
Thomas T made the point in his answer that there are families of related predicates that can be used to address this and related queries. If you are literally just looking for overlaps and other set-theoretic operations (union, intersection, set difference) in two dimensions, you can use the General Polygon Clipper.
You may also need to consider the slightly more general problem of two polygons that don't overlap or share a vertex when they should. You can use a Minkowski sum to dilate (enlarge) two- and three-dimensional polygons to avoid such problems. The Computational Geometry Algorithms Library has robust implementations of these algorithms.
I think it's more likely that you are really looking for a piece of software that can perform vertex welding. Christer Ericson's book Real-Time Collision Detection includes an extensive and very readable description of the basics in this field, and also of related issues such as edge snapping, crack detection, and T-junctions. However, even though code snippets are included in that book, I know of no ready-made library that addresses these problems; in particular, no complete implementation is given for anything beyond basic vertex welding.
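For reference, the core remapping idea behind basic vertex welding fits in a few lines. This naive quadratic sketch is my own (not from Ericson's book); production code would bucket vertices in a spatial hash grid to avoid the O(n²) comparison:

```java
import java.util.*;

// Sketch of basic vertex welding: collapse vertices that lie within a
// tolerance of each other onto a single representative, producing a
// remap table you then apply to every polygon's vertex indices.
// Naive O(n^2) pass for clarity; use a spatial hash grid for speed.
public class VertexWelder {
    // vertices[i] = {x, y}; returns remap[i] = index of surviving vertex.
    public static int[] weld(double[][] vertices, double tolerance) {
        int n = vertices.length;
        int[] remap = new int[n];
        Arrays.fill(remap, -1);
        double tol2 = tolerance * tolerance;
        for (int i = 0; i < n; i++) {
            if (remap[i] != -1) continue; // already welded onto someone
            remap[i] = i;                 // i survives as a representative
            for (int j = i + 1; j < n; j++) {
                if (remap[j] != -1) continue;
                double dx = vertices[i][0] - vertices[j][0];
                double dy = vertices[i][1] - vertices[j][1];
                if (dx * dx + dy * dy <= tol2) remap[j] = i; // weld j onto i
            }
        }
        return remap;
    }
}
```

The remap table is also what lets you keep the link back to the original polygons' edges (point 5 of the question): an edge (a, b) in the input becomes (remap[a], remap[b]) in the mesh, so the correspondence is explicit.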
Obviously, the major 3D packages (Blender, Maya, Max, Rhino) all include built-in tools to solve this problem.
Group polygons based on vertices
From past experience, this turned out to be one of the most time-consuming parts of developing software to solve problems in this area. It requires a reasonable understanding of graph theory and of algorithms to traverse boundaries. It is worth relying on a solid geometry or graph library to do the heavy lifting for you. In the past I've had success with igraph.
Link the updated polygons back to the originals.
Again, from past experience, this is just a case of careful bookkeeping and some very careful up-front design of your mesh classes. I'd like to give more advice, but even after spending a big chunk of the last six months on this, I'm still struggling to find a "nice" way to do it.
Other Comments
If you're interacting with users, I would strongly recommend avoiding this issue where possible by using an editor that "snaps", rounding all user-entered points onto a grid. This should significantly reduce the amount of work you have to do.
Yes, you can use OGR, which has Python bindings. Specifically, the Geometry class has an Intersects method. I don't fully understand what you want in points 4 and 5.

Geo-region data for countries/states/oceans

I'm developing an application where entities are located at positions on Earth. I want to have a set of data from which I can determine what region(s) a point is contained within.
Regions may be of types:
Continent
Country
Lake
Sea
DMZ
Desert
Ice Shelf
...and so forth.
I'm envisioning representing each region as a polygon. For any given point, I would test to see if it is contained in each polygon. Alternative ideas are very welcome.
I am also hoping to find public domain data sets that contain some or all of these boundaries.
Some of these polygons are going to be enormously detailed (possibly more detailed than I need), so I need tips on performing these calculations efficiently. Methods for simplifying 2D polygons would also be useful, I expect. What are the best practices for these kinds of things?
Can anyone recommend any good resources of this data, any particular programming approaches or existing software libraries that do this kind of thing?
EDIT
I should point out that the data set of regions will be fairly static, so precomputation is a good option if it improves performance.
If you're on a plane, the common algorithm is to cast a random straight half-line from your point and count the number of intersection points with the given polygon. If it is odd, you're inside; if it is even, you're outside. You have to beware of vertices and of numerical inaccuracies.
Now, you're on a sphere, so you can project it onto a plane (the actual projection you use may depend on the polygon) and do the above.
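The ray-casting test described above fits in a few lines once the polygon is projected onto a plane. A sketch (the class name is hypothetical; vertices are assumed to be in order, either winding):

```java
// Sketch of the ray-casting (even-odd) point-in-polygon test: cast a
// horizontal half-line to the right of the point and count how many
// polygon edges it crosses. Assumes planar (already projected)
// coordinates and vertices listed in order around the polygon.
public class PointInPolygon {
    public static boolean contains(double[] xs, double[] ys,
                                   double px, double py) {
        boolean inside = false;
        int n = xs.length;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            // Does edge (j -> i) straddle the horizontal line through py?
            boolean straddles = (ys[i] > py) != (ys[j] > py);
            if (straddles) {
                // x-coordinate where the edge crosses that line.
                double xAtPy = xs[j]
                    + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
                if (px < xAtPy) inside = !inside; // crossing to the right
            }
        }
        return inside;
    }
}
```

The strict/non-strict comparison pattern (`>` on both ends) is what sidesteps most of the vertex edge cases the answer warns about, though degenerate inputs still deserve care. For the enormously detailed polygons mentioned in the question, you would first reject candidates with a precomputed bounding box per region before running this test.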
A great resource is Natural Earth.
Natural Earth is a public domain map dataset available at 1:10m, 1:50m, and 1:110 million scales. Featuring tightly integrated vector and raster data, with Natural Earth you can make a variety of visually pleasing, well-crafted maps with cartography or GIS software.
The data is provided as ESRI Shapefiles. There are many Shapefile libraries in existence.
If you can't find support for Shapefiles in your programming languages, this PDF details the file format.