Simulating contraction of a muscle in a skeleton - three.js

Using spherical nodes, cylindrical bones, and cone-twist constraints, I've managed to create a simple skeleton in three dimensions. I'm using an offshoot of the Bullet physics library (Physijs by @chandlerprall, along with three.js).
Now I'd like to add muscles. I've been trying for the last two days to get some sort of sliding constraint or generic 6-DOF constraint to get the muscle to be able to contract and pull its two nodes towards one another.
I'm getting all sorts of crazy results, and I'm beginning to think that I'm going about this in the wrong way. I don't think I can simply use two cone twist constraints and then scale the muscle along its length-wise axis, because scaling collision meshes is apparently fairly expensive.
All I need is a 'muscle' which can attach to two nodes and 'contract' to pull in both its nodes.
Can anyone provide some advice on how I might best approach this using the bullet engine (or really, any physics engine)?
EDIT: What if I don't need collisions to occur for the muscle? Say I just need a visual muscle which is constrained to 2 nodes:
The two nodes are linearly constrained to the muscle collision mesh, which instead of being a large mesh, is just a small one that is only there to keep the visual muscle geometry in place, and provide an axis for the nodes to be constrained to.
I could then use the linear motor that comes with the sliding constraint to move the nodes along the axis. Can anyone see any problems with this? My initial problem is that the smaller collision mesh is a bit volatile and seems to move around all over the place...

I don't have any experience with Bullet. However, there is a large academic community that simulates human motion by modeling the human as a system of rigid bodies. In these simulations, the human is actuated by muscles.
The muscles used in such simulations are modeled to generate force in a physiological way. The amount of force a muscle can produce at any given instant depends on its length and the rate at which its length is changing. Here is a paper that describes a fairly complex muscle model that biomechanists might use: http://nmbl.stanford.edu/publications/pdf/Millard2013.pdf.
Another complication with modeling muscles that comes up in biomechanical simulations is that the path of a muscle must be able to wrap around joints (such as the knee). This is what you are trying to get at when you mention collisions along a muscle. This is called muscle wrapping. See http://www.baylor.edu/content/services/document.php/41153.pdf.
I'm a graduate student in a lab that does simulations of humans involving many muscles. We use the multibody dynamics library (physics engine) Simbody (http://github.com/simbody/simbody), which allows one to define force elements that act along a path. Such paths can be defined in pretty complex ways: they could wrap around many different surfaces. To simulate muscle-driven human motion, we use OpenSim (http://opensim.stanford.edu), which in turn uses Simbody to simulate the physics.
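To make the basic idea concrete: rather than a constraint, a muscle can be treated as a force element that pulls its two attachment nodes toward each other, scaled by its current length and shortening velocity. Below is a heavily simplified sketch (plain Python/NumPy; the curves, names, and numbers are illustrative placeholders, not taken from the Millard model, Simbody, or OpenSim):

    import numpy as np

    def muscle_force(p1, v1, p2, v2, activation, f_max=100.0, l_opt=1.0, v_max=5.0):
        """Toy muscle force: magnitude depends on length and lengthening rate,
        applied equal and opposite along the line between the attachment points."""
        d = p2 - p1
        length = np.linalg.norm(d)
        axis = d / length                       # unit vector from node 1 toward node 2
        d_length = np.dot(v2 - v1, axis)        # positive when the attachments separate

        # crude force-length curve: strongest at the optimal length, fading away from it
        fl = max(0.0, 1.0 - ((length - l_opt) / l_opt) ** 2)
        # crude force-velocity curve: weaker while shortening quickly
        fv = max(0.0, 1.0 + d_length / v_max)

        magnitude = activation * f_max * fl * fv
        return magnitude * axis, -magnitude * axis   # forces on node 1 and node 2

    # usage: each physics step, apply f1 to node 1 and f2 to node 2
    f1, f2 = muscle_force(np.zeros(3), np.zeros(3),
                          np.array([0.0, 1.2, 0.0]), np.zeros(3), activation=0.8)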

Related

Creating a curve envelope (non-signal application)

I'm working on developing a transportation engineering tool called a swept path analyzer. Briefly, as a vehicle turns at a roadway intersection, it may steer in a simple arc, but the body moves in a more complex fashion, as described in Ackermann Steering theory.
The tool I'm using implements Ackermann Steering and generates the 'tracks' a given vehicle creates as it moves through a curve. These tracks correspond to specific points on the vehicle: the path the center of the vehicle follows, the paths the wheels themselves follow, and the paths the outermost points on the vehicle follow (usually the outside corners).
As I work on my own implementation, I'm faced with the issue of generating a total vehicle 'envelope' which describes the path taken by the outermost point of the vehicle as it makes its turn, at every point along its turning path. This generates a complex shape comprised of several arcs and lines.
I'm familiar with signal envelope theory and the mathematical approach. However, I feel that would be unnecessarily complex, and a better approach would be generating the envelope from points on the vehicle sampled as the vehicle is driven along its turning path.
This seems like a boundary problem that I've seen solutions for in game development, and I could pursue that avenue, but as I look at this, it makes me think there has to be a simpler approach - perhaps I'm not looking at the problem correctly.
The images below provide a little visual context:

Box2d custom shape or bodies joint

Let's say I want to create a simple physics object with the shape of a "matryoshka" or a basic snowman. As I see it, I have two options: 1. create two circle (or maybe custom) bodies and connect them with a weld joint, or 2. create one body with two circle (or maybe custom) shapes in it.
So the question is: which is more expensive for the CPU, bodies connected with joints or complex-shaped bodies? If I have one such object, maybe I won't feel the difference in performance, but what if I have many objects of that type?
I know that joints are expensive, but maybe custom-shaped bodies are even more expensive?
I'm working with Box2dFlash.
Since the question is about CPU use: joints use more CPU than shapes alone with no joint. Circle shapes can be more efficient than polygons in many cases, but not all.
For CPU optimization, use as few bodies and as simple polygons as possible. For every normal defined in a polygon, a calculation may need to be performed if an object overlaps with another. For circles, a maximum of one calculation is needed.
As an aside, unless you are already experiencing performance problems, you should not worry about whether your shapes make ideal use of the CPU. Instead, ask whether the simulation they create is the one you want to happen. Box2D contains many special-case optimizations to make it run smoothly. You can also decrease its accuracy per tick by lowering the velocity and position iteration counts. This will have a far greater effect on efficiency than geometry, unless your geometry is extremely complex.
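As a rough sketch of that iteration trade-off, here is the equivalent in pybox2d (the Python port, used purely for illustration; as far as I know, Box2dFlash's world step takes the corresponding iteration arguments):

    from Box2D import b2World

    # Lower iteration counts = less accurate but cheaper solver work per tick.
    TIME_STEP = 1.0 / 60
    VEL_ITERS, POS_ITERS = 6, 2   # commonly suggested defaults; reduce them to save CPU

    world = b2World(gravity=(0, -10), doSleep=True)
    ball = world.CreateDynamicBody(position=(0, 4))
    ball.CreateCircleFixture(radius=0.5, density=1.0, friction=0.3)

    for _ in range(120):          # two seconds of simulation
        world.Step(TIME_STEP, VEL_ITERS, POS_ITERS)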
If you don't want the two sections of the snowman's body to move in relation to each other, then a single body is the way to go. You will ideally be defining the collision shape(s) manually anyway, so there is absolutely no gain to be had using a weld.
Think of it like this: If you use (a) weld(s), the collision shapes will be no less complicated than if you simply made an approximation of the collision geometry for a single body; it would still require either multiple collision shapes, or a single complex collision shape regardless of your choice. The weld will simply be adding an extra step.
If you need the snowman's body to be dynamic in any way (break apart or flex), then a weld or regular joint is the way to go.

Convert polygons into mesh

I have a lot of polygons. Ideally, the polygons must not overlap one another, but they can be located adjacent to one another.
But practically, I would have to allow for slight polygon overlap (defined by a certain tolerance), because all these polygons are obtained from user hand-drawn input, which is not as machine-precise as I want it to be.
My question is: are there any software library components that:
Allow one to input a range of polygons
Check whether the polygons overlap by more than a prespecified tolerance
If yes, then stop; otherwise, continue
Create a mesh, in terms of coordinates and elements, for the polygons by grouping common vertices and edges together?
More importantly, link the mesh edges back to the original polygons' edges?
Or has anyone tackled this issue before?
This issue is the daily "bread" of GIS applications - it is exactly what is done there. We also learned about it in a GIS course. Look into how GIS systems address this issue. E.g. ArcGIS defines so-called topology rules and has functions to check whether edited features are topologically correct. See http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Topology_rules
This is pretty long, only because the question is so big. I've tried to group my comments based on your bullet points.
Components to draw polygons
My guess is that you'll have limited success without providing more information - a component to draw polygons will be very much coupled to the language and UI paradigm you are using for the rest of your project, ie. code for a web component will look very different to a native component.
Perhaps an alternative is to separate this element of the process out from the rest of what you're trying to do. There are some absolutely fantastic pre-existing editors that you can use to create 2d and 3d polygons.
Inkscape is an example of a vector graphics editor that makes it easy to enter 2d polygons, and has the advantage of producing output SVG, which is reasonably easy to parse.
In three dimensions Blender is an open source editor that can be used to produce arbitrary geometries that can be exported to a number of formats.
If you can use a Google Maps API (possibly in a native HTML rendering control), and you are interested in adding spatial points on a map overlay, you may be interested in a related click-to-draw polygon question on Stack Overflow. From past experience, other map APIs like OpenLayers support similar approaches.
Check whether polygons are overlapped
Thomas T made the point in his answer that there are families of related predicates that can be used to address this and related queries. If you are literally just looking for overlaps and other set-theoretic operations (union, intersection, set difference) in two dimensions, you can use the General Polygon Clipper.
You may also need to consider the slightly more general problem of two polygons that don't overlap or share a vertex when they should. You can use a Minkowski sum to dilate (enlarge) two- and three-dimensional polygons to avoid such problems. The Computational Geometry Algorithms Library has robust implementations of these algorithms.
I think it's more likely that you are really looking for a piece of software that can perform vertex welding. Christer Ericson's book Real-Time Collision Detection includes an extensive and very readable description of the basics in this field, and also of the related issues of edge snapping, crack detection, T-junctions and more. However, even though code snippets are included in that book, I know of no ready-made library that addresses these problems; in particular, no complete implementation is given for anything beyond basic vertex welding.
Obviously, the major 3D packages (Blender, Maya, Max, Rhino) all include built-in tools to solve this problem.
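For basic vertex welding, the core idea is small enough to sketch (Python, purely illustrative): snap every vertex onto a grid whose cell size matches your tolerance, so nearly-coincident vertices collapse into one shared vertex.

    def weld_vertices(polygons, tol=1e-3):
        """Basic vertex welding: nearly-coincident vertices collapse onto one shared
        vertex by snapping to a grid of cell size tol. Returns a shared vertex list
        plus each polygon as indices into that list. Illustrative only."""
        vertices = []          # unique welded vertices
        index_of = {}          # grid cell -> index into vertices
        welded_polys = []
        for poly in polygons:                      # poly: list of (x, y) tuples
            indices = []
            for x, y in poly:
                key = (round(x / tol), round(y / tol))
                if key not in index_of:
                    index_of[key] = len(vertices)
                    vertices.append((x, y))
                indices.append(index_of[key])
            welded_polys.append(indices)
        return vertices, welded_polys

    # usage: two squares sharing an edge, drawn with slightly different coordinates
    verts, polys = weld_vertices([
        [(0, 0), (1, 0), (1, 1), (0, 1)],
        [(1.0004, 0), (2, 0), (2, 1), (1.0004, 1)],
    ], tol=1e-3)

Note that plain grid snapping can miss near-coincident vertices that straddle a cell boundary; Ericson's book covers the more robust neighbour-checking version.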
Group polygons based on vertices
From past experience, this turned out to be one of the most time consuming parts of developing software to solve problems in this area. It requires reasonable understanding of graph theory and algorithms to traverse boundaries. It is worth relying upon a solid geometry or graph library to do the heavy lifting for you. In the past I've had success with igraph.
Link the updated polygons back to the originals.
Again, from past experience, this is just a case of careful bookkeeping, and some very careful design of your mesh classes up-front. I'd like to give more advice, but even after spending a big chunk of the last six months on this, I'm still struggling to find a "nice" way to do this.
Other Comments
If you're interacting with users, I would strongly recommend avoiding this issue where possible by using an editor that "snaps", rounding all user entered points onto a grid. This will hopefully significantly reduce the amount of work that you have to do.
Yes, you can use OGR. It has python bindings. Specifically, the Geometry class has an Intersects method. I don't fully understand what you want in points 4 and 5.
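To make that concrete, here is a minimal sketch with the GDAL/OGR Python bindings (the area-versus-tolerance comparison at the end is my own addition, not something OGR decides for you):

    from osgeo import ogr

    poly_a = ogr.CreateGeometryFromWkt("POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0))")
    poly_b = ogr.CreateGeometryFromWkt("POLYGON ((1 1, 3 1, 3 3, 1 3, 1 1))")

    if poly_a.Intersects(poly_b):
        overlap = poly_a.Intersection(poly_b)       # the shared region as a geometry
        print("overlap area:", overlap.GetArea())   # compare this against your tolerance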

Collisions in computer games

I have a general question regarding the techniques or approaches used in computer games to detect particle (or object) collisions. Even in simple examples, like the currently popular Angry Birds, how is it achieved programmatically that the game knows one object has hit another and works out its trajectory, while that object can hit others, and so on? I presume it is not constantly checking the states of ALL the objects in the game map...
Thanks for any hints/answers, and sorry for a slightly dumb/general question.
Games like Angry Birds use a physics engine to move objects around and do collision detection. If you want to learn more about how physics engines work, a good place to start is by reading up on Box2D. I don't know what engine Angry Birds uses, but Box2D is widely used and open-source.
For simpler games however you probably don't need a physics engine, and so the collision detection is fairly simple to do and involves two parts:
First is determining which objects to test collision between. For many small games it is actually not a bad idea to test every single object against every other object. Although your question is leery of this brute-force approach, computers are very fast at routine math like that.
For larger scenes however you will want to cull out all the objects that are too far away from the player to matter. Basically you break up the scene into zones and then first test which zone the character is in, then test for collision with every object in that zone.
One technique for doing this is called "quadtree". Another technique you could use is Binary Space Partitioning. In both cases the scene is broken up into a lot of smaller pieces that you can use to organize the scene.
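Here is a minimal sketch of that zone idea using a uniform grid (Python, purely illustrative); a quadtree or BSP refines the same principle by subdividing adaptively.

    from collections import defaultdict

    CELL = 64  # zone size in world units; tune it to the typical object size

    def build_zones(objects):
        """Bucket each object into the grid cell containing its position."""
        zones = defaultdict(list)
        for obj in objects:                     # each obj exposes .x and .y
            zones[(int(obj.x // CELL), int(obj.y // CELL))].append(obj)
        return zones

    def nearby(zones, obj):
        """Yield objects in the same cell as obj and the 8 surrounding cells."""
        cx, cy = int(obj.x // CELL), int(obj.y // CELL)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for other in zones.get((cx + dx, cy + dy), []):
                    if other is not obj:
                        yield other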
The second part is detecting the collision between two specific objects. The easiest way to do this is distance checks; just calculate how far apart the two objects are, and if they are close enough then they are colliding.
Almost as easy is to build a bounding box around the objects you want to test. It is pretty easy to check if two boxes are overlapping or not, just by comparing their coordinates.
Now bounding boxes are just rectangles; this approach could get more complex if the shapes you want to collide are represented by arbitrary polygons. Some basic knowledge of geometry is usually enough for this part, although full-featured physics engines can get into a lot more detail, telling you not just that a collision has happened, but exactly when and how it happened.
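A minimal sketch of those two tests (Python; it assumes circles expose a centre and radius, and boxes their min/max corners):

    import math

    def circles_collide(x1, y1, r1, x2, y2, r2):
        """Distance check: colliding when the centres are closer than the radii sum."""
        return math.hypot(x2 - x1, y2 - y1) <= r1 + r2

    def aabbs_overlap(min1, max1, min2, max2):
        """Axis-aligned bounding boxes overlap unless they are separated on some axis;
        min/max are (x, y) corner tuples."""
        return (min1[0] <= max2[0] and max1[0] >= min2[0] and
                min1[1] <= max2[1] and max1[1] >= min2[1])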
This is one of those hard-to-answer questions, but here's a very basic 2D example in pseudo-code.
    collision = False
    for sprite in scene:
        for othersprite in scene:
            # don't compare a sprite against itself; flag a hit when positions coincide
            if sprite is not othersprite and sprite.XY == othersprite.XY:
                collision = True
Hopefully that should be enough to get your thought muscles working!
Addendum: another improvement would be to test an area around the XY position instead of a precise location, which then gives you rough support for sprites that cover a larger area.
For 3D theory, I'll give you the article I read a while ago:
http://www.euclideanspace.com/threed/animation/collisiondetect/index.htm
num1's answer covers most of what I would say and so I voted it up. But there are a few modifications/additional points I would make:
1) Bounding boxes are actually the second easiest way to test collision. The easiest way is distance checks; just calculate how far apart the two objects are, and if they are close enough then they are colliding.
2) Another common way of organizing the scene in order to optimize collision tests (ie. an alternative to BSP) is quadtrees. In either case, basically you break up the scene into zones and then first test which zone the character is in, then test for collision with every object in that zone. That way if the level is huge you don't test against every single object in the level; you can cull out all the objects that are too far away from the player to matter.
3) I would emphasize more that Angry Birds uses a physics engine to handle its collisions. While distance checks and bounding boxes are fine for most 2D games, Angry Birds clearly did not use either of those simple methods and instead relied on collision detection from their physics engine. I don't know what physics engine is used in that game, but Box2D is widely used and open-source.

Looking for ways for a robot to locate itself in the house

I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any idea for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
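A bare-bones sketch of that loop (Python; the room size, motion noise, and the single distance-to-landmark sensor model are placeholder assumptions you would swap for your robot's real ones):

    import math, random

    N = 500
    # start: particles spread uniformly over a hypothetical 10 m x 10 m room, equal weights
    particles = [{'x': random.uniform(0, 10), 'y': random.uniform(0, 10), 'w': 1.0 / N}
                 for _ in range(N)]

    def move(particles, dx, dy, noise=0.05):
        """Shift every particle by the commanded motion, plus motor uncertainty."""
        for p in particles:
            p['x'] += dx + random.gauss(0, noise)
            p['y'] += dy + random.gauss(0, noise)

    def observe(particles, landmark, measured_dist, sigma=0.3):
        """Reweight particles by how well they agree with a measured distance to a
        known landmark, then resample so low-probability particles die off."""
        lx, ly = landmark
        for p in particles:
            err = measured_dist - math.hypot(p['x'] - lx, p['y'] - ly)
            p['w'] *= math.exp(-err * err / (2 * sigma * sigma))
        total = sum(p['w'] for p in particles) or 1.0
        chosen = random.choices(particles, weights=[p['w'] / total for p in particles], k=N)
        return [dict(c, w=1.0 / N) for c in chosen]

    def estimate(particles):
        """Weighted average of all particles as the position estimate."""
        total = sum(p['w'] for p in particles)
        return (sum(p['x'] * p['w'] for p in particles) / total,
                sum(p['y'] * p['w'] for p in particles) / total)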
A QR code poster in each room would not only make an interesting modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right) then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help do it with fewer markers.
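For reference, the relation being gestured at is the inscribed-angle theorem: if the two markers are a known distance d apart and the camera measures an angle θ between them, the camera lies on a circle through the two markers of radius R = d / (2 sin θ). A tiny sketch (Python):

    import math

    def locus_radius(marker_dist, observed_angle_rad):
        """Radius of the circle, passing through both markers, on which the camera
        must lie given the angle measured between the two markers."""
        return marker_dist / (2.0 * math.sin(observed_angle_rad))

    # markers 2 m apart, seen 30 degrees apart -> the camera is on a circle of radius 2 m
    print(locus_radius(2.0, math.radians(30)))   # 2.0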
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements and arrive at a good position estimate - you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement if you want to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for given input) there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles and then you'll need to devise a way to scan incorporating the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or using the hacked drivers if your Eee PC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure) you can also use OpenCV for detecting features in the image.
One solution would be to use a strategy similar to "flood fill" (Wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run a motor for 1 sec = xx change in proximity. With that info, you can move your bot an exact distance and continue sweeping the room using flood fill.
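A minimal sketch of the flood-fill idea on a grid map (Python; it assumes the room has already been discretised into free '.' and blocked '#' cells, which is itself the hard part):

    from collections import deque

    def coverage_cells(grid, start):
        """Breadth-first flood fill over free cells ('.'), returning every reachable
        cell; '#' marks an obstacle. grid is a list of equal-length strings."""
        rows, cols = len(grid), len(grid[0])
        seen, order, queue = {start}, [], deque([start])
        while queue:
            r, c = queue.popleft()
            order.append((r, c))
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '.' and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return order

    room = ["....",
            ".#..",
            "...."]
    print(coverage_cells(room, (0, 0)))

A real coverage planner would reorder the reachable cells into back-and-forth sweeps; the flood fill is just what enumerates every free cell the bot can reach around the obstacles.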
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot exits the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups, in my opinion. At least for a start.
Ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres; with differential GPS you can get down to the sub-10 cm range. More info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And the Arduino has lots of options for GPS modules:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above) - assuming it will handle all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate? Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-cm range and it would be cheap. You can then easily map anything you bump into.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position within the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but a vacuum cleaner that moves slowly in order to clean efficiently will have enough time to compute.
Good luck !
Use an ultrasonic sensor such as the HC-SR04, or similar.
As mentioned above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you are in with QR codes.
When you are near a wall, turn 90 degrees, move forward by the width of your robot, turn 90 degrees again (i.e. another left turn), and keep moving. I think it will help :)
