Inverse gait analysis using Processing - processing

I am developing code to do inverse gait analysis, that is, given the knee and hip joint angles, I am trying to recreate a walking animation in Processing. But I am stuck on the math of it.
So basically I am trying to make a stickman walk based on given lower-limb joint angles.
For the life of me, I can't figure out the math. Here is a sample of the data I am working with
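What the question describes is really forward kinematics: given the angles, each joint's position is the previous joint's position plus the segment length rotated by the accumulated angle. A minimal sketch in Python (the segment lengths, the angle conventions, and the screen-down y axis are my assumptions; adjust them to match the conventions of your data):

```python
import math

def leg_points(hip_x, hip_y, hip_angle, knee_angle,
               thigh_len=40.0, shin_len=40.0):
    """Forward kinematics for one leg.
    hip_angle: thigh angle from straight-down vertical (radians,
               positive swings the thigh forward).
    knee_angle: flexion relative to the thigh (radians, 0 = straight leg).
    Returns (hip, knee, ankle) as (x, y) pairs; y grows downward,
    as in Processing's screen coordinates."""
    knee_x = hip_x + thigh_len * math.sin(hip_angle)
    knee_y = hip_y + thigh_len * math.cos(hip_angle)
    # the shin's world angle is the hip angle minus the knee flexion
    ankle_x = knee_x + shin_len * math.sin(hip_angle - knee_angle)
    ankle_y = knee_y + shin_len * math.cos(hip_angle - knee_angle)
    return (hip_x, hip_y), (knee_x, knee_y), (ankle_x, ankle_y)
```

Drive this once per frame with the next row of joint-angle data and draw lines between the three returned points to get the walking stickman.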

Related

LineTrace algorithm in Processing

I'm trying to implement the LineTrace algorithm described in this article:
Linetrace Generative Art
Particularly where it says:
To trace the outline you can sample some of the nearby edges on the previous line, calculate the average direction of those edges and add a vertex to the current line along that direction. Then add some random motion to mimic free hand drawing. This seems to work quite well for a while, but there is some "inertia" that can be seen in the results—the shape adapts too slowly.
The amount of noise you add to each vertex is crucial. This noise is what drives the whole system to make interesting shapes since the tracing behaviour is always forced to attempt to replicate both the general movement and some of the random jitter as it progresses.
I'm trying to do this in Processing, and since I'm new to Processing and hazy on how vectors, edges and directions work, I don't have any idea how to start coding. I would greatly appreciate some sample code, anything to help me get underway. I'm also curious about what he means by "add some random motion to mimic free hand drawing"; is he incorporating Perlin noise somehow? Thanks in advance.
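The quoted description can be sketched fairly directly: for each vertex of the new line, average the direction of a few nearby edges on the previous line, step along that average, and add a small random offset. A rough Python sketch (the sample window, step size, jitter amount, and the fixed downward offset between lines are all my assumptions, not from the article):

```python
import math
import random

def trace_line(prev_line, n_samples=3, step=2.0, jitter=0.6, offset=6.0):
    """Generate a new polyline that loosely follows prev_line.
    prev_line: list of (x, y) vertices.
    For each new vertex: average the direction of nearby edges on the
    previous line, step along that direction, and add random jitter
    to mimic free-hand drawing. The downward offset keeps successive
    lines stacked instead of overlapping."""
    new_line = [(prev_line[0][0], prev_line[0][1] + offset)]
    for i in range(1, len(prev_line)):
        # average direction over a window of edges around index i
        lo = max(1, i - n_samples)
        hi = min(len(prev_line) - 1, i + n_samples)
        dx = sum(prev_line[j][0] - prev_line[j - 1][0] for j in range(lo, hi + 1))
        dy = sum(prev_line[j][1] - prev_line[j - 1][1] for j in range(lo, hi + 1))
        norm = math.hypot(dx, dy) or 1.0
        x, y = new_line[-1]
        new_line.append((x + step * dx / norm + random.uniform(-jitter, jitter),
                         y + step * dy / norm + random.uniform(-jitter, jitter)))
    return new_line
```

On the "random motion" question: plain uniform jitter as above already produces the effect; in Processing you could swap `random.uniform` for `noise()` to get smoother Perlin-style wander, which may well be what the author uses.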

Car Turn Detection Algorithm

I have timed sensor data for a car - such as yaw rate, absolute steering angle, etc. - and would like to detect whether the car is making a turn, left or right.
Currently, I'm thinking about using angular displacement and velocity, but I'm not sure if there's an existing robust algorithm that I could use.
I have also come across this post that hints at a method that could be used, but I'm not sure if this is exactly the one I need.
Algorithm to detect left or right turn from x,y co-ordinates
Sorry for not giving much, but I'm actually trying to get some literature that I could review and some vocabulary that could help me find better solutions to my problem. Thanks.
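As a simple baseline before diving into the literature: integrate the yaw rate to get accumulated heading change and threshold it. A hedged Python sketch (the sign convention, the 30-degree threshold, and the decay of accumulated heading during straight driving are all assumptions to tune against your data):

```python
def detect_turns(yaw_rates, dt=0.1, threshold_deg=30.0):
    """Classify turns by integrating yaw rate (deg/s) sampled every dt
    seconds. When the accumulated heading change exceeds threshold_deg,
    report a turn and reset. Positive yaw rate = left is an assumed
    convention; flip it to match your sensor.
    Returns a list of (sample_index, direction) tuples."""
    turns, heading = [], 0.0
    for i, rate in enumerate(yaw_rates):
        heading += rate * dt
        if abs(rate) < 1.0:
            heading *= 0.9  # nearly straight: let accumulated heading decay
        if heading > threshold_deg:
            turns.append((i, "left"))
            heading = 0.0
        elif heading < -threshold_deg:
            turns.append((i, "right"))
            heading = 0.0
    return turns
```

The decay step keeps slow drift (e.g. a gently curving highway) from eventually registering as a turn; the literature terms to search for are "yaw rate integration", "heading change", and "maneuver detection".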

Edge Detection Point Cloud

I am working on an application that filters a point cloud from a laser distance-measuring device. It's a small array, only 3x176x132, and I'm trying to find parts inside a bin and pick the topmost one. So far I have played around with filtering the data into a form that can be processed by more traditional vision algorithms. I have been using the Sobel operator on the distance data and normalizing it, and this is what I came up with
The same filter applied to the PMD amplitude
My problem is I feel like I am not getting enough out of the distance data. When I probe the actual height values, I see a drop of about the thickness of a part around the edges, but this is not reflected in the results. I think it has to do with the fact that the largest distance changes in the image are 800 mm while a part is only 10 mm, but I'm sure there must be a better way to filter this.
Any suggestions?
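One way to keep a 10 mm part edge from being washed out by an 800 mm background jump is to clip gradient magnitudes at roughly the part thickness before scaling, instead of normalizing by the global maximum. A sketch in Python/NumPy (the 15 mm clip value is an assumption; the naive loop is for clarity, not speed):

```python
import numpy as np

def edge_map(depth_mm, clip_mm=15.0):
    """Sobel edge magnitude on a depth image where small steps matter.
    Normalizing by the global max gradient lets an 800 mm background
    jump swamp a 10 mm part edge; clipping magnitudes at roughly the
    part thickness saturates both to full strength instead."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = np.zeros_like(depth_mm, dtype=float)
    gy = np.zeros_like(depth_mm, dtype=float)
    h, w = depth_mm.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = depth_mm[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return np.clip(mag, 0.0, clip_mm) / clip_mm  # 0..1, saturated at clip_mm
```

With this, a 10 mm part edge and the 800 mm bin wall both read as strong edges, so thresholding the result should pick up the part outlines that the globally normalized version was hiding.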

360 degree 3D view of a room using a single rotating Kinect

I am working on a research project to construct the 360 degree 3D view of a room using a single rotating Kinect placed in the center.
My current approach is to capture a 3D point cloud from the Kinect after every 2 to 5 degrees of rotation and register it against the previous clouds using the Iterative Closest Point (ICP) algorithm.
Note that we need to build the view in real time as the Kinect rotates, so we need to capture the point cloud after each small rotation of the Kinect.
However, the ICP algorithm is computationally expensive.
I am looking for a better solution to the above problem. Any help/ pointers in this direction will be appreciated.
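One cheap win before replacing ICP outright: since you control the rotation, you already know an approximate transform between consecutive clouds, and ICP implementations generally accept an initial guess (PCL's `align()`, for example, takes an optional guess matrix). Seeding ICP with the known rotation should cut the iteration count dramatically, since it only has to refine a small residual. A small sketch of building that guess (rotation about a vertical y axis is an assumption about your mounting):

```python
import numpy as np

def initial_guess(angle_deg):
    """4x4 homogeneous transform for a known rotation about the
    vertical (y) axis. Pass this to ICP as the initial alignment so
    it starts near the solution instead of at the identity."""
    a = np.radians(angle_deg)
    T = np.eye(4)
    T[0, 0], T[0, 2] = np.cos(a), np.sin(a)
    T[2, 0], T[2, 2] = -np.sin(a), np.cos(a)
    return T
```

If the Kinect's optical center is offset from the rotation axis, compose this rotation with that fixed translation (measured once) and the guess gets even closer, leaving ICP only sensor noise to correct.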
I'm not sure how familiar you are with the intersection of machine learning and computer vision. But recently, a much harder problem has been solved with advances in machine learning: generating 3D models of large areas from an unstructured collection of images. For example, this example of "building Rome in a day": see this video, as it may just blow your mind.
With your mind suitably blown, you may want to check out the machine learning techniques that allowed this computation to take place efficiently in this video.
You may want to follow up with Noah Snavely's PhD thesis and check out the algorithms that he used and other work that has been incorporated to build this system. It seems that the problem of reconstructing a single room from one rotating point must be a much easier inference problem. Then again, you may just want to check out the implementation in their software :)

Need help with circle collision and rotation? - Game Physics

Ok so I have a bunch of balls:
What I'm trying to figure out is how to make these circles:
Rotate based on the surfaces they are touching
Fix collision penetration when dealing with multiple touching objects.
EDIT: This is what I mean by rotation
Ball 0 will rotate anti-clockwise as it's leaning on Ball 3
Ball 5 will rotate clockwise as it's leaning on Ball 0
Even though solutions to this are universal, just for the record I'm using Javascript and SVG, and would prefer implementing this myself rather than using a library.
Help would be very much appreciated. Thanks! :)
Here are a few links I think would help you out on your quest:
Box2D
Advanced Character Physics
Javascript Ball Simulation
Box2D has what you're looking for, and it's open source, I believe. You can download the files and see how they do what they do in order to achieve your effect.
Let me know if this helps, trying to get better at answering questions on here. :)
EDIT:
So I went ahead and thought this out a bit more to give some insight into how I would approach it. Take a look at the image below:
Basically, compare the angles on a grid: if the ball is falling at +30 degrees compared to the ball it falls on, then rotate the ball positively. If it's falling at -30 degrees compared to the ball it falls on, then rotate the ball negatively. I'm not saying this is the correct solution, but just thinking about it, this is the way I would approach the problem off the bat.
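That heuristic only takes a few lines. Here is one possible reading of it in Python, assuming screen coordinates with y growing downward and measuring the falling ball's angle from the vertical through the supporting ball (the sign convention and the coordinate assumptions are mine, not from the answer above):

```python
import math

def rotation_direction(ball, support, threshold_deg=30.0):
    """Angle-comparison heuristic for rolling direction.
    ball, support: dicts with 'x' and 'y' (screen coords, y downward).
    Returns +1 (rotate clockwise) when the ball rests past
    +threshold_deg to the right of the support's top, -1
    (anti-clockwise) past -threshold_deg to the left, 0 when balanced."""
    angle = math.degrees(math.atan2(ball["x"] - support["x"],
                                    support["y"] - ball["y"]))
    if angle > threshold_deg:
        return 1
    if angle < -threshold_deg:
        return -1
    return 0
```

This gives a visual approximation only; it does not conserve momentum or model friction, which the physics-based answer below addresses.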
From a physics standpoint it sounds like you want to conserve both linear and angular momentum.
As a starting point, you'll want to establish ODE matrices that model both and then perform some linear algebra to solve them. I personally would use Numpy/Scipy (probably using a sparse array) for that solution. But there are many approaches (SymPy comes to mind). What modules do you want to use?
You'll want to familiarize yourself with the coefficient of restitution and the coefficient of friction, and decide whether you want to conserve kinetic energy too (do you want, or care, if they keep bouncing and rolling around forever?). You'll probably need energy matrices as well.
You'll be solving these matrices every timestep, all the while checking the condition that no two ball centers are closer than the sum of their radii (and if they are, you adjust the momentum and energy terms for a post-collision state).
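As an illustration of the per-pair step (not the full matrix formulation), here is a common positional-correction-plus-impulse sketch in Python; the dict representation and the inverse-mass weighting of the separation are my assumptions:

```python
import math

def resolve_collision(b1, b2, restitution=0.8):
    """Collision response for two balls given as dicts with keys
    x, y, vx, vy, r, m. Separates overlapping balls along the line of
    centers (weighted by mass) and applies an impulse along that normal
    scaled by the coefficient of restitution, conserving linear momentum.
    A per-pair sketch, not a full multi-body solver."""
    dx, dy = b2["x"] - b1["x"], b2["y"] - b1["y"]
    dist = math.hypot(dx, dy) or 1e-9
    overlap = b1["r"] + b2["r"] - dist
    if overlap <= 0:
        return  # not touching
    nx, ny = dx / dist, dy / dist
    # fix penetration: push the balls apart in proportion to the other's mass
    total_m = b1["m"] + b2["m"]
    b1["x"] -= nx * overlap * b2["m"] / total_m
    b1["y"] -= ny * overlap * b2["m"] / total_m
    b2["x"] += nx * overlap * b1["m"] / total_m
    b2["y"] += ny * overlap * b1["m"] / total_m
    # relative velocity along the collision normal
    rel_v = (b2["vx"] - b1["vx"]) * nx + (b2["vy"] - b1["vy"]) * ny
    if rel_v > 0:
        return  # already separating
    j = -(1 + restitution) * rel_v / (1 / b1["m"] + 1 / b2["m"])
    b1["vx"] -= j * nx / b1["m"]
    b1["vy"] -= j * ny / b1["m"]
    b2["vx"] += j * nx / b2["m"]
    b2["vy"] += j * ny / b2["m"]
```

Iterating this over all touching pairs a few times per timestep is the usual trick for the "multiple touching objects" case; adding rotation means also applying a tangential friction impulse at the contact point, which this sketch omits.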
This is just the barest of beginnings to a big project. Can I ask why you want to do this from scratch?
I would recommend checking out game physics simulation books and articles. See O'Reilly's Physics for Game Developers and the Gamasutra website, for example.
