Is there a way to dampen a collision in pymunk? I'm trying to simulate spaceships, and when they collide they shouldn't just bounce apart. Gravity is (0, 0). I want to absorb about 90% of an impact, basically as if the hull were crumpling under the collision.
One possible approach I thought of: in the post-solve callback I could read the impulse used to resolve the collision, take 90% of that, and apply a counteracting impulse so the net effect is only 10%. But I would need the angular and linear parts of the impulse separately.
I just need to absorb momentum, as opposed to a collision where no energy is lost and it is all transferred to the other object.
Edit: OK, so elasticity helps with the bounce, but the energy transfer is way too high. The energy that would transfer is supposed to be absorbed in the crumpling collision. I would still like some to transfer, just not much. I'm still thinking about the impulse, because that's what actually changes their velocities upon collision, but the amount is very different depending on whether it's a direct collision or just a clip.
You want to set the shapes' elasticity property. I believe the correct way to do so in pymunk is:
# Replace new_elasticity with the elasticity value you want,
# e.g. shape.elasticity = 0.3
shape.elasticity = new_elasticity
For each collision, the two elasticities are multiplied together, and the product determines how close to an elastic (product = 1.0) or inelastic (product = 0.0) collision the resulting collision will be.
If you want a collision to be 10% of an elastic collision, you'll want to set both elasticities to sqrt(0.1), which is approximately 0.316.
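A minimal sketch of how this could look in pymunk for one of the ships (the mass and radius are made-up placeholder values; the other ship's shape would get the same elasticity):

import pymunk

space = pymunk.Space()
space.gravity = (0, 0)

# Hypothetical ship body; mass and radius are placeholder values.
ship_body = pymunk.Body(mass=10, moment=pymunk.moment_for_circle(10, 0, 20))
ship_shape = pymunk.Circle(ship_body, 20)
# sqrt(0.1) ~= 0.316 on both shapes gives a product of about 0.1, i.e. roughly
# 90% of the bounce is absorbed in the collision.
ship_shape.elasticity = 0.316
space.add(ship_body, ship_shape)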
My question may seem trivial, but the more I read about it, the more confused I get... I have started a little project where I want to roughly track the movements of a rotating object (a basketball, to be precise).
I have a 3-axis accelerometer (low-pass-filtered) and a 3-axis gyroscope measuring °/s.
I know about the issues of a gyro, but as the measurements will only last several seconds and the angles tend to be huge, I don't care about drift and gimbal lock right now.
My gyro gives me the rotation speed around all 3 axes. As I want to integrate the acceleration twice to get the position at each time step, I wanted to convert the sensor's coordinate system into an earthbound system.
For the first try, I want to keep things simple, so I decided to go with the big standard rotation matrix.
But as my results are horrible, I wonder if this is the right way to do it. If I understood correctly, the matrix is simply 3 matrices multiplied in a certain order. As the rotation of a basketball doesn't have any "natural" order, this may not be a good idea. My sensor measures 3 angular velocities at once. If I throw them into my system "step by step" it will not be correct, since my second matrix calculates the rotation around the "new y-axis", but my sensor actually measured an angular velocity around the "old y-axis". Is that correct so far?
So how can I correctly calculate the 3D rotation?
Do I need to go for quaternions? But how do I get one from 3 different rotations? And don't I have the same issue here again?
I start with an identity matrix ((1, 0, 0), (0, 1, 0), (0, 0, 1)) multiplied with the acceleration vector to get the first movement.
Then I want to use the rotation matrix to find out where the next acceleration is really heading, so I can simply add the accelerations together.
But right now I am just too confused to find a proper way.
Any suggestions?
btw. sorry for my poor English, I am tired and (obviously) not a native speaker ;)
Thanks,
Alex
Short answer
Yes, go for quaternions and use a first-order linearization of the rotation to calculate how the orientation changes. This reduces to the following pseudocode:
float pose_initial[4]; // quaternion describing original orientation
float g_x, g_y, g_z; // gyro rates
float dt; // time step. The smaller the better.
// quaternion with "pose increment", calculated from the first-order
// linearization of continuous rotation formula
delta_quat = {1, 0.5*dt*g_x, 0.5*dt*g_y, 0.5*dt*g_z};
// final orientation at start time + dt
pose_final = quaternion_hamilton_product(pose_initial, delta_quat);
This solution is used in PixHawk's EKF navigation filter (it is open source, check out formulation here). It is simple, cheap, stable and accurate enough.
The identity matrix (describing a "null" rotation) is equivalent to the quaternion [1 0 0 0]. You can get the quaternion describing other poses using a suitable conversion formula (for example, if you have Euler angles you can go for this one).
Notes:
Quaternions following [w, i, j, k] notation.
These equations assume angular speeds in SI units, that is, radians per second.
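For reference, here is a minimal runnable sketch of this update in Python (the helper names are mine, not taken from the PixHawk code; quaternions are in [w, x, y, z] order and rates are in rad/s):

import math

def hamilton_product(q, r):
    # Hamilton product of two quaternions in [w, x, y, z] order.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return [
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ]

def normalize(q):
    n = math.sqrt(sum(c*c for c in q))
    return [c / n for c in q]

def integrate_gyro(pose, g_x, g_y, g_z, dt):
    # First-order pose update from body-frame gyro rates (rad/s).
    delta_quat = [1.0, 0.5*dt*g_x, 0.5*dt*g_y, 0.5*dt*g_z]
    # Renormalize to compensate for the small error the linearization introduces.
    return normalize(hamilton_product(pose, delta_quat))

# Example: start level, rotate at 0.1 rad/s around the body x-axis for 1 ms.
pose = [1.0, 0.0, 0.0, 0.0]
pose = integrate_gyro(pose, 0.1, 0.0, 0.0, 0.001)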
Long answer
A gyroscope describes the rotational speed of an object as a decomposition into three rotational speeds around the orthogonal local axes XYZ. However, you could equivalently describe the rotational speed as a single rate around a certain axis, expressed either in a reference system local to the rotated body or in a global one.
The three rotational speeds affect the body simultaneously, continuously changing the rotation axis.
Here we have the problem of switching from the continuous-time real world to a simpler discrete-time formulation that can be easily solved using a computer. When discretizing, we are always going to introduce errors. Some approaches will lead to bigger errors, while others will be notably more accurate.
Your approach of concatenating three simultaneous rotations around orthogonal axes works reasonably well with small integration steps (let's say smaller than 1/1000 s, although it depends on the application), so that you are simulating the continuous change of the rotation axis. However, this is computationally expensive, and the error grows as you make the time steps bigger.
As an alternative to the first-order linearization, you can calculate pose increments by taking a small step along the quaternion's rate of change derived from the angular speed (also using the quaternion representation):
quat_gyro = {0, g_x, g_y, g_z};
q_grad = 0.5 * quaternion_product(pose_initial, quat_gyro);
// Important to normalize result to get unit quaternion!
pose_final = quaternion_normalize(pose_initial + q_grad*dt);
This technique is used in the Madgwick orientation filter (here is an implementation), and it works pretty well for me.
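A sketch of this gradient-style step, reusing the hamilton_product and normalize helpers from the earlier sketch (again, the names are mine, not from Madgwick's code):

def gyro_gradient_step(pose, g_x, g_y, g_z, dt):
    # Euler step along the quaternion rate q_dot = 0.5 * q * [0, gx, gy, gz].
    quat_gyro = [0.0, g_x, g_y, g_z]
    q_grad = [0.5 * c for c in hamilton_product(pose, quat_gyro)]
    # Normalize to stay on the unit sphere, as the pseudocode above stresses.
    return normalize([p + g * dt for p, g in zip(pose, q_grad)])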
We can retrieve the acceleration data from CMAcceleration.
It provides 3 values, namely x, y and z.
I have been reading up on this and I seem to have gotten different explanation for these values.
Some say they are the acceleration values in respect to gravity.
Others have said they are not; they say they are the acceleration values with respect to the device's own axes as it turns around them.
Which is the correct version here? For example, does x represent the acceleration rate for pitch, or the acceleration from left to right?
In addition, let's say we want to get the acceleration rate (how fast) for yaw: how could we derive that value when the callback is constantly feeding us values? Would we need to set up another timer for the calculation?
Edit (in response to @Kay):
Yes, that was basically it; I just wanted to make sure that x, y, z and, respectively, pitch, roll and yaw are represented differently by the frame.
1.)
How are these related in certain situations? Would getting a value for, say, yaw ever need additional information from the x, y, z values?
2.)
Can you explain a little more on this:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
Would we need to use a timer for the time values? And how would making use of the above generate an angular acceleration? I thought angular acceleration entails more complex maths.
3.)
In a real-world situation, we can hardly rely on a single value from pitch, roll and yaw, because it would be impossible for us to make a rotation on only one axis (our hand is not that "stable", especially after 5 cups of coffee...).
Let's say I would like to get the values of yaw (yes, rotation about the z-axis), but at the same time as yaw spins I want to check it against pitch (x-axis).
Yes, 2 motions combine here (imagine the phone is rotating around z with slight movement going towards and away from the user's face).
So: is there a mathematical model (or one from your own personal experience) to derive a value by combining the values of different axes? (Sample case: if the user is spinning on the z-axis and at the same time also making a movement on the x-axis, good; if not, it's not a motion we need.) Sample case just off the top of my head.
I hope my sample case above with both yaw and pitch makes sense to you. If not, please feel free to cite a better use case for explanation.
4.)
Lastly, time. How can we use time as a reference to check how fast a movement is since the last one? Should we provide a tolerance (example: "less than 1/50 of a second since the last movement: do something; if not, do nothing")? Where and when do we set a timer?
The class reference of CMAccelerometerData says:
X-axis acceleration in G's (gravitational force)
The acceleration is measured in local coordinates as shown in figure 4-1 in the Event Handling Guide. It's always a translation and must not be confused with radial or circular motions, which are measured in angles.
Anyway, every rotation, even at a constant angular velocity, involves a change in direction, and thus an acceleration is reported as well; see Circular Motion.
What do you mean by get the acceleration rate (how fast) for yaw?
Based on figure 4-2 in Handling Rotation Rate Data, the yaw rotation occurs around the Z axis. That means there is a continuous linear acceleration in the X,Y plane. If you are interested in angular acceleration, you need to take the change in CMDeviceMotion.rotationRate and divide it by the time delta, e.g.:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
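In whichever language you implement it, this is just a finite difference; here is a minimal sketch in Python-style pseudocode (the variable names are placeholders, the real values come from deviceMotion.rotationRate.z and the CMLogItem timestamp):

def angular_acceleration_z(rate_z, timestamp, prev_rate_z, prev_timestamp):
    # Finite-difference estimate of the angular acceleration around Z (rad/s^2).
    return (rate_z - prev_rate_z) / (timestamp - prev_timestamp)

# Feed it two consecutive device-motion samples, e.g.:
# alpha_z = angular_acceleration_z(curr.rotationRate.z, curr.timestamp,
#                                  prev.rotationRate.z, prev.timestamp)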
Update:
It depends on what you want to do and which motions you are interested in tracking. I hope you don't want to get the exact device position in x, y, z when doing a translation, as this is impossible. The orientation, i.e. the rotation relative to g, can of course be determined very well.
I think in >99% of all cases you won't need additional information from accelerations when working with angles.
Don't use your own timer. CMDeviceMotion inherits from CMLogItem and thus provides a perfectly matching timestamp of the sensor data, or respectively the interpolated time for the result of the sensor fusion algorithm.
I assume that you don't need angular acceleration.
You are totally right, even without coffee ;-) If you look at the motions shown in this video, there is exactly the situation you describe. The maths and algorithms were the result of some heavy R&D, and I am bound by an NDA.
But most use cases are covered by the properties available in CMAttitude. Be cautious with Euler angles when doing calculations, because of gimbal lock.
Again this totally depends on what you are up to.
So I'm trying to implement a photon mapping algorithm to simulate global illumination in my ray tracing program. However, I'm running into a few issues that are making it hard to complete the implementation.
My program already successfully traces photons throughout the scene, stores them in a balanced KD-Tree and can gather the k nearest photons near any given point p, so most of the work is complete. The issue is mostly when it comes time for the radiance estimate.
First, I can't seem to get the indirect illumination bright enough to make any noticeable difference in my scene. If my light source emits 100,000 photons, then the power of each stored photon (for 100,000 emitted photons that would be roughly 500,000 stored photons in my program) must be scaled down by a factor of 100,000, which makes them very dim. I thought I could mitigate this when dividing by the area of the encapsulating circle (pi*rad^2) if I used a search radius much smaller than 1, but decreasing the search radius to such a small number leaves me with very few photons for the estimate. If I use a large radius I can get enough photons, but I don't get the extra "power boost" of a small radius and might wind up including incorrect photons. So I don't know what to do.
Additionally, my other problem is that if I artificially scale up the photon power to increase the indirect lighting contribution, the resulting illumination is splotchy, uneven and ugly, and doesn't look at all realistic. I know this problem is vague, but I don't know why it looks this way, since I'm pretty sure I'm doing the radiance estimate and BRDF calculations correctly.
Without knowing the exact calculations you are doing throughout your rendering system, no one will be able to tell you why your indirect illumination is so weak. It could be that your materials are dark enough that there just isn't much indirect light. It could be that you are missing a factor of pi in your indirect illumination calculation somewhere, or you could be missing a divide-by-pi in your direct illumination calculation, so the indirect is dim in comparison.
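To make the bookkeeping concrete, here is a rough sketch of the conventional k-nearest-neighbour radiance estimate; photon_map.k_nearest, the photon attributes and the brdf callable are hypothetical stand-ins for your own structures, and the photon power is assumed to already carry the divide by the number of emitted photons from tracing time:

import math
import numpy as np

def radiance_estimate(photon_map, x, normal, brdf, k=200):
    # kNN radiance estimate: L ~= sum(f_r * photon_power) / (pi * r^2).
    photons = photon_map.k_nearest(x, k)              # hypothetical kd-tree query
    r2 = max(np.dot(p.position - x, p.position - x) for p in photons)
    total = np.zeros(3)
    for p in photons:
        # p.power already includes the 1/N_emitted scaling, so the only
        # normalization left is the divide by the disc area below.
        total += brdf(p.direction, normal) * p.power  # hypothetical BRDF call
    return total / (math.pi * r2)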
As for splotchiness, that's what photon mapping looks like without a ton of photons. Try 100 million photons (or at least 10 million) instead and see if the issue persists.
I have an accelerometer with 3 axes and I need to detect the acceleration's change frequency. It's fine when the object is moving straight, without rotation: in that case I can simply ignore g (acceleration due to gravity), since it has a constant direction and won't affect the frequency. But what can I do when the object is rotating while moving? Is it even possible to (somehow) subtract g with only an accelerometer, without a gyroscope? Maybe there are some methods of rough calculation or approximation?
The following could work if your accelerometer's gain is the same in all x, y and z directions.
You don't know the orientation of the car, so you cannot subtract the gravity from the sensor readings. However, when the car starts moving, or there is sharp braking or acceleration, then the length of the measured acceleration vector changes too.
I would use a low-pass filter to filter out the very high frequency noise that you certainly get in a car. It requires some tweaking and testing but it should not be too difficult.
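For illustration, a minimal sketch of such a filter applied to the magnitude of the acceleration vector (the smoothing factor alpha is a placeholder you would tune):

def filtered_magnitude(prev_filtered, ax, ay, az, alpha=0.1):
    # Exponential low-pass filter applied to |a| = sqrt(ax^2 + ay^2 + az^2).
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    return prev_filtered + alpha * (magnitude - prev_filtered)

# At rest the filtered magnitude hovers around 1 g; sharp braking or
# acceleration shows up as a clear deviation from that baseline.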
Of course this is rather a workaround; the true solution would be to use gyroscopes. Good luck anyway!
My problem is that I have a large set of GPS tracks from different GPS loggers used in cars. When not turned off, these cheap devices log phantom movements even while standing still:
As you can see in the image above, about a thousand points get visualized in a kind of congestion. Now I want to remove all of these points so that the red track coming from the left ends before the jitter starts.
My approach is to "draw" two or three circles around each point in the track, check how many other points are located within these circles and check the ratio:
(#points / covered area) > threshold?
If the ratio exceeds a certain threshold (purple circles), I could delete all points within. So: an easy method, but it has huge disadvantages, e.g. computation time, deleting "innocent" tracks that only pass through the circle, and not detecting outliers like the single points at the bottom of the picture.
I am looking for a better way to detect large heaps of points like in the picture. It should not produce false positives (aggregations of perhaps 5 or 10 points don't matter to me). Also, it should not simplify the rest of the track!
Edit: The result in the given example should look like this:
My first step would be to investigate the speeds implied by the 'movements' of your stationary car and the changes in altitude. If either of these changes too quickly or too slowly (you'll have to decide the thresholds here) then you can probably conclude that they are due to the GPS jitter.
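A minimal sketch of that implied-speed check, assuming each fix is a (lat, lon, unix_time) tuple; the 1 m/s cutoff is a placeholder threshold you would have to tune:

import math

def implied_speed(lat1, lon1, t1, lat2, lon2, t2):
    # Great-circle (haversine) distance between two fixes divided by the time between them, in m/s.
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a)) / (t2 - t1)

def looks_stationary(points, min_speed=1.0):
    # Flag a window of fixes as jitter if every implied speed stays below min_speed m/s.
    speeds = [implied_speed(*p, *q) for p, q in zip(points, points[1:])]
    return all(s < min_speed for s in speeds)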
What information, other than position at a given time, does your GPS device report?
EDIT (after OP's comment)
The problem is to characterise part of the log as 'car moving' and part of the log as 'car not moving but GPS location jittering'. I suggested one approach, Benjamin suggested another. If speed doesn't discriminate accurately enough, try acceleration. Try rate of change of heading. If none of these simple approaches work, I think it's time for you to break out your stats textbooks and start figuring out autocorrelation of random processes and the like. At this point I quietly slink away ...
Similarly to High Performance Mark's answer, you could look for line intersections that happen within a short number of points. When driving on a road, the route of the last n points rarely intersects with itself, but it does in your stationary situation because of the jitter. A single intersection could be a person doubling-back or circling around a block, but multiple intersections should be rarer. The angle of intersection will also be sharper for the jitter case.
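A sketch of how such a crossing count could look, assuming the points of a short sliding window are already projected to plain (x, y) coordinates:

def ccw(a, b, c):
    # Orientation test: positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4.
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def self_intersections(points):
    # Count crossings among the segments of a short window of track points.
    segs = list(zip(points, points[1:]))
    count = 0
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):  # skip adjacent segments
            if segments_intersect(*segs[i], *segs[j]):
                count += 1
    return count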
What is the data interval of the GPS points? It seems that these are in seconds. There may be one other way to add to the logic previously mentioned:
distance(d0, dn) >= 80% of sum_of_distance(d0, d1, d2, ..., dn)
That is, for a window of points that represents real driving, the straight-line distance from the first point to the last should cover most (say 80%) of the summed point-to-point distance; a jitter cluster fails this test because its net displacement stays tiny. This 0-to-n window can be iterated in smaller and larger chunks: for example, iterate over maybe 60 points of data initially, stepping the window forward 10 points at a time.
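A sketch of that ratio test, assuming the points are already projected to (x, y); the 60-point window, the 10-point step and the threshold itself are placeholder values to tune:

def path_to_straight_ratio(points):
    # Summed point-to-point distance divided by the straight-line distance d(p0, pn).
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    travelled = sum(dist(p, q) for p, q in zip(points, points[1:]))
    straight = dist(points[0], points[-1])
    return travelled / straight if straight > 0 else float('inf')

# 'track' is the full list of (x, y) points. Slide a ~60-point window forward
# 10 points at a time; a ratio near 1 means the car is actually going somewhere,
# a large ratio means jitter around one spot.
track = []  # fill with your projected track points
for start in range(0, max(len(track) - 60, 0), 10):
    window = track[start:start + 60]
    if path_to_straight_ratio(window) > 5.0:  # placeholder threshold
        pass  # mark these points as candidates for removal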