While trying to implement gravity in my "Donkey Kong" game, I ran into a problem. The jump movement works perfectly fine when Mario is on a platform. However, when he's falling from one platform to another, the collision with the next platform doesn't get detected, so Mario goes through the platform.
This is how my gravity logic works: the vertical velocity is checked every frame; if it's not equal to 0, Mario is moved with (+ mario-y vVelocity). As long as there is no collision with a platform, vVelocity is updated to (- vVelocity gravity). When there is a collision with a platform, vVelocity is reset to 0. The problem with this is that mario-y changes too much in a single frame: for example, Mario can go from (100, 100) to (100, 90) when vVelocity = -10, and if there is a platform at (100, 95), the collision is not detected. How can I fix this? Thanks
There are at least two options.
Introduce a maximum velocity smaller than the size of a platform. That way the player can't pass through a platform without a collision being detected.
Use two or more time steps when you update the variables. So far your time step has had the same duration as a frame. If you split the frame into several steps, the displacement per step will be smaller - as a result the player won't move through a platform without a collision.
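A minimal sketch of the second option, written in Python rather than Racket; `collides_with_platform` and the number of substeps are hypothetical placeholders you would replace with your own collision test and tuning:

```python
def apply_gravity(mario_y, v_velocity, gravity, collides_with_platform, substeps=4):
    """Advance one frame of vertical motion in several smaller steps.

    Splitting the frame keeps each per-step displacement small, so a thin
    platform can no longer be skipped over in a single jump of the position.
    """
    for _ in range(substeps):
        new_y = mario_y + v_velocity / substeps
        if collides_with_platform(new_y):
            return mario_y, 0.0            # landed: stop falling on this platform
        mario_y = new_y
        v_velocity -= gravity / substeps   # same total velocity change per frame
    return mario_y, v_velocity
```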
You should create a line segment between the last and current positions and check for a line/platform intersection when testing collisions.
Also, this is not Racket-specific; it is a general discrete-simulation problem.
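A sketch of that swept check, assuming platforms are horizontal segments described by a hypothetical (y, x_left, x_right) tuple and that y decreases as Mario falls, as in the question's example:

```python
def crossed_platform(old_x, old_y, new_x, new_y, platform):
    """Return True if the movement segment from the old to the new position
    crosses a horizontal platform, even when neither endpoint touches it."""
    plat_y, x_left, x_right = platform
    # Only a downward move can land on a platform from above.
    if not (old_y >= plat_y >= new_y):
        return False
    # Interpolate the x coordinate at the moment the path reaches plat_y.
    t = (old_y - plat_y) / (old_y - new_y) if old_y != new_y else 0.0
    x_at_platform = old_x + t * (new_x - old_x)
    return x_left <= x_at_platform <= x_right
```

If this returns True, you would snap Mario's y to the platform height and reset vVelocity to 0, instead of applying the full move.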
We can retrieve the acceleration data from CMAcceleration.
It provides 3 values, namely x, y and z.
I have been reading up on this and I seem to have gotten different explanations for these values.
Some say they are the acceleration values with respect to gravity.
Others have said they are not; rather, they are the acceleration values with respect to the device's axes as it turns around them.
Which is the correct version here? For example, does x represent the acceleration rate for pitch, or the acceleration from left to right?
In addition, let's say we want to get the acceleration rate (how fast) for yaw: how can we derive that value when the callback is constantly feeding us values? Would we need to set up another timer for the calculation?
Edit (in response to #Kay):
Yes, that was basically it - I just wanted to make sure that x, y, z and, respectively, pitch, roll and yaw are represented differently by the frame.
1.)
How are these related in certain situations? Could there be a need, besides getting a value for, say, yaw, for additional information from x, y, z?
2.)
Can you explain a little more on this:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
Would we need to use a timer for the time values? And how would making use of the above generate an angular acceleration? I thought angular acceleration entails more complex maths.
3.)
In a real-world situation, we can hardly rely on a single value from pitch, roll and yaw, because it would be impossible for us to make a rotation on only one axis (our hand is not that "stable", especially after 5 cups of coffee...).
Let's say I would like to get the values of yaw (yes, rotation on the z-axis), but at the same time as yaw spins I want to check it against pitch (x-axis).
Yes, 2 motions combine here (imagine the phone is rotating around z with slight movement going towards and away from the user's face).
So: is there a mathematical model (or one from your own personal experience) for deriving a value from the values of different axes? (Sample case: if the user is spinning on the z-axis and at the same time also making a movement on the x-axis - good. If not, it's not a motion we need.) Just a sample case off the top of my head.
I hope my sample case above with both yaw and pitch makes sense to you. If not, please feel free to cite a better use case for explanation.
4.)
Lastly, time. How can we use time as a reference to check how fast a movement is since the last one? Should we provide a tolerance (example: "less than 1/50 of a second since the last movement - do something. If not, do nothing.")? Where and when do we set up a timer?
The class reference of CMAccelerometerData says:
X-axis acceleration in G's (gravitational force)
The acceleration is measured in local coordinates as shown in figure 4-1 in the Event Handling Guide. It's always a translation and must not be confused with radial or circular motions, which are measured in angles.
Anyway, every rotation, even with a constant angular velocity, is related to a change in direction, and thus an acceleration is reported as well; see Circular Motion.
What do you mean by "get the acceleration rate (how fast) for yaw"?
Based on figure 4-2 in Handling Rotation Rate Data, the yaw rotation occurs around the Z axis. That means there is a continuous linear acceleration in the X,Y plane. If you are interested in angular acceleration, you need to take CMDeviceMotion.rotationRate and divide the change by the time delta, e.g.:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
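A hedged sketch of that finite difference in Python-style pseudocode (CoreMotion itself is Objective-C/Swift); `motion.timestamp` stands for the CMLogItem timestamp mentioned below, and `prev_rate_z` / `prev_time` are state you keep between callbacks:

```python
prev_rate_z = None
prev_time = None

def on_device_motion(motion):
    """Callback fed with CMDeviceMotion-like samples; estimates the angular
    acceleration around z as the finite difference of the rotation rate."""
    global prev_rate_z, prev_time
    if prev_time is not None and motion.timestamp > prev_time:
        angular_accel_z = ((motion.rotationRate.z - prev_rate_z)
                           / (motion.timestamp - prev_time))
        # angular_accel_z is in rad/s^2; use it as needed here.
    prev_rate_z = motion.rotationRate.z
    prev_time = motion.timestamp
```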
Update:
It depends on what you want to do and which motions you are interested in tracking. I hope you don't want to get the exact device position in x,y,z when doing a translation, as this is impossible. The orientation, i.e. the rotation relative to g, can of course be determined very well.
I think in >99% of all cases you won't need additional information from accelerations when working with angles.
Don't use your own timer. CMDeviceMotion inherits from CMLogItem and thus provides a perfectly matching timestamp for the sensor data or, respectively, the interpolated time for the result of the sensor-fusion algorithm.
I assume that you don't need angular acceleration.
You are totally right, even without coffee ;-) If you look at the motions shown in this video, there is exactly the situation you describe. The maths and algorithms were the result of some heavy R&D and I am bound by an NDA.
But most use cases are covered by the properties available in CMAttitude. Be cautious with Euler angles when doing calculations, because of Gimbal Lock.
Again this totally depends on what you are up to.
I'm working on a project where I need to track two points in an image. So far, the best way I have of identifying these points is to get the user to click on them when the program is first run. I'm using the pyramidal Lucas-Kanade method built into OpenCV (documented here), but as is to be expected, this doesn't work too well. Is there a better alternative algorithm for tracking points in OpenCV, or alternatively some other way of verifying the points I already have?
I'm currently considering using GoodFeaturesToTrack, and getting the distance from each point to the one that I want to track, and maybe some sort of vector pointing out the relationship between the two points, and using this information to determine my new point.
I'm looking for suggestions of ways to go about this, not necessarily code samples.
Thanks
EDIT: I'm tracking small movements, if that helps
If you are looking for a solution that is implemented in OpenCV, the pyramidal Lucas-Kanade (PLK) method is quite good; otherwise I would prefer a particle-filter-based tracker.
To improve your tracking performance with the PLK, be sure that you have set up the parameters correctly. E.g. for large motion you need a pyramid level of about 3 or 4. The window should not be too small (I prefer 17x17 to 27x27). Also keep in mind that the method needs textured areas to be able to track the points, i.e. corner-like image content (aperture problem).
I would propose seeding a set of points (ps) in a grid around the points (P) you want to track, and then using a forward-backward threshold to reject falsely tracked points. The motion of your points (P) is then computed as the mean motion of the respective remaining point sets (ps).
The forward-backward confidence is computed by estimating the motion from frame 1 to frame 2 (ptList1 -> ptList2), and then from frame 2 back to frame 1 with the points of ptList2 (ptList2 -> ptListRef). A motion vector is rejected if || ptRef - pt1 || > fb_threshold.
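A minimal sketch of that forward-backward check with OpenCV's Python bindings; the window size, pyramid level and threshold are merely starting values of the kind suggested above, not tuned constants:

```python
import numpy as np
import cv2

def track_with_fb_check(img1, img2, pts1, fb_threshold=1.0):
    """Track pts1 (Nx1x2 float32) from img1 to img2 with pyramidal LK and
    reject points whose forward-backward error exceeds fb_threshold pixels."""
    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    # Forward pass: frame 1 -> frame 2.
    pts2, st_fwd, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None, **lk_params)
    # Backward pass: frame 2 -> frame 1, starting from the forward result.
    pts_back, st_bwd, _ = cv2.calcOpticalFlowPyrLK(img2, img1, pts2, None, **lk_params)
    fb_error = np.linalg.norm(pts_back - pts1, axis=-1).reshape(-1)
    good = (st_fwd.reshape(-1) == 1) & (st_bwd.reshape(-1) == 1) & (fb_error < fb_threshold)
    return pts2, good
```

The mean of `pts2[good] - pts1[good]` over the grid of seeded points then gives the motion estimate for the point P you actually care about.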
For this question assume that the following things are unknown:
The size and shape of the room
The location of the robot
The presence of any obstacles
Also assume that the following things are constant:
The size and shape of the room
The number, shape and location of all (if any) obstacles
And assume that the robot has the following properties:
It can only move forward in increments of absolute units and turn in degrees. Also, the move operation returns true if it succeeded or false if it failed to move due to an obstruction
A reasonably unlimited source of power (let's say it is a solar powered robot placed on a space station that faces the sun at all times with no ceiling)
Every movement and rotation is carried out with absolute precision every time (don't worry about unreliable data)
I was asked a much simpler version of this question (the room is a rectangle and there are no obstacles; how would you move over it, guaranteeing you cover every part at least once?), and afterwards I started wondering how you would approach it if you couldn't guarantee the shape of the room or the absence of obstacles. I've started looking at this with Dijkstra's algorithm, but I'm fascinated to hear how others approach this (or whether there is a well-accepted answer - how does Roomba do it?).
Take a look at SLAM (http://openslam.org/) and, for more, the Wikipedia article on SLAM.
You should also look into:
Mark de Berg: Computational Geometry: Algorithms and Applications,
Chapter 13: Robot Motion Planning
I am hacking a vacuum-cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any idea for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities. Here the robot is gray and the particles are green.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
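A highly simplified sketch of those steps in Python; the motion-noise value and the `measurement_likelihood` function are placeholders you would replace with your robot's actual motion and sensor models:

```python
import random

class Particle:
    def __init__(self, x, y, weight):
        self.x, self.y, self.weight = x, y, weight

def init_particles(n, width, height):
    # Spread particles uniformly over the room with equal weights.
    return [Particle(random.uniform(0, width), random.uniform(0, height), 1.0 / n)
            for _ in range(n)]

def move_particles(particles, dx, dy, motion_noise=0.1):
    # Apply the commanded motion plus noise to every particle.
    for p in particles:
        p.x += dx + random.gauss(0, motion_noise)
        p.y += dy + random.gauss(0, motion_noise)

def observe(particles, measurement_likelihood):
    # Reweight particles by how well they explain the observation, then normalize.
    for p in particles:
        p.weight *= measurement_likelihood(p.x, p.y)
    total = sum(p.weight for p in particles) or 1.0
    for p in particles:
        p.weight /= total

def estimate(particles):
    # Weighted average of all particles as the position estimate.
    x = sum(p.weight * p.x for p in particles)
    y = sum(p.weight * p.y for p in particles)
    return x, y
```

Resampling (replacing the lowest-probability particles, as mentioned above) would be the next piece to add on top of this.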
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
A QR code poster in each room would not only make an interesting modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right), then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at a known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help you do it with fewer markers.
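The circle relation is the inscribed angle theorem: the central angle is twice the observed angle, so if the markers are a distance d apart and you see them an angle θ apart, the camera lies on a circle of radius d / (2 sin θ) through the markers. A tiny illustrative sketch, assuming the angle comes from your own bearing measurements:

```python
import math

def circle_radius_from_markers(marker_distance, observed_angle_rad):
    """Radius of the circle (through both markers) on which the camera must lie,
    given the marker spacing and the angle measured between the markers."""
    return marker_distance / (2.0 * math.sin(observed_angle_rad))

# Example: markers 2 m apart, seen 30 degrees apart -> circle of radius 2 m.
print(circle_radius_from_markers(2.0, math.radians(30)))  # 2.0
```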
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements to arrive at a good position estimate - you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input), there is no better way. If you actually want a career in autonomous robotics, this will play large in your future.
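A toy 1D sketch of that idea (dead reckoning as the prediction step, a noisy position fix as the update); the noise variances here are made-up values, and a real robot would use a 2D/3D state as covered in the book below:

```python
def kalman_step(x, p, u, z, q=0.05, r=0.5):
    """One predict/update cycle for a 1D position estimate.

    x, p : current estimate and its variance
    u    : dead-reckoned displacement since the last step (prediction)
    z    : noisy position measurement (e.g. from markers)
    q, r : assumed process and measurement noise variances
    """
    # Predict: apply the odometry and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement according to the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```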
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles and then you'll need to devise a way to scan incorporating the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or using the hacked drivers if your EeePC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure) you can also use OpenCV for detecting features in the image.
One solution would be to use a strategy similar to "flood fill" (wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run motor for 1 sec = xx change in proximity. With that info, you can move your bot for an exact distance, and continue sweeping the room using flood fill.
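A rough sketch of that flood-fill idea on a grid of cell-sized moves; `try_move(direction)` is a hypothetical hook meaning "drive one calibrated cell in that direction and report whether it was blocked", built on the motor/proximity calibration described above:

```python
def flood_fill_coverage(try_move):
    """Depth-first coverage of all reachable cells around the start position.

    try_move(d) attempts one cell-sized move in direction d ('N','S','E','W')
    and returns True if it succeeded, False if an obstacle blocked it.
    """
    visited = set()
    offsets = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}
    opposite = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}

    def dfs(cell):
        visited.add(cell)
        for d, (dx, dy) in offsets.items():
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in visited and try_move(d):
                dfs(nxt)
                try_move(opposite[d])  # drive back so the next branch starts from `cell`

    dfs((0, 0))
    return visited
```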
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot exits the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups, in my opinion. At least for a start.
Ever considered GPS? Every position on Earth has unique GPS coordinates - with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range - more info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino does have lots of options for GPS modules:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above) - assuming it will handle all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate. Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-cm range and it would be cheap. You can then map anything you bump into easily.
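A sketch of the fix you would get from two beacons, under the assumption that the compass gives you absolute bearings to each beacon and that the beacon positions are known; it simply intersects the two bearing rays:

```python
import math

def triangulate(beacon1, bearing1, beacon2, bearing2):
    """Position fix from absolute (compass-referenced) bearings to two beacons
    at known (x, y) positions. Bearings are in radians measured like math
    angles (0 along +x, counter-clockwise). Returns (x, y), or None if the
    two rays are parallel."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # The robot p satisfies p + t1*d1 = beacon1 and p + t2*d2 = beacon2.
    # Subtracting gives a 2x2 linear system in t1 and t2 (Cramer's rule below).
    rhs = (beacon1[0] - beacon2[0], beacon1[1] - beacon2[1])
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None
    t1 = (rhs[0] * (-d2[1]) - (-d2[0]) * rhs[1]) / det
    return (beacon1[0] - t1 * d1[0], beacon1[1] - t1 * d1[1])

# Example: beacon at (1, 0) seen due east and beacon at (0, 1) seen due north
# puts the robot at the origin.
print(triangulate((1, 0), 0.0, (0, 1), math.pi / 2))  # (0.0, 0.0)
```

With bearings measured relative to an unknown heading instead of a compass, you would need a third beacon (the classic three-point resection problem).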
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position in the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but the vacuum cleaner, moving slowly in order to clean efficiently, will have enough time to compute.
Good luck !
Use an ultrasonic sensor such as the HC-SR04, or similar.
As mentioned above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you are in with a QR code.
When you are near a wall, turn 90 degrees, move by the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and keep moving. I think it will help :)
Modern UIs are starting to give their UI elements nice inertia when moving. Tabs slide in, pages transition, and even some listboxes and scroll elements have nice inertia to them (the iPhone, for example). What is the best algorithm for this? It is more than just gravity, as they speed up and then slow down as they fall into place. I have tried various formulas for speeding up to a maximum (terminal) velocity and then slowing down, but nothing I have tried "feels" right. It always feels a little bit off. Is there a standard for this, or is it just a matter of playing with various numbers until it looks/feels right?
You're talking about two different things here.
One is momentum - giving things residual motion when you release them from a drag. This is simply about remembering the velocity of a thing when the user releases it, then applying that velocity to the object every frame and also reducing the velocity every frame by some amount. How you reduce velocity every frame is what you experiment with to get the feel right.
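A minimal sketch of that per-frame momentum decay; the 0.95 friction factor and the stop threshold are arbitrary starting points you would tune by feel:

```python
def update_momentum(position, velocity, friction=0.95, stop_threshold=0.01):
    """One frame of post-release motion: move by the remembered velocity,
    then damp the velocity so the object coasts to a stop."""
    position += velocity
    velocity *= friction
    if abs(velocity) < stop_threshold:
        velocity = 0.0
    return position, velocity
```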
The other thing is ease-in and ease-out animation. This is about smoothly accelerating/decelerating objects when you move them between two positions, instead of just linearly interpolating. You do this by simply feeding your 'time' value through a sigmoid function before you use it to interpolate an object between two positions. One such function is
smoothstep(t) = 3*t*t - 2*t*t*t [0 <= t <= 1]
This gives you both ease-in and ease-out behaviour. However, you'll more commonly see only ease-out used in GUIs. That is, objects start moving snappily, then slow to a halt at their final position. To achieve that you just use the right half of the curve, i.e.
smoothstep_eo(t) = 2*smoothstep((t+1)/2) - 1
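The same two functions written out as a sketch in Python, together with how you would use them to interpolate a position over an animation of fixed duration:

```python
def smoothstep(t):
    # Ease-in + ease-out on 0 <= t <= 1.
    return 3 * t * t - 2 * t * t * t

def smoothstep_eo(t):
    # Ease-out only: the right half of the smoothstep curve, rescaled to [0, 1].
    return 2 * smoothstep((t + 1) / 2) - 1

def animate(start, end, elapsed, duration):
    """Position of an object `elapsed` seconds into a `duration`-second move."""
    t = min(max(elapsed / duration, 0.0), 1.0)
    return start + (end - start) * smoothstep_eo(t)
```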
Mike F's got it: you apply a time-position function to calculate the position of an object with respect to time (don't muck around with velocity; it's only useful when you're trying to figure out what algorithm you want to use.)
Robert Penner's easing equations and demo are superb; like the jQuery demo, they demonstrate visually what the easing looks like, but they also give you a position time graph to give you an idea of the equation behind it.
What you are looking for is interpolation. Roughly speaking, there are functions that vary from 0 to 1 and when scaled and translated create nice looking movement. This is quite often used in Flash and there are tons of examples: (NOTE: in Flash interpolation has picked up the name "tweening" and the most popular type of interpolation is known as "easing".)
Have a look at this to get an intuitive feel for the movement types:
SparkTable: Visualize Easing Equations.
When applied to movement, scaling, rotation and other animations, these equations can give a sense of momentum, or friction, or bouncing, or elasticity. For an example applied to animation, have a look at Robert Penner's easing demo. He is the author of the most popular series of animation functions (I believe Adobe's built-in ones are based on his). This type of transition works equally well on alphas (for fade-in).
There is a bit of method to the usage. easeInOut starts slow, speeds up and then slows down. easeOut starts fast and slows down (like friction), and easeIn starts slow and speeds up (like momentum). Depending on the feel you want, you choose the appropriate one. Then you choose between Sine, Expo, Quad and so on for the strength of the effect. The others are easy to work out from their names (e.g. Bounce bounces, Back goes a little further then comes back like an elastic).
Here is a link to the equations from the popular Tweener library for AS3. You should be able to rewrite these in JavaScript (or any other language) with little to no trouble.
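A few of those easing shapes sketched in Python (a JavaScript port would look almost identical); t runs from 0 to 1, and the names mirror the Tweener conventions but the functions are simplified illustrations rather than copies of that library:

```python
def ease_in_quad(t):
    return t * t                      # starts slow, speeds up (momentum feel)

def ease_out_quad(t):
    return 1 - (1 - t) * (1 - t)      # starts fast, slows down (friction feel)

def ease_in_out_quad(t):
    return 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) * (1 - t)

def ease_out_back(t, s=1.70158):
    # Overshoots slightly past the target, then settles back (elastic feel).
    t -= 1
    return t * t * ((s + 1) * t + s) + 1
```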
It's playing with the numbers... What feels good is good.
I've tried to develop magic formulas myself for years. In the end the ugly hack always felt best. Just make sure you somehow time your animations properly and don't rely on some kind of redraw/refresh rate. These tend to change based on the OS.
I'm no expert on this either, but I believe they are done with quadratic formulas that, when given the correct parameters, start fast or slow and dramatically increase or decrease towards the end until a certain point is reached.