I have timed sensor data for a car - such as Yaw Rate, Absolute Steering Angle etc - and would like to detect if the car is making a turn, left or right.
Currently, I'm thinking about using angular displacement and velocity, but I'm not sure if there's an existing robust algorithm that I could use.
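To make the idea concrete, here is a minimal sketch of what I mean by using angular displacement (the sample period, window length, threshold, and sign convention are all made-up placeholders, since I haven't settled on any of them yet):

```python
# Sketch only: integrate yaw rate over a sliding window and flag a turn once
# the accumulated heading change gets large enough. DT, WINDOW and the
# threshold are placeholder values; the sign convention is assumed.
DT = 0.1                      # seconds between samples
WINDOW = 30                   # 3-second window
TURN_THRESHOLD_DEG = 35.0

def classify(yaw_rates_deg_s):
    """Return 'left', 'right' or 'straight' for the latest window."""
    window = yaw_rates_deg_s[-WINDOW:]
    heading_change = sum(r * DT for r in window)   # integrated yaw, in degrees
    if heading_change > TURN_THRESHOLD_DEG:
        return "left"                              # assuming positive yaw = left
    if heading_change < -TURN_THRESHOLD_DEG:
        return "right"
    return "straight"

print(classify([12.0] * 40))   # sustained positive yaw rate -> "left"
```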
I have also come across this post that hints about a method that could be used, but I'm not sure if this is exactly the one I need.
Algorithm to detect left or right turn from x,y co-ordinates
Sorry for not giving much detail, but I'm actually trying to get some literature that I could review and some vocabulary that could help me find better solutions related to my problem. Thanks.
I'm working on a project where I'm capturing people making free throw shots via a video camera. I need a way to detect, as fast as possible, the instant the ball is released from a player's hand. I tried researching a lot of detection/tracking algorithms, but everything I've found seemed more suited to tracking the ball itself. While I may eventually want to do that, right now all I need to know is the release timing.
I'm also open to other solutions that don't use the camera (I have a decent budget), but of course I'd like to use the camera if it's possible/fast enough. I'm also able to mess with the camera positioning/setup, and even with what I want in the FOV.
Does anyone have any ideas? I'm pretty stuck right now, and haven't been able to find anything online that can help.
A solution is to use visual markers (motion trackers) on the throwing hand and on the ball. The temporal precision depends on the FPS of the camera.
The assumption is that you know the ball's dimensions and the hand's grip on the ball, which may vary. By using visual markers/trackers you can know the position of the ball relative to the hand. When the distance between the hand and its initial grip point on the ball becomes bigger than the distance between the center of the ball and its extremity (the radius), that is when you have your release. Schema of the method
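A rough sketch of that check, assuming you already have per-frame 2D positions for a hand marker and the ball-centre marker (the coordinates, radius and tolerance below are made up):

```python
import math

# Release = first frame where the hand-to-ball-centre distance exceeds the
# ball radius plus a small tolerance. All numbers are illustrative only.
BALL_RADIUS = 12.0       # same units as the marker coordinates
TOLERANCE = 2.0

def release_frame(hand_positions, ball_positions):
    for frame, ((hx, hy), (bx, by)) in enumerate(zip(hand_positions, ball_positions)):
        if math.hypot(hx - bx, hy - by) > BALL_RADIUS + TOLERANCE:
            return frame
    return None              # no release detected in the clip

hand = [(100, 200), (101, 198), (103, 190), (110, 175)]
ball = [(108, 205), (112, 203), (118, 200), (130, 192)]
print(release_frame(hand, ball))   # index of the first post-release frame
```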
A better solution is to use a graded meter bar (alternating black and white bars like the ones used on the MythBusters show to track the speed of objects). The moment there is a visible gap between the hand and the ball, you have your release. The downside of this approach is that you have to capture the image from a side angle or a top-down angle and use panels to hold the grading.
Your problem is similar to billiard ball collision detection. I hope you find this paper helpful.
Edit:
There is a powerful tool that is not that expensive, named Microsoft Kinect, used for motion capture. The downside of this tool is that its camera works at 30 fps and you cannot use it accurately in a very sunny scene. However, I have found a scientific paper about using the Kinect to record athletes, including free throws in basketball. Paper here
It's my first answer on SO. Any feedback on how to improve my future answers is appreciated.
I am trying to create a destructible floor in Chipmunk.
Pretty much I need a floor or body object such that, when a bomb explodes, the floor disappears within the bomb's defined explosion area.
I considered using a cpPolyShape to do this and redefining the vertexes each time a bomb exploded, but discovered that this was not only intractable, but practically impossible.
Does anybody have any ideas on how I could do this in Chipmunk? And sorry, I am relatively new to the language and know literally only the basics. Thank you for your help.
Deformable terrain with a physics engine isn't easy. I've been making an add-on library for Chipmunk that can help do it though. http://www.cocos2d-iphone.org/forum/topic/20792
Basically you are going to have to reconstruct the geometry every time something changes. Your choices include marching squares or CSG. Neither is particularly easy to make run in real time, though.
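To give a feel for the "rebuild after every change" part, here is a toy sketch using a grid terrain (this is not Chipmunk's API, and a real implementation would run proper marching squares instead of emitting one segment per exposed cell edge):

```python
import math

GRID_W, GRID_H, CELL = 40, 20, 8.0                 # grid size and cell size
solid = [[True] * GRID_W for _ in range(GRID_H)]   # True = solid terrain

def carve_explosion(cx, cy, radius):
    """Clear every cell whose centre lies inside the blast circle."""
    for gy in range(GRID_H):
        for gx in range(GRID_W):
            wx, wy = (gx + 0.5) * CELL, (gy + 0.5) * CELL
            if math.hypot(wx - cx, wy - cy) <= radius:
                solid[gy][gx] = False

def boundary_segments():
    """Emit one axis-aligned segment per exposed cell edge; these are what
    you would feed back into the physics space as static segment shapes."""
    segs = []
    for gy in range(GRID_H):
        for gx in range(GRID_W):
            if not solid[gy][gx]:
                continue
            x0, y0 = gx * CELL, gy * CELL
            x1, y1 = x0 + CELL, y0 + CELL
            if gy == 0 or not solid[gy - 1][gx]:            # exposed top edge
                segs.append(((x0, y0), (x1, y0)))
            if gy == GRID_H - 1 or not solid[gy + 1][gx]:   # exposed bottom edge
                segs.append(((x0, y1), (x1, y1)))
            if gx == 0 or not solid[gy][gx - 1]:            # exposed left edge
                segs.append(((x0, y0), (x0, y1)))
            if gx == GRID_W - 1 or not solid[gy][gx + 1]:   # exposed right edge
                segs.append(((x1, y0), (x1, y1)))
    return segs

carve_explosion(cx=120.0, cy=40.0, radius=30.0)
print(len(boundary_segments()), "segments to rebuild in the physics space")
```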
Like many others before, I'm coding a 2D platformer at the moment, or more precisely an engine for one. My question is about collision detection and especially the reactions to collision.
It's vital to me to have surfaces that are NOT tile-based, because the world of the platformer urgently needs surfaces that are not straight.
What do you think is the best approach for implementing such?
I've already come to a solution, but I'm only about 95% satisfied with it.
My entity is component-based, so at first it calculates all the movement: horizontal position change, falling, jumping speed, all that. Then it stores these values temporarily, accessible to the collision component.
This component adds 4 hit points to the entity: two for the floor collision and two for the wall collision.
Like this (image of the hit point layout; source: fux-media.com)
I first check against the floor. When there IS a collision, I iterate the collision detection upwards until there is no more collision.
Then I reset the entity to this point. If both ground hit points collide, I reset the entity to the lowest point to avoid the entity floating in the air.
Then I check against the wall with the upper hit points. Basically, if it collides, I do the same thing as above, just horizontally.
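In rough, simplified Python (nothing like my real engine code, just to illustrate the idea; the heightmap terrain, the numbers, and the helper names are made up for the example):

```python
# Simplified stand-in for my resolution step: terrain is a heightmap so the
# example runs on its own, the entity is a box with two ground hit points and
# two wall hit points, and y grows downward.
STEP = 1.0
HEIGHTS = [50, 50, 50, 48, 45, 40, 30, 30, 30]   # ground height per 10px column

def solid(x, y):
    """True if the point is inside the terrain."""
    col = min(max(int(x // 10), 0), len(HEIGHTS) - 1)
    return y >= HEIGHTS[col]

class Entity:
    def __init__(self, x, y, w=8, h=16):
        self.x, self.y, self.w, self.h = x, y, w, h
    def ground_points(self):          # the two lower hit points
        return (self.x, self.y + self.h), (self.x + self.w, self.y + self.h)
    def wall_points(self):            # the two upper hit points
        return (self.x, self.y), (self.x + self.w, self.y)

def resolve_floor(e):
    """Iterate upward until neither ground hit point is inside the floor."""
    while any(solid(*p) for p in e.ground_points()):
        e.y -= STEP

def resolve_walls(e, direction):
    """Same idea horizontally with the upper hit points (+1 right, -1 left)."""
    while any(solid(*p) for p in e.wall_points()):
        e.x -= STEP * direction

e = Entity(x=42, y=40)                # partially inside a slope
resolve_floor(e)
print(e.x, e.y)                       # pushed up until no ground point collides
```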
It works quite well. Very steep slopes are treated like a wall and very shallow slopes are just climbed. But in between, when the slope is around 45 degrees, it behaves strangely. It does not quite feel right.
Now, I could just avoid putting 45-degree walls in my game, but that would not be clean, as I'm programming an engine that is supposed to work correctly no matter what.
So... how would you implement this, algorithmically?
I think you're on the right track with the points. You could combine those points to form a polygon, and then use the resulting polygon to perform collision detection.
Phys2D is a library that happens to be in Java that does collision between several types of geometric primitives. The site includes downloadable/runnable demos on what the library is capable of.
If you look around, you can find 2D polygon collision detection implementations in most common languages. With some study, an understanding of the underlying geometry calculations, and some gusto, you could even write your own if you like.
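As one concrete example of the underlying geometry, here is a compact separating-axis test for two convex polygons; it only reports whether they overlap, not where (Phys2D and similar libraries also give you contact points and responses):

```python
# Minimal separating-axis (SAT) overlap test for convex polygons given as
# ordered lists of (x, y) vertices.
def _axes(poly):
    """Edge normals of the polygon (not normalised; fine for an overlap test)."""
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        yield (-(y2 - y1), x2 - x1)

def _project(poly, axis):
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_polygons_overlap(a, b):
    for axis in list(_axes(a)) + list(_axes(b)):
        amin, amax = _project(a, axis)
        bmin, bmax = _project(b, axis)
        if amax < bmin or bmax < amin:     # found a separating axis
            return False
    return True

box = [(0, 0), (2, 0), (2, 2), (0, 2)]
tri = [(1, 1), (3, 1), (2, 3)]
print(convex_polygons_overlap(box, tri))   # True: they overlap
```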
I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any ideas for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities. Here the robot is gray and the particles are green.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
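Here's a toy 1-D version of that loop, just to show the shape of the algorithm (a single coordinate, made-up noise levels, and a fake "distance to the wall at x = 0" observation):

```python
import math
import random

# Toy 1-D particle filter: the robot lives on a line and occasionally measures
# its distance to a wall at x = 0. All constants are made up for illustration.
N = 500
particles = [random.uniform(0.0, 10.0) for _ in range(N)]   # uniform prior
weights = [1.0 / N] * N

def move(dx):
    """Shift every particle, adding noise for motor uncertainty."""
    global particles
    particles = [p + dx + random.gauss(0.0, 0.1) for p in particles]

def observe(measured_distance, sigma=0.3):
    """Re-weight particles by how well they explain the measurement."""
    global weights
    weights = [math.exp(-((p - measured_distance) ** 2) / (2 * sigma ** 2))
               for p in particles]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]

def resample():
    """Replace low-probability particles by drawing in proportion to weight."""
    global particles, weights
    particles = random.choices(particles, weights=weights, k=N)
    weights = [1.0 / N] * N

def estimate():
    """Weighted average of all particles as the position estimate."""
    return sum(p * w for p, w in zip(particles, weights))

move(1.0)                          # the robot drove forward ~1 unit
observe(measured_distance=4.2)     # ...and measured 4.2 units to the wall
resample()
print(round(estimate(), 2))        # estimate collapses toward ~4.2
```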
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
A QR code poster in each room would not only make an interesting modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right) then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help do it with fewer markers.
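As a tiny illustration of the known-height variant (the geometry is just right-triangle trigonometry; all the numbers here are invented):

```python
import math

# If the camera is level at a known height and you can read the marker's
# elevation angle from its pixel row, the horizontal distance follows from
# tan(elevation) = height difference / distance.
def distance_to_marker(marker_height_m, camera_height_m, elevation_deg):
    return (marker_height_m - camera_height_m) / math.tan(math.radians(elevation_deg))

print(round(distance_to_marker(marker_height_m=2.0,
                               camera_height_m=0.3,
                               elevation_deg=25.0), 2))   # distance in metres
```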
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements to arrive at a good position estimate - you can then feed in some dead reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input), there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
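To show the shape of the idea without the full math, here is a bare-bones 1-D Kalman step that blends a dead-reckoning prediction with a noisy position measurement (the noise values and readings are made up):

```python
# 1-D Kalman filter: x is the position estimate, p its variance; q and r are
# placeholder process and measurement noise values you would tune.
def kalman_step(x, p, u, z, q=0.05, r=0.4):
    # Predict: apply the dead-reckoning motion u and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the noisy measurement z according to the Kalman gain.
    k = p_pred / (p_pred + r)
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

x, p = 0.0, 1.0                                     # initial guess and variance
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:   # (odometry step, measurement)
    x, p = kalman_step(x, p, u, z)
    print(round(x, 2), round(p, 3))                 # variance shrinks each step
```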
Once you can determine your position, you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles, and then you'll need to devise a way to scan that incorporates the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or using the hacked drivers if your Eee PC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already include feature detection (I'm not sure), you can also use OpenCV for detecting features in the image.
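For example, a minimal OpenCV snippet for pulling candidate landmarks (ORB keypoints) out of a frame; "frame.png" is just a placeholder for whatever image you grab from the webcam or Kinect:

```python
import cv2

# Detect ORB keypoints as candidate landmarks in a single grabbed frame.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
if img is None:
    raise SystemExit("save a camera frame as frame.png first")

orb = cv2.ORB_create(nfeatures=500)                   # up to 500 keypoints
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), "landmark candidates found")
```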
One solution would be to use a strategy similar to "flood fill" (wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. running the motor for 1 sec = xx change in proximity. With that info, you can move your bot an exact distance and continue sweeping the room using flood fill.
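A toy version of the flood-fill coverage order over a grid map (0 = free, 1 = obstacle; each cell would be roughly one robot-width as calibrated above):

```python
from collections import deque

room = [
    [0, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

def coverage_order(grid, start=(0, 0)):
    """Breadth-first flood fill: the order in which the bot visits free cells."""
    rows, cols = len(grid), len(grid[0])
    seen, order, queue = {start}, [], deque([start])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order

print(coverage_order(room))
```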
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot exits the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups, in my opinion. At least for a start.
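The dead-reckoning part really is just accumulation; a bare sketch with invented encoder/compass readings:

```python
import math

# Each tick the encoders give a travelled distance and the compass a heading;
# the position is simply accumulated. Sample readings below are made up.
x, y = 0.0, 0.0
for distance_cm, heading_deg in [(10, 0), (10, 0), (10, 90), (10, 90)]:
    h = math.radians(heading_deg)
    x += distance_cm * math.cos(h)
    y += distance_cm * math.sin(h)
print(round(x, 1), round(y, 1))   # (20.0, 20.0): 20 cm east, then 20 cm north
```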
Have you ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range - more info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino has lots of GPS module options:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above), assuming it will do all that obstacle avoidance stuff.
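The point-to-point part boils down to comparing the bearing to the next waypoint with the compass heading; a rough sketch (a planar approximation is fine at house scale; the fixes below are invented):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees from north."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def steer(current_heading, target_bearing):
    """Signed turn in degrees: positive = turn right, negative = turn left."""
    return (target_bearing - current_heading + 180.0) % 360.0 - 180.0

here = (48.85830, 2.29440)        # made-up current fix
waypoint = (48.85837, 2.29448)    # made-up stored waypoint
b = bearing_deg(*here, *waypoint)
print(round(b, 1), round(steer(current_heading=90.0, target_bearing=b), 1))
```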
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And in that list I found this, specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate. Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-centimetre range and it would be cheap. You can then map anything you bump into easily.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
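The triangulation itself is only a couple of lines once you have the two bearings; a minimal sketch with invented beacon positions and angles (measured from the +x axis):

```python
import math

def triangulate(b1, bearing_to_b1_deg, b2, bearing_to_b2_deg):
    """The robot lies on the reversed bearing ray from each known beacon;
    intersect the two rays. Returns (x, y), or None if they are parallel."""
    a1 = math.radians(bearing_to_b1_deg)
    a2 = math.radians(bearing_to_b2_deg)
    d1 = (-math.cos(a1), -math.sin(a1))        # direction beacon 1 -> robot
    d2 = (-math.cos(a2), -math.sin(a2))        # direction beacon 2 -> robot
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None                            # rays (nearly) parallel: no fix
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (b1[0] + t * d1[0], b1[1] + t * d1[1])

# Robot near (2, 2): it sees beacon (0, 0) at bearing 225 degrees and
# beacon (4, 0) at bearing 315 degrees.
print(triangulate((0.0, 0.0), 225.0, (4.0, 0.0), 315.0))
```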
You have a camera, you said? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; your position in the room can be computed from the angular distance to the borders of the ceiling, and your orientation can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner moves slowly in order to clean efficiently, it will have enough time to compute.
Good luck !
Use an ultrasonic sensor such as the HC-SR04 or similar.
As mentioned above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you're in with a QR code.
When you are near a wall, turn 90 degrees, move the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and keep moving. I think it will help :)
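In case it helps, a toy rendition of that zig-zag sweep, alternating the turn direction each pass so the whole room gets covered; the motor/sensor helpers are hypothetical stubs standing in for real Arduino commands:

```python
# Stubs only: in reality these would drive the motors and read the HC-SR04.
def drive_until_wall():  print("drive forward until the sonar reports a wall")
def turn(deg):           print(f"turn {deg} degrees")
def drive(cm):           print(f"drive {cm} cm")

ROBOT_WIDTH_CM = 30

def sweep(passes=4):
    left_turn = True                  # alternate turn direction each pass
    for _ in range(passes):
        drive_until_wall()
        d = 90 if left_turn else -90
        turn(d); drive(ROBOT_WIDTH_CM); turn(d)
        left_turn = not left_turn

sweep()
```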