Line follower bot location - algorithm

I am working on a line follower bot that travels on a map consisting of nodes. The confusion is how to let the bot know which node it is standing on; in other words, what approach should be taken to feed the map to the bot so that it knows every node of the map and also knows which node it is currently at.
I have searched the internet a lot, but nothing I found seems useful.

Line followers usually do not have any map. Instead, they usually have a pair of front sensors pointing downwards (usually IR photodiodes and LEDs) which detect the line crossing from the left and right side, and the robot simply turns back toward the line.
This is usually done by controlling the speed of the left and right motors with the brightness of the light detected by the right and left sensors (often without any MCU or CPU; the analog version uses just two comparators and a power amplifier to drive the motors, which results in much smoother movement instead of a zig-zag pattern).
Better bots also have built-in algorithms to search for the line if it has gaps (which usually requires a CPU or MCU).
If you insist on having a map, then you need an interface to copy it in (ISP, for example). However, to detect where it is, the robot needs to actually follow the line while remembering its trajectory, and compare that against the map until the detected trajectory corresponds to only one location and orientation in the map. You will just end up with a more complex and less reliable robot that has more or less the same or worse properties than a simple line follower.
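To illustrate the trajectory-matching idea, here is a minimal sketch; the map format, node names and turn labels are assumptions made up for this example, not something from the question. The robot records the turns it takes at intersections and keeps only the map states whose turn sequence matches:

    # Hypothetical map: at each (node, heading) we record which state a given
    # turn ("L", "R", "S" = straight) leads to.
    MAP = {
        ("A", "N"): {"R": ("B", "E"), "S": ("C", "N")},
        ("B", "E"): {"L": ("D", "N")},
        ("C", "N"): {"R": ("D", "E")},
    }

    def candidates_after(turn_history):
        """Return all (node, heading) states still consistent with the turns taken."""
        candidates = []
        for start in MAP:                      # every possible starting state
            state, ok = start, True
            for turn in turn_history:          # replay the observed turns
                moves = MAP.get(state, {})
                if turn not in moves:
                    ok = False
                    break
                state = moves[turn]
            if ok:
                candidates.append(state)
        return candidates

    # Once this returns a single state, the robot knows where it is.
    print(candidates_after(["R", "L"]))        # [('D', 'N')]

As the answer says, this only converges once the travelled path is distinctive enough, which is exactly the added complexity being warned about.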
Another option is to use a positioning system: either there is a built-in positioning system on the maze or map (markers, transponders or whatever), or you place your robot at a predetermined position and orientation and hit a reset button, or you use accelerometers and gyros to integrate the position over time. However, as mentioned, I see no benefit in any of this for a line follower. This kind of thing is better suited to unknown-maze-solver robots (they usually use SONAR, or IR photodiode+LED pairs oriented forward and to the sides instead of downwards).

Related

How to map a house layout, room by room to be used for simple room to room navigation by a robot?

I am planning a robot, basically an Arduino coupled with a webcam and an RC car, to navigate from one point in the house to another using a map of the house layout, made possibly by a webcam tour of the place.
It should receive a command to where it should go based on input from my smartphone or PC. Each room will have an ID code which the robot should use to determine the travel path.
Also, it should be able to come to the room where I am, locating me via Bluetooth or Wi-Fi.
Sensors: Proximity sensors and light sensors
I live in the house, so that is not an issue.
Any ideas on where I can start?
I participated in a similar project; it will be more difficult than you think right now.
We used Bluetooth beacons. Fix their positions, then you can measure the signal strength with the robot. If you know the positions of the beacons (they are fixed), you can calculate where the robot actually is. But they are very inaccurate, and it takes a couple of seconds to scan all the beacons.
If you want to navigate through your house, I think the easiest way is to plant the beacons, go around the house with the robot and measure the signals (the more, the better). This way you can create a discrete layout of your house. In my opinion, the easiest way to store the map is to represent the layout as a graph. The nodes are the discrete points you measured, and there is an edge between two nodes if the robot can travel between them in "one step". This way you can represent temporary obstacles too, for example by deleting an edge. And the robot can easily determine which way to go, just using Dijkstra's algorithm.
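As a minimal sketch of that graph idea (the point names, costs and adjacency below are invented for illustration): store the measured points as an adjacency dict and run Dijkstra over it; deleting an edge models a temporarily blocked passage.

    import heapq

    # Hypothetical layout: node -> {neighbour: travel cost}
    graph = {
        "hall":    {"kitchen": 2, "living": 5},
        "kitchen": {"hall": 2, "living": 1},
        "living":  {"hall": 5, "kitchen": 1, "bedroom": 4},
        "bedroom": {"living": 4},
    }

    def dijkstra(graph, start, goal):
        """Return the cheapest path from start to goal as a list of nodes."""
        queue = [(0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, step in graph[node].items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + step, neighbour, path + [neighbour]))
        return None   # no route, e.g. every edge to the goal was removed

    print(dijkstra(graph, "hall", "bedroom"))   # ['hall', 'kitchen', 'living', 'bedroom']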

Platformer Game - A realistic path-finding algorithm

I am making a game and I have come across a hard part to implement in code. My game is a tile-based platformer with lots of enemies chasing you. Basically, every frame/second/2 seconds, I want my enemies to be able to find the realistic, shortest path to my player. I originally thought of A* as a solution, but it leads the enemies onto paths that defy gravity, which is not good. Also, multiple enemies will be using it every second to get the latest path and then walk the first few tiles of it, so they will be discarding the rest of the path every second and just following the first few tiles. I know this seems like a lot, calculating a new path every second, all at the same time if there is more than one enemy, but I don't know any other way to achieve what I want.
This is a picture of what I want:
Explanation: the green figure is the player, the red one is an enemy. The grey tiles are regular, open, empty tiles, the brown tiles are the ones you can stand on, and the highlighted yellow tiles represent the path that I want my enemy to be able to find in order to realistically get to the player.
So, the question is: what realistic path-finding algorithm can I use to achieve this, while keeping it fast?
EDIT*
I updated the picture to represent the most complicated map there could be. This map represents what the player of my game actually sees; they just use WASD to move around, and they see themselves move through this 2D platformer view. There will be different types of enemies, all with different speeds and jump heights, but all will have enough jump height and speed to make the jumps in this map and manoeuvre through it. The maps are generated by simply reading an XML file that holds the level data. The data is parsed and different types of tiles are placed in the tile-holding sprite according to what the XML says, e.g. an XML node like (type="reg" graphic="grass2" x="5" y="7"), where the x and y are multiplied by a constant gridSize (like 30 or so) and the tiles are placed down accordingly. The enemies get their frame-by-frame instructions from an AI class attached to them. This class is responsible for producing the path and returning the first direction to the enemy; this should only happen every second or so, so that the enemies don't follow an old, wrong path. Please let me know if you understand my concept and have some thoughts/ideas, or maybe even the answer I'm looking for.
Also: the physics in this game is separate from the pathfinding; it works just fine, using an AABB vs. AABB concept (the player and enemies also being AABBs).
The trick with using A* here is how you link tiles together to form available paths. Take, for example, the first gap the red figure would need to cross. The 'link' to the next platform (i.e. the brown tile to the left) is actually a jump action, not a move action. Additionally, it's up to you to determine how the nodes connect together; for starters I'd add a heavy penalty when moving from a gray tile over a brown tile to a gray tile with nothing underneath (without discouraging jumps that open up a shortcut).
There are two routes I see personally: run a quick prediction of how far and where the player can jump and adjust how the algorithm determines node adjacency, or accept the path, determine where parts of it 'hang' in the air (no brown tile immediately below), and animate the enemy 'jumping' to the next part of the path. The trick is handling things when the enemy might pass through brown tiles in the event the path isn't a parabola.
I am not versed in either solution; just something I've thought about.
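As a rough sketch of the first route (the tile grid, jump reach and cost values below are assumptions for illustration, not from the question): build the adjacency so that walk edges only exist between supported tiles, add explicit jump edges within the enemy's reach, and then run plain A* over that graph.

    import heapq

    # 0 = open tile, 1 = solid (brown) tile; a small made-up level
    LEVEL = [
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [1, 1, 0, 0, 1, 1],
    ]
    ROWS, COLS = len(LEVEL), len(LEVEL[0])
    JUMP_REACH = 3          # assumed horizontal jump distance in tiles

    def supported(r, c):
        """A tile is standable if it is open and the tile below it is solid."""
        return LEVEL[r][c] == 0 and r + 1 < ROWS and LEVEL[r + 1][c] == 1

    def neighbours(node):
        r, c = node
        # walk edges: adjacent standable tiles, cost 1
        for dc in (-1, 1):
            if 0 <= c + dc < COLS and supported(r, c + dc):
                yield (r, c + dc), 1
        # jump edges: standable tiles within reach on the same row, higher cost
        for dc in range(2, JUMP_REACH + 1):
            for target in (c - dc, c + dc):
                if 0 <= target < COLS and supported(r, target):
                    yield (r, target), dc + 1   # crude jump penalty

    def a_star(start, goal):
        frontier = [(0, start, [start])]
        best = {start: 0}
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt, cost in neighbours(node):
                new_cost = best[node] + cost
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    h = abs(nxt[1] - goal[1])      # column-distance heuristic
                    heapq.heappush(frontier, (new_cost + h, nxt, path + [nxt]))
        return None

    print(a_star((1, 0), (1, 5)))   # walks, jumps the gap, walks again

A real game would also let jump edges reach tiles on higher or lower rows and would derive JUMP_REACH from each enemy's speed and jump height, but the shape of the solution is the same.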
You need to give us the most complicated case of map, player and enemy behaviour (including jump-up and across speed) that you are going to create, either automatically or manually, so we can give relevant advice. The given map is so simple: put the map in a 2-dimensional array with the initial player location as an element of that map, then first test whether the lower-numbered column on the same row is occupied by a brown tile; if not, move there and repeat until that fails, then try the same row with a higher column, and so on, to move the enemy.
Update: from my reading, the stage generation is something you create, not semi-random.
My suggestion is that the enemy creates clones of itself with the same AI but invisible, and each clone starts going in a different direction (jump up/left/right, jump diagonally right/left), and every time a clone succeeds it creates a new clone: basically a genetic algorithm. From the map it seems an enemy never needs to evaluate one path over another; one way simply fails to get closer to the player's initial position and another doesn't.
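That cloning idea amounts to a breadth-first search over simulated moves. A toy sketch under made-up assumptions (the simulate hook standing in for the game's own physics is invented here, not part of the answer):

    from collections import deque

    ACTIONS = ["left", "right", "jump_left", "jump_right", "jump_up"]

    def search(start, player_pos, simulate):
        """Breadth-first 'cloning': expand every action from every reached state.

        simulate(state, action) should return the new state, or None if the
        move fails (falls into a pit, hits a wall, etc.).
        """
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == player_pos:
                return actions                  # first clone to reach the player wins
            for action in ACTIONS:
                nxt = simulate(state, action)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None

    # Toy usage: a 1-D corridor where only left/right moves change the position.
    def toy_simulate(x, action):
        return {"left": x - 1, "right": x + 1}.get(action)

    print(search(0, 3, toy_simulate))   # ['right', 'right', 'right']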

Tracking user defined points with OpenCV

I'm working on a project where I need to track two points in an image. So far, the best way I have of identifying these points is to get the user to click on them when the program is first run. I'm using the pyramidal Lucas-Kanade method built into OpenCV (documented here), but as is to be expected, this doesn't work too well. Is there a better alternative algorithm for tracking points in OpenCV, or alternatively some other way of verifying the points I already have?
I'm currently considering using GoodFeaturesToTrack, getting the distance from each detected feature to the point I want to track, maybe some sort of vector describing the relationship between the two points, and using this information to determine my new point.
I'm looking for suggestions of ways to go about this, not necessarily code samples.
Thanks
EDIT: I'm tracking small movements, if that helps
If you are looking for a solution that is implemented in OpenCV, the pyramidal Lucas-Kanade (PLK) method is quite good; otherwise I would prefer a particle-filter-based tracker.
To improve your tracking performance with the PLK, be sure that you have set up the parameters correctly. E.g. for large motion you need about 3 or 4 pyramid levels, and the window should not be too small (I prefer 17x17 to 27x27). Also keep in mind that the method needs textured areas to be able to track the points, i.e. corner-like image content (the aperture problem).
I would propose seeding a set of points (ps) in a grid around each point (P) you want to track, and then using a forward-backward threshold to reject falsely tracked points. The motion of your point (P) is then computed as the mean motion of its surviving seed points (ps).
The forward-backward confidence is computed by estimating the motion from frame 1 to frame 2 (ptList1 -> ptList2), and then from frame 2 back to frame 1 starting from the points of ptList2 (ptList2 -> ptListRef). A motion vector is rejected if || ptRef - pt1 || > fb_threshold.
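A minimal sketch of that scheme with OpenCV's Python bindings (the image file names, the seed-grid spacing, the clicked point at (120, 85) and the 1-pixel threshold are all placeholder assumptions):

    import numpy as np
    import cv2

    prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
    curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    lk = dict(winSize=(21, 21), maxLevel=3)   # window size and pyramid depth as suggested above

    # ptList1: a small grid of seed points around the clicked point P.
    xs, ys = np.meshgrid(np.arange(110, 131, 5), np.arange(75, 96, 5))
    ptList1 = np.stack([xs.ravel(), ys.ravel()], axis=1).reshape(-1, 1, 2).astype(np.float32)

    # Forward pass (frame 1 -> frame 2) and backward pass (frame 2 -> frame 1).
    ptList2, st_fwd, _ = cv2.calcOpticalFlowPyrLK(prev, curr, ptList1, None, **lk)
    ptListRef, st_bwd, _ = cv2.calcOpticalFlowPyrLK(curr, prev, ptList2, None, **lk)

    fb_threshold = 1.0                                       # assumed value, in pixels
    fb_error = np.linalg.norm((ptListRef - ptList1).reshape(-1, 2), axis=1)
    good = (st_fwd.ravel() == 1) & (st_bwd.ravel() == 1) & (fb_error < fb_threshold)

    # Motion of P = mean motion of the seed points that survived the check.
    motion = (ptList2[good] - ptList1[good]).reshape(-1, 2).mean(axis=0)
    new_P = np.array([120.0, 85.0]) + motion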

Tracking multi-touch movements inside the frame with transmitters and receivers

The problem is tracking multi-touch (at least two-finger touches) on the following frame device.
White circles are LEDs and black circles are receivers. When the user moves fingers inside this frame, we can analyze which receivers received light from the LEDs and which did not. Based on that, we need to track the movements of the fingers somehow.
The first problem is that we have separate x and y coordinates. What is an effective way to combine them?
The second problem concerns analyzing the coordinates when two fingers are close to each other. How do we distinguish between them?
I found that k-means clustering can be useful here. What other algorithms should I look at more carefully to handle this task?
As you point out in your diagram, with two fingers different finger positions can give the same sensor readings, so you may have some irreducible uncertainty, unless you find some clever way to use previous history or something.
Do you actually need to know the position of each finger? Is this the right abstraction for this situation? Perhaps you could get a reasonable user interface if you limited yourself to one finger for precise pointing, and recognised e.g. gesture commands by some means that did not use an intermediate representation of finger positions. Can you find gestures that can be easily distinguished from each other given the raw sensor readings?
I suppose the stereotypical computer science approach to this would be to collect the sensor readings from different gestures, throw them at some sort of machine learning box, and hope for the best. You might also try drawing graphs of how the sensor readings change over time for the different gestures and looking at them to see if anything obvious stands out. If you do want to try out machine learning algorithms, http://www.cs.waikato.ac.nz/ml/weka/ might be a good start.
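If you do want to experiment with the k-means idea from the question, here is a minimal sketch; the blocked-beam readings are invented, and forming candidate points as every combination of a blocked x beam with a blocked y beam is an assumption about the frame geometry:

    import numpy as np
    from sklearn.cluster import KMeans

    # Assumed readout: x positions and y positions whose beams are blocked.
    blocked_x = [3.0, 3.5, 10.0, 10.5]
    blocked_y = [2.0, 2.5, 7.0, 7.5]

    # Every (x, y) combination of blocked beams is a candidate touch point;
    # with two fingers this includes "ghost" points, which is exactly the
    # irreducible ambiguity mentioned in the answer above.
    candidates = np.array([(x, y) for x in blocked_x for y in blocked_y])

    kmeans = KMeans(n_clusters=2, n_init=10).fit(candidates)
    print(kmeans.cluster_centers_)   # two cluster centres; ghosts still need disambiguation

Tracking over time, e.g. keeping the cluster assignment closest to the previous frame's positions, is what would let you tell real touches from ghosts, when that is possible at all.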

Looking for ways for a robot to locate itself in the house

I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I am having trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any ideas for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
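A very stripped-down sketch of those steps (a 1-D corridor, a single distance-to-wall sensor and the noise values are all made-up assumptions; it just mirrors the list above):

    import math
    import random

    N = 500
    WORLD = 10.0                         # 1-D corridor, 10 m long, wall at the far end
    particles = [random.uniform(0, WORLD) for _ in range(N)]
    weights = [1.0 / N] * N

    def move(particles, distance, noise=0.1):
        """Motion update: shift every particle, with noise for motor uncertainty."""
        return [p + distance + random.gauss(0, noise) for p in particles]

    def sense(weights, particles, measured, noise=0.3):
        """Measurement update: reweight particles by how well they explain the reading."""
        new_w = [w * math.exp(-((measured - (WORLD - p)) ** 2) / (2 * noise ** 2))
                 for p, w in zip(particles, weights)]
        total = sum(new_w) or 1.0
        return [w / total for w in new_w]

    def resample(particles, weights):
        """Replace low-probability particles, sampling in proportion to weight."""
        return random.choices(particles, weights=weights, k=len(particles))

    particles = move(particles, 1.0)                     # robot drove 1 m forward
    weights = sense(weights, particles, measured=7.0)    # proximity sensor: 7 m to the wall
    particles = resample(particles, weights)
    print(sum(particles) / len(particles))               # position estimate, close to 3 m

A real robot would do this in 2-D with (x, y, heading) particles and a measurement model for whatever it can observe (markers, Wi-Fi strength, bumper hits), but the move/sense/resample loop is the same.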
A QR Code poster in each room would not only make an interesting Modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right) then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help do it with fewer markers.
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements and arrive at a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into the math, but I'd say it's a requirement if you want to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input) there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
Once you can determine your position, you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles, and then you'll need to devise a way to scan that incorporates the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
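As a small illustration of the known-height trick mentioned above (the marker and camera heights, the camera pitch and the measured angle are all invented numbers): if a marker of known height appears at a known elevation angle in the image, the horizontal distance to it follows from simple trigonometry.

    import math

    MARKER_HEIGHT = 2.0                  # metres above the floor (assumed)
    CAMERA_HEIGHT = 0.3                  # camera height on the robot (assumed)
    CAMERA_PITCH = math.radians(10.0)    # fixed upward tilt of the camera (assumed)

    def distance_to_marker(elevation_in_image):
        """Horizontal distance to a marker of known height.

        elevation_in_image is the angle above the optical axis at which the
        marker appears, derived from its pixel row and the camera's vertical
        field of view.
        """
        total_angle = CAMERA_PITCH + elevation_in_image
        return (MARKER_HEIGHT - CAMERA_HEIGHT) / math.tan(total_angle)

    print(distance_to_marker(math.radians(15.0)))   # marker seen 15 deg above the axis -> ~3.6 m

Each such distance constrains the robot to a circle around the corresponding marker; the bearing-angle method in the answer gives another circular constraint, and the Kalman estimator is what fuses several of these noisy constraints into one position.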
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or through the hacked drivers if your Eee PC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure) you can also use OpenCV for detecting features in the image.
One solution would be to use a strategy similar to "flood fill" (Wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors, e.g. run a motor for 1 second and measure the resulting change in proximity. With that info, you can move your bot an exact distance and continue sweeping the room using flood fill.
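A toy version of that flood-fill sweep over a grid (the room layout and the one-cell-per-robot-width discretisation are assumptions):

    # 0 = free, 1 = obstacle; each cell is assumed to be one robot-width across.
    room = [
        [0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
    ]

    def coverage_order(room, start):
        """Flood-fill visit order: every reachable free cell exactly once."""
        rows, cols = len(room), len(room[0])
        order, seen, stack = [], {start}, [start]
        while stack:
            r, c = stack.pop()
            order.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and room[nr][nc] == 0 and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return order

    print(coverage_order(room, (0, 0)))   # cells to sweep, in visiting order

Consecutive cells in that order are not always adjacent, so the calibrated move-an-exact-distance step is what lets the robot hop between them reliably.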
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot leaves the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels plus the compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups, in my opinion. At least for a start.
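A minimal dead-reckoning sketch for that encoder-plus-compass idea (the wheel radius, encoder resolution and example tick counts are assumptions):

    import math

    TICKS_PER_REV = 360          # assumed encoder resolution
    WHEEL_RADIUS = 0.03          # metres, assumed

    def update_position(x, y, left_ticks, right_ticks, heading_deg):
        """Advance the pose by the distance the wheels covered, along the compass heading."""
        ticks = (left_ticks + right_ticks) / 2.0
        distance = 2 * math.pi * WHEEL_RADIUS * ticks / TICKS_PER_REV
        heading = math.radians(heading_deg)      # compass gives an absolute heading
        return x + distance * math.cos(heading), y + distance * math.sin(heading)

    # Start at a known corner; both encoders counted one revolution while heading east (0 deg).
    x, y = update_position(0.0, 0.0, 360, 360, 0.0)
    print(x, y)                                  # roughly 0.19 m east of the start

The drift of this kind of dead reckoning is what the Kalman and particle-filter answers above are meant to correct.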
Have you ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range; more info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino does have lots of options of GPS-modules:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above), assuming it will do all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And in that list I found this, specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate. Have two or three beacons (e.g. IR LEDs of different frequencies) and an IR rotating sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-centimetre range and it would be cheap. You can then easily map anything you bump into.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
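A small sketch of that bearing-based fix (the beacon positions and measured angles are invented; this is just the two-beacon intersection, ignoring noise and degenerate geometry):

    import math

    def locate(beacon_a, beacon_b, bearing_a, bearing_b):
        """Intersect the two bearing lines from the robot to the beacons.

        bearing_a and bearing_b are the absolute directions (radians) in which
        the rotating eye sees each beacon, in the same frame as the beacon map.
        """
        ax, ay = beacon_a
        bx, by = beacon_b
        dax, day = math.cos(bearing_a), math.sin(bearing_a)
        dbx, dby = math.cos(bearing_b), math.sin(bearing_b)
        # Solve t_a * d_a - t_b * d_b = A - B with Cramer's rule (2x2 system).
        det = dax * (-dby) - (-dbx) * day
        if abs(det) < 1e-9:
            return None                          # beacons and robot nearly collinear
        rx, ry = ax - bx, ay - by
        t_a = (rx * (-dby) - (-dbx) * ry) / det
        return ax - t_a * dax, ay - t_a * day

    # Beacons at known spots; a robot at (2, 1) would see A at 45 deg and B at 135 deg.
    print(locate((3, 2), (1, 2), math.radians(45), math.radians(135)))   # ~(2.0, 1.0)

With three beacons you get redundant fixes, which helps with noise and resolves the collinear case.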
You have a camera, you said? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position within the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner moves slowly in order to clean efficiently, it will have enough time to compute.
Good luck !
Use an ultrasonic sensor (HC-SR04 or similar).
As mentioned above, sense the distance to the walls with the sensors and identify the part of the room with a QR code.
When you are near a wall, turn 90 degrees, move the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and move the robot again. I think this will help :)
