Increase sensitivity of collision detection in the Sphero API - sphero-api

I want to write a maze-solver program using Sphero and Artoo, so I need to detect collisions at low speed and with very high sensitivity.
I've been looking at Collision detection docs: https://github.com/orbotix/DeveloperResources/blob/master/docs/Collision%20detection%201.2.pdf
Could you help me figure out which parameters I should pass to ConfigureCollisionDetection? http://grab.by/vWLI

You are going to want very low values for Xt, Yt, Xspd, and Yspd. Try setting Xt and Yt to something under 100, and set Xspd and Yspd to something less than the speed you are commanding the ball to drive at.
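For example, in a hypothetical Python-style wrapper (the method and parameter names here are illustrative, not an actual Artoo/Sphero API; check your SDK, but the parameter layout follows the PDF linked above: Meth, Xt, Xspd, Yt, Yspd, Dead):

# Hypothetical wrapper around the ConfigureCollisionDetection command.
sphero.configure_collision_detection(
    meth=0x01,         # 0x01 enables collision detection
    xt=40, yt=40,      # axis impact thresholds: keep well under 100 for high sensitivity
    xspd=40, yspd=40,  # speed-scaled components: keep below your commanded roll speed
    dead=100,          # dead time between reports, in 10 ms units (so 1000 ms here)
)

If the ball reports collisions constantly while driving, raise the thresholds a little at a time until the false positives stop.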

Related

Multiple Tangos Looking at one location - IR Conflict

I am getting my first Tango in the next day or so; I have worked a little with Occipital's Structure Sensor, which is where my background in depth-perceiving cameras comes from.
Has anyone used multiple Tangos at once (let's say 6-10), looking at the same part of a room, using depth for identification and placement of 3D characters/content? I have been told that multiple devices looking at the same part of a room will confuse each Tango, as each will see the other Tangos' IR dots.
Thanks for your input.
Grisly
I have not tried to use several Tangos, but I have tried to use my Tango in a room where I had a Kinect 2 sensor, which caused the Tango to go bananas. The Tango does seem to have a lower-intensity IR projector in comparison, but I would still say it is a reasonable assumption that it will not work.
It might work at certain angles, but I doubt that you will be able to find a configuration of that many cameras without any of them interfering with each other. If you do make it work, however, I would be very interested to know how.
You could lower the depth camera rate (defaults to 5/second I believe) to avoid conflicts, but that might not be desirable given what you're using the system for.
Alternatively, only enable the depth camera when placing your 3D models on surfaces, then disable said depth camera when it is not needed. This can also help conserve CPU and battery power.
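A rough sketch of that enable-on-demand pattern (the config key and helper names below are illustrative and not verified against the Tango API docs; treat them as placeholders):

# Illustrative pseudocode only; look up the real Tango config keys.
config.put_boolean("config_enable_depth", False)      # depth off by default

def place_model_on_surface():
    config.put_boolean("config_enable_depth", True)   # depth on just for placement
    cloud = wait_for_next_point_cloud()               # hypothetical helper
    anchor = fit_plane_and_anchor(cloud)              # hypothetical helper
    config.put_boolean("config_enable_depth", False)  # off again to save CPU/battery
    return anchor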
It did not work. The Occipital Structure Sensor, on the other hand, did work (multiple devices in one place)!

Using windows phone combined motion api to track device position

I'd like to track the position of the device with respect to an initial position, with high accuracy (ideally) for motions at a small scale (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this, but ran into a few problems.

Apart from the noisy readings (which I was expecting and can tolerate), I see some behaviors that are conceptually wrong. For example, if I start from rest, move the phone around, and bring it back to rest, periodically updating the velocity vector along all dimensions in the process, I would expect the magnitude of the final velocity to be very small (ideally 0). But I don't see that.

I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI: my device does not have a gyroscope, so the API is going to be the low-accuracy version.)
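To illustrate, the update I am doing is essentially this double integration (sketched in Python for brevity; the real code reads DeviceAcceleration each tick). Note how even a tiny constant bias b in the samples leaves a residual velocity of b*t:

# Naive dead reckoning: integrate acceleration -> velocity -> position.
def integrate(samples, dt):
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    for accel in samples:              # accel = (ax, ay, az), gravity removed
        for i in range(3):
            velocity[i] += accel[i] * dt
            position[i] += velocity[i] * dt
    return velocity, position

# A mere 0.05 m/s^2 bias held for 10 s leaves a spurious 0.5 m/s velocity:
v, p = integrate([[0.05, 0.0, 0.0]] * 1000, dt=0.01)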

Looking for ways for a robot to locate itself in the house

I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any idea for doing this? Does any standard method exist for these kind of problems?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this (a bare-bones sketch follows the list):
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and given equal probabilities.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can use the particle with the highest probability, the centre of the highest-probability cluster, the weighted average of all particles, etc.
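Here is a bare-bones 1-D version of that loop, just to make the moving parts concrete (the corridor, landmark, and noise values are all made up for illustration):

import random, math

# One particle = [position, weight]. World: a 10 m corridor with a single
# landmark at x = 4.0 that the robot can range to (noisily).
LANDMARK, N = 4.0, 500
particles = [[random.uniform(0, 10), 1.0 / N] for _ in range(N)]

def step(particles, moved, measured_dist, motion_noise=0.1, sense_noise=0.3):
    for p in particles:
        p[0] += moved + random.gauss(0, motion_noise)        # move, with uncertainty
        err = measured_dist - abs(LANDMARK - p[0])           # observation vs. expectation
        p[1] *= math.exp(-err * err / (2 * sense_noise**2))  # reweight by agreement
    total = sum(p[1] for p in particles) or 1.0
    for p in particles:
        p[1] /= total                                        # normalise the weights
    # resample: low-weight particles are replaced by copies of high-weight ones
    picks = random.choices(particles, [p[1] for p in particles], k=len(particles))
    return [[p[0], 1.0 / len(particles)] for p in picks]

particles = step(particles, moved=0.5, measured_dist=3.2)
estimate = sum(p[0] * p[1] for p in particles)               # weighted mean position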
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
A QR code poster in each room would not only make an interesting modern-art piece, but would be relatively easy to spot with the camera!
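OpenCV even ships a QR detector, so this is cheap to try (a minimal sketch, assuming the webcam is device 0):

import cv2

cap = cv2.VideoCapture(0)                 # the robot's webcam
ok, frame = cap.read()
text, points, _ = cv2.QRCodeDetector().detectAndDecode(frame)
if points is not None and text:
    # text could encode the room name; the size and skew of the four corner
    # points also hint at distance and viewing angle.
    print("Saw room marker:", text)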
If you can place some markers in the room, using the camera could be an option. If two known markers have an angular displacement (left to right), then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers: by the inscribed angle theorem, the central angle subtending the chord between the markers (on that circle) is twice the angle you see. If you have the markers at a known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers; using both will do it with fewer markers.
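To make the first method concrete (marker positions assumed known; this pins you to a circle, so you still need a second pair of markers or the height trick to get a point):

import math

def locus_radius(marker_a, marker_b, seen_angle_rad):
    # Inscribed angle theorem: every point that sees the chord AB under the
    # same angle lies on a circle where chord = 2 * R * sin(seen_angle).
    return math.dist(marker_a, marker_b) / (2 * math.sin(seen_angle_rad))

# e.g. markers 2 m apart, seen 30 degrees apart -> a circle of radius 2 m
R = locus_radius((0, 0), (2, 0), math.radians(30))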
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements and arrive at a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input) there is no better way. If you actually want a career in autonomous robotics, this will play large in your future.
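The math does go deep, but the core of a scalar Kalman estimator is only a few lines, which shows the flavor of fusing dead reckoning with a noisy measurement (Q and R below are made-up tuning constants):

def kalman_1d(x, P, moved, measured, Q=0.05, R=0.5):
    # Predict: apply dead reckoning and grow the uncertainty.
    x, P = x + moved, P + Q            # Q: process (motion) noise variance
    # Update: blend in the measurement, weighted by relative confidence.
    K = P / (P + R)                    # R: measurement noise variance
    return x + K * (measured - x), (1 - K) * P

x, P = 0.0, 1.0                        # initial estimate and its variance
x, P = kalman_1d(x, P, moved=0.5, measured=0.62)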
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles and then you'll need to devise a way to scan incorporating the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or using the hacked drivers if your EeePC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field; it simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure), you can also use OpenCV for detecting features in the image.
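For instance, ORB keypoints in OpenCV give you candidate landmarks almost for free (a minimal sketch; tracking and matching them across frames is the part you would still have to build):

import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=200)       # fast, license-friendly detector
keypoints, descriptors = orb.detectAndCompute(img, None)
# Each keypoint's pt is its image position; the descriptors let you
# re-identify the same landmark in later frames (e.g. via cv2.BFMatcher).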
One solution would be to use a strategy similar to "flood fill" (wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run motor for 1 sec = xx change in proximity. With that info, you can move your bot for an exact distance, and continue sweeping the room using flood fill.
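A sketch of that calibration idea (read_proximity and drive_forward are placeholders for whatever your Arduino bridge exposes):

def calibrate_speed(run_seconds=1.0):
    # Face a wall, then measure how much closer the robot got per second.
    before = read_proximity()               # placeholder: distance reading in cm
    drive_forward(run_seconds)              # placeholder: run motors for N seconds
    after = read_proximity()
    return (before - after) / run_seconds   # travel speed, cm per second

def move_exact(cm, speed_cm_per_s):
    drive_forward(cm / speed_cm_per_s)      # time the motors to cover an exact distance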
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When a bot leaves the factory it has no info about its future operating environment, which more or less forces it to be inefficient from the outset.
If that's your case, you can hardcode that info, and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups in my opinion. At least for a start.
Ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres; using differential GPS you can get down to the sub-10 cm range. More info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino has lots of GPS module options:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above), assuming it also handles all the obstacle avoidance.
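The core of that point-to-point routine is the bearing and distance to the next waypoint; a small sketch using an equirectangular approximation, which is plenty accurate over house-sized distances:

import math

EARTH_R = 6371000.0  # metres

def heading_and_distance(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dx = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * EARTH_R  # metres east
    dy = (lat2 - lat1) * EARTH_R                                # metres north
    return math.degrees(math.atan2(dx, dy)) % 360, math.hypot(dx, dy)

# steer until the compass matches the bearing, stop when the distance is small
bearing, dist = heading_and_distance(59.3293, 18.0686, 59.3294, 18.0687)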
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too, but I don't understand why you can't just triangulate. Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor "eye" on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-centimetre range and it would be cheap. You can then map anything you bump into easily.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
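The triangulation math is a small linear solve: with bearings to two beacons at known positions (angles in the world frame, counter-clockwise from +x, which assumes the compass gives you absolute orientation), the robot sits where the two bearing rays cross. A sketch:

import math

def position_from_bearings(beacon1, theta1, beacon2, theta2):
    # The robot sees beacon i along angle theta_i, so it lies on the ray from
    # beacon i pointing along -(cos theta_i, sin theta_i); intersect the rays.
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    ex, ey = beacon1[0] - beacon2[0], beacon1[1] - beacon2[1]
    det = -d1[0] * d2[1] + d2[0] * d1[1]    # zero when the bearings are parallel
    a = (-ex * d2[1] + d2[0] * ey) / det
    return beacon1[0] - a * d1[0], beacon1[1] - a * d1[1]

# beacons at (0, 10) and (10, 10), seen at 90 and 45 degrees -> robot at (0, 0)
x, y = position_from_bearings((0, 10), math.radians(90), (10, 10), math.radians(45))

A third beacon gives redundancy to average out angular noise.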
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; position in the room can be computed from the angular distance to the borders of the ceiling, and direction can probably be extracted from the positions of doors.
This will require some image processing, but since the vacuum cleaner moves slowly in order to clean efficiently, it will have enough time to compute.
Good luck!
Use an ultrasonic sensor such as the HC-SR04 or similar.
As mentioned above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you are in with a QR code.
When you are near a wall, turn 90 degrees, move forward by the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and keep moving. I think it will help :)

Car steering algorithm?

I've already asked something similar, but now I have the problem of implementing "realistic" steering for a simple 2D (top-down) car racing game.
How can I do "realistic" steering for the car? (I use C#, but another language is welcome ;))
Using sin and cos?
If yes, how?
Thanks in advance!
I'm on my lunch break so I can't do tremendous justice to the "best" answer, but the pseudocode looks something like this:
// rotation: the car's heading in radians (0 = facing along +x); speed: units per frame
double y_change = Math.Sin(rotation) * speed;
double x_change = Math.Cos(rotation) * speed;
car.x += x_change;
car.y += y_change;
You would execute this code every frame; rotation would be controlled by your steering input, and speed by your acceleration input.
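Fleshed out into a runnable frame loop (Python here for brevity, the trig is identical in C#; the turn rate and acceleration constants are arbitrary tuning values):

import math

class Car:
    def __init__(self):
        self.x = self.y = 0.0
        self.rotation = 0.0    # heading in radians, 0 = facing +x
        self.speed = 0.0

def update(car, steering, throttle, dt):
    # steering and throttle are inputs in [-1, 1]
    car.rotation += steering * 2.0 * dt    # max turn rate: 2 rad/s
    car.speed += throttle * 50.0 * dt      # max acceleration: 50 units/s^2
    car.x += math.cos(car.rotation) * car.speed * dt
    car.y += math.sin(car.rotation) * car.speed * dt

car = Car()
for _ in range(60):                        # simulate one second at 60 fps
    update(car, steering=0.3, throttle=1.0, dt=1 / 60)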
You will probably want to use a physics engine that someone else has already created. I've heard good things about the XNA Physics API.
I would imagine that you will have to use sine and cosine, but that is just the tip of a VERY large iceberg...
Brian Driscoll’s helpful answer ten years ago is all you need to know about this for non-demanding applications. I often use Euler integration of position via velocity vector as modified by accelerations from a controller.
An interesting sidelight for wheeled vehicles is that they do not rotate around their center of gravity. A typical car rotates about a point along the line through the rear axle, but offset well to the side.
This concept is important for real vehicles. Their steering mechanism tries to conform to Ackermann steering geometry to minimize tire wear due to slip. In simulated vehicles these considerations are important for modeling instantaneous curvature and predicting the future path.
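The standard way to capture that rear-axle behaviour without a full physics engine is the kinematic bicycle model, where the heading rate depends on speed, wheelbase, and steering angle (a sketch; the wheelbase value is an arbitrary assumption):

import math

def bicycle_step(x, y, heading, speed, steer_angle, dt, wheelbase=2.5):
    # (x, y) tracks the rear axle; the car rotates about a point on the
    # rear-axle line at distance wheelbase / tan(steer_angle) from it.
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer_angle) * dt
    return x, y, heading

x, y, h = 0.0, 0.0, 0.0
x, y, h = bicycle_step(x, y, h, speed=10.0, steer_angle=math.radians(15), dt=1 / 60)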
Algorithm is:
Record how someone else drives (using a dev version of the game).
(Optional) Split the recording into snippets covering a variety of common situations.
Replay the recordings in-game (using suitable snippets, possibly interpolating the trajectory between them).
You may also try fuzzy logic and a simple steering-unit model.
Model would be:
x = integrate horiz_velocity by t
horiz_velocity = integrate steering_angle by t
and steering_angle = fuzzy_steering_function(...)
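Numerically that double integral is just two Euler steps per frame; a sketch with the fuzzy rule stubbed out as a simple saturating response:

def fuzzy_steering_function(error):
    # Placeholder for real fuzzy rules: a saturating proportional response.
    return max(-1.0, min(1.0, 0.5 * error))

def step(x, v, dt, error):
    v += fuzzy_steering_function(error) * dt  # horiz_velocity = integral of steering_angle
    x += v * dt                               # x = integral of horiz_velocity
    return x, v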

Need help with circle collision and rotation? - Game Physics

Ok, so I have a bunch of balls:
What I'm trying to figure out is how to make these circles:
Rotate based on the surfaces they are touching
Fix collision penetration when dealing with multiple touching objects.
EDIT: This is what I mean by rotation
Ball 0 will rotate anti-clockwise as it's leaning on Ball 3
Ball 5 will rotate clockwise as it's leaning on Ball 0
Even though solutions to this are universal, just for the record I'm using Javascript and SVG, and would prefer implementing this myself rather than using a library.
Help would be very much appreciated. Thanks! :)
Here are a few links I think would help you out on your quest:
Box2D
Advanced Character Physics
Javascript Ball Simulation
Box2D has what you're looking for, and it's open source, I believe. You can download the files and see how they do what they do in order to achieve your effect.
Let me know if this helps; I'm trying to get better at answering questions on here. :)
EDIT:
So I went ahead and thought this out just a bit more to give some insight into how I would approach it.
Basically, compare the angles on a grid: if the ball is falling at +30 degrees relative to the ball it falls on, then rotate the ball positively; if it's falling at -30 degrees relative to the ball it falls on, then rotate the ball negatively. I'm not saying this is the correct solution, but just thinking about it, this is the way I would approach the problem off the bat.
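In code the heuristic boils down to the sign of the horizontal offset between centres (a sketch; the sign convention is an assumption to flip if your rendering disagrees):

def spin_direction(ball, support):
    # A ball resting on the right flank of its support slides down rightward
    # and rolls one way; on the left flank, the other way.
    dx = ball["x"] - support["x"]
    return 1 if dx > 0 else -1 if dx < 0 else 0   # +1 taken as clockwise here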
From a physics standpoint it sounds like you want to conserve both linear and angular momentum.
As a starting point, you'll want to establish ODE matrices that model both, and then perform some linear algebra to solve them. I personally would use Numpy/Scipy (probably using a sparse array) for that solution. But there are many approaches (sympy comes to mind). What modules do you want to use?
You'll want to familiarize yourself with coefficient of restitution and coefficient of friction and decide if you want to conserve kinetic energy too. (do you want/care if they keep bouncing and rolling around forever?) (you'll probably need energy matrices as well)
You'll be solving these matrices every timestep, all the while checking the condition that no two ball centers are closer than the sum of the two radii (and if they are, you adjust the momentum and energy terms for a post-collision condition).
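If the full matrix machinery is more than you need, a common lighter-weight route (to name it plainly: iterative projection plus an impulse response, not the ODE-matrix method described above) is to repeatedly separate each overlapping pair and exchange momentum along the contact normal; a sketch assuming equal masses:

import math

def resolve_pair(a, b, restitution=0.2):
    # a, b: dicts with x, y, vx, vy, r. Separate the overlap, then apply an
    # impulse along the contact normal (equal masses -> the 0.5/0.5 splits).
    dx, dy = b["x"] - a["x"], b["y"] - a["y"]
    dist = math.hypot(dx, dy) or 1e-9
    overlap = a["r"] + b["r"] - dist
    if overlap <= 0:
        return
    nx, ny = dx / dist, dy / dist                  # contact normal, a -> b
    a["x"] -= nx * overlap / 2; a["y"] -= ny * overlap / 2
    b["x"] += nx * overlap / 2; b["y"] += ny * overlap / 2
    rel = (b["vx"] - a["vx"]) * nx + (b["vy"] - a["vy"]) * ny
    if rel < 0:                                    # only if they are approaching
        j = -(1 + restitution) * rel / 2
        a["vx"] -= j * nx; a["vy"] -= j * ny
        b["vx"] += j * nx; b["vy"] += j * ny

def resolve_all(balls, iterations=5):
    # Several relaxation passes per frame let stacks of touching balls settle.
    for _ in range(iterations):
        for i in range(len(balls)):
            for j in range(i + 1, len(balls)):
                resolve_pair(balls[i], balls[j])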
This is just the barest of beginnings to a big project. Can I ask why you want to do this from scratch?
I would recommend checking out game physics simulation books and articles. See O'Reilly's Physics for Game Developers and the Gamasutra website, for example.
