I have timed sensor data for a car - such as Yaw Rate, Absolute Steering Angle etc - and would like to detect if the car is making a turn, left or right.
Currently, I'm thinking about using angular displacement and velocity, but I'm not sure if there's an existing robust algorithm that I could use.
I have also come across this post, which hints at a method that could be used, but I'm not sure if it is exactly the one I need.
Algorithm to detect left or right turn from x,y co-ordinates
Sorry for not giving much detail, but I'm really trying to find some literature to review and some vocabulary that could help me search for better solutions to my problem. Thanks.
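Not an established algorithm, just a minimal sketch of the angular-displacement idea, assuming the yaw rate arrives in degrees per second at a fixed sample rate and that a positive yaw rate means turning left (flip the sign if your sensor's convention differs):

```python
# Minimal sketch (not a production algorithm): integrate yaw rate over a
# sliding window and threshold the accumulated heading change. Assumptions:
# yaw rate in degrees/second, fixed sample rate, positive yaw rate = left turn.

def detect_turns(yaw_rate_deg_s, dt=0.02, window_s=2.0, threshold_deg=30.0):
    """Return a list of (sample_index, 'left'/'right') turn events."""
    window_n = max(1, int(window_s / dt))
    events = []
    for i in range(len(yaw_rate_deg_s)):
        # Heading change accumulated over the last `window_s` seconds.
        start = max(0, i - window_n + 1)
        heading_change = sum(yaw_rate_deg_s[start:i + 1]) * dt
        if heading_change > threshold_deg:
            events.append((i, "left"))
        elif heading_change < -threshold_deg:
            events.append((i, "right"))
    return events

# Toy usage: 3 seconds straight, then 3 seconds of a 20 deg/s left turn.
samples = [0.0] * 150 + [20.0] * 150
print(detect_turns(samples)[:1])   # first detection, e.g. (index, 'left')
```

The window length and threshold trade off sensitivity against false positives on things like lane changes, so they would need tuning against real data.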
Related
I'm trying to implement the LineTrace algorithm described in this article:
Linetrace Generative Art
Particularly where it says:
To trace the outline you can sample some of the nearby edges on the previous line, calculate the average direction of those edges and add a vertex to the current line along that direction. Then add some random motion to mimic free hand drawing. This seems to work quite well for a while, but there is some "inertia" that can be seen in the results—the shape adapts too slowly.
The amount of noise you add to each vertex is crucial. This noise is what drives the whole system to make interesting shapes since the tracing behaviour is always forced to attempt to replicate both the general movement and some of the random jitter as it progresses.
I'm trying to do this in Processing, and since I'm new to Processing and hazy on how vectors, edges and directions work, I have no idea how to start coding it. I would greatly appreciate some sample code, anything to help me get underway. I'm also curious about what he means by "add some random motion to mimic free hand drawing"; is he incorporating Perlin noise somehow? Thanks in advance.
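Since the question asks for sample code, here is a rough Python sketch of the idea as I read the article (not the author's code, and not Processing, but the structure translates directly). Plain Gaussian jitter stands in for whatever noise the article actually uses.

```python
import math
import random

# Rough sketch of the LineTrace idea as described in the article (my reading,
# not the author's code): each new line follows the previous one by averaging
# the directions of nearby edges and adding a small amount of random motion.

def trace_next_line(prev_line, neighborhood=3, jitter=0.6, offset=(0.0, 8.0)):
    """Given the previous line (list of (x, y) points), build the next one."""
    new_line = [(prev_line[0][0] + offset[0], prev_line[0][1] + offset[1])]
    for i in range(1, len(prev_line)):
        # Average direction of the nearby edges on the previous line.
        lo, hi = max(1, i - neighborhood), min(len(prev_line) - 1, i + neighborhood)
        dx = dy = 0.0
        for j in range(lo, hi + 1):
            dx += prev_line[j][0] - prev_line[j - 1][0]
            dy += prev_line[j][1] - prev_line[j - 1][1]
        length = math.hypot(dx, dy) or 1.0
        step = math.hypot(prev_line[i][0] - prev_line[i - 1][0],
                          prev_line[i][1] - prev_line[i - 1][1])
        # Step along the averaged direction, plus a little jitter.
        x = new_line[-1][0] + step * dx / length + random.gauss(0, jitter)
        y = new_line[-1][1] + step * dy / length + random.gauss(0, jitter)
        new_line.append((x, y))
    return new_line

# Usage: start from a straight line and trace a stack of lines from it.
line = [(float(x), 0.0) for x in range(0, 200, 5)]
lines = [line]
for _ in range(50):
    lines.append(trace_next_line(lines[-1]))
# In Processing you would then draw each line with beginShape()/vertex()/endShape().
```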
I'm trying to understand path tracing. So far I have only dealt with the very basics: a ray is launched from each intersection point in a random direction within the hemisphere, then again, and so on recursively, until the ray hits the light source. As a result, with small light sources this approach produces an extremely noisy image.
The following images show the noise level depending on the number of samples (rays) per pixel.
I am also not sure that I am doing everything correctly, because the Monte Carlo method, as far as I understand it, implies that several rays are launched from each intersection point and their results are then summed and averaged. But that approach makes the number of rays grow exponentially, reaching unmanageable values after about 6 bounces, so I decided it is better to launch several rays per pixel initially (each slightly shifted from the center of the pixel in a random direction) and generate only one ray at each intersection. I don't know whether this approach still counts as "Monte Carlo" or not, but at least this way the rendering does not take forever.
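For what it's worth, averaging independent single-path samples per pixel is still a valid Monte Carlo estimator as long as each bounce is weighted by BRDF * cos(theta) / pdf. Here is a minimal sketch of that "one random ray per bounce, N samples per pixel" structure, with a deliberately trivial scene (one diffuse sphere lit by a constant sky); the point is the control flow, not the image.

```python
import math
import random

MAX_BOUNCES = 5
SPHERE_C, SPHERE_R, ALBEDO = (0.0, 0.0, -3.0), 1.0, 0.7

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def mul(a, s): return (a[0]*s, a[1]*s, a[2]*s)

def intersect(o, d):
    """Ray/sphere intersection; returns the hit distance or None."""
    oc = add(o, mul(SPHERE_C, -1.0))
    b = dot(oc, d)
    disc = b*b - (dot(oc, oc) - SPHERE_R*SPHERE_R)
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def sample_hemisphere(n):
    """Uniform direction on the hemisphere around normal n (rejection method)."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        l2 = dot(d, d)
        if 1e-6 < l2 <= 1.0:
            d = mul(d, 1.0 / math.sqrt(l2))
            return d if dot(d, n) > 0.0 else mul(d, -1.0)

def radiance(o, d, depth):
    """ONE random continuation per bounce; the averaging happens per pixel."""
    if depth > MAX_BOUNCES:
        return 0.0
    t = intersect(o, d)
    if t is None:
        return 1.0                       # missed everything: the sky is the light
    p = add(o, mul(d, t))
    n = mul(add(p, mul(SPHERE_C, -1.0)), 1.0 / SPHERE_R)
    nd = sample_hemisphere(n)
    # Lambertian BRDF * cos(theta) / pdf, with pdf = 1/(2*pi) for uniform sampling.
    weight = 2.0 * ALBEDO * dot(nd, n)
    return weight * radiance(p, nd, depth + 1)

def render_pixel(px, py, samples=64, width=100, height=100):
    """Average N independent single-path samples: this is still Monte Carlo."""
    total = 0.0
    for _ in range(samples):
        x = (px + random.random()) / width - 0.5     # jitter inside the pixel
        y = (py + random.random()) / height - 0.5
        d = (x, -y, -1.0)
        d = mul(d, 1.0 / math.sqrt(dot(d, d)))
        total += radiance((0.0, 0.0, 0.0), d, 0)
    return total / samples

print(render_pixel(50, 50))
```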
Bidirectional path tracing
I started looking for ways to reduce the amount of noise and came across bidirectional path tracing. But unfortunately, I couldn't find a detailed explanation of this algorithm in simple words. All I understood is that rays are generated from both the camera and the light sources, and then there is a check on whether the endpoints of these paths can be connected.
As you can see, if the intersection points of the blue ray from the camera and the white ray from the light source can be freely connected (there are no obstacles in the connection path), then we can assume that the ray from the camera can pass through the points y1, y0 directly to the light source.
But there are a lot of questions:
If the light source is not a point but has some shape, must the point from which the ray is launched be randomly selected on the surface of that shape? If you take only the center, then there will be no difference from a point light source, right?
Do I need to build a path from the light source for each path from the camera, or should there be only one light path, while several paths (samples) are built from the camera for each pixel?
Should the number of bounces/reflections/refractions be the same for the camera path and the light path, or not?
But the questions don't end there. I have heard that bidirectional path tracing models caustics well (compared with regular path tracing), but I completely fail to see how it helps with that.
Example 1
Here the path will eventually be built, but the number of bounces will be extremely large, so no caustics will appear, even though the ray from the camera is directed almost at the same point where the light path ends.
Example 2
Here the path will not be built, because there is an obstacle between the endpoints of the two sub-paths. It could be built if point x3 were connected to point y1, but according to the algorithm (if I understand it correctly) only the last points of the paths are connected.
Question:
What is the use of such an algorithm if, in a significant number of cases, the paths either cannot be connected or end up unnecessarily long? Maybe I misunderstand something? I came across many articles and documents where this algorithm was described, but mostly it was described mathematically (using all sorts of magical terms like biased/unbiased, PDF, BSDF and others) rather than algorithmically. I am not that strong in mathematics and mathematical notation; I would just like to understand WHAT TO DO, how to implement it correctly in code, how these paths are connected, in what order, and so on. This can be explained in simple words or pseudocode, right? I would be extremely grateful if someone would finally shed some light on all this.
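Since the question asks for pseudocode: here is a very rough structural sketch of how a bidirectional tracer is usually organized. The scene functions are stubbed out and the correct weighting of each connection (multiple importance sampling, BSDF and geometry terms) is omitted entirely, so this is not an unbiased renderer; it only shows the order of operations, and in particular that every camera vertex is tested against every light vertex, not just the two endpoints.

```python
import random

MAX_CAMERA_BOUNCES = 4
MAX_LIGHT_BOUNCES = 4

def trace_subpath(start_point, start_dir, max_bounces):
    """Random walk through the scene; returns the list of vertices hit."""
    vertices = []
    point, direction = start_point, start_dir
    for _ in range(max_bounces):
        hit = intersect_scene(point, direction)          # stub below
        if hit is None:
            break
        vertices.append(hit)
        point = hit["position"]
        direction = random_direction_on_hemisphere(hit)  # stub below
    return vertices

def bdpt_sample(camera_ray, light):
    # 1. Sub-path from the camera.
    cam_path = trace_subpath(camera_ray[0], camera_ray[1], MAX_CAMERA_BOUNCES)
    # 2. Sub-path from a random point on the light's surface (not its center).
    light_point, light_dir = sample_light_surface(light)  # stub below
    light_path = trace_subpath(light_point, light_dir, MAX_LIGHT_BOUNCES)
    # 3. Try to connect EVERY camera vertex with EVERY light vertex
    #    (and with the light point itself), not just the two endpoints.
    contribution = 0.0
    for cv in cam_path:
        for lv in [{"position": light_point}] + light_path:
            if visible(cv["position"], lv["position"]):   # shadow ray, stub
                # A real implementation evaluates BSDFs, geometry terms and
                # an MIS weight here; this sketch just counts the connection.
                contribution += 1.0
    return contribution

# ---- stubs so the sketch runs; replace with real scene code ----
def intersect_scene(point, direction):
    return {"position": (random.random(), random.random(), random.random())} \
        if random.random() < 0.8 else None
def random_direction_on_hemisphere(hit):
    return (random.random(), random.random(), random.random())
def sample_light_surface(light):
    return (0.0, 2.0, 0.0), (0.0, -1.0, 0.0)
def visible(a, b):
    return True

print(bdpt_sample(((0, 0, 0), (0, 0, -1)), light=None))
```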
Some references that helped me understand path tracing properly:
https://www.scratchapixel.com/ (every rendering student should begin with this)
https://en.wikipedia.org/wiki/Path_tracing
If you're looking for more references: path tracing is used for "global illumination", as opposed to "direct illumination", which only relies on a straight line from the point to the light.
What's more, caustics are well known to be a hard problem, so don't begin with them! The Monte Carlo method is a good, straightforward method to begin with, but it has its limitations (i.e. caustics and tiny lights).
Some advice for rendering newbies
Mathematical notation is surely not the coolest thing, and everyone would of course prefer ready-to-go code. But maths is the most rigorous way to describe the world: it lets you model a whole physical interaction in a small formula instead of plenty of lines of code that don't quite fit the real problem. I suggest you make the effort to read the mathematics carefully, as a good mathematical formula is always fully detailed. If some variables are not specified, don't lose your time and look for another reference.
As the title states, I want to do dead reckoning with an accelerometer and a gyroscope. The accelerometer gives linear acceleration, which can be integrated once to get velocity, or twice to get the distance travelled between samples. I can also get the change in heading by integrating the gyroscope output. So, given an initial position, I can compute a new position by dead reckoning from the distance and the angle.
The idea is neat, but reality is not so simple. Accelerometers and gyroscopes are unstable and are affected by temperature and misaligned axes. I know there is a popular method, the Kalman filter, for combining these sensors with GPS to protect the navigation output from noise, but I think it's beyond my competence for now.
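For reference, a minimal 2D sketch of the naive integration described above (assuming the car stays roughly level, so the gravity axis can simply be ignored); as noted, the double integration of the accelerometer drifts badly in practice.

```python
import math

# Minimal 2D dead-reckoning sketch of the naive integration described above.
# Assumptions: the vehicle stays roughly level, 'ax' is forward acceleration
# in m/s^2, 'yaw_rate' is in rad/s, and dt is the sample period.

def dead_reckon(samples, x0=0.0, y0=0.0, heading0=0.0, dt=0.01):
    """samples: list of (ax, yaw_rate). Returns the list of (x, y) positions."""
    x, y, heading, speed = x0, y0, heading0, 0.0
    positions = [(x, y)]
    for ax, yaw_rate in samples:
        heading += yaw_rate * dt              # integrate gyro -> heading
        speed += ax * dt                      # integrate accel -> speed
        x += speed * math.cos(heading) * dt   # integrate speed -> position
        y += speed * math.sin(heading) * dt
        positions.append((x, y))
    return positions

# Toy usage: accelerate for 1 s, then turn gently for 2 s.
data = [(2.0, 0.0)] * 100 + [(0.0, 0.3)] * 200
print(dead_reckon(data)[-1])
```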
First, how do I remove the gravity component that is mixed into the raw accelerometer output?
Second, how do I correct gyroscope error?
Last, how do I implement a Kalman filter with the accelerometer, gyroscope and GPS?
Any suggestion is welcome; if you can give some code, that's best of all!
thank you
[edit #2013/12/12]:
I have given up using the accelerometer to calculate velocity and distance because of its large drift, and the error keeps growing under double integration. Luckily, this work is for a car in this case, so I now prefer to read the velocity from the CAN bus instead; it proved to be more accurate than the accelerometer. The velocity problem is therefore solved, but the other questions are still open in my opinion. I'm hoping for more good answers.
Look here: https://stackoverflow.com/a/19764828/2521214
And read the whole thing !!!
including the comments
Also view this: http://www.youtube.com/watch?v=C7JQ7Rpwn2k
The video link is from AlexWien's answer in the same thread.
I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed can vary if the battery is full or nearly empty
A netbook Eee PC is embedded on the robot
Do you have any idea for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this:
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities. Here the robot is gray and the particles are green.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
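To make the steps above concrete, here is a toy 1D particle filter: a robot moves along a corridor and observes a noisy (signed) offset to one known landmark. All the numbers (noise levels, particle count) are made up for the illustration.

```python
import math
import random

LANDMARK = 7.0
MOTION_NOISE = 0.1
SENSOR_NOISE = 0.3
N_PARTICLES = 500

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def step(particles, weights, control, measurement):
    # 1. Move every particle by the control input, plus motion noise.
    particles = [p + control + random.gauss(0, MOTION_NOISE) for p in particles]
    # 2. Re-weight particles by how well they explain the measurement.
    weights = [w * gaussian(measurement, LANDMARK - p, SENSOR_NOISE)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # 3. Resample: keep mostly high-weight particles, drop the rest.
    particles = random.choices(particles, weights=weights, k=N_PARTICLES)
    weights = [1.0 / N_PARTICLES] * N_PARTICLES
    return particles, weights

# Usage: the true robot starts at 0 and moves 0.5 per step; the estimate follows.
particles = [random.uniform(0, 10) for _ in range(N_PARTICLES)]
weights = [1.0 / N_PARTICLES] * N_PARTICLES
true_pos = 0.0
for _ in range(20):
    true_pos += 0.5
    z = (LANDMARK - true_pos) + random.gauss(0, SENSOR_NOISE)
    particles, weights = step(particles, weights, 0.5, z)
print("estimate:", sum(particles) / len(particles), "truth:", true_pos)
```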
A QR Code poster in each room would not only make an interesting Modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right) then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help do it with fewer markers.
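A small sketch of the two measurements just described (my own formulas, not the answerer's code): the circle of position follows from the inscribed-angle theorem, and the range to a marker of known height follows from its elevation angle. Angles are in radians.

```python
import math

def circle_of_position_radius(marker_separation, observed_angle):
    """Radius of the circle the camera must lie on, given the distance between
    two markers and the angle they subtend at the camera. By the inscribed-angle
    theorem the chord subtends twice that angle at the circle's center, so
    d = 2 * R * sin(theta)."""
    return marker_separation / (2.0 * math.sin(observed_angle))

def distance_from_height(marker_height, camera_height, elevation_angle):
    """Horizontal distance to a marker of known height when the camera
    measures the elevation angle to it."""
    return (marker_height - camera_height) / math.tan(elevation_angle)

# Usage: markers 2 m apart seen 20 degrees apart; a marker 1.5 m above the
# camera seen at 10 degrees of elevation.
print(circle_of_position_radius(2.0, math.radians(20)))   # ~2.92 m
print(distance_from_height(1.5, 0.0, math.radians(10)))   # ~8.51 m
```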
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements to arrive at a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for given input) there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles and then you'll need to devise a way to scan incorporating the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using that instead of your webcam, either through Microsoft's recently released official drivers or through the hacked drivers if your EeePC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure) you can also use OpenCV for detecting features in the image.
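For example, a minimal OpenCV snippet that picks out ORB features in a frame (the file name is just a placeholder; in practice you would grab frames from the webcam or Kinect stream):

```python
import cv2

# Minimal OpenCV example of picking out features in a camera frame using ORB.
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise SystemExit("frame.png not found; substitute a real frame")

orb = cv2.ORB_create(nfeatures=500)          # ORB: fast, free alternative to SIFT/SURF
keypoints, descriptors = orb.detectAndCompute(image, None)

print("found", len(keypoints), "features")
# Matching descriptors between frames (e.g. with cv2.BFMatcher and Hamming
# distance) is what lets you recognise a landmark you have seen before.
```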
One solution would be to use a strategy similar to "flood fill" (wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run motor for 1 sec = xx change in proximity. With that info, you can move your bot for an exact distance, and continue sweeping the room using flood fill.
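A toy grid version of that idea, assuming the room has already been discretised into cells of known size (using the motor/proximity calibration above), with '#' marking obstacles. The BFS order is the order in which the bot would be sent to clean the cells.

```python
from collections import deque

def coverage_order(grid, start):
    """Breadth-first flood fill over free cells, returning the visit order."""
    rows, cols = len(grid), len(grid[0])
    seen, order = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order

room = ["....#",
        ".##.#",
        ".....",
        "#...."]
print(coverage_order(room, (0, 0)))
```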
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot exits the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups in my opinion. At least for a start.
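A minimal sketch of that encoder + compass dead reckoning (units and sensor reads are placeholders): each update takes the distance rolled since the last update, from the wheel encoders, and the absolute heading from the compass, in radians.

```python
import math

def update_position(x, y, distance, heading):
    """Dead-reckon one step: move `distance` along the compass `heading`."""
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y

# Usage: drive 0.1 m per tick, first east then north.
x, y = 0.0, 0.0
for _ in range(10):
    x, y = update_position(x, y, 0.1, 0.0)          # heading east
for _ in range(10):
    x, y = update_position(x, y, 0.1, math.pi / 2)  # heading north
print(x, y)   # roughly (1.0, 1.0)
```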
Ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range; more info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And Arduino does have lots of options of GPS-modules:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above), assuming it handles all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate? Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-cm range and it would be cheap. You can then map anything you bump into easily.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
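A sketch of the triangulation step, assuming the rotating eye plus the compass give you absolute bearings (in radians, world frame) to two beacons at known positions; the robot lies on the intersection of the two bearing lines.

```python
import math

def locate(beacon1, bearing1, beacon2, bearing2):
    """Intersect the two bearing lines through the beacons to get the robot."""
    u1 = (math.cos(bearing1), math.sin(bearing1))
    u2 = (math.cos(bearing2), math.sin(bearing2))
    cross = u1[0] * u2[1] - u1[1] * u2[0]
    if abs(cross) < 1e-9:
        return None   # bearings (nearly) parallel: no unique fix
    dx, dy = beacon2[0] - beacon1[0], beacon2[1] - beacon1[1]
    # Solve beacon1 + t1*u1 == beacon2 + t2*u2 for t1.
    t1 = (dx * u2[1] - dy * u2[0]) / cross
    return (beacon1[0] + t1 * u1[0], beacon1[1] + t1 * u1[1])

# Usage: a robot at (1, 1) would see beacon A at (4, 1) due east and
# beacon B at (1, 5) due north.
print(locate((4.0, 1.0), 0.0, (1.0, 5.0), math.pi / 2))   # ~ (1.0, 1.0)
```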
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; your position in the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner has to move slowly to clean efficiently, it will have enough time to compute.
Good luck !
Use Ultra Sonic Sensor HC-SR04 or similar.
As mentioned above, sense the distance from the robot to the walls with the sensors, and identify which part of the room you are in with a QR code.
When you are near a wall, turn 90 degrees, move the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and move the robot again. I think it will help :)
OK, so I have a bunch of balls:
What I'm trying to figure out is how to make these circles:
Rotate based on the surfaces they are touching
Fix collision penetration when dealing with multiple touching objects.
EDIT: This is what I mean by rotation
Ball 0 will rotate anti-clockwise as it's leaning on Ball 3
Ball 5 will rotate clockwise as it's leaning on Ball 0
Even though solutions to this are universal, just for the record I'm using Javascript and SVG, and would prefer implementing this myself rather than using a library.
Help would be very much appreciated. Thanks! :)
Here are a few links I think would help you out on your quest:
Box2D
Advanced Character Physics
Javascript Ball Simulation
Box2D has what you're looking for, and it's open source, I believe. You can download the files and see how they do what they do in order to achieve your effect.
Let me know if this helps, trying to get better at answering questions on here. :)
EDIT:
So I went ahead and thought this out just a bit more to give some insight as far as how I would approach it. Take a look at the image below:
Basically, compare the angles on a grid: if the ball is falling at +30 degrees relative to the ball it lands on, rotate the ball positively; if it's falling at -30 degrees relative to the ball it lands on, rotate the ball negatively. I'm not saying this is the correct solution, but just thinking about it, this is the way I would approach the problem off the bat.
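A slightly more general way to get the spin (my suggestion, not necessarily what the heuristic above means): if a ball rolls without slipping on whatever it touches, its angular velocity follows from the contact normal and its own velocity. Convention here: y is up and positive omega means counter-clockwise; in SVG's y-down coordinates the visual sense is flipped.

```python
def rolling_angular_velocity(center, contact_point, velocity, radius):
    """omega = cross(n, v) / r, where n is the unit vector from the contact
    point to the ball's center and v is the ball's velocity."""
    nx = (center[0] - contact_point[0]) / radius
    ny = (center[1] - contact_point[1]) / radius
    return (nx * velocity[1] - ny * velocity[0]) / radius

# Usage: a ball of radius 1 rolling to the right (+x) on a floor below it
# spins clockwise (negative omega).
print(rolling_angular_velocity((0.0, 1.0), (0.0, 0.0), (2.0, 0.0), 1.0))  # -2.0
```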
From a physics standpoint it sounds like you want to conserve both linear and angular momentum.
As a starting point, you'll want to establish ODE matrices that model both and then perform some linear algebra to solve them. I personally would use Numpy/Scipy (probably using a sparse array) for that solution. But there are many approaches (SymPy comes to mind). What modules do you want to use?
You'll want to familiarize yourself with coefficient of restitution and coefficient of friction and decide if you want to conserve kinetic energy too. (do you want/care if they keep bouncing and rolling around forever?) (you'll probably need energy matrices as well)
You'll be solving these matrices every timestep, all the while checking that no two ball centers are closer than the sum of the two radii (and if they are, you adjust the momentum and energy terms for a post-collision state).
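As a concrete example of that overlap check and adjustment (a simple positional-correction trick common in ball simulations, not the full ODE treatment described in this answer): push the two circles apart along the line between their centers, half the penetration each.

```python
import math

def separate(ball_a, ball_b):
    """Each ball is a dict with 'x', 'y', 'r'. Mutates positions in place."""
    dx, dy = ball_b["x"] - ball_a["x"], ball_b["y"] - ball_a["y"]
    dist = math.hypot(dx, dy)
    overlap = ball_a["r"] + ball_b["r"] - dist
    if overlap <= 0.0 or dist == 0.0:
        return False                      # not overlapping (or exactly stacked)
    nx, ny = dx / dist, dy / dist         # unit vector from a to b
    ball_a["x"] -= nx * overlap / 2.0
    ball_a["y"] -= ny * overlap / 2.0
    ball_b["x"] += nx * overlap / 2.0
    ball_b["y"] += ny * overlap / 2.0
    return True

# With many touching balls, run a few relaxation passes over all pairs so the
# corrections settle.
a = {"x": 0.0, "y": 0.0, "r": 1.0}
b = {"x": 1.5, "y": 0.0, "r": 1.0}
separate(a, b)
print(a["x"], b["x"])   # -0.25 and 1.75: centers now exactly 2.0 apart
```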
This is just the barest of beginnings to a big project. Can I ask why you want to do this from scratch?
I would recommend checking out game physics simulation books and articles. See O'Reilly's Physics for Game Developers and the Gamasutra website, for example.