Here, I want to calculate the distance travelled by a robot with an ATmega2560. I know the formula distance = wheel circumference * number of wheel rotations, but I am not getting how to apply this formula to the following problem.
What is the approximate distance covered by the robot in 2 seconds if OCR5AL = OCR5BL = 0xB2, given that the maximum speed at which the motors rotate is 300 rpm and the wheels have a radius of 2.8 cm?
Where
OCR5AL is Output compare register low value for Left Motor
OCR5BL is Output compare register low value for Right Motor
Maximum speed is attained when OCR5AL = OCR5BL = 0xFF.
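For what it's worth, assuming the motor speed scales linearly with the OCR value (and that wheel rpm equals motor rpm), here is a sketch of the arithmetic in Python:

import math

# Sketch under two assumptions: speed scales linearly with the OCR value,
# and wheel rpm equals motor rpm.
ocr = 0xB2           # = 178
max_rpm = 300.0      # rpm at OCR = 0xFF
radius_cm = 2.8
seconds = 2.0

rpm = max_rpm * ocr / 0xFF               # ~209.4 rpm at this duty cycle
revolutions = rpm / 60.0 * seconds       # ~6.98 wheel turns in 2 s
distance_cm = revolutions * 2 * math.pi * radius_cm
print(distance_cm)                       # ~122.8 cm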
Motor revolutions per minute (rpm) and wheel revolutions are two different things. I do not think your robot's wheels rotate at 300 rpm.
So, in order to be able to calculate the distance, you need to know how many motor revolutions are needed for the wheel to rotate 360 degrees.
This would work fine if you have a stepper motor, for instance: knowing the number of steps for the wheel to make a full turn gives you a formula to calculate the distance.
If you want to approach this problem using time measurements, you need to figure out how many seconds and milliseconds it takes for the wheel to turn using different motor speeds.
I don't even know if it's mathematically feasible, but let's say I have a tower at (Tx, Ty) shooting at a monster located at (Mx(t),My(t)).
The thing is, the path followed by the monster is jagged and swirly, meaning that predictive aiming from a distance based on velocity/direction at -that exact time- would be useless. The monster would've changed directions two times over by the time the bullet reached its target.
To solve this, I have a function to fast forward the monster for (t) frames and get its position as (Mx(t), My(t)) assuming its velocity remains constant. I can get the monster's position at t = 0 (current position) or t = 99999, or anything in between. Think of it as a lookup table.
The hard part is having the turret predictably aim at a position derived from that function.
I have to know t beforehand to know what to put into (Mx(t), My(t)). I have to know (Mx(t), My(t)) to know the distance and calculate the t from that. Is this even possible? Are there alternatives?
Any kind of pseudocode is welcome.
If I understand your problem correctly, this is basically the transformation from the Euclidean to the polar coordinate system.
If you have the relative position x = Mx(t) - Tx and y = My(t) - Ty, then
distance = sqrt(x^2 + y^2)
phi = arctan(y/x)
This assumes infinite projectile speed. However, you can first calculate an approximate flight time for the projectile using the monster's current position, recalculate the flight time based on the estimated distance, and iterate this process.
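A minimal sketch of that iteration (fixed-point style), assuming the monster_pos_at(t) lookup described in the question and a hypothetical bullet_speed:

import math

def predict_intercept(tower, monster_pos_at, bullet_speed,
                      max_iters=20, eps=1e-3):
    """Guess a flight time t, look up the monster's position at t,
    recompute the flight time from the new distance, repeat."""
    t = 0.0
    for _ in range(max_iters):
        mx, my = monster_pos_at(t)
        t_new = math.hypot(mx - tower[0], my - tower[1]) / bullet_speed
        if abs(t_new - t) < eps:
            break
        t = t_new
    return monster_pos_at(t)  # aim here

As long as the projectile is noticeably faster than the monster, each pass shrinks the error, and a handful of iterations is usually enough.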
If convergence is too slow (not much difference between projectile and monster speed), you can also use the idea of divide and conquer.
Calculate the maximum distance the monster can run. Evaluate the distance for now, for the maximum, and for the midpoint; then decide in which half the monster is and evaluate the distance at the new midpoint, and so on.
I think anything beyond this would require more information on how you plan to calculate Mx(t) and My(t).
You want the atan2 function.
If the tower is at tx,ty and the monster's predicted position is mx, my then the angle of the tower's gun has to be:
angle = atan2(my - ty, mx - tx)
This should be available in most languages.
Distance is, of course:
square-root((my - ty)^2 + (mx - tx)^2)
where ^2 is "squared".
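In Python, for instance (mx, my being the predicted position; the coordinate values here are placeholders):

import math

tx, ty = 0.0, 0.0    # tower (placeholder values)
mx, my = 3.0, 4.0    # predicted monster position (placeholder values)
angle = math.atan2(my - ty, mx - tx)  # gun heading in radians
dist = math.hypot(mx - tx, my - ty)   # distance; 5.0 for these values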
I would like to write a small program simulating many particle collisions, starting first in 2D (I would extend it to 3D later on), to (in 3D) simulate the convergence towards the Boltzmann distribution and also to see how the distribution evolves in 2D.
I have not yet started programming, so please don't ask for code samples; it is a rather general question that should help me get started. The physics behind this problem is not an issue for me. It is rather the fact that I will have to simulate at least 200-500 particles to achieve a reasonably good speed distribution, and I would like to do that in real time.
Now, for every time step I would first update the positions of all the particles and then check for collisions to update the new velocity vectors. That, however, involves a lot of checking, since I would have to see whether every single particle collides with every other particle.
I found this post on more or less the same problem, and the approach used there is also the only one I can think of. I am afraid, however, that it will not work very well in real time, because it would involve too many collision checks.
So now: even if this approach works performance-wise (getting, say, 40 fps), can anybody think of a way to avoid unnecessary collision checks?
My own idea was splitting up the board (or, in 3D, the space) into squares (cubes) with dimensions at least the diameter of the particles, and only checking for collisions if the centres of two particles are within adjacent squares of the grid...
I would be happy to hear more ideas, as I would like to increase the number of particles as much as I can while still having a real-time calculation/simulation going on.
Edit: All collisions are purely elastic, without any other forces doing work on the particles. The initial situation will be determined by some user-chosen variables that set random starting positions and velocities.
Edit2: I found a good and very helpful paper on the simulation of particle collisions here. Hopefully it helps some people who are interested in more depth.
If you think of it, particles moving on a plane really form a 3D system where the three dimensions are x, y and time (t).
Let's say a "time step" goes from t0 to t1. For each particle, you create a 3D line segment going from P0(x0, y0, t0) to P1(x1, y1, t1) based on current particle position, velocity and direction.
Partition the 3D space into a 3D grid, and link each 3D line segment to the cells it crosses.
Now, each grid cell must be checked. If it is linked to 0 or 1 segments, it needs no further check (mark it as checked). If it contains 2 or more segments, you need to check for collisions between them: compute the 3D collision point Pt, shorten the two segments to end at this point (removing their links to cells they no longer cross), then create two new segments going from Pt to newly computed P1 points according to the new direction/velocity of the particles. Add these new line segments to the grid and mark the cell as checked. Adding a line segment to the grid turns all crossed cells back to the unchecked state.
When there are no more unchecked cells in your grid, you've resolved your time step.
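A minimal sketch of the linking step, with deliberately conservative binning (every cell overlapped by a segment's bounding box; a 3D DDA walk would link only the cells actually crossed). The segments list and cell_size are placeholders:

import math
from collections import defaultdict

def cells_for_segment(p0, p1, cell_size):
    """Yield the grid cells overlapped by the segment's bounding box,
    a superset of the cells the segment actually crosses."""
    lo = [math.floor(min(a, b) / cell_size) for a, b in zip(p0, p1)]
    hi = [math.floor(max(a, b) / cell_size) for a, b in zip(p0, p1)]
    for i in range(lo[0], hi[0] + 1):
        for j in range(lo[1], hi[1] + 1):
            for k in range(lo[2], hi[2] + 1):
                yield (i, j, k)

cell_size = 1.0
segments = [((0., 0., 0.), (2.5, 1., 1.))]  # ((x0, y0, t0), (x1, y1, t1))
grid = defaultdict(list)
for seg_id, (p0, p1) in enumerate(segments):
    for cell in cells_for_segment(p0, p1, cell_size):
        grid[cell].append(seg_id)
# only cells holding 2 or more segments need a pairwise collision check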
EDIT
For 3D particles, adapt above solution to 4D.
Octrees are a nice form of 3D space-partitioning grid in this case, as you can "bubble up" the checked/unchecked status to quickly find cells requiring attention.
A good high level example of spatial division is to think about the game of pong, and detecting collisions between the ball and a paddle.
Say the paddle is in the top left corner of the screen, and the ball is near the bottom left corner of the screen...
--------------------
|▌ |
| |
| |
| ○ |
--------------------
It's not necessary to check for collisions each time the ball moves. Instead, split the playing field in two, right down the middle. Is the ball in the left-hand side of the field? (simple point-inside-rectangle test)
Left Right
|
---------|----------
|▌ | |
| | |
| | |
| ○ | |
---------|----------
|
If the answer is yes, split the left-hand side again, this time horizontally, so we have a top-left and a bottom-left partition.
Left Right
|
---------|----------
|▌ | |
| | |
----------- |
| | |
| ○ | |
---------|----------
|
Is the ball in the same top-left corner of the screen as the paddle? If not, there is no need to check for collision! Only objects which reside in the same partition need to be tested for collision with each other. By doing a series of simple (and cheap) point-inside-rectangle checks, you can easily save yourself from doing a more expensive shape/geometry collision check.
You can continue splitting the space down into smaller and smaller chunks until an object spans two partitions. This is the basic principle behind BSP (a technique pioneered in early 3D games like Quake). There is a whole bunch of theory on the web about spatial partitioning in 2 and 3 dimensions.
http://en.wikipedia.org/wiki/Space_partitioning
In 2 dimensions you would often use a BSP or quadtree. In 3 dimensions you would often use an octree. However the underlying principle remains the same.
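To make the idea concrete, here is a minimal quadtree sketch in Python (class name and the capacity threshold are my own choices; a real implementation would also handle objects with extent, not just points):

class Quadtree:
    """Each node splits into 4 children once it holds more than
    `capacity` points; only points sharing a leaf are collision candidates."""
    def __init__(self, x, y, w, h, capacity=4):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.capacity = capacity
        self.points = []
        self.children = None

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

    def insert(self, px, py):
        if not self.contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        hw, hh = self.w / 2, self.h / 2
        self.children = [Quadtree(self.x, self.y, hw, hh, self.capacity),
                         Quadtree(self.x + hw, self.y, hw, hh, self.capacity),
                         Quadtree(self.x, self.y + hh, hw, hh, self.capacity),
                         Quadtree(self.x + hw, self.y + hh, hw, hh, self.capacity)]
        for p in self.points:  # push existing points down into the children
            any(c.insert(*p) for c in self.children)
        self.points = []

    def leaf_points(self, px, py):
        """The points sharing a leaf with (px, py): the collision candidates."""
        if self.children is not None:
            for c in self.children:
                if c.contains(px, py):
                    return c.leaf_points(px, py)
        return self.points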
You can think along the lines of 'divide and conquer'. The idea is to identify orthogonal parameters which don't impact each other. E.g. one can think of splitting the momentum components along the 2 axes in 2D (3 axes in 3D) and computing collisions/positions independently. Another way to identify such parameters is to group particles which are moving perpendicular to each other, so that even if they impact, the net momentum along those lines doesn't change.
I agree the above doesn't fully answer your question, but it conveys a fundamental idea which you may find useful here.
Let us say that at time t, for each particle, you have:
P position
V speed
and an N*(N-1)/2 array of information between particles A(i) and A(j), where i < j; you use symmetry to evaluate an upper triangular matrix instead of a full N*(N-1) grid.
MAT[i][j] = { dx, dy, dz, sx, sy, sz }.
which means that with respect to particle i, particle j has a distance made up of the three components dx, dy and dz, and a delta-vee multiplied by dt, which is sx, sy, sz.
To move to instant t+dt you tentatively update the positions of all particles based on their speed
px[i] += dx[i] // px,py,pz make up vector P; dx,dy,dz is vector V premultiplied by dt
py[i] += dy[i] // Which means that we could use "particle 0" as a fixed origin
pz[i] += dz[i] // except you can't collide with the origin, since it's virtual
Then you check the whole N*(N-1)/2 array and tentatively calculate the new relative distance between every couple of particles.
dx1 = dx + sx
dy1 = dy + sy
dz1 = dz + sz
DN = dx1*dx1+dy1*dy1+dz1*dz1 # This is the new distance
If DN < D^2 with D diameter of the particle, you have had a collision in the dt just past.
You then calculate exactly where this happened, i.e. you calculate the exact d't of collision, which you can do from the old distance squared D2 (dx*dx+dy*dy+dz*dz) and the new DN: it's
d't = [(SQRT(D2)-D)/(SQRT(D2)-SQRT(DN))]*dt
(The time needed to reduce the distance from SQRT(D2) to D, at a speed that covers the distance SQRT(D2)-SQRT(DN) in time dt.) This makes the hypothesis that particle j, seen from the reference frame of particle i, hasn't "overshot".
It is a more hefty calculation, but you only need it when you get a collision.
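A sketch of that calculation, using the names from the matrix above (MAT[i][j]'s dx..sz, the diameter D and the step dt); it assumes the pair was separated at the start of the step:

import math

def collision_fraction(dx, dy, dz, sx, sy, sz, D, dt):
    """Return d't, the moment within the step at which the pair reaches
    distance D, or None if no collision happened during this dt."""
    d2_old = dx*dx + dy*dy + dz*dz              # squared distance at step start
    dx1, dy1, dz1 = dx + sx, dy + sy, dz + sz
    d2_new = dx1*dx1 + dy1*dy1 + dz1*dz1        # squared distance at step end
    if d2_new >= D*D:
        return None                             # still separated: no collision
    r_old, r_new = math.sqrt(d2_old), math.sqrt(d2_new)
    return (r_old - D) / (r_old - r_new) * dt   # the formula above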
Knowing d't, and d"t = dt-d't, you can repeat the position calculation on Pi and Pj using dx*d't/dt etc., and obtain the exact positions of particles i and j at the instant of collision; you update the speeds, then integrate for the remaining d"t and get the positions at the end of time dt.
Note that if we stopped here this method would break if a three-particle collision took place, and would only handle two-particle collisions.
So instead of running the calculations we just mark that a collision occurred at d't for particles (i,j), and at the end of the run, we save the minimum d't at which a collision occurred, and between whom.
I.e., say we check particles 25 and 110 and find a collision at 0.7 dt; then we find a collision between 110 and 139 at 0.3 dt. There are no collisions earlier than 0.3 dt.
We enter collision updating phase, and "collide" 110 and 139 and update their position and speed. Then repeat the 2*(N-2) calculations for each (i, 110) and (i, 139).
We will discover that there probably still is a collision with particle 25, but now at 0.5 dt, and maybe, say, another between 139 and 80 at 0.9 dt. 0.5 dt is the new minimum, so we repeat collision calculation between 25 and 110, and repeat, suffering a slight "slow down" in the algorithm for each collision.
Thus implemented, the only risk now is that of "ghost collisions", i.e., a particle is at D > diameter from a target at time t-dt, and is at D > diameter on the other side at time t.
This you can only avoid by choosing a dt so that no particle ever travels more than half its own diameter in any given dt. Actually, you might use an adaptive dt based on the speed of the fastest particle. Ghost glancing collisions are still possible; a further refinement is to reduce dt based on the nearest distance between any two particles.
This way, it is true that the algorithm slows down considerably in the vicinity of a collision, but it speeds up enormously when collisions aren't likely. If the minimum distance (which we calculate at almost no cost during the loop) between two particles is such that the fastest particle (which also we find out at almost no cost) can't cover it in less than fifty dts, that's a 4900% speed increase right there.
Anyway, in the no-collision generic case we have now done five sums (actually more like thirty-four due to array indexing), three products and several assignments for every particle couple. If we include the (k,k) couple to take into account the particle update itself, we have a good approximation of the cost so far.
This method has the advantage of being O(N^2) - it scales with the square of the number of particles - instead of O(M^3) - scaling with the volume of space involved.
I'd expect a C program on a modern processor to be able to manage in real time a number of particles in the order of the tens of thousands.
P.S.: this is actually very similar to Nicolas Repiquet's approach, including the necessity of slowing down in the 4D vicinity of multiple collisions.
Until a collision between two particles (or between a particle and a wall) happens, the integration is trivial. The approach here is to calculate the time of the first collision, integrate until then, then calculate the time of the second collision, and so on. Let's define tw[i] as the time the i-th particle takes to hit the first wall. It is quite easy to calculate, although you must take the diameter of the sphere into account.
The calculation of the time tc[i,j] of the collision between two particles i and j takes a little more time, and follows from the study in time of their distance d:
d^2=Δx(t)^2+Δy(t)^2+Δz(t)^2
We look for a positive t such that d^2 = D^2, where D is the diameter of the particles (or the sum of the two radii, if you want them to differ). Now, consider the first term of the sum on the RHS:
Δx(t)^2 = (x[i](t) - x[j](t))^2
        = (x[i](t0) - x[j](t0) + (u[i] - u[j])t)^2
        = (x[i](t0) - x[j](t0))^2 + 2(x[i](t0) - x[j](t0))(u[i] - u[j])t + (u[i] - u[j])^2 t^2
where the new terms appearing define the law of motion of the two particles for the x coordinate,
x[i](t)=x[i](t0)+u[i]t
x[j](t)=x[j](t0)+u[j]t
and t0 is the time of the initial configuration. Let (u[i], v[i], w[i]) be the three components of the velocity of the i-th particle. Doing the same for the other two coordinates and summing up, we get a 2nd-order polynomial equation in t,
at^2+2bt+c=0,
where
a=(u[i]-u[j])^2+(v[i]-v[j])^2+(w[i]-w[j])^2
b=(x[i](t0)-x[j](t0))(u[i]-u[j]) + (y[i](t0)-y[j](t0))(v[i]-v[j]) + (z[i](t0)-z[j](t0))(w[i]-w[j])
c=(x[i](t0)-x[j](t0))^2 + (y[i](t0)-y[j](t0))^2 + (z[i](t0)-z[j](t0))^2-D^2
Now, there are many criteria for evaluating whether a real solution exists, etc.; you can look at that later if you want to optimize. In any case you get tc[i,j], and if it is complex or negative you set it to plus infinity. To speed things up, remember that tc[i,j] is symmetric; you also want to set tc[i,i] to infinity for convenience.
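A sketch of the tc[i,j] computation from the a, b, c above (3D positions and velocities as tuples; D the collision distance):

import math

def collision_time(pi, pj, vi, vj, D):
    """Smallest positive root of a*t^2 + 2*b*t + c = 0, or infinity
    if the particles are receding or never reach distance D."""
    dx = [p - q for p, q in zip(pi, pj)]
    dv = [p - q for p, q in zip(vi, vj)]
    a = sum(v * v for v in dv)
    b = sum(x * v for x, v in zip(dx, dv))
    c = sum(x * x for x in dx) - D * D
    disc = b * b - a * c
    if a == 0 or b >= 0 or disc < 0:   # parallel motion, receding, or no real root
        return math.inf
    t = (-b - math.sqrt(disc)) / a     # the earlier of the two roots
    return t if t > 0 else math.inf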
Then you take the minimum tmin of the array tw and of the matrix tc, and integrate in time for the time tmin.
You now subtract tmin from all elements of tw and of tc.
In case of an elastic collision of the i-th particle with a wall, you just flip the velocity of that particle and recalculate only tw[i] and tc[i,k] for every other k.
In case of a collision between two particles, you recalculate tw[i],tw[j] and tc[i,k],tc[j,k] for every other k. The evaluation of an elastic collision in 3D is not trivial, maybe you can use this
http://www.atmos.illinois.edu/courses/atmos100/userdocs/3Dcollisions.html
About how does the process scale, you have an initial overhead that is O(n^2). Then the integration between two timesteps is O(n), and hitting the wall or a collision requires O(n) recalculation. But what really matters is how the average time between collisions scales with n. And there should be an answer somewhere in statistical physics for this :-)
Don't forget to add further intermediate timesteps if you want to plot a property against time.
You can define a repulsive force between particles, proportional to 1/(distance squared). At each iteration, calculate all the forces between particle pairs, add up the forces acting on each particle, compute the particle's acceleration, then its velocity and finally its new position. Collisions are handled naturally this way. Dealing with interactions between particles and walls is another problem, though, and must be handled in another way.
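A minimal sketch of one such iteration with NumPy (explicit Euler; k, mass and the (N, 2) array shapes are my own choices, and walls are ignored):

import numpy as np

def step(pos, vel, dt, k=1.0, mass=1.0):
    """One explicit-Euler step with pairwise 1/r^2 repulsion.
    pos and vel are (N, 2) arrays of positions and velocities."""
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 2) pairwise offsets
    r2 = (diff ** 2).sum(-1) + np.eye(len(pos))       # ones on the diagonal avoid 0/0
    force = (k * diff / r2[..., None] ** 1.5).sum(1)  # F_i = k * sum_j r_ij / |r_ij|^3
    vel = vel + force / mass * dt
    return pos + vel * dt, vel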
Suppose you have a GPS trajectory, i.e. a series of spatio-temporal coords where every coord is an (x, y, t) triple: x is longitude, y is latitude and t is the time stamp.
Suppose each trajectory is identified by 1000 (x, y) points; a compressed trajectory is a trajectory with fewer points than the original, for instance 300 points. A compression algorithm (Douglas-Peucker, Bellman, etc.) decides which points will be in the compressed trajectory and which points will be discarded.
Each algorithm makes its own choice. Better algorithms choose the points not only by spatial characteristics (x, y) but using spatio-temporal characteristics (x, y, t).
Now I need a way to compare the compressed trajectories against the original to understand which compression algorithm better reduces a spatio-temporal trajectory (the temporal component is really important).
I've thought of the DTW algorithm to check trajectory similarity, but it probably doesn't take the temporal component into account. What algorithm can I use to make this comparison?
Which compression algorithm is best depends to a large extent on what you are trying to achieve with it, and on other external variables. Typically, we're going to identify and remove spikes, and then remove redundant data. For example:
Known minimum and maximum velocity, acceleration, and ability to turn will let you remove spikes. If we look at the joining distance between a pair of points divided by the elapsed time, where
velocity = sqrt((xb - xa)^2 + (yb - ya)^2)/(tb - ta)
we can eliminate points where that distance couldn't be travelled in the elapsed time, given the speed constraint. We can do the same with acceleration constraints, and with change-of-direction constraints for a given velocity. These constraints change depending on whether the GPS receiver is static, hand-held, in a car, in an aeroplane, etc...
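A sketch of the speed-based spike filter (points as (x, y, t) tuples in consistent planar units; v_max is the assumed speed constraint):

import math

def remove_spikes(points, v_max):
    """Drop any fix that would require travelling faster than v_max
    from the last accepted fix."""
    kept = [points[0]]
    for x, y, t in points[1:]:
        xa, ya, ta = kept[-1]
        if t > ta and math.hypot(x - xa, y - ya) / (t - ta) <= v_max:
            kept.append((x, y, t))
    return kept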
We can remove redundant points using a moving window over three points: an interpolated (x, y, t) for the middle point is compared with the observed point, and the observed point is removed if it lies within a specified distance + time tolerance of the interpolated point. We can also curve-fit the data and consider the distance to the curve rather than using a moving 3-point window.
The compression may also have differing goals based on the constraints given, e.g. to simply reduce the data size by removing redundant observations and spikes, or to smooth the data as well.
For the former, after checking for spikes based on the defined constraints, we simply check the 3D distance of each removed point to the polyline connecting the compressed points. This is achieved by finding the pair of points before and after the removed point, interpolating a position on the line connecting them based on the observed time, and comparing the interpolated position with the observed position. The number of points removed increases as we allow this distance tolerance to increase.
For the latter we also have to consider how well the smoothed result models the data, the weights imposed by the constraints, and the design shape / curve parameters.
Hope this makes some sense.
Maybe you could use mean square distance between trajectories over time.
Probably, simply looking at the distance at times 1s, 2s, ... will be enough, but you can also do it more precisely by integrating (x1(t) - x2(t))^2 + (y1(t) - y2(t))^2 between time stamps. Note that between two time stamps both trajectories will be straight lines.
I've found what I need to compute the spatio-temporal error. As written in the paper "Compression and Mining of GPS Trace Data: New Techniques and Applications" by Lawson, Ravi & Hwang:
Synchronized Euclidean distance (sed) measures the distance between
two points at identical time stamps. In Figure 1, five time steps (t1
through t5) are shown. The simplified line (which can be thought of as
the compressed representation of the trace) is comprised of only two
points (P't1 and P't5); thereby, it does not include points P't2, P't3
and P't4. To quantify the error introduced by these missing points,
distance is measured at the identical time steps. Since three points
were removed between P't1 and P't5, the line is divided into four
equal sized line segments using the three points P't2, P't3 and P't4
for the purposes of measuring the error. The total error is measured
as the sum of the distance between all points at the synchronized time
instants, as shown below. (In the following expression, n represents
the total number of points considered.)
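A minimal sketch of SED along those lines, assuming both trajectories are (x, y, t) lists sorted by time and the compressed one keeps the original's first and last fixes:

import math

def sed(original, compressed):
    """Sum of distances between each original fix and the compressed
    trajectory linearly interpolated at the same time stamp."""
    total, k = 0.0, 0
    for x, y, t in original:
        # advance to the compressed segment [k, k+1] whose time span contains t
        while k + 2 < len(compressed) and compressed[k + 1][2] <= t:
            k += 1
        (xa, ya, ta), (xb, yb, tb) = compressed[k], compressed[k + 1]
        f = (t - ta) / (tb - ta) if tb > ta else 0.0
        xi, yi = xa + f * (xb - xa), ya + f * (yb - ya)  # synchronized point
        total += math.hypot(x - xi, y - yi)
    return total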
I'm looking for an algorithm that moves a point around an arbitrary closed, non-self-crossing polygon in N time. For example, move a point smoothly around a circle in 3 seconds.
1) Calculate the polygon's perimeter (or estimate it if the exact time of circling is not critical)
2) Divide the perimeter by the time desired for circling
3) Move the point around the polygon at this speed.
Edit following ire_and_curses' comment.
Maybe I read the question too literally; I fail to see the difficulty or the points raised by ire_and_curses.
The following describes more specifically the logic I imagine for step #3. A more exact description would require knowing details about the coordinate system, the structure used to describe the polygon, and an indication about the desired/allowed animation refresh frequency.
The "travelling point" which goes around the polygon would start on any edge of the polygon (maybe on a vertex, so as to make the start and end point more obvious) and would stay on an edge at all times.
From this starting point (be it predetermined or randomly selected), the travelling point would move towards a vertex at the calculated speed. Once there, it would go towards the other vertex of the edge it just arrived at, and proceed until it returns to the starting point.
The equations for calculating the points on a given edge are the same as for tracing a polygon: simple trig or even Pythagoras (*). The visual effect relies on refreshing the position of the travelling point at least 15 or so times per second. The refresh frequency (or rather its period) can be used to determine the distance between two consecutive points of the animation.
The only less trivial task is detecting the end of a given edge, i.e. when the travelling point needs to "turn" to follow the next edge. On these occasions, a fractional travel distance needs to be computed so that the next point in the animation lies on the next edge. Special mention also for extremely short edges, as these may require the fractional-distance logic to be repeated (or done differently).
Sorry for such a verbose explanation of a rather straightforward (literally ;-)) algorithm...
(*) Correction: as pointed out by Jefromi in a comment on another response, all that is needed with regard to the tracing is merely to decompose the x and y components of the motion. We do still need Pythagoras to calculate the distance between vertices for the perimeter calculation, and we do need to extrapolate because the number of animation steps on an edge is not [necessarily] an integer.
For the record, a circle is not a polygon: it's the limit of a regular polygon as the number of sides goes to infinity, but it's not a polygon. What I'm giving you isn't going to work if you don't have defined points.
Assuming you have your polygon stored in some format like a list of adjacent vertices, do an O(n) pass to calculate the perimeter by iterating through them and computing the distance between each pair of points. Divide that by the time to get the velocity at which you should travel.
Now, if you want to compute the path, iterate back through your vertices and calculate, from your current position, where your actual position should be on the next timestep, whatever your refresh timestep may be (if you need to move down a different edge, calculate how much time it would take to reach the end of your first edge, then continue from there...). To travel along an edge, decompose your velocity vector into its components (you know the slope of the edge from its two endpoints).
A little code might answer this with fewer words (though I'm probably too late for any votes here). Below is Python code that moves a point around a polygon at constant speed.
from turtle import *
import numpy as nx
import time

total_time = 10.  # time in seconds for one full trip around the polygon
step_time = .02   # time for the graphics to make each step

# define the polygon by its corner points;
# repeat the start point for a closed polygon
p = nx.array([[0.,0.], [0.,200.], [50.,150.], [200.,200.], [200.,0.], [0.,0.]])

perim = sum([nx.sqrt(sum((p[i]-p[i+1])**2)) for i in range(len(p)-1)])
distance_per_step = (step_time/total_time)*perim

seg_start = p[0]                  # segment start point
goto(seg_start[0], seg_start[1])  # start the graphic at this point
for i in range(len(p)-1):
    seg_end = p[i+1]  # final point on the segment
    seg_len = nx.sqrt(sum((seg_start-seg_end)**2))
    # at least one step per segment, so very short edges don't divide by zero
    n_steps_this_segment = max(1, int(seg_len/distance_per_step))
    step = (seg_end-seg_start)/n_steps_this_segment  # the vector step
    last_point = seg_start
    for j in range(n_steps_this_segment):  # j, so we don't shadow the outer i
        x = last_point + step
        goto(x[0], x[1])
        last_point = x
        time.sleep(step_time)
    seg_start = seg_end
Here I calculated the step size from step_time (anticipating a graphics delay), but one could calculate the step size from whatever is needed, for example the desired speed.