Projectile Distance/Time - time

I'm making a game and need to figure out how long it will take an object to fall to a certain height.
The object has an initial y value (y), an initial vertical velocity (vy), a gravity constant (gravity), and the vertical target distance it is supposed to drop (destination).
I can figure this out using a loop:
int i = 0;
while (y < destination) {
    y += vy;
    vy += gravity;
    i++;
}
return i;
The only problem with this is that I need to do this for several hundred objects and I have to do it every frame.
Is there any way to figure this out using some kind of formula? That way I could skip the per-frame loop and speed up my game while still solving this problem.
Thanks

You can solve this explicitly using elementary physics (kinematics).
Given an initial velocity v, constant acceleration a and a fixed distance x, the time t satisfies:
(1/2)at^2 + vt - x = 0
or
at^2 + 2vt - 2x = 0
Solve that quadratic with the quadratic formula and take the positive root for t.
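Here is a minimal Python sketch of that continuous-model solution (the function name is made up; it assumes a > 0 and distance x measured downward):

import math

def fall_time(v, a, x):
    # (a/2) t^2 + v t - x = 0  =>  t = (-v ± sqrt(v^2 + 2 a x)) / a
    disc = v * v + 2.0 * a * x
    if disc < 0:
        return None  # the object never covers the distance x
    return (-v + math.sqrt(disc)) / a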

Check out:
http://en.wikipedia.org/wiki/Equations_for_a_falling_body

The only problem with this is that I need to do this for several hundred objects and I have to do it every frame.
I think if performance is critical for you, then a formula with lots of *s and ^s probably won't suit your situation.
I don't know whether you need the simulation to be accurate or not, but you could look into particle systems. That is a general technique rather than a direct speed-up, but there is a lot of ongoing research in this field that might be useful, and plenty of material to read up on.
Also, since your method only uses additions, it is already quite efficient. Unless your objects are really hard to render, a few hundred won't be a problem. Most of the formulas will probably make your program look better but not work better. FYI, I once rendered almost 10,000 particles on a pretty old machine and it worked great.

You want to know how many frames it will take to travel a distance d, given initial (vertical) velocity v and acceleration (due to gravity) a.
After n frames the distance travelled is
v·n + Σ(0≤j<n) a·j = v·n + ½a·n(n−1)
So set d = v·n + ½a·n(n−1) and solve for n:
d = v·n + ½a·n(n−1)
∴ ½a·n² + (v − ½a)·n − d = 0
And then use the quadratic formula to get n:
n = (½a − v ± √((v − ½a)² + 2ad)) / a
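As a sanity check, here is a hedged Python sketch putting that closed form next to the original loop (names borrowed from the question; the ceiling is taken because the loop runs whole frames):

import math

def frames_closed_form(vy, gravity, destination):
    # smallest n with vy*n + gravity*n*(n-1)/2 >= destination
    n = (0.5 * gravity - vy
         + math.sqrt((vy - 0.5 * gravity) ** 2
                     + 2.0 * gravity * destination)) / gravity
    return math.ceil(n)

def frames_loop(vy, gravity, destination):
    y, n = 0.0, 0
    while y < destination:
        y += vy
        vy += gravity
        n += 1
    return n

# frames_closed_form(0.0, 1.0, 10.0) == frames_loop(0.0, 1.0, 10.0) == 5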
Some of the other answers have referred you to the usual solutions to Newton's equations of motion, but these only work for the continuous equations. You have a discrete simulation here, so if you want accurate rather than approximate answers then you need sums rather than integrals, as described above.
The first time I had to write this kind of prediction code was in a game involving tanks firing artillery shells at each other. To assist with aiming, the game drew a target reticule on the ground at the predicted position that the shell would land. For my first attempt I used the normal solutions to the continuous equations of motion, and the result was quite a way off. The discreteness of the simulation makes a noticeable difference to the result, and for some applications (like drawing a target reticule) agreement between the prediction and the simulation is vital.

Related

Querying large amount of multidimensional points in R^N

I'm looking at listing/counting the number of integer points in R^N (in the sense of Euclidean space), within certain geometric shapes, such as circles and ellipses, subject to various conditions, for small N. By this I mean that N < 5, and the conditions are polynomial inequalities.
As a concrete example, take R^2. One of the queries I might like to run is "How many integer points are there in an ellipse (parameterised by x = 4 cos(theta), y = 3 sin(theta) ), such that y * x^2 - x * y = 4?"
I could implement this in Haskell like this:
ghci> let latticePoints = [(x,y) | x <- [-4..4], y <-[-3..3], 9*x^2 + 16*y^2 <= 144, y*x^2 - x*y == 4]
and then I would have:
ghci> latticePoints
[(-1,2),(2,2)]
Which indeed answers my question.
Of course, this is a very naive implementation, but it demonstrates what I'm trying to achieve. (I'm also only using Haskell here as I feel it most directly expresses the underlying mathematical ideas.)
Now, if I had something like "In R^5, how many integer points are there in a 4-sphere of radius 1000 (squared radius 1,000,000), satisfying x^3 - y + z = 20?", I might try something like this:
ghci> :{
Prelude| let latticePoints2 = [(x,y,z,w,v) | x <- [-1000..1000], y <- [-1000..1000],
Prelude| z <- [-1000..1000], w <- [-1000..1000], v <- [-1000..1000],
Prelude| x^2 + y^2 + z^2 + w^2 + v^2 <= 1000000, x^3 - y + z == 20]
Prelude| :}
so if I now type:
ghci> latticePoints2
Not much will happen...
I imagine the issue is that it's effectively looping through roughly 2000^5 (32 quadrillion!) points, and it's clearly unreasonable of me to expect my computer to deal with that. I can't imagine a similar implementation in Python or C would help matters much either.
So if I want to tackle a large number of points in such a way, what would be my best bet in terms of general algorithms or data structures? I saw in another thread (Count number of points inside a circle fast) someone mention quadtrees as well as k-d trees, but I wouldn't know how to implement those, nor how to appropriately query one once it was implemented.
I'm aware some of these numbers are quite large, but the biggest circles, ellipses, etc I'd be dealing with are of radius 10^12 (one trillion), and I certainly wouldn't need to deal with R^N with N > 5. If the above is NOT possible, I'd be interested to know what sort of numbers WOULD be feasible?
There is no general way to solve this problem. The problem of finding integer solutions to algebraic equations (equations of this sort are called Diophantine equations) is known to be undecidable. Apparently, you can write equations of this sort such that solving the equations ends up being equivalent to deciding whether a given Turing machine will halt on a given input.
In the examples you've listed, you've always constrained the points to be on some well-behaved shape, like an ellipse or a sphere. While this particular class of problem is definitely decidable, I'm skeptical that you can efficiently solve these problems for more complex curves. I suspect that it would be possible to construct short formulas that describe curves that are mostly empty but have a huge bounding box.
If you happen to know more about the structure of the problems you're trying to solve - for example, if you're always dealing with spheres or ellipses - then you may be able to find fast algorithms for this problem. In general, though, I don't think you'll be able to do much better than brute force. I'm willing to admit that (and in fact, hopeful that) someone will prove me wrong about this, though.
The idea behind the kd-tree method is that you recursively subdivide the search box and try to rule out whole boxes at a time. Given the current box, use some method that either (a) declares that all points in the box match the predicate, (b) declares that no points in the box match the predicate, or (c) makes no declaration (one possibility, which may be particularly convenient in Haskell: interval arithmetic). On (c), cut the box in half (say along the longest dimension) and recursively count in the halves. Obviously the method can choose (c) all the time, which devolves to brute force; the goal here is to do (a) or (b) as much as possible.
The performance of this method is very dependent on how it's instantiated. Try it -- it shouldn't be more than a couple dozen lines of code.
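For concreteness, here is a minimal Python sketch of that scheme for the disk x^2 + y^2 <= r^2, with the interval bounds for the squares worked out by hand (illustrative only, not tuned):

def square_bounds(lo, hi):
    # tight bounds of x^2 for integer x in [lo, hi]
    if lo <= 0 <= hi:
        return 0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def count(r2, xlo, xhi, ylo, yhi):
    if xlo > xhi or ylo > yhi:
        return 0
    sxl, sxh = square_bounds(xlo, xhi)
    syl, syh = square_bounds(ylo, yhi)
    if sxh + syh <= r2:                  # (a) whole box inside: count wholesale
        return (xhi - xlo + 1) * (yhi - ylo + 1)
    if sxl + syl > r2:                   # (b) whole box outside: rule it out
        return 0
    if xhi - xlo >= yhi - ylo:           # (c) undecided: split the longer side
        mid = (xlo + xhi) // 2
        return (count(r2, xlo, mid, ylo, yhi)
                + count(r2, mid + 1, xhi, ylo, yhi))
    mid = (ylo + yhi) // 2
    return (count(r2, xlo, xhi, ylo, mid)
            + count(r2, xlo, xhi, mid + 1, yhi))

# count(25, -5, 5, -5, 5) == 81, the number of lattice points in a disk of radius 5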
For a nicely connected region, assuming your shape is significantly smaller than your containing search space, and given a seed point, you could use a growth/flood-fill algorithm, roughly like this (in Python, with the predicate supplied as a callback):
from collections import deque

def grow_region(seed, inside):
    # inside: callback that tests whether a point is within the region
    test_queue = deque([seed])
    inside_set, outside_set = set(), set()
    while test_queue:
        item = test_queue.popleft()
        if item in inside_set or item in outside_set:
            continue                     # already classified via another path
        if inside(item):
            inside_set.add(item)
            # neighbour points along each axis, generated on the fly
            for i in range(len(item)):
                for step in (-1, 1):
                    nb = item[:i] + (item[i] + step,) + item[i + 1:]
                    if nb not in inside_set and nb not in outside_set:
                        test_queue.append(nb)
        else:
            outside_set.add(item)
    return inside_set
The trick is to find an initial seed point that is inside the function.
Make sure your set implementation gives O(1) duplicate checking. This method will eventually break down as the number of dimensions grows and the surface area exceeds the volume, but for 5 dimensions it should be fine.

what is the right algorithm for ... eh, I don't know what it's called

Suppose I have a long and irregular digital signal made up of smaller but irregular signals occurring at various times (and overlapping each other). We will call these shorter signals the "pieces" that make up the larger signal. By "irregular" I mean that it is not a specific frequency or pattern.
Given the long signal I need to find the optimal arrangement of pieces that produce (as closely as possible) the larger signal. I know what the pieces look like but I don't know how many of them exist in the full signal (or how many times any one piece exists in the full signal). What software algorithm would you use to do this optimization? What do I search for on the web to get help on solving this problem?
Here's a stab at it.
This is actually the easier of the deconvolution problems. It is easier in that you may be able to have a unique answer. The harder problem is that you also don't know what the pieces look like. That case is called blind deconvolution. It is a harder problem and is usually iterative and statistical (ML or MAP), and the solution may not be right.
Luckily, your case is easier, but still not so easy because you have multiple pieces :p
I think that it may be commonly called mixture deconvolution?
So let f[t] for t = 1, ..., N be your long signal, and let h1[t], ..., hn[t] for t = 0, 1, ..., M be your short signals. Obviously here, N >> M.
So your hypothesis is that:
(1) f[t] = h1[t+a1[1]] + h1[t+a1[2]] + ...
         + h2[t+a2[1]] + h2[t+a2[2]] + ...
         + ...
         + hn[t+an[1]] + hn[t+an[2]] + ...
Observe that each row of that equation is actually hj * uj, where uj is a sum of shifted Kronecker deltas. The * here is convolution.
So now what?
Let Hj be the (maybe transposed, depending on how you look at it) Toeplitz matrix generated by hj; then the equation above becomes:
(2) F = H1 U1 + H2 U2 + ... + Hn Un
where F is the vector [f[1], ..., f[N]] and Uj is the vector [uj[1], ..., uj[N]], subject to the constraint that each uj[k] must be either 0 or 1.
So you can rewrite this as:
(3) F = H * U
where H = [H1 ... Hn] (horizontal concatenation) and U = [U1; ... ;Un] (vertical concatenation).
H is an Nx(nN) matrix. U is an nN vector.
Ok, so the solution space is finite. It is 2^(nN) in size. So you can try all possible combinations to see which one gives you the lowest ||F - H*U||, but that will take too long.
What you can do is solve equation (3) using a pseudo-inverse, or multilinear regression (which uses least squares, which comes out to the pseudo-inverse anyway), or something like this:
Is it possible to solve a non-square under/over constrained matrix using Accelerate/LAPACK?
Then move that solution around within the null space of H to get a solution subject to the constraint that uj[k] must be either 0 or 1.
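Here is a rough numpy sketch of that unconstrained step (names are hypothetical; note it uses the convention that a spike at time k contributes a copy of hj starting at sample k, i.e. hj[t−k], rather than the + shifts above):

import numpy as np

def conv_matrix(h, N):
    # N x N Toeplitz-style matrix: column k holds h shifted down by k samples
    h = np.asarray(h, dtype=float)
    H = np.zeros((N, N))
    for k in range(N):
        m = min(len(h), N - k)
        H[k:k + m, k] = h[:m]
    return H

def decompose(f, pieces):
    N = len(f)
    H = np.hstack([conv_matrix(h, N) for h in pieces])  # N x (nN), as in (3)
    U, *_ = np.linalg.lstsq(H, np.asarray(f, dtype=float), rcond=None)
    return (U > 0.5).reshape(len(pieces), N)            # crude projection onto {0,1}

# row j of decompose(f, [h1, ..., hn]) estimates the onset times of piece hj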
Alternatively, you can use something like Nelder-Mead or Levenberg-Marquardt to find the minimum of:
||F - H U|| + lambda g(U)
where g is a regularization function defined as:
g(U) = ||U - U*||
where U*[j] = 0 if |U[j]|<|U[j]-1|, else 1
Ok, so I have no idea if this will converge. If not, you have to come up with your own regularizer. It's kinda dumb to use a generalized nonlinear optimizer when you have a set of linear equations.
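Still, if you do try it, the objective is easy to write down (a sketch; lambda and the starting point U0 are yours to tune, and Nelder-Mead will be slow in this many dimensions):

import numpy as np
from scipy.optimize import minimize

def objective(U, H, F, lam):
    # g(U) = ||U - U*||, with U* the nearest 0/1 vector entry-wise
    Ustar = np.where(np.abs(U) < np.abs(U - 1), 0.0, 1.0)
    return np.linalg.norm(F - H @ U) + lam * np.linalg.norm(U - Ustar)

# res = minimize(objective, U0, args=(H, F, 0.5), method='Nelder-Mead')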
In reality, you're going to have noise and what not, so it actually may not be a bad idea to use something like MAP and apply the small pieces as prior.

Minimizing a function of vectors

I need to minimize the following sum:
minimize Σ(i = 1 to n) fi(v(i), v(i−1), tangent(i))
v and tangent are vectors.
fi takes the 3 vectors as arguments and returns a cost associated with these 3 vectors. For this function, v(i − 1) is the vector chosen in the previous iteration. tangent(i) is also known. fi calculates the cost of choosing a vector v(i), given the other two vectors v(i − 1) and tangent(i). The v(0) and v(n) vectors are known. tangent(i) values are also known in advance for all i = 0 to n.
My task is to determine all such v(i)s such that the total cost of the function values for i = 1 to n is minimized.
Can you please give me any ideas to solve this?
So far I can think of Branch and Bound or dynamic programming methods.
Thanks!
I think this is a problem in mathematical optimisation, with an objective function built up of dot products and arccosines, subject to the constraint that your vectors should be unit vectors. You could enforce this either with Lagrange multipliers, or by including a normalising step in the arccosine: if Ti is a unit vector, then for Vi calculate cos^-1(Ti.Vi/sqrt(Vi.Vi)). I would have a go at using a conjugate gradient optimiser for this, or perhaps even Newton's method, with my starting point Vi = Ti.
I would hope that this would be reasonably tractable, because each Vi is only related to its neighbouring Vi. You might even get somewhere by repeatedly adjusting each Vi in isolation, one by one, to optimise the objective function. It might be worth just seeing what happens if you repeatedly set Vi to be the average of Ti, Vi+1, and Vi−1, and then scale Vi to be a unit vector again.
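A tiny numpy sketch of that last experiment (purely a heuristic: it assumes the cost rewards alignment with the tangent and the neighbours, which your actual fi may or may not):

import numpy as np

def relax(T, V, iters=100):
    # T: (n+1) x d unit tangents; V: initial guesses, with V[0] and V[n] held fixed
    V = np.asarray(V, dtype=float).copy()
    for _ in range(iters):
        for i in range(1, len(V) - 1):
            v = T[i] + V[i - 1] + V[i + 1]   # average of tangent and neighbours
            V[i] = v / np.linalg.norm(v)     # rescale to a unit vector
    return V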

GPS Data time to distance base transformation

I am developing an application that logs a GPS trace over time.
After the trace is complete, I need to convert the time based data to distance based data, that is to say, where the original trace had a lon/lat record every second, I need to convert that into having a lon/lat record every 20 meters.
Smoothing the original data seems to be a well understood problem and I suppose I need something like a smoothing algorithm, but I'm struggling to think how to convert from a time based data set to a distance based data set.
This is an excellent question, and what makes it so interesting is that the data points should be assumed random. That means you cannot expect the trace, start to end, to follow a well-behaved curve (like a sine or cosine wave). So you will have to work in small increments such that values on your x-axis (so to speak) do not oscillate, meaning Xn cannot be less than Xn−1. The next consideration would be the case of overlap or near overlap of data points. Imagine I'm recording my GPS coordinates and we have stopped to chat or rest, and I walk randomly within a twenty-five-foot circle for the next five minutes. So the question would be how to ignore this type of "data noise".
For simplicity, let's consider linear calculations where there is no approximation between two points; it's a straight line. This will probably be more than sufficient for your calculations. Now, given the comment above regarding random data points, you will want to traverse your data from the start point to the end point sequentially. Traversal terminates when you pass the last data point or when you have covered the overall distance you want coordinates for (like a subset). Let's assume your plot precision is X. This would be your 20 meters. As you traverse, there will be three conditions:
1. The distance between the two points is greater than your precision. Therefore save the start point plus the precision X. This will also become your new start point.
2. The distance between the two points is equal to your precision. Therefore save the start point plus the precision X (or save the end point). This will also become your new start point.
3. The distance between the two points is less than your precision. Therefore the remaining precision is reduced by that distance. The end point will become your new start point.
Here is pseudo-code that might help get you started. Note: point y minus point x = distance between them, and point x plus value = new point on the line between point x and point y at that distance.
recordedPoints = received from trace;
newPlotPoints = empty list of coordinates;
plotPrecision = 20;
immedPrecision = plotPrecision;
startPoint = recordedPoints[0];
for (int i = 1; i < recordedPoints.Length; i++)
{
    Delta = recordedPoints[i] - startPoint;
    if (immedPrecision < Delta)
    {
        newPlotPoints.Add(startPoint + immedPrecision);
        startPoint = startPoint + immedPrecision;
        immedPrecision = plotPrecision;
        i--;                        // re-test the same recorded point
    }
    else if (immedPrecision == Delta)
    {
        newPlotPoints.Add(startPoint + immedPrecision);
        startPoint = startPoint + immedPrecision;
        immedPrecision = plotPrecision;
    }
    else // immedPrecision > Delta
    {
        // Store the last data point regardless
        if (i == recordedPoints.Length - 1)
        {
            newPlotPoints.Add(startPoint + Delta);
        }
        startPoint = recordedPoints[i];
        immedPrecision = immedPrecision - Delta;  // carry the shortfall forward
    }
}
Previously I mentioned "data noise". You can wrap the "if"/"else if" chain in another "if" that scrubs out this factor. The easiest way is to ignore a data point if it has not moved a given distance. Keep in mind this magic number must be small enough that sequentially ignored data points don't add up to something large and valuable, so putting a limit on consecutive ignored data points might be a benefit.
With all this said, there are many ways to accurately perform this operation. One suggestion to take this subject to the next level is interpolation. For .NET there is an open source library at http://www.mathdotnet.com. You can use their Numerics library, which contains interpolation at http://numerics.mathdotnet.com/interpolation/. If you choose such a route, your next major hurdle will be deciding the appropriate interpolation technique. If you are not a math guru, here is a bit of information to get you started: http://en.wikipedia.org/wiki/Interpolation. Frankly, polynomial interpolation using two adjacent points would be more than sufficient for your approximations, provided you keep the Xn not less than Xn−1 idea in mind, otherwise your approximation will be skewed.
The last item to note: these calculations are two-dimensional and do not consider altitude or the curvature of the earth. Here is some additional information in that regard: Calculate distance between two latitude-longitude points? (Haversine formula).
Nevertheless, hopefully this will point you in the correct direction. Without a doubt this is not a trivial problem, so keeping the data point range as small as possible while still being accurate will be to your benefit.
One other consideration might be to approximate using the actual data points themselves, using the precision only to disregard excess data; that way you are not essentially saving two lists of coordinates.
Cheers,
Jeff

Is there a fast way to invert a matrix in Matlab?

I have lots of large (around 5000 x 5000) matrices that I need to invert in Matlab. I actually need the inverse, so I can't use mldivide instead, which is a lot faster for solving Ax=b for just one b.
My matrices are coming from a problem that means they have some nice properties. First off, their determinant is 1, so they're definitely invertible. They aren't diagonalizable, though, or I would try to diagonalize them, invert them, and then put them back. Their entries are all real numbers (actually rational).
I'm using Matlab for getting these matrices and for this stuff I need to do with their inverses, so I would prefer a way to speed Matlab up. But if there is another language I can use that'll be faster, then please let me know. I don't know a lot of other languages (a little bit of C and a little bit of Java), so if it's really complicated in some other language, then I might not be able to use it. Please go ahead and suggest it, though, in case.
I actually need the inverse, so I can't use mldivide instead,...
That's not true, because you can still use mldivide to get the inverse. Note that A^{-1} = A^{-1} * I; in other words, the inverse is the solution X of A*X = I. In MATLAB, this is equivalent to
invA = A\speye(size(A));
On my machine, this takes about 10.5 seconds for a 5000x5000 matrix. Note that MATLAB does have an inv function to compute the inverse of a matrix. Although this will take about the same amount of time, it is less efficient in terms of numerical accuracy (see the documentation for inv for more on this).
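For what it's worth, the same identity-solve trick outside MATLAB looks like this (a numpy sketch, not a benchmark):

import numpy as np

def inverse_via_solve(A):
    # solve A X = I rather than forming the inverse directly with np.linalg.inv
    return np.linalg.solve(A, np.eye(A.shape[0]))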
First off, their determinant is 1 so they're definitely invertible
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) = ∏ λi for i = 1, ..., n. So just setting λ1 = M, λn = 1/M and λi = 1 for i ≠ 1, n will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λn → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
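A two-line numpy illustration of that point (M is just a made-up scale):

import numpy as np

M = 1e8
A = np.diag([M, 1.0 / M])                    # det(A) = 1 exactly
print(np.linalg.det(A), np.linalg.cond(A))   # ~1.0 and ~1e16: unit determinant, nearly singular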
My matrices are coming from a problem that means they have some nice properties.
Of course, there are other more efficient algorithms that can be employed if your matrix is sparse or has other favorable properties. But without any additional info on your specific problem, there is nothing more that can be said.
I would prefer a way to speed Matlab up
MATLAB uses Gaussian elimination to compute the inverse of a general matrix (full rank, non-sparse, without any special properties) using mldivide, and this is Θ(n^3), where n is the size of the matrix. So in your case, n = 5000 and there are about 1.25 × 10^11 floating point operations. So on a reasonable machine with about 10 Gflops of computational power, you're going to require at least 12.5 seconds to compute the inverse, and there is no way out of this unless you exploit the "special properties" (if they're exploitable).
Inverting an arbitrary 5000 x 5000 matrix is not computationally easy no matter what language you are using. I would recommend looking into approximations. If your matrices are low rank, you might want to try a low-rank approximation M = USV'
Here are some more ideas from math-overflow:
https://mathoverflow.net/search?q=matrix+inversion+approximation
First suppose the eigenvalues are all 1. Let A be the Jordan canonical form of your matrix. Then you can compute A^{-1} using only matrix multiplication and addition by
A^{-1} = I + (I-A) + (I-A)^2 + ... + (I-A)^k
where k < dim(A). Why does this work? Because generating functions are awesome. Recall the expansion
(1-x)^{-1} = 1/(1-x) = 1 + x + x^2 + ...
This means that we can invert (1-x) using an infinite sum. You want to invert a matrix A, so you want to take
A = I - X
Solving for X gives X = I-A. Therefore, by substitution, we have
A^{-1} = (I - (I-A))^{-1} = I + (I-A) + (I-A)^2 + ...
Here I've just used the identity matrix I in place of the number 1. Now we have the problem of convergence to deal with, but this isn't actually a problem. By the assumption that A is in Jordan form and has all eigenvalues equal to 1, we know that A is upper triangular with all 1s on the diagonal. Therefore I-A is upper triangular with all 0s on the diagonal. Therefore all eigenvalues of I-A are 0, so its characteristic polynomial is x^dim(A) and its minimal polynomial is x^{k+1} for some k < dim(A). Since a matrix satisfies its minimal (and characteristic) polynomial, this means that (I-A)^{k+1} = 0. Therefore the above series is finite, with the largest nonzero term being (I-A)^k. So it converges.
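Here is a quick numpy sanity check of that finite series (a sketch assuming an upper triangular A with all-ones diagonal, as in the unit-eigenvalue case above):

import numpy as np

n = 5
A = np.eye(n) + np.triu(np.random.rand(n, n), 1)   # unit diagonal, upper triangular
N = np.eye(n) - A                                  # strictly upper triangular, so N^n = 0
invA, term = np.eye(n), np.eye(n)
for _ in range(n - 1):                             # the series stops by (I-A)^(n-1)
    term = term @ N
    invA += term
print(np.allclose(invA @ A, np.eye(n)))            # True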
Now, for the general case, put your matrix into Jordan form, so that you have a block diagonal matrix, e.g.:
A 0 0
0 B 0
0 0 C
Where each block has a single value along the diagonal. If that value is a for A, then use the above trick to invert (1/a) * A, and then scale by 1/a again, since A^{-1} = (1/a) * ((1/a) * A)^{-1}. Since the full matrix is block diagonal, the inverse will be
A^{-1} 0 0
0 B^{-1} 0
0 0 C^{-1}
There is nothing special about having three blocks, so this works no matter how many you have.
Note that this trick works whenever you have a matrix in Jordan form. The computation of the inverse in this case will be very fast in Matlab because it only involves matrix multiplication, and you can even use tricks to speed that up since you only need powers of a single matrix. This may not help you, though, if it's really costly to get the matrix into Jordan form.
