Solving a minimum-time target problem by solving the HJB equation numerically - controls

Here is the problem I am trying to solve by solving the HJB equation numerically, in which the following term appears:
To solve it, what I am doing is: from each point I iterate over all controls u, then choose the step h so that the stencil stays within the 9-point neighborhood, then take the 5 largest weights (based on distance), and finally take the maximum value to update the value function, but the iteration is actually diverging.
https://colab.research.google.com/drive/1J805FxYLRBnRQAbvRRDVKwyqGO9wkR_M#scrollTo=ykFMv5YNKGBQ
I have written everything in function form, so please take a look; it should be easy to follow. If anybody has any suggestions, please weigh in.
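For comparison, here is a minimal sketch of a standard semi-Lagrangian value iteration for a minimum-time problem; it is not the notebook's code. The single-integrator dynamics, the grid, the disc target, and the control discretization are all placeholder assumptions. Note that it minimizes over the controls and uses the Kruzhkov transform, so the iterates stay bounded in [0, 1].

```python
import numpy as np

# Minimal sketch of a semi-Lagrangian value iteration for a minimum-time target
# problem, using the Kruzhkov transform v = 1 - exp(-T) so iterates stay in [0, 1].
# Everything below (dynamics x' = u with |u| = 1, the grid on [-1, 1]^2, the disc
# target, the control discretization) is a placeholder assumption.

nx = ny = 41
xs = np.linspace(-1.0, 1.0, nx)
ys = np.linspace(-1.0, 1.0, ny)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
X, Y = np.meshgrid(xs, ys, indexing="ij")

target = X**2 + Y**2 <= 0.1**2                       # target set: a small disc at the origin
controls = [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2*np.pi, 16, endpoint=False)]
h = 0.5 * dx                                         # time step kept below the grid spacing

def interp(v, px, py):
    """Bilinear interpolation of v at (px, py), clamped to the grid."""
    fi = np.clip((px - xs[0]) / dx, 0, nx - 1.001)
    fj = np.clip((py - ys[0]) / dy, 0, ny - 1.001)
    i, j = int(fi), int(fj)
    s, t = fi - i, fj - j
    return ((1 - s) * (1 - t) * v[i, j] + s * (1 - t) * v[i + 1, j]
            + (1 - s) * t * v[i, j + 1] + s * t * v[i + 1, j + 1])

v = np.ones((nx, ny))                                # 1 = "target not yet reached"
v[target] = 0.0

for sweep in range(1000):
    v_new = v.copy()
    for i in range(nx):
        for j in range(ny):
            if target[i, j]:
                continue
            # Minimize (not maximize) over the discretized controls.
            v_new[i, j] = min(
                1 - np.exp(-h) + np.exp(-h) * interp(v, X[i, j] + h * ux, Y[i, j] + h * uy)
                for ux, uy in controls)
    if np.max(np.abs(v_new - v)) < 1e-6:
        break
    v = v_new

T = -np.log(np.maximum(1.0 - v, 1e-16))              # recover the minimum time where reachable
```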

Related

Dividing a point cloud into equal-size sub-clouds

I want to find an algorithm that solves the following problem.
Suppose we have a point cloud with N points of dimension m. We want to divide the point cloud into sub-clouds, each of size at least k, while minimizing the following:
each sub-cloud's size is as close as possible to k;
the distances between points within each sub-cloud.
Any direction toward a solution would be great, and an implementation in Python would be appreciated.
Have you thought about using the K-means machine-learning algorithm?
I know it's not a perfect solution, as you still need to enforce the minimum-size condition, but it's a good direction.
To solve that issue I would:
Choose the number of clusters to be roughly N / k (k being the desired sub-cloud size); I think this has the best chance of success.
For each cluster returned by the algorithm that is smaller than the wanted size, add its points to the closest cluster that was created (see the sketch below).
Hope this helps!
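A rough sketch of that idea, assuming scikit-learn's KMeans is available; the function name and the merge rule are illustrative, not a standard API:

```python
import numpy as np
from sklearn.cluster import KMeans   # scikit-learn is assumed to be available

def split_point_cloud(points, k_min):
    """Cluster with K-means using roughly N / k_min clusters, then merge any
    cluster smaller than k_min into the nearest sufficiently large cluster."""
    points = np.asarray(points)
    n_clusters = max(1, len(points) // k_min)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(points)

    centroids = {c: points[labels == c].mean(axis=0) for c in np.unique(labels)}
    for c, centroid in centroids.items():
        members = np.where(labels == c)[0]
        if len(members) < k_min:
            # Reassign the undersized cluster's points to the closest large cluster.
            candidates = [(np.linalg.norm(centroid - centroids[d]), d)
                          for d in centroids
                          if d != c and (labels == d).sum() >= k_min]
            if candidates:
                labels[members] = min(candidates)[1]
    return labels

# Usage sketch:
# labels = split_point_cloud(np.random.rand(1000, 3), k_min=50)
```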

Finding an optimum path through graph search

I am currently working on Euler 411 https://projecteuler.net/problem=411.
I have figured out the modular-exponentiation simplification needed to find all the coordinates in a reasonable amount of time and store them in files (70-200 MB).
I can also plot the coordinates and candidate solutions, but these are not optimal; the optimal solution for this problem hits the maximum number of stations.
Here's an image for N = 10000; PE reports that 48 is the correct answer. The red-line approximation gets 36, out of 504 unique coordinates.
For N = 7^5 = 16807 (the actual value from the problem), the red line gets 159 points, out of 14406 unique coordinates.
This is a search problem, right? Am I missing something? I have tried greedy search with a density heuristic to get an approximate answer, but it is not good enough for the biggest cases; it would take days to finish. I have not tried an exact search like A* because it would be slower than greedy, and BFS is out of the question.
Any hints? NO SPOILERS PLEASE!! There must be a way to eliminate nodes from this massive search space I am missing.
Have you considered that there may be a pattern in where the points occur, and hence in the function value? You should solve small cases (of k) by hand! Also check whether there is anything special about S(k^5). Finally, the second-to-last line of the problem statement seems a little suspicious, giving you particular information about S(123) and S(10000). If S(10000) is as low as forty-eight, it seems certain you are missing something and the search space need not be diabolical. So on a first reading, this does not appear to be a brute-force search problem.
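For the "solve small cases" advice, here is a tiny spoiler-free sketch that only generates and deduplicates the station coordinates for small n, using the (2^i mod n, 3^i mod n) rule from the linked problem statement; it contains no solution logic.

```python
# Generate and deduplicate the stations for small n so patterns can be inspected
# by hand. The coordinate rule is taken from the linked problem statement.
def stations(n):
    return sorted({(pow(2, i, n), pow(3, i, n)) for i in range(2 * n + 1)})

for n in (3, 5, 10, 22):
    pts = stations(n)
    print(n, len(pts), pts[:8])
```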

Incremental least squares differing by only one row

I have to solve multiple least-squares problems sequentially, one by one. Each least-squares problem differs from the previous one by only one row; the right-hand side is the same for all. For example, Problem 1: ||Ax - b|| and Problem 2: ||Cy - b||, where C and A differ by only one row. That is, it is equivalent to deleting one row of A and inserting a new row. When solving Problem 2, I also have x. Is there a fast way to solve Problem 2 for y?
You can use the Sherman-Morrison formula.
The key piece of the linear regression solution is computing the inverse of A'A.
If r is the row removed from A and a is the new row in C (renaming the old row to r to avoid confusion with the right-hand side b), then
C'C = A'A - rr' + aa',
which is a rank-two update. Apply the Sherman-Morrison formula twice (once for +aa' and once for -rr'), or the Woodbury identity once, to obtain (C'C)^{-1} from (A'A)^{-1}.
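A sketch of that route, under two assumptions not stated in the question: the replaced row keeps the same right-hand-side entry, and A'A is well conditioned enough that carrying its explicit inverse is acceptable. The function names are illustrative only.

```python
import numpy as np

def sm_rank1(Minv, u, v):
    """Inverse of (M + u v^T), given Minv = M^{-1} (Sherman-Morrison)."""
    Mu = Minv @ u
    return Minv - np.outer(Mu, v @ Minv) / (1.0 + v @ Mu)

def update_solution(AtA_inv, Atb, old_row, new_row, beta):
    """Solve min ||Cy - b||, where C is A with old_row replaced by new_row,
    and beta is the b entry paired with that row (assumed unchanged)."""
    CtC_inv = sm_rank1(AtA_inv, new_row, new_row)    # add a a^T
    CtC_inv = sm_rank1(CtC_inv, -old_row, old_row)   # subtract r r^T
    Ctb = Atb + beta * (new_row - old_row)           # update the normal-equations RHS
    y = CtC_inv @ Ctb
    return y, CtC_inv, Ctb

# Usage sketch (k is the index of the replaced row):
# AtA_inv, Atb = np.linalg.inv(A.T @ A), A.T @ b
# y, AtA_inv, Atb = update_solution(AtA_inv, Atb, A[k], new_row, b[k])
```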
Unfortunately, the answer may be no...
Changing one row of a matrix can change its spectrum substantially: in general all the eigenvalues and eigenvectors change in both magnitude and orientation. As a result, the gradient from Problem 1 does not carry over to Problem 2. You can use your x from Problem 1 as an initial guess for y in Problem 2, but this is not guaranteed to reduce the time spent in the optimization.
That said, solving a linear least-squares problem is not that hard with the available packages; an LU or QR decomposition improves the computational efficiency considerably. A baseline sketch follows.
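For scale, a baseline sketch of simply re-solving the modified problem with a dense least-squares solver; the matrix sizes and the changed row below are synthetic placeholders.

```python
import numpy as np

# Baseline: re-solve the modified problem directly with a dense (QR/SVD based)
# least-squares solver. Sizes and the changed row are synthetic placeholders.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
x, *_ = np.linalg.lstsq(A, b, rcond=None)    # Problem 1

C = A.copy()
C[5] = rng.standard_normal(10)               # Problem 2: one row replaced
y, *_ = np.linalg.lstsq(C, b, rcond=None)    # a direct re-solve is often fast enough
```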

An algorithm for the melon-selling farmer

A question I saw on the net:
A melon-selling farmer has n melons. The weight of each melon is a distinct integer (in lbs). A customer asks for exactly m pounds of uncut melons. The farmer now has the following problem: if it is possible to satisfy the customer, he should do so by finding the appropriate melons as efficiently as possible; otherwise, he should tell the customer that the request cannot be fulfilled.
Note: this is not homework, btw; I just need guidance.
My answer:
This seems similar to the coin-change problem (knapsack) and the subset-sum problem (backtracking).
Coin change: I can put the weights into a set w = {5, 8, 3, 2, ...} and then solve it, and the same goes for the subset-sum problem.
So basically, can I use either method to solve this problem?
This is exactly an integer knapsack problem in which the solution has zero wastage. There is a good dynamic programming/memoization strategy to solve it; see either of these links:
http://www.cs.ship.edu/~tbriggs/dynamic/
https://en.wikipedia.org/wiki/Knapsack_problem
Indeed, the subset-sum problem IS the 0/1 knapsack problem in which each item's value equals its weight.
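A minimal subset-sum sketch along those lines (value = weight, dynamic programming over reachable sums, with backtracking to recover which melons to hand over); the function name is illustrative.

```python
def exact_melons(weights, m):
    """Subset-sum / 0-1 knapsack with value == weight: return a list of melon
    weights summing to exactly m, or None if the request cannot be fulfilled."""
    parent = {0: None}                           # reachable sum -> (previous sum, weight used)
    for w in weights:
        for s in sorted(parent, reverse=True):   # snapshot, so each melon is used once
            t = s + w
            if t <= m and t not in parent:
                parent[t] = (s, w)
    if m not in parent:
        return None
    picked, s = [], m
    while parent[s] is not None:                 # walk back through the choices
        s, w = parent[s]
        picked.append(w)
    return picked

# Usage sketch:
# exact_melons([5, 8, 3, 2], 10)  ->  [2, 8]
```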

Trilateration of a signal using Time Difference of Arrival

I am having trouble finding or implementing an algorithm to locate a signal source. The objective of my work is to find the position of a sound emitter.
To accomplish this I am using three microphones. The technique I am using is multilateration, which is based on the time difference of arrival.
The time difference of arrival between each pair of microphones is found using cross-correlation of the received signals.
I have already implemented the algorithm that finds the time difference of arrival, but my problem is more with how multilateration works; it is unclear to me from my reference, and I couldn't find any other good reference that is free/open.
If you have references on how to implement a multilateration algorithm, or some other trilateration algorithm based on time difference of arrival, that would be a great help.
Thanks in advance.
The point you are looking for is the intersection of three hyperbolas. I am assuming 2D here, since you only use three receivers. Technically you can find a unique 3D solution, but as you likely have noise, I assume that if you wanted a 3D result you would have used four microphones (or more).
The Wikipedia page does some of the computations for you. They do it in 3D; you just have to set z = 0 and solve system of equations (7).
The system is overdetermined, so you will want to solve it in the least-squares sense (this is actually the point of using three receivers).
I can help you with multilateration in general.
Basically, if you want a solution in 3D you need at least four reference points and the distances to them: two points give you a circle containing the solution (the intersection of two spheres), three give you two candidate points (the intersection of three spheres), so for a unique solution you need four spheres. Once you have your points (4+) and the distances to them (there is an easy way to turn the TDOA into a set of equations involving only length-type distances, not times), you need a way to solve the resulting system. First you need a cost function (or solution-error function, as I call it), something like
err(x, y, z) = sum_{i=1..n} | sqrt[(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2] - d_i |
where x, y, z are the coordinates of the current point in the numerical solution, and x_i, y_i, z_i and d_i are the coordinates of, and distance to, the i-th reference point. To solve this, my advice is NOT to use Newton or Gauss-Newton methods: they need the first and second derivatives of the function above, and those have discontinuities at some points in space, so the function is not smooth and those methods won't work. What will work is the direct-search family of optimization algorithms (for finding minima and maxima of functions; here you need the minimum of the error/cost function), as in the sketch below.
That should help anyone wanting to solve a similar problem.
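A sketch of that direct-search advice using SciPy's Nelder-Mead implementation; the receiver positions and distances below are synthetic stand-ins for the values you would derive from your TDOA measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic setup: four receivers and the distances from a known "true" source,
# standing in for the d_i obtained from the TDOA measurements.
receivers = np.array([[0.0, 0.0, 0.0],
                      [4.0, 0.0, 0.0],
                      [0.0, 4.0, 0.0],
                      [0.0, 0.0, 4.0]])
true_source = np.array([1.0, 2.0, 0.5])
distances = np.linalg.norm(receivers - true_source, axis=1)

def cost(p):
    # err(x, y, z) = sum_i | ||p - p_i|| - d_i |
    return np.sum(np.abs(np.linalg.norm(receivers - p, axis=1) - distances))

result = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print(result.x)   # should land near true_source
```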
