In the old-games era, we used to keep a look-up table of pre-computed values of sin, cos, etc., because computing those values was slow on the CPUs of the time.
Is that considered a dynamic programming technique? Or must dynamic programming solve a recursive function that is repeatedly computed, or something along those lines?
Update:
In dynamic programming the key is to have a memoization table, which is exactly what the sin/cos look-up table is, so what is really the difference between the two techniques?
I'd say that, from what I see in your question, no, it's not dynamic programming. Dynamic programming is more about solving a problem by solving smaller subproblems and having a way to build the solution of the whole problem from the solutions of those smaller subproblems.
Your situation looks more like memoization.
For me it could be considered DP if your problem were to compute cos N and you had a formula to calculate cos i from the array of cos 0, cos 1, ..., cos (i - 1), so that you compute cos 1 and sin 1 once and then run the calculation for i from 0 to N.
Maybe somebody will correct me :)
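To make that concrete, here is a minimal sketch of that table-building version. The identity and all names are mine, not from the question: each entry cos(k*x) is computed from the two previously stored entries via cos(k*x) = 2*cos(x)*cos((k-1)*x) - cos((k-2)*x).

import math

def cos_table(x, n):
    # Build cos(0*x), cos(1*x), ..., cos(n*x) bottom-up: each entry is
    # derived from the two previously stored subproblems via the identity
    # cos(k*x) = 2*cos(x)*cos((k-1)*x) - cos((k-2)*x).
    table = [1.0, math.cos(x)]          # base cases: cos(0) and cos(x)
    for k in range(2, n + 1):
        table.append(2 * table[1] * table[k - 1] - table[k - 2])
    return table[:n + 1]

# Example: a table of cos(k * 1 degree) built in one linear pass
print(cos_table(math.radians(1.0), 90)[30])   # ~0.8660, i.e. cos(30 degrees)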
There's also an interesting quote about how dynamic programming differs from the divide-and-conquer paradigm:
There are two key attributes that a problem must have in order for
dynamic programming to be applicable: optimal substructure and
overlapping subproblems. If a problem can be solved by combining
optimal solutions to non-overlapping subproblems, the strategy is
called "divide and conquer" instead. This is why mergesort and
quicksort are not classified as dynamic programming problems.
Dynamic programming is the technique where you solve a difficult problem by splitting it into smaller problems, which are not independent (this is important!).
Even if you could compute cos i from cos (i - 1), this would still not be dynamic programming, just recursion.
A classic dynamic programming example is the knapsack problem: http://en.wikipedia.org/wiki/Knapsack_problem
You want to fill a knapsack of capacity W with N objects, each with its own size and value.
Since you don't know which combination of objects will be the best, you "try" every one.
The recurrence equation will be something like:
OPT(m, w) = max( OPT(m-1, w),               // if I don't take object m
                 OPT(m-1, w - w(m)) + v(m) ) // if I take it, gaining its value v(m)
Adding the initial case, this is how you solve the problem. Of course you should build the solution starting with m = 0, w = 0 and iterating until m = N and w = W, so that you can reuse previously calculated values.
Using this technique, you can find the optimal combination of objects to put into the knapsack in just N*W time (which is pseudo-polynomial, not polynomial in the input size, since W is encoded in only log W bits; otherwise we would have P = NP and no one wants that!), instead of an exponential number of computation steps.
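As a minimal sketch of that recurrence (the function name and the tiny example data are mine, not part of the original problem):

def knapsack(values, weights, W):
    # Bottom-up 0/1 knapsack: OPT[m][w] is the best value achievable using
    # only the first m objects with remaining capacity w.
    N = len(values)
    OPT = [[0] * (W + 1) for _ in range(N + 1)]
    for m in range(1, N + 1):
        for w in range(W + 1):
            OPT[m][w] = OPT[m - 1][w]                      # don't take object m
            if weights[m - 1] <= w:                        # take object m
                OPT[m][w] = max(OPT[m][w],
                                OPT[m - 1][w - weights[m - 1]] + values[m - 1])
    return OPT[N][W]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], W=5))   # 220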
No, I don't think this is dynamic programming. Due to limited computing power, the values of sine and cosine were supplied as pre-computed values, just like any other numeric constants.
For a problem to be solved with the dynamic programming technique, several conditions are essential. One important condition is that we should be able to break the problem into recursively solvable sub-problems, whose results can then be stored in a look-up table and reused higher up in the recursion. So it is both recursion and memory.
For more info you can refer to the Wikipedia article:
http://en.wikipedia.org/wiki/Dynamic_programming
Also, lecture 19 of this course will give you an overview of dynamic programming:
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-19-dynamic-programming-i-fibonacci-shortest-paths/
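Since that lecture uses Fibonacci as its running example, here is a minimal sketch of the "recursion plus memory" idea, with Python's lru_cache playing the role of the look-up table:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem is solved once and then served from the cache,
    # turning the naive exponential recursion into linear time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))   # 12586269025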
Related
I'm working on a problem, and it feels like it might be analogous to an existing problem in mathematical programming, but I'm having trouble finding any such problem.
The problem goes like this: We have n sets of d-dimensional vectors, such that each set contains exactly d+1 vectors. Within each set, all vectors have the same length (furthermore, the angle between any two vectors in a set is the same for every set, but I'm not sure whether this is relevant). We then need to choose exactly one vector out of every set and compute the sum of these vectors. Our objective is to make our choices so that the sum of the vectors is minimized.
It feels like the problem is sort of related to the Shortest Vector Problem, or a variant of job scheduling, where scheduling a job on a machine affects all machines, or a partition problem.
Does this problem ring a bell? Specifically, I'm looking for research into solving this problem, as currently my best bet is using an ILP, but I feel there must be something more clever that can be done.
I think this is an MIQP (Mixed Integer Quadratic Programming) or MISOCP (Mixed Integer Second-Order Cone Programming) problem:
Let
v(i,j) be the i-th vector in group j (data)
x(i,j) in {0,1}: binary decision variables
w: sum of selected vectors (decision variable)
Then the problem can be stated as:
min ||w||
subject to
  sum(i, x(i,j)) = 1   for all j
  w = sum((i,j), x(i,j)*v(i,j))
If you want, you can substitute out w. Note that I don't use your angle restriction (it is a restriction on the data, not on the decision variables of the model). The x variables are chosen such that we select exactly one vector from each group.
Minimizing the 2-norm can be replaced by minimizing the sum of the squares of the elements (i.e. minimizing the square of the norm).
Assuming the 2-norm, this is a MISOCP problem or convex MIQP problem for which quite a few solvers are available. For 1-norm and infinity-norms we can formulate a linear MIP problem. MIP solvers are widely available.
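For what it's worth, here is a rough sketch of that model in cvxpy. It is illustrative only: the data is random, the names are mine, and solving it requires a mixed-integer QP/SOCP-capable solver (e.g. SCIP) to be installed.

import numpy as np
import cvxpy as cp

# Hypothetical random data: n groups, each with m = d + 1 vectors of dimension d.
d, n = 3, 5
m = d + 1
rng = np.random.default_rng(0)
V = rng.standard_normal((m * n, d))      # row j*m + i is vector i of group j

x = cp.Variable(m * n, boolean=True)     # x[k] = 1 if vector k is selected
w = V.T @ x                              # sum of the selected vectors

# Exactly one vector chosen from each group.
G = np.zeros((n, m * n))
for j in range(n):
    G[j, j * m:(j + 1) * m] = 1.0

prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), [G @ x == 1])
prob.solve()                             # needs a mixed-integer QP/SOCP solver
print(prob.value, x.value.round())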
I have a set of N (N is very large) linear equations with W variables.
For efficiency's sake, I need to find the smallest subset of the equations that is solvable (has a unique solution). It can be assumed that a set of X equations containing Y variables has a unique solution when X == Y.
For example, if I have the following as input:
2a = b - c
a = 0.5b
b = 2 + a
I want to return the equation set:
a = 0.5b
b = 2 + a
Currently, I have an implementation that uses some heuristics. I create a matrix whose columns are variables and whose rows are equations. I search the matrix to find a set of fully connected equations, and then one by one try removing equations to see if the remaining set is still solvable; if it is, I continue, and if not, I return that set of equations.
Is there a known algorithm for this, or am I trying to reinvent the wheel?
Does anyone have input on how to better approach this?
Thanks.
Short answer is "yes", there are known algorithms. For example, you could add a single equation and then compute the rank of the matrix. Then add the next equation and compute the rank. If it hasn't gone up, that new equation isn't adding anything and you can get rid of it. Once the rank == the number of variables, you have a unique solution and you're done. There are libraries (e.g. Colt, JAMA, la4j, etc.) that will do this for you.
Longer answer is that this is surprisingly difficult to do correctly, especially if your matrix gets big. You end up with lots of numerical stability issues and so on. I'm not a numerical linear algebra expert but I know enough to know there are dragons here if you're not careful. Having said that, if your matrices are small and "well conditioned" (the rows/columns aren't almost parallel) then you should be in good shape. It depends on your application.
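A minimal sketch of the short answer using numpy's matrix_rank (the names are illustrative, and it deliberately ignores the conditioning caveats above):

import numpy as np

def minimal_solvable_subset(A):
    # Greedy version of the rank test: keep a row only if it raises the rank
    # of the kept set; stop once the rank equals the number of variables.
    kept, rank = [], 0
    n_vars = A.shape[1]
    for i in range(A.shape[0]):
        candidate = kept + [i]
        new_rank = np.linalg.matrix_rank(A[candidate])
        if new_rank > rank:              # this equation adds new information
            kept, rank = candidate, new_rank
        if rank == n_vars:               # the kept equations pin down a unique solution
            break
    return kept

# Demo: 5 equations in 3 variables, two of them redundant.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],     # 2x the first row: redundant
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],     # sum of rows 1 and 3: redundant
              [0.0, 0.0, 1.0]])
print(minimal_solvable_subset(A))  # [0, 2, 4]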
I can't imagine how such an algorithm would be constructed.
Would the algorithm "for every permutation of N elements, brute-force the traveling salesman problem, where the edges are decided by the order of the elements" have such a complexity?
Here's your algorithm!
import math

def eat_cpu(n):
    # Perform (n!)! increments, one at a time.
    count = 0
    for _ in range(math.factorial(math.factorial(n))):
        count += 1
    return count

eat_cpu(4)  # (4!)! = 24! iterations -- don't hold your breath
It is a function that calculates (n!)! using the method of incrementation. It takes O((n!)!) time.
Actually, upon reflection, I realized that this algorithm is also O((n!)!):
def dont_eat_cpu(n):
    # Constant time, but still (vacuously) O((n!)!) since big-O is an upper bound.
    return 0
because O is an upper bound. We commonly forget this when throwing O(...) around. The previous algorithm is thus Theta((n!)!) in addition to being O((n!)!), while this one is just Theta(1).
Enumerating all permutations of a set of n elements is O(n!). Enumerating all permutations of that list of permutations is then O((n!)!), although the example is a bit artificial. Coming up with a genuinely useful algorithm of that complexity is a totally different story; I am not aware of any such algorithm, and in any case its scaling would be absolutely awful.
You can do better than that: there are problems known to require 2^(2^p(n)) time to solve (see http://en.wikipedia.org/wiki/2-EXPTIME), and it appears that these problems are not completely artificial either: "Generalizations of many fully observable games are EXPTIME-complete".
While writing code, I ran into the following problem. To state it simply:
Partition an array of floats X into arrays A and B such that the difference between the sum of the values in A and the sum of the values in B is minimized.
This was part of an investigation I was doing, but I can't find a way to efficiently perform this operation.
Edit:
To answer those who believe this is from a math contest like PE or SPOJ, or homework: it is not. I just got curious about this while trying to split an already factorized number p into two groups of factors a and b such that b = a + 1. If we take logs on both sides, we can show this problem is equivalent to minimizing a difference of sums, but that is where I got stuck.
Just a first simple idea: use dynamic programming methods.
I assume this problem can be transformed into a knapsack problem. You need to pick items from X (these become array A) to maximize their sum without exceeding sum(X)/2 (the remaining items become array B), which minimizes the difference of the two sums. For an algorithm that solves the knapsack problem with the dynamic programming approach, look at the wiki, e.g.
This solution could be wrong, btw... but even if it works, I'm more than sure that more efficient, elegant and short solutions exist.
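A minimal sketch of that reduction, assuming the floats have first been scaled to non-negative integers (which is what makes the set-based DP applicable, and what makes it pseudo-polynomial):

def min_partition_difference(xs):
    # Subset-sum DP: track every reachable subset sum, then pick the one
    # closest to (but not above) half the total; the rest goes into B.
    total = sum(xs)
    reachable = {0}
    for x in xs:
        reachable |= {s + x for s in reachable}
    best_half = max(s for s in reachable if s <= total // 2)
    return total - 2 * best_half

print(min_partition_difference([3, 1, 4, 2, 2]))   # 0, e.g. A = {3, 2, 1}, B = {4, 2}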
A and B are sets of N-dimensional vectors (N = 10), with |B| >= |A| (|A| = 10^2, |B| = 10^5). The similarity measure sim(a, b) is the dot product (this is required). The task is the following: for each vector a in A, find a vector b in B (each b may be matched to at most one a) such that the sum of similarities ss over all pairs is maximal.
My first attempt was a greedy algorithm:
1. find the pair with the highest similarity and remove that pair from A and B
2. repeat (1) until A is empty
But such a greedy algorithm is suboptimal in this case:
a_1=[1, 0]
a_2=[.5, .4]
b_1=[1, 1]
b_2=[.9, 0]
sim(a_1,b_1)=1
sim(a_1,b_2)=.9
sim(a_2,b_1)=.9
sim(a_2, b_2)=.45
The greedy algorithm returns [a_1, b_1] and [a_2, b_2] with ss = 1.45, but the optimal solution yields ss = 1.8.
Is there an efficient algorithm to solve this problem? Thanks.
This is essentially a matching problem in a weighted bipartite graph, with the dot product a·b as the edge weight.
I don't think the special structure of your weight function simplifies the problem a lot, so you're pretty much down to finding a maximum-weight matching.
You can find some basic algorithms for this problem in this Wikipedia article. Although at first glance they don't seem viable for your data (V = 10^5, E = 10^7), I would still research them: some of them might let you take advantage of your unbalanced vertex sets, with one part orders of magnitude smaller than the other.
This article also seems relevant, although it doesn't list any algorithms.
Not exactly a solution, but hope it helps.
I second Nikita here: it is an assignment (or matching) problem. I'm not sure this is computationally feasible for your problem size, but you could use the Hungarian algorithm, also known as Munkres' assignment algorithm, where the cost of assignment (i, j) is the negative of the dot product of a_i and b_j. Unless you happen to know something special about how the elements of A and B are formed, I think this is the most efficient known algorithm for your problem.
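As a concrete sketch, scipy ships a linear-sum-assignment solver for exactly this problem. The random data below is only a placeholder; it handles the rectangular |A| < |B| case (each a assigned to a distinct b), though with the full |B| = 10^5 the solve may be slow.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))        # |A| = 10^2 vectors of dimension 10
B = rng.standard_normal((5000, 10))       # smaller stand-in for |B| = 10^5

cost = -(A @ B.T)                          # negate the dot products: the solver minimizes
rows, cols = linear_sum_assignment(cost)   # each a_i assigned to a distinct b_j
total_similarity = (A[rows] * B[cols]).sum()
print(total_similarity)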