Dynamic programming recurrence relation - algorithm

I am trying to find and solve the recurrence relation for a dynamic programming approach to UVA #11450. As a disclaimer, this is part of a homework assignment that I have mostly finished but am confused about the analysis.
Here is my (working) code:
int shop(int m, int c, int items[][21], int sol[][20]) {
    if (m < 0) return NONE;                   // No money left
    if (c == 0) return 0;                     // No garments left
    if (sol[m][c] != NONE) return sol[m][c];  // We've been here before
    // For each model of the current garment
    for (int i = 1; i <= items[c-1][0]; i++) {
        // Save the result
        int result = shop(m - items[c-1][i], c-1, items, sol);
        // If there was a valid result, record it for next time
        if (result != NONE) sol[m][c] = max(sol[m][c], result + items[c-1][i]);
    }
    return sol[m][c];
}
I am having trouble with a few aspects of the analysis:
What is the basic operation? My initial reaction would be subtraction, since each time we call the function we subtract one from c.
Since the recursive call is within a loop, does that just mean multiplication in the recurrence relation?
How do I factor the use of a memo table into the recurrence relation? I know that some problems decompose into linear ones when a table is used, but I'm not sure how this one decomposes.
I know that the complexity (according to Algorithmist) is O(M*C*max(K)) where K is the number of models of each garment, but I'm struggling to work backwards to get the recurrence relation. Here's my guess:
S(c) = k * S(c-1) + 1, S(0) = 0
However, this fails to take M into account.
Thoughts?

You can think of each DP state (m, c) as a vertex of a graph, where the recursive calls to states (m - item_i, c-1) are edges from (m, c) to (m - item_i, c-1).
Memoization of your recursion means that you only start the search from a vertex once, and also process its outgoing edges only once. So your algorithm is essentially a linear-time traversal of this graph, and has complexity O(|V| + |E|). There are M*C vertices and at most max(K) edges going out of each one, so you can bound the number of edges by O(M*C*max(K)).
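To make that bound concrete, here is a minimal bottom-up sketch of the same DP (my own illustration, not the poster's code; it assumes the question's items layout, where items[g][0] holds the model count for garment g, and a sol table pre-filled with sol[m][0] = 0 and NONE everywhere else):
// Every (m, c) vertex is visited once; the inner loop walks its at most
// max(K) outgoing edges, so the total work is O(M*C*max(K)).
for (int c = 1; c <= C; c++)
    for (int m = 0; m <= M; m++)
        for (int i = 1; i <= items[c-1][0]; i++) {
            int price = items[c-1][i];
            if (m >= price && sol[m - price][c-1] != NONE)
                sol[m][c] = max(sol[m][c], sol[m - price][c-1] + price);
        }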

Alternative approach for Rod Cutting Algorithm (Recursive)

In the book Introduction to Algorithms, the naive approach to solving the rod cutting problem can be described by the following recurrence:
Let q be the maximum price that can be obtained from a rod of length n.
Let the array price[1..n] store the given prices, where price[i] is the given price for a rod of length i.
rodCut(int n)
{
    if(n == 0)
        return 0
    initialize q as q = INT_MIN
    for i = 1 to n
        q = max(q, price[i] + rodCut(n-i))
    return q
}
What if I solve it using the below approach:
rodCutv2(int n)
{
    if(n == 0)
        return 0
    initialize q = price[n]
    for i = 1 to n/2
        q = max(q, rodCutv2(i) + rodCutv2(n-i))
    return q
}
Is this approach correct? If yes, why do we generally use the first one? Why is it better?
NOTE:
I am just concerned with the approach to solving this problem. I know that this problem exhibits optimal substructure and overlapping subproblems and can be solved efficiently using dynamic programming.
The problem with the second version is that it's not making use of the price array. There is no base case for the recursion, so it'll never stop. Even if you add a condition to return price[i] when n == 1, it'll always return the result of cutting the rod into pieces of size 1.
Your 2nd approach is absolutely correct, and its time complexity is also the same as the 1st one.
In dynamic programming we can also build a tabulation on the same approach. Here is my solution using recursion:
int rodCut(int price[], int n){
    if(n <= 0) return 0;
    int ans = price[n-1];
    for(int i = 1; i <= n/2; ++i){
        ans = max(ans, rodCut(price, i) + rodCut(price, n-i));
    }
    return ans;
}
And here is the solution using dynamic programming:
int rodCut(int *price, int n){
    int ans[n+1];
    ans[0] = 0; // if length of rod is zero
    for(int i = 1; i <= n; ++i){
        int max_value = price[i-1];
        for(int j = 1; j <= i/2; ++j){
            max_value = max(max_value, ans[j] + ans[i-j]);
        }
        ans[i] = max_value;
    }
    return ans[n];
}
Your algorithm looks almost correct - you need to be a bit careful when n is odd.
However, it's also exponential in time complexity - you make two recursive calls in each call to rodCutv2. The first algorithm uses memoisation (the price array), so it avoids computing the same thing multiple times, and is therefore faster (it's polynomial-time).
Edit: Actually, the first algorithm isn't correct! It never stores computed values in prices, but I suspect that's just a typo and not intentional.
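For what it's worth, here is a hedged sketch (my own, not from the answers above) of how rodCutv2 could be memoised so each subproblem is computed only once; memo, MAX_N and the global 1-indexed price array are assumptions, and memo must be filled with -1 before the first call:
int memo[MAX_N + 1];                   // memo[i] == -1 means "not computed yet"
int rodCutv2(int n)
{
    if (n == 0) return 0;
    if (memo[n] != -1) return memo[n]; // reuse a previously computed answer
    int q = price[n];                  // option: no cut at all
    for (int i = 1; i <= n / 2; ++i)
        q = max(q, rodCutv2(i) + rodCutv2(n - i));
    return memo[n] = q;
}
Each of the n subproblems then does O(n) work, so the running time drops from exponential to O(n^2).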

Modification to subset sum algorithm by Pisinger

I was looking at the algorithm by Pisinger, as detailed here:
Fast solution to Subset sum algorithm by Pisinger
and on Wikipedia: http://en.wikipedia.org/wiki/Subset_sum_problem
For the case that each xi is positive and bounded by a fixed constant C, Pisinger found a linear time algorithm having time complexity O(NC).[3] (Note that this is for the version of the problem where the target sum is not necessarily zero, otherwise the problem would be trivial.)
It seems that, with his approach, there are two constraints. The first one in particular says that all values in the input must be <= C.
It sounds to me that, with just that constraint alone, this is not an algorithm that can solve the original subset sum problem (with no restrictions).
But suppose C = 1000000, for example. Before we even run the algorithm on the input list, can't we just divide every element by some number d (and also divide the target sum by d) such that every number in the input list will be <= C?
When the algorithm returns some subset s, we just multiply every element of s by d, and we will have our true subset.
With that observation, it feels like it's not worth mentioning that we need that C constraint, or at least it should be said that the input can be changed W.L.O.G. (without loss of generality). In other words, we can still solve the same problem no matter the input; that constraint is not making the problem any harder. So really, his algorithm's only constraint is that it can only handle numbers >= 0.
Am I right?
Thanks
What adrian.budau said is correct.
In the polynomial solution, every value of the array should remain an integer after the division you describe; otherwise the DP solution no longer works, because item values are used as indices into the DP array. For example, dividing the input {3, 5} by d = 2 yields non-integers, which can no longer index the table.
Following is my polynomial (DP) solution for the subset sum problem (C = MAX_ELEMENT_VALUE):
bool subsetsum_dp(vector<int>& v, int sum)
{
    const int MAX_ELEMENT = 100;
    const int MAX_ELEMENT_VALUE = 1000;
    const int MAX_SUM = MAX_ELEMENT * MAX_ELEMENT_VALUE;
    static int dp[MAX_ELEMENT + 1][MAX_SUM + 1];
    memset(dp, -1, sizeof(dp));
    int n = v.size();
    dp[0][0] = 1;    // exclude the first value
    dp[0][v[0]] = 1; // include the first value
    for (int i = 1; i < n; ++i)
    {
        for (int j = 0; j <= MAX_SUM; ++j)
        {
            if (dp[i-1][j] != -1)
            {
                dp[i][j] = 1;                                 // exclude v[i]
                if (j + v[i] <= MAX_SUM) dp[i][j + v[i]] = 1; // include v[i]
            }
        }
    }
    return dp[n-1][sum] == 1;
}
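A quick usage example (the input values here are made up for illustration):
vector<int> v = {3, 34, 4, 12, 5, 2};
subsetsum_dp(v, 9);    // true: 4 + 5 = 9
subsetsum_dp(v, 30);   // false: no subset sums to 30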

How to find recurrence relation from recursive algorithm

I know how to find the recurrence relation for simple recursive algorithms.
For example:
QuickSort(A, LB, UB){
    key = A[LB]
    i = LB
    j = UB + 1
    do{
        do{
            i = i + 1
        }while(A[i] < key)
        do{
            j = j - 1
        }while(A[j] > key)
        if(i < j){
            swap(A[i] <-> A[j])
        }
    }while(i <= j)
    swap(key <-> A[j])
    QuickSort(A, LB, j-1)
    QuickSort(A, j+1, UB)
}
T(n) = T(n - a) + T(a) + n
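As a quick sanity check on this recurrence (my own worked example, not from the original post): in the worst case the pivot splits nothing off, so a = 1 and T(n) = T(n-1) + n, which unrolls to n + (n-1) + ... + 1 = O(n^2); in the balanced case a = n/2, so T(n) = 2T(n/2) + n = O(n log n).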
In the above recursive algorithm it was quite easy to see how the input size shrinks after each recursive call. But how do you find the recurrence relation for an algorithm in general, which might be iterative rather than recursive? So I started learning how to convert iterative algorithms to recursive ones, just to make it easier to find the recurrence relation.
I found this link: http://refactoring.com/catalog/replaceIterationWithRecursion.html.
I used it to convert my linear search algorithm to recursive form.
LinearSearch(A, x, LB, UB){
    PTR = LB
    while(A[PTR] != x && PTR <= UB){
        PTR = PTR + 1
    }
    if(PTR == UB + 1){
        print("Element does not exist")
    }
    else{
        print("Location " + PTR)
    }
}
got converted to
LinearSearch(A, x, LB, UB){
    PTR = LB
    print("Location " + search(A, PTR, UB, x))
}
search(A, PTR, UB, x){
    if(PTR == UB + 1){
        return -1                      // ran past the end: element does not exist
    }
    else if(A[PTR] != x){
        return search(A, PTR+1, UB, x) // keep searching in the rest of the array
    }
    else{
        return PTR                     // found at index PTR
    }
}
This gives the recurrence relation T(n) = T(n-1) + 1, which unrolls to T(n) = T(0) + n = O(n).
But I was wondering: is this the right approach to find the recurrence relation for any algorithm?
Plus, I don't know how to find the recurrence relation for algorithms where more than one parameter is increasing or decreasing.
e.g.
unsigned greatest_common_divisor (const unsigned a, const unsigned b)
{
if (a > b)
{
return greatest_common_divisor(a-b, b);
}
else if (b > a)
{
return greatest_common_divisor(a, b-a);
}
else // a == b
{
return a;
}
}
First of all, algorithms are very flexible, so you should not expect to have a simple rule that covers all of them.
That said, one thing that I think will be helpful for you is to pay more attention to the structure of the input you pass to your algorithm than to the algorithm itself. For example, consider the QuickSort you showed in your post. If you glance at those nested do-whiles, you will probably guess it's O(N^2), when in reality the partitioning pass is O(N). The real answer is easier to find by looking at the inputs: i always increases and j always decreases, and when they finally meet, each of the N indices of the array will have been visited exactly once.
Plus I don't know how to find recurrence relation for algorithms where more than one parameter is increasing or decreasing.
Well, those algorithms are certainly harder to analyse than the ones with a single variable. For the Euclidean algorithm you used as an example, the complexity is actually not trivial to figure out, and it involves thinking about greatest common divisors instead of just looking at the source code of the algorithm's implementation.
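To illustrate (my own addition): the subtractive version above can take O(max(a, b)) steps, e.g. greatest_common_divisor(1, n) performs n - 1 subtractions, while the remainder-based variant below shrinks its arguments geometrically, giving the familiar O(log min(a, b)) bound:
// Remainder-based Euclidean algorithm: T(a, b) = T(b, a mod b) + O(1) per call,
// and the smaller argument at least halves every two calls.
unsigned gcd_mod(const unsigned a, const unsigned b)
{
    return b == 0 ? a : gcd_mod(b, a % b);
}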

Connecting a Set of Vertices into an optimally weighted graph

This is essentially the problem of connecting n destinations with the minimal amount of road possible.
The input is a set of vertices (a,b, ... , n)
The weight of an edge between two vertices is easily calculated (for example, the Cartesian distance between the two vertices).
I would like an algorithm that, given a set of vertices in Euclidean space, returns a set of edges that constitute a connected graph whose total edge weight is as small as possible.
In graph language, this is the Minimum Spanning Tree of a Connected Graph.
With brute force I would:
1. Define all possible edges between all vertices - say you have n vertices, then you have n(n-1)/2 edges in the complete graph
2. Allow each possible edge to be on or off (2 states)
3. Go through all 2^(n(n-1)/2) possible edge on/off combinations
4. Ignore all those that would not connect the graph
5. From the remaining combinations, find the one whose sum of edge weights is the smallest of all
I understand this is an NP-Hard problem. However, realistically for my application I will have a maximum of 11 vertices, and I would like to be able to solve this on a typical modern smartphone, or at the very least on a small server.
As a second variation, I would like to achieve the same goal with the restriction that each vertex is connected to a maximum of one other vertex: essentially obtaining a single trace, starting from any point and finishing at any other point, as long as the graph is connected. There is no need to go back to where you started. In graph language, this is the Open Euclidean Traveling Salesman Problem.
Some pseudocode would be very helpful.
OK, for the first problem you have to build a Minimum Spanning Tree. There are several algorithms to do so, such as Prim's and Kruskal's. But take a look also, in the first link, at the treatment of complete graphs, which is your case.
For the second problem, it becomes a little more complicated. The problem becomes an Open Traveling Salesman Problem (oTSP). Reading the previous link, maybe focus on the Euclidean and asymmetric variants.
Regards
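Since the question asks for pseudocode, here is a hedged sketch of Prim's algorithm on a complete graph (the O(n^2) array-based variant, a natural fit for n = 11 points; it assumes a const int n, a weight[n][n] matrix, a large INF constant, and the variable names are my own):
// Grow a tree from vertex 0; at each step attach the cheapest outside vertex.
bool inTree[n];  // vertices already in the tree
double dist[n];  // cheapest edge from the tree to each outside vertex
int parent[n];   // tree endpoint of that cheapest edge
for (int i = 0; i < n; ++i) { inTree[i] = false; dist[i] = INF; parent[i] = -1; }
dist[0] = 0;
for (int step = 0; step < n; ++step) {
    int u = -1;
    for (int i = 0; i < n; ++i)   // pick the closest vertex not yet in the tree
        if (!inTree[i] && (u == -1 || dist[i] < dist[u])) u = i;
    inTree[u] = true;
    // if parent[u] != -1, output edge (parent[u], u) as part of the MST
    for (int i = 0; i < n; ++i)   // update distances through u
        if (!inTree[i] && weight[u][i] < dist[i]) { dist[i] = weight[u][i]; parent[i] = u; }
}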
Maybe you could try a greedy algorithm:
1. Create a list sortedList that stores each pair of nodes i and j and is sorted by the weight w(i,j).
2. Create a HashSet connectedNodes that is empty at the beginning.
3. while (connectedNodes.size() < n)
       element := first element of sortedList
       if (connectedNodes.isEmpty())
           connectedNodes.put(element.nodeI)
           connectedNodes.put(element.nodeJ)
           delete element from sortedList
       else
           for (element in sortedList) // start again with the first
               if (connectedNodes.contains(element.nodeI) || connectedNodes.contains(element.nodeJ))
                   if (!(connectedNodes.contains(element.nodeI) && connectedNodes.contains(element.nodeJ)))
                       // i.e. it does not already include both nodes
                       connectedNodes.put(element.nodeI)
                       connectedNodes.put(element.nodeJ)
                       delete element from sortedList
                       break
               else
                   continue
Let me explain step 3 a little:
You keep adding nodes until every node is connected to at least one other. The graph is guaranteed to be connected, because you only add a node if it has a connection to another node already in connectedNodes.
This algorithm is greedy, which means it does not guarantee that the solution is optimal. But it is quite a good approximation, because it always takes the shortest available edge (sortedList is sorted by edge weight).
You don't get duplicates in connectedNodes, because it is a HashSet, which also makes the lookups fast.
All in all, the runtime should be O(n^2 log n) for sorting the O(n^2) edges at the beginning, and after that around O(n^3), because in the worst case you run through the whole list (of size about n^2) in every step, and you do that n times, since you add one element in each step.
More likely, though, you find a usable element much faster than O(n^2); I think in most cases it is O(n).
You can solve the traveling salesman problem and the Hamiltonian path problem with the OptiMap TSP solver from Gebweb, or with a linear program solver. But the first question seems to ask for a minimum spanning tree; maybe the question tag is wrong?
For the first problem, there is an O(n^2 * 2^n) time algorithm. Basically, you can use dynamic programming to reduce the search space. Let's say the set of all vertices is V, so the state space consists of all subsets of V, and the objective function f(S) is the minimum sum of weights of edges connecting the vertices in S. For each state S, you may enumerate all edges (u, v) where u is in S and v is in V - S, and update f(S + {v}). After checking all possible states, the optimal answer is f(V).
Below is sample code to illustrate the idea, implemented in a backward manner (each state pulls from its predecessors instead of pushing to its successors).
const int n = 11;
const int INF = 1000000000;
int weight[n][n];
int f[1 << n];
for (int state = 1; state < (1 << n); ++state)
{
    if ((state & (state - 1)) == 0)
    {
        f[state] = 0; // base case: a single vertex needs no edges
        continue;
    }
    int res = INF;
    for (int i = 0; i < n; ++i)
    {
        if ((state & (1 << i)) == 0) continue;
        for (int j = 0; j < n; ++j)
        {
            if (j == i || (state & (1 << j)) == 0) continue;
            // attach vertex j to vertex i, with state - {j} already connected
            if (res > f[state - (1 << j)] + weight[i][j])
            {
                res = f[state - (1 << j)] + weight[i][j];
            }
        }
    }
    f[state] = res;
}
printf("%d\n", f[(1 << n) - 1]);
For the second problem, sorry I don't quite understand it. Maybe you should provide some examples?

Coin change algorithm and pseudocode: Need clarification

I'm trying to understand the coin change problem solution, but am having some difficulty.
At the Algorithmist, there is pseudocode for the dynamic programming solution, shown below:
n = goal number
S = [S1, S2, S3 ... Sm]
function sequence(n, m)
    //initialize base cases
    for i = 0 to n
        for j = 0 to m
            table[i][j] = table[i-S[j]][j] + table[i][j-1]
This is a pretty standard O(n*m) algorithm that avoids recalculating the same answer multiple times by using a 2-D array.
My issue is two-fold:
How to define the base cases and incorporate them into table[][] as initial values
How to extract the different sequences from the table
Regarding issue 1, there are three base cases with this algorithm:
if n == 0, return 1
if n < 0, return 0
if n >= 1 && m <= 0, return 0
How to incorporate them into table[][], I am not sure. Finally, I have no idea how to extract the solution set from the array.
We can implement a dynamic programming algorithm with at least two different approaches: one is the top-down approach using memoization, the other is the bottom-up iterative approach.
For a beginner to dynamic programming, I would always recommend the top-down approach first, since it helps them understand the recurrence relationships in dynamic programming.
So, in order to solve the coin changing problem, you've already understood what the recurrence relationship says:
table[i][j] = table[i-S[j]][j] + table[i][j-1]
That is, the number of ways to make i with the first j coin types is the number of ways that use at least one coin of type j plus the number of ways that use none. Such a recurrence relationship is good but not yet well-defined, since it doesn't have any boundary conditions. Therefore, we need to define boundary conditions to ensure that the recurrence terminates without going into an infinite loop.
So what will happen when we try to go down the recursion tree?
If we need to calculate table[i][j], which means the number of ways to change i using coins of types 0 to j, there are several corner cases we need to handle:
1) What if j == 0?
If j == 0, we would try to solve the sub-problem table(i, j-1), which is not a valid sub-problem. Therefore, one boundary condition is:
if(j == 0) {
    if(i == 0) table[i][j] = 1;
    else table[i][j] = 0;
}
2) What if i - S[j] < 0?
We also need to handle this boundary case: in such a condition we should either not try to solve this sub-problem, or treat table(i-S[j], j) as 0 in all of these cases.
So all in all, if we implement this dynamic programming in a top-down memoization approach, we can do something like this:
int f(int i, int j) {
    if(calc[i][j]) return table[i][j]; // this sub-problem was computed before
    calc[i][j] = true;
    if(j == 0) {
        if(i == 0) return table[i][j] = 1;
        else return table[i][j] = 0;
    }
    if(i >= S[j])
        return table[i][j] = f(i - S[j], j) + f(i, j-1);
    else
        return table[i][j] = f(i, j-1);
}
In practice, it's also possible to use the values of the table array itself to track whether a sub-problem has been calculated before (e.g. we can initialize it to -1, meaning the sub-problem hasn't been calculated yet).
Hope the answer is clear. :)
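On the second issue from the question (extracting the sequences), here is a hedged sketch of my own, not part of the answer above: enumerate the combinations recursively with the same include/exclude structure as the recurrence, collecting the chosen coins on the way down.
// Print every coin combination summing to i that uses coin types 1..j.
// chosen holds the coins picked on the current recursion path.
void extract(int i, int j, vector<int>& chosen) {
    if (j == 0) {
        if (i == 0) {                  // a complete, valid combination
            for (int c : chosen) printf("%d ", c);
            printf("\n");
        }
        return;
    }
    extract(i, j - 1, chosen);         // combinations that skip coin type j
    if (i >= S[j]) {                   // combinations that use coin type j
        chosen.push_back(S[j]);
        extract(i - S[j], j, chosen);
        chosen.pop_back();
    }
}
Calling extract(n, m, chosen) with an empty chosen vector prints each combination exactly once.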
