Coin change algorithm and pseudocode: need clarification

I'm trying to understand the coin change problem solution, but am having some difficulty.
At the Algorithmist, there is pseudocode for the dynamic programming solution, shown below:
n = goal number
S = [S1, S2, S3 ... Sm]

function sequence(n, m)
    // initialize base cases
    for i = 0 to n
        for j = 0 to m
            table[i][j] = table[i-S[j]][j] + table[i][j-1]
This is a pretty standard O(n*m) dynamic programming algorithm that avoids recalculating the same answer multiple times by using a 2-D array.
My issue is two-fold:
How to define the base cases and incorporate them in table[][] as initial values
How to extract out the different sequences from the table
Regarding issue 1, there are three base cases with this algorithm:
if n==0, return 1
if n < 0, return 0
if n >= 1 && m <= 0, return 0
I am not sure how to incorporate them into table[][]. Finally, I have no idea how to extract the solution set from the array.

We can implement a dynamic programming algorithm in at least two different approaches. One is the top-down approach using memoization, the other is the bottom-up iterative approach.
For a beginner to dynamic programming, I would always recommend using the top-down approach first since this will help them understand the recurrence relationships in dynamic programming.
So in order to solve the coin changing problem, you've already understood what the recurrence relationship says:
table[i][j] = table[i-S[j]][j] + table[i][j-1]
Such a recurrence relationship is good, but it is not yet well-defined since it doesn't have any boundary conditions. Therefore, we need to define boundary conditions to ensure the recursion terminates instead of descending forever.
So what will happen when we try to go down the recursive tree?
If we need to calculate table[i][j], which means the number of approaches to change i using coins from type 0 to j, there are several corner cases we need to handle:
1) What if j == 0?
If j == 0 we will try to solve the sub-problem table(i,j-1), which is not a valid sub-problem. Therefore, one boundary condition is:
if (j == 0) {
    if (i == 0) table[i][j] = 1;
    else        table[i][j] = 0;
}
2) What if i - S[j] < 0?
We also need to handle this boundary case: when i - S[j] < 0, we should either skip that sub-problem entirely or treat table(i-S[j], j) as 0 in all such cases.
All in all, if we implement this dynamic program with a top-down memoization approach, we can do something like this:
int f(int i, int j) {
    // table[i][j]: number of ways to make amount i using coin types 1..j
    if (calc[i][j]) return table[i][j];
    calc[i][j] = true;
    if (j == 0) {
        if (i == 0) return table[i][j] = 1;
        else        return table[i][j] = 0;
    }
    if (i >= S[j])
        return table[i][j] = f(i - S[j], j) + f(i, j - 1);
    else
        return table[i][j] = f(i, j - 1);
}
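The snippet above leaves its declarations implicit; it presumably relies on something like the following (my assumption, with hypothetical size constants):

const int MAXN = 1000;            // hypothetical maximum amount n
const int MAXM = 100;             // hypothetical maximum number of coin types m
int S[MAXM + 1];                  // coin values in S[1..m]; S[0] is unused
int table[MAXN + 1][MAXM + 1];    // table[i][j]: ways to make amount i with coin types 1..j
bool calc[MAXN + 1][MAXM + 1];    // calc[i][j]: has table[i][j] been computed?

The answer to the original question is then f(n, m).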
In practice, we can also use the values of the table array itself to track whether a sub-problem has been calculated before (e.g., initialize every entry to -1 to mean it hasn't been calculated yet), instead of keeping a separate calc array.
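For completeness, here is a minimal bottom-up version of the same recurrence (my sketch, not part of the original answer); it fills the table row by row so every sub-problem is ready before it is needed:

#include <iostream>
#include <vector>
using namespace std;

// ways[i][j]: number of ways to make amount i using coin types 1..j.
// Coins are stored in S[1..m]; S[0] is unused, mirroring the code above.
int countWays(int n, const vector<int>& S) {
    int m = (int)S.size() - 1;
    vector<vector<long long>> ways(n + 1, vector<long long>(m + 1, 0));
    for (int j = 0; j <= m; ++j) ways[0][j] = 1;   // base case: one way to make 0
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= m; ++j) {
            ways[i][j] = ways[i][j - 1];                     // skip coin j entirely
            if (i >= S[j]) ways[i][j] += ways[i - S[j]][j];  // use at least one coin j
        }
    return (int)ways[n][m];
}

int main() {
    vector<int> S = {0, 1, 2, 5};     // S[0] unused; coins of value 1, 2, 5
    cout << countWays(5, S) << "\n";  // prints 4: 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
}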
Hope the answer is clear. :)

Related

Greedy Algorithm for solving Horn formulas

This is an assignment question that I've been trying to understand, and ultimately solve, for a couple of days. So far I have had no success, so any guidance or help in understanding or solving the problem is appreciated.
You are given a set of m constraints over n Boolean variables
{x1, x2, ..., xn}.
The constraints are of two types:
equality constraints: xi = xj, for some i != j
inequality constraints: xi != xj, for some i != j
Design an efficient greedy algorithm that, given the set of equality and inequality constraints, determines whether it is possible to satisfy all the constraints simultaneously. If it is possible to satisfy all the constraints, your algorithm should output an assignment to the variables that satisfies all the constraints.
Choose a representation for the input to this problem and state the problem formally using the notation Input: ..., Output: ....
Describe your greedy algorithm in plain English. In what sense is your algorithm "greedy"?
Describe your greedy algorithm in pseudocode.
Briefly justify the correctness of your algorithm.
State and justify the running time of your algorithm. The more efficient the algorithm, the better.
What I've figured out so far is that this problem is related to the Boolean satisfiability (SAT) problem. I've tried setting all the variables to false first and then, using counterexamples, proving that this cannot satisfy all the constraints at once.
I am getting confused between constraint satisfaction problems (CSP) and Horn SAT. I read certain articles on these to get a solution and this led me to confusion. My logic was to create a tree and apply DFS to check if constraints are satisfied, whereas Horn SAT solutions are leading me to mathematical proofs.
Any help is appreciated as this is my learning stage and I cannot master it all at once. :)
(informal) Classification:
So firstly, it's not the general boolean SAT problem, because that's NP-complete. Your teacher has implied that this isn't NP-complete by asking for an efficient (i.e. at most polynomial-time) way to always solve the problem.
Modelling (thinking about) the problem:
One way to think of this problem is as a graph, where inequalities represent one type of edge and equalities represent another. (The original answer illustrated this with a diagram.)
Thinking of this problem graphically helped me realise that it's a bit like a graph-colouring problem: we could set all nodes to ? (unset), then choose any node to set to true, then do a breadth-first search from that node to set all connecting nodes (setting them to either true or false), checking for any contradiction. If we complete this for a connected component of the graph, without finding contradictions, then we can ignore all nodes in that part and randomly set the value of another node, etc. If we do this until no connected components are left, and we still have no contradictions, then we've set the graph in a way that represents a legitimate solution.
Solution:
Because there are exactly n elements, we can make an associated "bucket" array for the equalities and another for the inequalities (each "bucket" could contain an array of what it equates to, but we could get even more efficient than this if we wanted [the complexity would remain the same]).
Your array of arrays for equalities (drawn as a diagram in the original answer) would represent, for example, that:
0 == 1
1 == 2
3 == 4
Note that this is an irregular matrix, and requires 2*m space. We do the same thing for an inequality matrix. Moreover, setting up both of these arrays (of arrays) uses O(m + n) space and time.
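A minimal sketch of that setup (my code, not from the original answer; equalityPairs and inequalityPairs are assumed input vectors of index pairs):

#include <utility>
#include <vector>
using namespace std;

// Build the two "bucket" arrays (adjacency lists) from the m constraints.
void buildBuckets(int n,
                  const vector<pair<int,int>>& equalityPairs,
                  const vector<pair<int,int>>& inequalityPairs,
                  vector<vector<int>>& equalities,
                  vector<vector<int>>& inequalities) {
    equalities.assign(n, {});
    inequalities.assign(n, {});
    for (const auto& [i, j] : equalityPairs) {   // each constraint is stored at
        equalities[i].push_back(j);              // both endpoints, hence the
        equalities[j].push_back(i);              // 2*m total entries
    }
    for (const auto& [i, j] : inequalityPairs) {
        inequalities[i].push_back(j);
        inequalities[j].push_back(i);
    }
}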
Now, if there exists a solution, {x0, x1, x2, x3}, then {!x0, !x1, !x2, !x3} is also a solution. Proof:
(xi == xj) iff (!xi == !xj)
So it won't affect our solution if we set one of the elements arbitrarily. Let's set xi to true, and set the others to ? (numerically we'll be dealing with three values: 0 for false, 1 for true, and 2 for unset).
We'll call this array solution (even though it's not finished yet).
Now we can use recursion to consider all the consequences of setting our value:
(The below code is pseudo-code, as the questioner didn't specify a language. I've made it somewhat C++-style, but just to keep it generic and to use the pretty formatting colours.)
bool Set (int i, bool val) // i is the index
{
    if (solution[i] != '?')
        return (solution[i] == val); // already set: just check consistency
    solution[i] = val;
    for (int j = 0; j < equalities[i].size(); j += 1)
    {
        bool success = Set(equalities[i][j], val);
        if (!success)
            return false; // Contradiction found
    }
    for (int j = 0; j < inequalities[i].size(); j += 1)
    {
        bool success = Set(inequalities[i][j], !val);
        if (!success)
            return false; // Contradiction found
    }
    return true; // No contradiction found
}
void Solve ()
{
    for (int i = 0; i < solution.size(); i += 1)
        solution[i] = '?';
    for (int i = 0; i < solution.size(); i += 1)
    {
        if (solution[i] != '?')
            continue; // value has already been set/checked
        bool success = Set(i, true);
        if (!success)
        {
            print "No solution";
            return;
        }
    }
    print "At least one solution exists. Here is a solution:";
    print solution;
}
Because of the first if condition in the Set function, the body of the function (beyond that if statement) can execute at most n times, once per node. Each time the body runs, the work done is proportional to the number of edges associated with the corresponding node. The Solve function calls Set at most n times. Hence the total work across all calls is O(m+n), which bounds the cost of the solving process.
A trick here is to recognise that the Solve function will need to call the Set function C times, where C is the number of connected components of the graph. Note that each connected component is independent of each other, so the same rule applies: we can legitimately choose a value of one of its elements then consider the consequences.
The fastest solution would still need to read all of the constraints, O(m) and would need to output a solution when it's possible, O(n); therefore it's not possible to get a solution with better time complexity than O(m+n). The above is a greedy algorithm with O(m+n) time and space complexity.
It's probably possible to get better space complexity (while maintaining the O(m+n) time complexity), maybe even O(1), but I'm not sure.
As for Horn formulas, I'm embarrassed to admit that I know nothing about them, but this answer directly responds to everything that was asked of you in the assignment.
Let's take an example: the assignment 110 with constraints x1 == x2 and x2 != x3.
Remember that since we are only given the constraints, the algorithm could just as well generate 001 as output, since it satisfies the constraints too.
One way to solve it would be:
Keep two lists, one per constraint type, where each list holds (i, j) index pairs.
Sort the lists on the i index.
For each pair in the equality list, check that no pair in the inequality list conflicts with it. If one does, you can exit right away.
Otherwise, check whether other pairs in the equality list share an index with this pair, and propagate the assignment through them.
You can then assign one or zero to each variable and eventually generate the complete output.
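For contrast, a standard alternative that neither answer uses is union-find with a parity bit, which processes the constraints in near-linear time. Here is a minimal sketch (all names are mine, and this is a different technique from the ones described above):

#include <cstdio>
#include <numeric>
#include <utility>
#include <vector>
using namespace std;

struct ParityDSU {
    vector<int> parent, parity;   // parity[v]: does v differ from its parent?
    ParityDSU(int n) : parent(n), parity(n, 0) {
        iota(parent.begin(), parent.end(), 0);
    }
    pair<int,int> find(int v) {   // returns {root, parity of v relative to root}
        if (parent[v] == v) return {v, 0};
        auto [root, p] = find(parent[v]);
        parent[v] = root;         // path compression, folding in the parity
        parity[v] ^= p;
        return {root, parity[v]};
    }
    // rel = 0 for xi == xj, rel = 1 for xi != xj; false means contradiction.
    bool unite(int a, int b, int rel) {
        auto [ra, pa] = find(a);
        auto [rb, pb] = find(b);
        if (ra == rb) return (pa ^ pb) == rel;
        parent[ra] = rb;
        parity[ra] = pa ^ pb ^ rel;
        return true;
    }
};

int main() {
    ParityDSU dsu(3);             // the example above: x1 == x2, x2 != x3
    bool ok = dsu.unite(0, 1, 0) && dsu.unite(1, 2, 1);
    if (!ok) { printf("No solution\n"); return 0; }
    for (int v = 0; v < 3; ++v)   // give each component's root the value true
        printf("%d", dsu.find(v).second ^ 1);
    printf("\n");                 // prints 001, the complement of 110 -- also valid
}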

Big-O & Runing Time of this algorthm , how can convert this to an iterative algorthm

What is the running time of this algorithm in Big-O notation, and how do I convert it to an iterative algorithm?
public static int RecursiveMaxOfArray(int[] array) {
    int array1[] = new int[array.length / 2];
    int array2[] = new int[array.length - (array.length / 2)];
    for (int index = 0; index < array.length / 2; index++) {
        array1[index] = array[index];
    }
    for (int index = array.length / 2; index < array.length; index++) {
        array2[index - array.length / 2] = array[index];
    }
    if (array.length > 1) {
        if (RecursiveMaxOfArray(array1) > RecursiveMaxOfArray(array2)) {
            return RecursiveMaxOfArray(array1);
        }
        else {
            return RecursiveMaxOfArray(array2);
        }
    }
    return array[0];
}
At each stage, an array of size N is divided into equal halves. The function is then recursively called three times on an array of size N/2. Why three instead of the four which are written? Because the if statement only enters one of its clauses. Therefore the recurrence relation is T(N) = 3T(N/2) + O(N), which (using the Master theorem) gives O(N^[log2(3)]) = O(n^1.58).
However, you don't need to call it for the third time; just cache the return result of each recursive call in a local variable. The coefficient 3 in the recurrence relation becomes 2; I'll leave it to you to apply the Master theorem on the new recurrence.
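To make that concrete, here is a sketch (mine, in C++ to match most other snippets in this thread) that caches each recursive result in a local variable. It also passes index bounds instead of copying the halves, which removes the O(N) term as well, giving T(N) = 2T(N/2) + O(1) = O(N):

#include <iostream>
#include <vector>
using namespace std;

// Divide-and-conquer max: each half's result is computed exactly once and
// held in a local variable instead of being recomputed in the comparison.
int recursiveMax(const vector<int>& a, int lo, int hi) {
    if (lo == hi) return a[lo];                // single element: base case
    int mid = lo + (hi - lo) / 2;
    int left  = recursiveMax(a, lo, mid);      // cached in a local
    int right = recursiveMax(a, mid + 1, hi);  // cached in a local
    return left > right ? left : right;
}

int main() {
    vector<int> a = {3, 7, 1, 9, 4};
    cout << recursiveMax(a, 0, (int)a.size() - 1) << "\n";  // prints 9
}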
There's another answer that accurately describes your algorithm's runtime complexity, how to determine it, and how to improve it, so I won't focus on that. Instead, let's look at the other part of your question:
how do I convert this to an iterative algorithm?
Well, there's a straightforward solution to that which you hopefully could have gotten yourself: loop over the array and track the largest value you've seen so far.
However, I'm guessing your question is better phrased as this:
How do I convert a recursive algorithm into an iterative algorithm?
There are plenty of questions and answers on this, not just here on StackOverflow, so I suggest you do some more research on this subject. These blog posts on converting recursion to iteration may be an excellent place to start if this is the approach you take, though I can't vouch for them because I haven't read them. I just googled "convert recursion to iteration," picked the first result, then found this page which links to all four of the blog posts.

Alternative approach for the Rod Cutting Algorithm (recursive)

In the book Introduction to Algorithms, the naive approach to solving the rod cutting problem can be described by the following recurrence:
Let q be the maximum price that can be obtained from a rod of length n.
Let the array price[1..n] store the given prices, where price[i] is the given price for a rod of length i.
rodCut(int n)
{
    if n == 0
        return 0    // base case, as in the book's CUT-ROD
    initialize q as q = INT_MIN
    for i = 1 to n
        q = max(q, price[i] + rodCut(n-i))
    return q
}
What if I solve it using the below approach:
rodCutv2(int n)
{
    if (n == 0)
        return 0
    initialize q = price[n]
    for i = 1 to n/2
        q = max(q, rodCutv2(i) + rodCutv2(n-i))
    return q
}
Is this approach correct? If yes, why do we generally use the first one? Why is it better?
NOTE:
I am just concerned with the approach to solving this problem. I know that this problem exhibits optimal substructure and overlapping subproblems and can be solved efficiently using dynamic programming.
The problem with the second version is it's not making use of the price array. There is no base case of the recursion so it'll never stop. Even if you add a condition to return price[i] when n == 1 it'll always return the result of cutting the rod into pieces of size 1.
Your 2nd approach is absolutely correct, and its time complexity is the same as the 1st one's.
In dynamic programming we can also build a tabulation on the same approach. Here is my solution using recursion:
int rodCut(int price[], int n) {
    if (n <= 0) return 0;
    int ans = price[n-1];
    for (int i = 1; i <= n/2; ++i) {
        ans = max(ans, rodCut(price, i) + rodCut(price, n-i));
    }
    return ans;
}
And here is the solution using dynamic programming:
int rodCut(int *price, int n) {
    int ans[n+1];
    ans[0] = 0;  // if the length of the rod is zero
    for (int i = 1; i <= n; ++i) {
        int max_value = price[i-1];
        for (int j = 1; j <= i/2; ++j) {
            max_value = max(max_value, ans[j] + ans[i-j]);
        }
        ans[i] = max_value;
    }
    return ans[n];
}
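A quick hypothetical driver for the tabulated version (my addition, repeating the function so the snippet stands alone), using the familiar CLRS prices for rod lengths 1 through 4:

#include <algorithm>
#include <cstdio>
#include <vector>
using namespace std;

int rodCutDP(int *price, int n) {   // the tabulated function from above
    vector<int> ans(n + 1, 0);
    for (int i = 1; i <= n; ++i) {
        int max_value = price[i - 1];
        for (int j = 1; j <= i / 2; ++j)
            max_value = max(max_value, ans[j] + ans[i - j]);
        ans[i] = max_value;
    }
    return ans[n];
}

int main() {
    int price[] = {1, 5, 8, 9};          // prices for lengths 1..4
    printf("%d\n", rodCutDP(price, 4));  // prints 10: two pieces of length 2
}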
Your algorithm looks almost correct - you need to be a bit careful when n is odd.
However, it's also exponential in time complexity - you make two recursive calls in each call to rodCutv2. The first algorithm uses memoisation (the price array), so avoids computing the same thing multiple times, and so is faster (it's polynomial-time).
Edit: Actually, the first algorithm isn't correct! It never stores values in prices, but I suspect that's just a typo and not intentional.
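Either version becomes polynomial once results are memoized by rod length; each sub-length is then solved exactly once, for O(n^2) time overall. A minimal top-down sketch of rodCutv2 with a memo table (my code, not from the thread; price is 0-indexed as in the answer above):

#include <algorithm>
#include <cstdio>
#include <vector>
using namespace std;

// memo[i] caches the best revenue for a rod of length i; -1 = not yet computed.
int rodCutMemo(const vector<int>& price, int n, vector<int>& memo) {
    if (n == 0) return 0;
    if (memo[n] != -1) return memo[n];
    int best = price[n - 1];              // option: sell the rod uncut
    for (int i = 1; i <= n / 2; ++i)
        best = max(best, rodCutMemo(price, i, memo) +
                         rodCutMemo(price, n - i, memo));
    return memo[n] = best;
}

int rodCutTopDown(const vector<int>& price, int n) {
    vector<int> memo(n + 1, -1);
    return rodCutMemo(price, n, memo);
}

int main() {
    vector<int> price = {1, 5, 8, 9};          // same example as above
    printf("%d\n", rodCutTopDown(price, 4));   // prints 10
}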

1D Memoization in Recursive solution of Longest Increasing Subsequence

Calculating the LIS (Longest Increasing Subsequence) of an array is a very famous dynamic programming problem. However, every tutorial first shows the recursive solution without using the concepts of DP, and then solves it by applying bottom-up DP (the iterative solution).
My question is:
How do we use memoization in the recursive solution itself? Not just memoization, but memoization using a 1D array.
I did some research but could not find anything relevant. There are two places (1 & 2) where recursive memoization has been asked about, but the solutions there use a 2D map/array for memoization.
Anyway, memoizing the solution with a 1D array is giving the wrong output.
Here is what I did:
int lis(int idx, int prev)
{
    if (idx >= N)
        return 0;
    if (dp[idx])
        return dp[idx];
    int best_ans = lis(idx+1, prev);
    int cur_ans = 0;
    if (arr[idx] > prev)
    {
        cur_ans = 1 + lis(idx+1, arr[idx]);
    }
    int ans = max(best_ans, cur_ans);
    dp[idx] = ans;
    return ans;
}
int main()
{
    // Scan N
    // Scan arr
    ans = lis(0, -1);
    print ans;
}
I know why this solution gives the wrong output: there can be more than one answer for a given index, depending on what the previous value was.
But I still want to know how it can be done using a 1D array.
I am curious to know the solution because I have read that every top-down DP solution can be reframed as bottom-up and vice versa.
It would be highly helpful if someone could provide some insight for the same.
Thanks in advance.
This can't be done because the problem fundamentally needs a 2D data structure to solve.
The bottom-up approach can cheat by producing one row at a time of the data structure. Viewed over time, it produces a 2D data structure, but at any given time you only see one dimension of it.
The top-down approach has to build the entire 2D data structure.
This is a fundamental tradeoff in DP. It is usually easier to write down the top-down approach. But the bottom-up approach only has to have part of the overall data structure at any time, and therefore has significantly lower memory requirements.
def LongestIncreasingSubsequenceMemo(nums, i, cache):
    if cache[i] > 0:
        return cache[i]
    result = 1
    for j in range(i):
        if nums[i] > nums[j]:
            result = max(result, 1 + LongestIncreasingSubsequenceMemo(nums, j, cache))
    cache[i] = result
    return result

def main():
    nums = [1, 2, 3, 4, 5]
    if not nums:
        return 0
    n = len(nums)
    cache = [0 for i in range(n)]
    result = 1
    for i in range(n):
        result = max(result, LongestIncreasingSubsequenceMemo(nums, i, cache))
    return result

if __name__ == "__main__":
    print(main())
In the above solution, we are taking a one-dimensional array and updating it for each element in the array.
This can be done, and there is no requirement for a 2D array, because what we need is the max LIS ending at every index. Once we calculate the LIS ending at arr[0], instead of calculating it again and again, we calculate it once and store it in DP[1]. If we calculate the LIS over {arr[0], arr[1]}, we store the result in DP[2], and so on up to DP[n].
(The memoized recursive code this answer pointed to, which it said was also accepted on GfG, was not preserved in this thread; see the sketch below.)
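Since that code was lost, here is a minimal reconstruction of the idea in C++ (my sketch): define the state as the length of the LIS ending at index i. That state depends only on i, so a single 1D array suffices.

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

vector<int> arr = {10, 9, 2, 5, 3, 7, 101, 18};
vector<int> dp(arr.size(), 0);  // dp[i]: LIS length ending at arr[i]; 0 = unknown

int lisEndingAt(int i) {
    if (dp[i]) return dp[i];
    int best = 1;                              // arr[i] by itself
    for (int j = 0; j < i; ++j)
        if (arr[j] < arr[i])                   // arr[i] can extend the LIS at j
            best = max(best, 1 + lisEndingAt(j));
    return dp[i] = best;
}

int main() {
    int ans = 0;
    for (int i = 0; i < (int)arr.size(); ++i)  // the overall LIS ends somewhere
        ans = max(ans, lisEndingAt(i));
    cout << ans << "\n";                       // prints 4 (e.g. 2, 3, 7, 101)
}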

Dynamic programming recurrence relation

I am trying to find and solve the recurrence relation for a dynamic programming approach to UVA #11450. As a disclaimer, this is part of a homework assignment that I have mostly finished but am confused about the analysis.
Here is my (working) code:
int shop(int m, int c, int items[][21], int sol[][20]) {
    if (m < 0) return NONE;                  // No money left
    if (c == 0) return 0;                    // No garments left
    if (sol[m][c] != NONE) return sol[m][c]; // We've been here before
    // For each model of the current garment
    for (int i = 1; i <= items[c-1][0]; i++) {
        // Save the result
        int result = shop(m - items[c-1][i], c - 1, items, sol);
        // If there was a valid result, record it for next time
        if (result != NONE) sol[m][c] = max(sol[m][c], result + items[c-1][i]);
    }
    return sol[m][c];
}
I am having trouble with a few aspects of the analysis:
What is the basic operation? My initial reaction would be subtraction, since each time we call the function we subtract one from C.
Since the recursive call is within a loop, does that just mean multiplication in the recurrence relation?
How do I factor the use of a dynamic table into the recurrence relation? I know that some problems decompose into linear ones when a table is used, but I'm not sure how this one decomposes.
I know that the complexity (according to Algorithmist) is O(M*C*max(K)) where K is the number of models of each garment, but I'm struggling to work backwards to get the recurrence relation. Here's my guess:
S(c) = k * S(c-1) + 1, S(0) = 0
However, this fails to take M into account.
Thoughts?
You can think of each DP state (m, c) as a vertex of a graph, where the recursive calls to states (m-item_i, c-1) are edges from (m, c) to (m-item_i, c-1).
Memoization of your recursion means that you only start the search from a vertex once, and also process its outgoing edges only once. So your algorithm is essentially a linear-time search on this graph, with complexity O(|V|+|E|). There are M*C vertices and at most max(K) edges going out of each one, so you can bound the number of edges by O(M*C*max(K)).
