Coin denominations: {1, 3, 5}; target sum = 11.
Find the minimum number of coins that can be used to make the sum.
(We can use any number of coins of each denomination.)
I searched for the run-time complexity of this coin change problem, particularly using the dynamic programming method, but was not able to find an explanation anywhere.
How do I calculate the complexity of the non-dynamic (recursive) solution, and how does it change for the dynamic one? (Not the greedy one.)
Edit:
Here is an implementation for which analysis was asked.
public int findCoinChange(int[] coins, int sum, int count) {
    int ret = 0, maxRet = -1;
    if (sum == 0) {
        maxRet = count;            // reached the target using exactly `count` coins
    } else if (sum < 0) {
        maxRet = -1;               // overshot the target: this branch is a dead end
    } else {
        for (int i : coins) {
            ret = findCoinChange(coins, sum - i, count + 1);
            if (maxRet < 0) {
                maxRet = ret;      // first result seen so far
            } else if (ret >= 0 && ret < maxRet) {
                maxRet = ret;      // keep the smaller coin count
            }
        }
    }
    if (maxRet < 0) return -1;
    else return maxRet;
}
Looks like a combinatorial explosion to me. However, I am not sure how to deduce a run-time complexity for this.
The dynamic programming solution to this problem is clearly O(k * n) (nested loops, blah blah blah) where k is the number of coins and n is the amount of money that change is being made for.
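For reference, here's a minimal sketch of that nested-loop table in Java (the method name and details are mine, not from the answer):

// Bottom-up coin change: dp[s] = minimum number of coins summing to s,
// or "infinity" if s is unreachable. Two nested loops over n+1 sums and
// k coin denominations give the O(k * n) bound.
public static int minCoins(int[] coins, int sum) {
    int[] dp = new int[sum + 1];
    java.util.Arrays.fill(dp, Integer.MAX_VALUE);
    dp[0] = 0;                                    // zero coins make sum 0
    for (int s = 1; s <= sum; s++) {
        for (int c : coins) {
            if (c <= s && dp[s - c] != Integer.MAX_VALUE) {
                dp[s] = Math.min(dp[s], dp[s - c] + 1);
            }
        }
    }
    return dp[sum] == Integer.MAX_VALUE ? -1 : dp[sum];
}

For the example above ({1, 3, 5}, sum 11) this returns 3, corresponding to 5 + 5 + 1.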
I don't know what you mean by a non-dynamic-programming solution. Sorry, you're going to have to specify what algorithm you mean. The greedy algorithm fails in some cases, so you shouldn't be referring to that. Do you mean the linear programming solution? That's a terrible approach to this problem, because we don't know what its complexity is, and it's possible to make it run arbitrarily slowly.
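(An example of greedy failing: with denominations {1, 3, 4} and target 6, greedy takes 4 + 1 + 1, three coins, while the optimum is 3 + 3, two coins.)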
I also don't know what you mean by "change it for the dynamic one."
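For what it's worth, regarding the recursive implementation added in the edit: each call fans out into k recursive calls (one per coin), and the recursion depth is at most n, since every call reduces sum by at least the smallest denomination (which is 1 here). So a loose upper bound is O(k^n) calls; that exponential blow-up is exactly the combinatorial explosion the question suspects.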
Related
In the question we have items with different values; weight doesn't matter. We have a profit goal that we want to reach by picking items, we want to use as few items as possible, and the supply of each item is unlimited.
So let's say our goal is 10 and we have items with values 1, 2, 3, 4. We want 4, 4, 2 rather than 3, 3, 3, 1: they have the same total value, but what we wanted was the fewest items for the same profit.
I have already derived both dynamic and recursive methods to solve it, but the trouble for me is that I just cannot derive a mathematical formula for my recursive method.
Here is my recursive method
static int recursiveSolution(int[] values, int goal, int totalAmount) {
    if (goal == 0)
        return totalAmount;            // reached the goal exactly: report items used
    if (goal < 0)
        return Integer.MAX_VALUE;      // overshot the goal: this branch is infeasible
    totalAmount++;
    int minAmount = Integer.MAX_VALUE;
    for (int i = 0; i < values.length; i++) {
        minAmount = Math.min(minAmount, recursiveSolution(values, goal - values[i], totalAmount));
    }
    return minAmount;
}
Assuming that what you are asking for is the Big-O of your algorithm...
This looks like a simple brute-force summation search. Unfortunately, the run-times of these are not easy to estimate, as they depend on the input values as well as their count (N). The Big-O estimates that I have seen for this type of solution look something like this:
Let
N = the number of values in the `values[]` array,
Gv = PRODUCT(values[])^(1/N) (the geometric mean of the `values[]` array),
M = the target summation (`goal` in your code; `totalAmount` is just the running item count).
Then the complexity of the algorithm is (very approximately)
O(N^(M/Gv))
If I recall this correctly.
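As a rough sanity check on the example above: with values {1, 2, 3, 4} we have N = 4, Gv = (1*2*3*4)^(1/4) ≈ 2.21, and M = 10, so the estimate gives roughly 4^(10/2.21) ≈ 4^4.5 ≈ 500 recursive calls. Crude, but it shows the run-time growing exponentially in the target M.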
Hello StackOverflow community!
I have had this question in my mind for so many days and finally decided to get it sorted out. Given an algorithm, or say a function that implements some non-standard algorithm in your daily coding activity, how do you go about analyzing its run-time complexity?
OK, let me be more specific. Suppose you are solving this problem:
Given an NxN matrix consisting of positive integers, find the longest increasing sequence in it. You may only traverse up, down, left or right, but not diagonally.
Eg: If the matrix is
[ [9,9,4],
[6,6,8],
[2,1,1] ].
the algorithm must return 4 (the sequence being 1 -> 2 -> 6 -> 9).
So yeah, it looks like I have to use DFS. I get this part. I did my Algorithms course back at uni and can work my way around such questions. So I come up with this solution, say:
class Solution
{
public int longestIncreasingPathStarting(int[][] matrix, int i, int j)
{
int localMax = 1;
int[][] offsets = {{0,1}, {0,-1}, {1,0}, {-1,0}};
for (int[] offset: offsets)
{
int x = i + offset[0];
int y = j + offset[1];
if (x < 0 || x >= matrix.length || y < 0 || y >= matrix[i].length || matrix[x][y] <= matrix[i][j])
continue;
localMax = Math.max(localMax, 1+longestIncreasingPathStarting(matrix, x, y));
}
return localMax;
}
public int longestIncreasingPath(int[][] matrix)
{
if (matrix.length == 0)
return 0;
int maxLen = 0;
for (int i = 0; i < matrix.length; ++i)
{
for (int j = 0; j < matrix[i].length; ++j)
{
maxLen = Math.max(maxLen, longestIncreasingPathStarting(matrix, i, j));
}
}
return maxLen;
}
}
Inefficient, I know, but I wrote it this way on purpose! Anyway, my question is: how do you go about analyzing the run time of the longestIncreasingPath(matrix) function?
I can understand the analysis they teach us in an Algorithms course, you know, the standard MergeSort and QuickSort analyses etc., but, and I hate to say this, that did not prepare me to apply it in my day-to-day coding job. I want to do that now, and hence would like to start by analyzing such functions.
Can someone help me out here and describe the steps one would take to analyze the runtime of the above function? That would greatly help me. Thanks in advance, Cheers!
For day-to-day work, eyeballing things usually works well.
In this case you will try to go in every direction recursively. So a really bad example comes to mind, like [[1,2,3], [2,3,4], [3,4,5]], where you have two options from most cells. I happen to know that this will be O((2n)! / (n!*n!)) steps, but another good guess would be O(2^N). Now that you have an example where you know, or can more easily compute, the complexity, the overall complexity has to be at least that.
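(For n = 3 that formula evaluates to 6!/(3!*3!) = 20; and in general (2n)!/(n!*n!) grows like 4^n/sqrt(pi*n), so either guess is exponential.)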
Usually it doesn't really matter which one it is exactly, since for both O(N!) and O(2^N) the run-time grows very fast; such a solution is only fast for inputs up to around 10-20, maybe a bit more if you are willing to wait. You would not run this algorithm for N ~= 1000; you would need something polynomial. So a rough estimate that you have an exponential solution is enough to make a decision.
So in general to get an idea of the complexity, try to relate your solution to other algorithms where you know the complexity already or figure out a worst case scenario for the algorithm where it's easier to judge the complexity. Even if you are slightly off it might still help you make a decision.
If you need to compare algorithms of more similar complexity (i.e. O(N log N) vs O(N^2) for N ~= 100), you should implement both and benchmark, since the constant factor might be the leading contributor to the run-time.
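To make "you would need something polynomial" concrete, here is a sketch (mine, not from the question) of the standard fix: memoise longestIncreasingPathStarting so each cell is computed only once, which brings the whole thing down to O(N^2) for an NxN matrix:

// Memoised variant: cache[i][j] stores the longest increasing path starting at (i, j).
// Each cell is computed once, doing O(1) work per neighbour, so the total is O(N^2).
public int longestIncreasingPathStarting(int[][] matrix, int[][] cache, int i, int j)
{
    if (cache[i][j] != 0)
        return cache[i][j];                  // already solved: constant-time lookup
    int localMax = 1;
    int[][] offsets = {{0,1}, {0,-1}, {1,0}, {-1,0}};
    for (int[] offset : offsets)
    {
        int x = i + offset[0];
        int y = j + offset[1];
        if (x < 0 || x >= matrix.length || y < 0 || y >= matrix[i].length || matrix[x][y] <= matrix[i][j])
            continue;
        localMax = Math.max(localMax, 1 + longestIncreasingPathStarting(matrix, cache, x, y));
    }
    cache[i][j] = localMax;                  // each cell is filled in exactly once
    return localMax;
}

The caller allocates the cache once (new int[matrix.length][matrix[0].length]) and passes it to every call; 0 safely means "not yet computed", because every real path length is at least 1.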
What is the running time of this algorithm in Big-O, and how do I convert it to an iterative algorithm?
public static int RecursiveMaxOfArray(int[] array) {
    // Split the array into two halves.
    int[] array1 = new int[array.length / 2];
    int[] array2 = new int[array.length - (array.length / 2)];
    for (int index = 0; index < array.length / 2; index++) {
        array1[index] = array[index];
    }
    for (int index = array.length / 2; index < array.length; index++) {
        array2[index - array.length / 2] = array[index];
    }
    if (array.length > 1) {
        // Note: after the comparison, the winning half is recursed into again,
        // so each level makes three recursive calls in total.
        if (RecursiveMaxOfArray(array1) > RecursiveMaxOfArray(array2)) {
            return RecursiveMaxOfArray(array1);
        } else {
            return RecursiveMaxOfArray(array2);
        }
    }
    return array[0];
}
At each stage, an array of size N is divided into two equal halves. The function is then recursively called three times on arrays of size N/2. Why three, instead of the four that are written? Because the if statement only enters one of its clauses. Therefore the recurrence relation is T(N) = 3T(N/2) + O(N), which (using the Master theorem) gives O(N^(log2 3)) ≈ O(N^1.58).
However, you don't need the third call; just cache the return result of each recursive call in a local variable. The coefficient 3 in the recurrence relation then becomes 2; I'll leave it to you to apply the Master theorem to the new recurrence.
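A sketch of that caching, assuming a non-empty array just as the original does (Arrays.copyOfRange stands in for the manual copy loops and is still O(N) work per call):

import java.util.Arrays;

public static int recursiveMaxOfArray(int[] array) {
    if (array.length == 1) {
        return array[0];                      // base case: a single element is its own maximum
    }
    int mid = array.length / 2;
    // Each half is recursed into exactly once; the results live in local variables,
    // so the recurrence drops from T(N) = 3T(N/2) + O(N) to T(N) = 2T(N/2) + O(N).
    int leftMax = recursiveMaxOfArray(Arrays.copyOfRange(array, 0, mid));
    int rightMax = recursiveMaxOfArray(Arrays.copyOfRange(array, mid, array.length));
    return leftMax > rightMax ? leftMax : rightMax;
}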
There's another answer that accurately describes your algorithm's runtime complexity, how to determine it, and how to improve it, so I won't focus on that. Instead, let's look at the other part of your question:
how [do] i convert this to [an] iterative algorithm?
Well, there's a straightforward solution to that, which you hopefully could have gotten yourself: loop over the array and track the largest value you've seen so far.
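In Java, that one-pass version might look like this (the method name is mine):

// One pass over the array, keeping the largest value seen so far.
public static int iterativeMaxOfArray(int[] array) {
    int max = array[0];                       // assumes a non-empty array, like the original
    for (int i = 1; i < array.length; i++) {
        if (array[i] > max) {
            max = array[i];
        }
    }
    return max;
}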
However, I'm guessing your question is better phrased as this:
How do I convert a recursive algorithm into an iterative algorithm?
There are plenty of questions and answers about this, not just here on StackOverflow, so I suggest you do some more research on the subject. These blog posts on converting recursion to iteration may be an excellent place to start if this is the approach you take, though I can't vouch for them because I haven't read them. I just googled "convert recursion to iteration," picked the first result, then found this page, which links to all four of the blog posts.
In the book Introduction to Algorithms, the naive approach to solving the rod cutting problem can be described by the following recurrence:
Let q be the maximum price that can be obtained from a rod of length n.
Let the array price[1..n] store the given prices; price[i] is the given price for a rod of length i.
rodCut(int n)
{
    initialize q as q = INT_MIN
    for i = 1 to n
        q = max(q, price[i] + rodCut(n - i))
    return q
}
What if I solve it using the below approach:
rodCutv2(int n)
{
    if (n == 0)
        return 0
    initialize q = price[n]
    for i = 1 to n/2
        q = max(q, rodCutv2(i) + rodCutv2(n - i))
    return q
}
Is this approach correct? If yes, why do we generally use the first one? Why is it better?
NOTE:
I am just concerned with the approach to solving this problem. I know that this problem exhibits optimal substructure and overlapping subproblems and can be solved efficiently using dynamic programming.
The problem with the second version is it's not making use of the price array. There is no base case of the recursion so it'll never stop. Even if you add a condition to return price[i] when n == 1 it'll always return the result of cutting the rod into pieces of size 1.
Your 2nd approach is absolutely correct, and its time complexity is also the same as the 1st one's.
In dynamic programming we can also build a tabulation on the same approach. Here is my solution for the recursion:
int rodCut(int price[], int n) {
    if (n <= 0) return 0;
    int ans = price[n - 1];                  // option: no further cut, sell the piece whole
    for (int i = 1; i <= n / 2; ++i) {
        ans = max(ans, rodCut(price, i) + rodCut(price, n - i));
    }
    return ans;
}
And here is the solution using dynamic programming:
int rodCut(int *price, int n) {
    int ans[n + 1];
    ans[0] = 0;                              // a rod of length zero is worth nothing
    for (int i = 1; i <= n; ++i) {
        int max_value = price[i - 1];        // option: no cut at all
        for (int j = 1; j <= i / 2; ++j) {
            max_value = max(max_value, ans[j] + ans[i - j]);
        }
        ans[i] = max_value;
    }
    return ans[n];
}
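For reference, this tabulated version runs in O(n^2) time, since each length i tries about i/2 split points, and it uses O(n) extra space.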
Your algorithm looks almost correct - you need to be a bit careful when n is odd.
However, it's also exponential in time complexity - you make two recursive calls in each call to rodCutv2. The first algorithm uses memoisation (the price array), so avoids computing the same thing multiple times, and so is faster (it's polynomial-time).
Edit: Actually, the first algorithm isn't correct! It never stores values in prices, but I suspect that's just a typo and not intentional.
I'm familiar with other sorting algorithms and the worst I've heard of in polynomial time is insertion sort or bubble sort. Excluding the truly terrible bogosort and those like it, are there any sorting algorithms with a worse polynomial time complexity than n^2?
Here's one, implemented in C#:
public void BadSort<T>(T[] arr) where T : IComparable
{
for (int i = 0; i < arr.Length; i++)
{
var shortest = i;
for (int j = i; j < arr.Length; j++)
{
bool isShortest = true;
for (int k = j + 1; k < arr.Length; k++)
{
if (arr[j].CompareTo(arr[k]) > 0)
{
isShortest = false;
break;
}
}
if(isShortest)
{
shortest = j;
break;
}
}
var tmp = arr[i];
arr[i] = arr[shortest];
arr[shortest] = tmp;
}
}
It's basically a really naive sorting algorithm, coupled with a needlessly-complex method of calculating the index with the minimum value.
The gist is this:
For each index:
    find the element from this point forward which,
    when compared with all other elements after it, ends up being <= all of them;
    swap this "shortest" element with the element at this index.
The innermost loop (with the comparison) executes Θ(n^3) times in the worst case, and every iteration of the outer loop puts one more element into the correct place, getting you just a bit closer to being fully sorted. (Note that the worst case is not a descending-sorted input; there the innermost loop breaks on its first comparison. Something like [2, 3, ..., n, 1], where each candidate survives almost all of its comparisons before the small element at the end disqualifies it, is what drives the cubic behaviour.)
If you work hard enough, you could probably find a sorting algorithm with just about any complexity you want. But, as the commenters pointed out, there's really no reason to seek out an algorithm with a worst case like this. You'll hopefully never run into one in the wild; you really have to try to come up with one this bad.
Here's an example of an elegant algorithm called slowsort, which runs in Ω(n^(log(n)/(2+ɛ))) for any positive ɛ:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.116.9158&rep=rep1&type=pdf (section 5).
Slow Sort
SlowSort is a sorting algorithm of a humorous nature, not a useful one. It is based on the principle of "multiply and surrender", a tongue-in-cheek parody of divide and conquer, and was published in 1986 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis.
The algorithm multiplies a single problem into multiple subproblems. It is interesting because it is provably the least efficient sorting algorithm that can be built asymptotically, under the restriction that, while being slow, the algorithm must still be working towards a result at all times.
The function below sorts the vector in place.
#include <vector>
#include <utility>
using std::vector;

// Sorts a[i..j] (j inclusive) in place.
void SlowSort(vector<int> &a, int i, int j)
{
    if (i >= j)
        return;
    int m = i + (j - i) / 2;       // midpoint of the range
    SlowSort(a, i, m);             // sort the first half
    SlowSort(a, m + 1, j);         // sort the second half
    if (a[j] < a[m])               // put the larger of the two maxima at position j
        std::swap(a[m], a[j]);
    SlowSort(a, i, j - 1);         // "surrender": sort everything but the last element again
}
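To sort a whole vector, the initial call would be SlowSort(a, 0, (int)a.size() - 1), since j is an inclusive index.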