Represent a natural number as a sum of squares using dynamic programming - algorithm

The problem is to find the minimum number of squares required to sum to a number n.
Some examples:
min[ 1] = 1 (1²)
min[ 2] = 2 (1² + 1²)
min[ 4] = 1 (2²)
min[13] = 2 (3² + 2²)
I'm aware of Lagrange's four-square theorem which states that any natural number can be represented as the sum of four squares.
I'm trying to solve this using DP.
This is what I came up with (it's not correct):
min[i] = 1 where i is a square number
min[i] = min(min[i - 1] + 1, 1 + min[i - prev]) where prev is a square number < i
What is the correct DP way to solve this?

I'm not sure if DP is the most efficient way to solve this problem, but you asked for DP.
min[i] = min(min[i - 1] + 1, 1 + min[i - prev]) where prev is a square number < i
This is close; I would write the condition as
min[i] = min(1 + min[i - prev]) over every square number prev <= i
Note that for each i you need to check several possible values of prev, not just one.
Here's a simple implementation in Java:
int[] min = new int[n + 1];
Arrays.fill(min, Integer.MAX_VALUE);
min[0] = 0;
for (int i = 1; i <= n; ++i) {
    for (int j = 1; j * j <= i; ++j) {
        min[i] = Math.min(min[i], min[i - j * j] + 1);
    }
}

Seems to me that you're close...
You're taking the min() of two terms, each of which is min[i - p] + 1, where p is either 1 or some other square < i.
To fix this, just take the min() of min[i - p] + 1 over all p (where p is a square < i).
That would be a correct way. There may be a faster way.
Also, it might aid readability if you give min[] and min() different names. :-)
P.S. the above approach requires that you memoize min[], either explicitly, or as part of your DP framework. Otherwise, the complexity of the algorithm, due to recursion, would be something like O(sqrt(n)!) :-p though the average case might be a lot better.
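The memoized recursive form mentioned above can be sketched in Python (this is an illustration of the idea, not anyone's exact code; the function name `min_sq` is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_sq(i):
    """Fewest squares summing to i -- the recursive min[] with memoization."""
    if i == 0:
        return 0
    best = i  # worst case: i copies of 1^2
    j = 1
    while j * j <= i:
        best = min(best, 1 + min_sq(i - j * j))
        j += 1
    return best
```

Without the `@lru_cache` memoization, the same code recomputes subproblems exponentially often, which is the blow-up described above.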
P.P.S. See Nikita's answer for a nice implementation. To which I would add the following optimizations... (I'm not nitpicking his implementation -- he presented it as a simple one.)
Check whether n is a perfect square, before entering the outer loop: if so, min[n] = 1 and we're done.
Check whether i is a perfect square before entering the inner loop: if so, min[i] = 1, and skip the inner loop.
Break out of the inner loop if min[i] has been set to 2, because it won't get better (if it could be done with one square, we would never have entered the inner loop, thanks to the previous optimization).
I wonder if the termination condition on the inner loop can be changed to reduce the number of iterations, e.g. j*j*2 <= i or even j*j*4 <= i. I think so but I haven't got my head completely around it.
For large i, it would be faster to compute a limit for j before the inner loop, and compare j directly against it in the loop termination condition, rather than squaring j on every inner loop iteration. E.g.
double sqrti = Math.sqrt(i);
for (int j = 1; j <= sqrti; ++j) {
On the other hand, you need j^2 for the recursion step anyway, so as long as you store it, you might as well use it.
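Putting the optimizations above together, here is a Python sketch (the loop structure follows Nikita's implementation; the perfect-square short-circuits and the early break at 2 are the additions):

```python
import math

def min_squares(n):
    """Minimum number of perfect squares summing to n (n >= 1)."""
    def is_square(x):
        r = math.isqrt(x)
        return r * r == x

    # Optimization: if n itself is a perfect square, the answer is 1.
    if is_square(n):
        return 1
    best = [0] * (n + 1)
    for i in range(1, n + 1):
        if is_square(i):
            best[i] = 1          # skip the inner loop entirely
            continue
        best[i] = i              # worst case: all 1^2 terms
        j = 1
        while j * j <= i:
            best[i] = min(best[i], best[i - j * j] + 1)
            if best[i] == 2:     # can't do better, since i is not a square
                break
            j += 1
    return best[n]
```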

For variety, here's another answer:
Define minsq[i, j] as the minimum number of squares from {1^2, 2^2, ..., j^2} that sum up to i. Then the recursion is:
minsq[i, j] = min(minsq[i - j*j, j] + 1, minsq[i, j - 1])
i.e., to compute minsq[i, j] we either use j^2 or we don't. Our answer for n is then:
minsq[n, floor(sqrt(n))]
This answer is perhaps conceptually simpler than the one presented earlier, but code-wise it is more difficult since one needs to be careful with the base cases. The time complexity for both answers is asymptotically the same.
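A sketch of this two-dimensional recurrence in Python, with the base cases handled explicitly (minsq[0][j] = 0, and an "infinite" sentinel for unreachable states):

```python
import math

def min_squares_2d(n):
    """minsq[i][j]: fewest squares from {1^2, ..., j^2} summing to i."""
    m = math.isqrt(n)
    INF = float('inf')
    minsq = [[INF] * (m + 1) for _ in range(n + 1)]
    # Base case: a sum of 0 needs 0 squares, whatever squares are allowed.
    for j in range(m + 1):
        minsq[0][j] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            minsq[i][j] = minsq[i][j - 1]            # don't use j^2
            if j * j <= i:
                minsq[i][j] = min(minsq[i][j],
                                  minsq[i - j * j][j] + 1)  # use j^2
    return minsq[n][m]
```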

I present a generalized, very efficient dynamic programming algorithm in JavaScript that finds the minimum number of positive integers of a given power needed to reach a given target.
For example, to reach 50000 with integers of 4th power the result would be [10,10,10,10,10], and to reach 18571 with integers of 7th power the result would be [3,4]. The algorithm even works with rational powers: to reach 222 with integers of 3/5th power, the result would be [32, 32, 243, 243, 243, 3125].
function getMinimumCubes(tgt, p){
  var maxi = Math.floor(Math.fround(Math.pow(tgt, 1/p))),
      hash = {0: []},
      pow  = 0,
      t    = 0;
  for (var i = 1; i <= maxi; i++){
    pow = Math.fround(Math.pow(i, p));
    for (var j = 0; j <= tgt - pow; j++){
      t = j + pow;
      hash[t] = hash[t] ? hash[t].length <= hash[j].length ? hash[t]
                                                           : hash[j].concat(i)
                        : hash[j].concat(i);
    }
  }
  return hash[tgt];
}
var target = 729,
result = [];
console.time("Done in");
result = getMinimumCubes(target,2);
console.timeEnd("Done in");
console.log("Minimum number of integers to square and add to reach", target, "is", result.length, "as", JSON.stringify(result));
console.time("Done in");
result = getMinimumCubes(target,6);
console.timeEnd("Done in");
console.log("Minimum number of integers to take 6th power and add to reach", target, "is", result.length, "as", JSON.stringify(result));
target = 500;
console.time("Done in");
result = getMinimumCubes(target,3);
console.timeEnd("Done in");
console.log("Minimum number of integers to cube and add to reach", target, "is", result.length, "as", JSON.stringify(result));
target = 2017;
console.time("Done in");
result = getMinimumCubes(target,4);
console.timeEnd("Done in");
console.log("Minimum number of integers to take 4th power and add to reach", target, "is", result.length, "as", JSON.stringify(result));
target = 99;
console.time("Done in");
result = getMinimumCubes(target,2/3);
console.timeEnd("Done in");
console.log("Minimum number of integers to take 2/3th power and add to reach", target, "are", result);

Related

Dynamic Programming - Rod Cutting Bottom Up Algorithm (CLRS) Solution Incorrect?

For the "rod cutting" problem:
Given a rod of length n inches and an array of prices that contains prices of all pieces of size smaller than n. Determine the maximum value obtainable by cutting up the rod and selling the pieces. [link]
Introduction to Algorithms (CLRS) page 366 gives this pseudocode for a bottom-up (dynamic programming) approach:
BOTTOM-UP-CUT-ROD(p, n)
1.  let r[0..n] be a new array
2.  r[0] = 0
3.  for j = 1 to n
4.      q = -infinity
5.      for i = 1 to j
6.          q = max(q, p[i] + r[j - i])
7.      r[j] = q
8.  return r[n]
Now, I'm having trouble understanding the logic behind line 6. Why are they doing max(q, p[i] + r[j - i]) instead of max(q, r[i] + r[j - i])? Since this is a bottom-up approach, we'll compute r[1] first, then r[2], r[3], and so on. This means that while computing r[x] we are guaranteed to have r[x - 1].
r[x] denotes the max value we can get for a rod of length x (after cutting it up to maximize profit), whereas p[x] denotes the price of a single piece of rod of length x. Lines 3 - 8 are computing the value r[j] for j = 1 to n, and lines 5 - 6 are computing the maximum price we can sell a rod of length j for by considering all the possible cuts. So how does it ever make sense to use p[i] instead of r[i] in line 6? If we're trying to find the max price for a rod after we cut it at length = i, shouldn't we add the prices r[i] and r[j - i]?
I've used this logic to write Java code, and it seems to give the correct output for a number of test cases I've tried. Am I missing some cases where my code produces incorrect or inefficient solutions? Please help me out. Thanks!
class Solution {
    private static int cost(int[] prices, int n) {
        if (n == 0) {
            return 0;
        }
        int[] maxPrice = new int[n];
        for (int i = 0; i < n; i++) {
            maxPrice[i] = -1;
        }
        for (int i = 1; i <= n; i++) {
            int q = Integer.MIN_VALUE;
            if (i <= prices.length) {
                q = prices[i - 1];
            }
            for (int j = i - 1; j >= (n / 2); j--) {
                q = Math.max(q, maxPrice[j - 1] + maxPrice[i - j - 1]);
            }
            maxPrice[i - 1] = q;
        }
        return maxPrice[n - 1];
    }

    public static void main(String[] args) {
        int[] prices = {1, 5, 8, 9, 10, 17, 17, 20};
        System.out.println(cost(prices, 8));
    }
}
They should be equivalent.
The intuition behind the CLRS approach is that they are trying to find the single "last cut", assuming that the last piece of rod has length i and thus has value exactly p[i]. In this formulation, the "last piece" of length i is not cut further, but the remainder of length j-i is.
Your approach considers all splits of the rod into two pieces, where each of the two parts can be cut further. This considers a superset of cases compared to the CLRS approach.
Both approaches are correct and have the same asymptotic complexity. However, I would argue that the CLRS solution is more "canonical" because it more closely matches a common form of DP solution where you only consider the last "thing" (in this case, the last piece of uncut rod).
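The equivalence of the two recurrences can be checked directly with a small Python sketch (the prices are CLRS's sample table; the function names are mine):

```python
def cut_rod_clrs(p, n):
    """CLRS recurrence: the last piece, of length i, is sold uncut at p[i]."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]

def cut_rod_split(p, n):
    """Alternative: either sell length j uncut, or split it into two parts,
    each of which may be cut further optimally (r[i] + r[j - i])."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        best = p[j]  # no cut at all
        for i in range(1, j):
            best = max(best, r[i] + r[j - i])
        r[j] = best
    return r[n]

# CLRS price table: p[i] = price of a single piece of length i (p[0] unused)
p = [0, 1, 5, 8, 9, 10, 17, 17, 20]
```

Both functions return 22 for n = 8, and they agree for every length up to 8.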
I believe both approaches are correct.
Before proving that, let's define exactly what each recurrence computes:
p[i] + r[j - i] gives the max value obtainable from a rod of length j when the last piece has size i (and that piece cannot be divided further).
r[i] + r[j - i] gives the max value obtainable from a rod of length j when the first cut is made at length i (and both pieces can be divided further).
Now suppose we have a rod of length X and the optimal solution contains a piece of length k.
Since 0 < k < X, the first approach finds the optimal value via p[k] + r[X - k],
and the second approach finds the same result via r[k] + r[X - k], since we know that r[k] >= p[k].
In your approach, though, you can get the result faster (in about half the time), since you are slicing the rod from both ends,
so running the inner loop over half the length should be good.
But I think there is a bug in your inner for loop:
it should be j >= (i / 2) instead of j >= (n / 2).

Efficiently calculate edit distance between two strings

I have a string S of length 1000 and a query string Q of length 100. I want to calculate the edit distance of query string Q against every substring of S of length 100. One naive way to do this is to compute the edit distance of every substring independently, i.e. edDist(q, s[0:100]), edDist(q, s[1:101]), edDist(q, s[2:102]), ..., edDist(q, s[900:1000]).
from numpy import zeros

def edDist(x, y):
    """Calculate edit distance between sequences x and y using
    matrix dynamic programming. Return distance."""
    D = zeros((len(x)+1, len(y)+1), dtype=int)
    D[0, 1:] = range(1, len(y)+1)
    D[1:, 0] = range(1, len(x)+1)
    for i in range(1, len(x)+1):
        for j in range(1, len(y)+1):
            delt = 1 if x[i-1] != y[j-1] else 0
            D[i, j] = min(D[i-1, j-1]+delt, D[i-1, j]+1, D[i, j-1]+1)
    return D[len(x), len(y)]
Can somebody suggest an alternate approach to calculate edit distance efficiently? My take on this: we know edDist(q, s[900:1000]). Can we somehow use this knowledge to calculate edDist(q, s[899:999]), since the two substrings differ by only one character, and then proceed backward to edDist(q, s[1:100]) using the previously calculated edit distances?
Improving Space Complexity
One way to make your Levenshtein distance algorithm more efficient is to reduce the amount of memory required for your calculation.
Using the entire matrix requires O(n * m) memory, where n is the length of the first string and m the length of the second.
If you think about it, the only parts of the matrix we really care about are the last two columns that we're checking - the previous column and the current column.
Knowing this, we can pretend we have a matrix, but only really ever create these two columns; writing over the data when we need to update them.
All we need here is two arrays of size n + 1:
var column_crawler_0 = new Array(n + 1);
var column_crawler_1 = new Array(n + 1);
Initialize the values of these pseudo columns:
for (let i = 0; i < n + 1; ++i) {
  column_crawler_0[i] = i;
  column_crawler_1[i] = 0;
}
And then go through your normal algorithm, but just make sure that you're updating these arrays with the new values as we go along:
for (let j = 1; j < m + 1; ++j) {
  column_crawler_1[0] = j;
  for (let i = 1; i < n + 1; ++i) {
    // Perform the normal Levenshtein calculation, updating the current column
    let cost = a[i-1] === b[j-1] ? 0 : 1;
    column_crawler_1[i] = Math.min(column_crawler_1[i - 1] + 1,
                                   column_crawler_0[i] + 1,
                                   column_crawler_0[i - 1] + cost);
  }
  // Copy the current column into the previous one before we move on
  column_crawler_1.forEach((e, i) => {
    column_crawler_0[i] = e;
  });
}
return column_crawler_1.pop();
If you want to analyze this approach further, I wrote a small open sourced library using this specific technique, so feel free to check it out if you're curious.
Improving Time Complexity
There's no trivial way to make a Levenshtein distance algorithm run faster than O(n^2). There are a few complicated approaches, one using VP-tree data structures. There are a few good sources here and here if you're curious to read about them, and these approaches can reach an asymptotic speed of O(n lg n).

Find elements in an integer array such that their sum is X and sum of their square is least

Given an array arr of length n, find any elements within arr such that their sum is x and the sum of their squares is least. I'm trying to find the algorithm with the least complexity. So far, I've written a simple recursive algorithm that finds all subsets within the array, with the sum check as the base condition. I've written my code in JavaScript, as below:
var arr = [3, 4, 2, 1];
var arr2 = arr.map(function(n) { return n*n; });
var max_sum = 5;
var most_min = -1;

function _rec(i, _sum, _square) {
  if (_sum >= max_sum) {
    if (most_min == -1 || _square < most_min) {
      most_min = _square;
      console.log("MIN: " + most_min);
    }
    console.log("END");
    return;
  }
  if (i >= arr.length)
    return;
  console.log(i);
  var n = arr[i];
  // square of the above number
  var n2 = arr2[i];
  _sum = _sum + n;
  _square = _square + n2;
  _rec(i+1, _sum, _square);
  _sum = _sum - n;
  _square = _square - n2;
  _rec(i+1, _sum, _square);
}
_rec(0, 0, 0);
Visit http://jsfiddle.net/1dxgq6d5/6/ to see the output of the above algorithm. The algorithm is quite simple: it finds all subsets by evaluating two choices at every recursive step: 1) choose the current number, or 2) reject it, and then carry on with the recursion.
I'm trying to find an algorithm which is more efficient than the simple recursion above. Any suggestion or help would be appreciated.
One more hypothesis:
I'm thinking that if I sort the array and find the subset of elements with the least variance (smallest separations between each other) such that their sum is x, that would fulfill my requirements. I'm not sure if this will be very helpful, but I'm currently working on it in hopes of improving my blind recursive approach.
First off, you're finding subsets, not permutations, because you don't care about the order of the elements in each set.
Secondly, even without trying to minimize the sum of the squares, just finding whether there's a subset that sums to a target number is NP-complete -- this is the subset sum problem. It's currently believed by most computer scientists that P != NP, so there's no efficient (polynomial-time) algorithm for this.
Subset sum is only weakly NP-hard, so it's possible to get an efficient solution with dynamic programming (assuming that the input array consists of integers having a relatively small sum). Switch from trying all possibilities recursively and depth-first to trying all possibilities iteratively and breadth-first by storing the possibilities for the first k elements in an array. Before considering element k + 1, filter this array by discarding all but the lowest sum of squares for each total that can be made.
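A sketch of that breadth-first filtering idea in Python, assuming the array holds non-negative integers (`best` maps each reachable total to the smallest sum of squares found for it; totals above the target are discarded):

```python
def min_square_sum_subset(arr, x):
    """For each achievable subset total t <= x, keep the smallest possible
    sum of squares; return the entry for total x, or None if unreachable."""
    best = {0: 0}  # total -> minimal sum of squares over subsets with that total
    for a in arr:
        updated = dict(best)  # copy, so each element is used at most once
        for t, sq in best.items():
            nt, nsq = t + a, sq + a * a
            if nt <= x and (nt not in updated or nsq < updated[nt]):
                updated[nt] = nsq
        best = updated
    return best.get(x)
```

For arr = [3, 4, 2, 1] and x = 5, this returns 13 (3² + 2², beating 4² + 1² = 17).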
I solved the problem in a more efficient way than the simple recursion, using a dynamic programming approach. Below is the Python code I wrote:
_sum = 7
_set = [1, 1, 2, 3, 4, 6]
current_idx = 0
sum_mapping = [[-1 for i in range(len(_set) + 1)] for i in range(_sum)]
max_sum = _set[current_idx]
for i in range(0, _sum):
    current_sum = i + 1
    for j in [i for i in range(0, current_idx+1)][::-1] + \
             [i for i in range(current_idx + 1, len(_set))]:
        required_value = current_sum - _set[j]
        if required_value < 0:
            break
        if required_value == 0 or sum_mapping[required_value - 1][j] != -1:
            _j = j + 1
            sum_mapping[i][_j:] = [j]*(len(_set) - j)
            break
    if max_sum == current_sum:
        current_idx = current_idx + 1
        max_sum = max_sum + _set[current_idx]
_cur = sum_mapping[_sum-1][len(_set)]
if _cur != -1:
    _l_sum = _sum
    while _l_sum != 0:
        print(_set[_cur])
        _l_sum = _l_sum - _set[_cur]
        _cur = sum_mapping[_l_sum - 1][len(_set)]
Here is ideone output: http://ideone.com/OgGN2f

maximum sum of a subset of size K with sum less than M

Given:
array of integers
value K,M
Question:
Find the maximum sum which we can obtain from all K element subsets of given array such that sum is less than value M?
Is there a non-dynamic-programming solution available for this problem, or can only a DP formulation like dp[i][j][k] solve this type of problem?
Can you please explain the algorithm?
Many people have commented correctly that the answer below from years ago, which uses dynamic programming, incorrectly encodes solutions allowing an element of the array to appear in a "subset" multiple times. Luckily there is still hope for a DP based approach.
Let dp[i][j][k] = true if there exists a size k subset of the first i elements of the input array summing up to j
Our base case is dp[0][0][0] = true
Now, either the size k subset of the first i elements uses a[i + 1], or it does not, giving the recurrence
dp[i + 1][j][k] = dp[i][j - a[i + 1]][k - 1] OR dp[i][j][k]
Put everything together:
given A[1...N]
initialize dp[0...N][0...M][0...K] to false
dp[0][0][0] = true
for i = 0 to N - 1:
    for j = 0 to M:
        for k = 0 to K:
            if dp[i][j][k]:
                dp[i + 1][j][k] = true
            if j >= A[i + 1] and k >= 1 and dp[i][j - A[i + 1]][k - 1]:
                dp[i + 1][j][k] = true
max_sum = 0
for j = 0 to M:
    if dp[N][j][K]:
        max_sum = j
return max_sum
giving O(NMK) time and space complexity.
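A direct Python transcription of the pseudocode above, rolled down to two dimensions over j and k (so the indices must be iterated downward to keep each element used at most once); it assumes non-negative integers and, like the pseudocode, allows sums up to and including M:

```python
def max_k_subset_sum_at_most(A, K, M):
    """Largest sum <= M over all K-element subsets of A (non-negative ints).
    Returns None if no K-element subset has sum <= M."""
    # dp[j][k] = True if some k-subset of the elements seen so far sums to j
    dp = [[False] * (K + 1) for _ in range(M + 1)]
    dp[0][0] = True
    for a in A:
        # iterate j downward so this element is not reused within one pass
        for j in range(M, a - 1, -1):
            for k in range(K, 0, -1):
                if dp[j - a][k - 1]:
                    dp[j][k] = True
    for j in range(M, -1, -1):
        if dp[j][K]:
            return j
    return None
```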
Stepping back, we've made one assumption here implicitly which is that A[1...i] are all non-negative. With negative numbers, initializing the second dimension 0...M is not correct. Consider a size K subset made up of a size K - 1 subset with sum exceeding M and one other sufficiently negative element of A[] such that overall sum no longer exceeds M. Similarly, our size K - 1 subset could sum to some extremely negative number and then with a sufficiently positive element of A[] sum to M. In order for our algorithm to still work in both cases we would need to increase the second dimension from M to the difference between the sum of all positive elements in A[] and the sum of all negative elements (the sum of the absolute values of all elements in A[]).
As for whether a non-dynamic-programming solution exists, certainly there is the naive exponential-time brute force solution, and variations that optimize the constant factor in the exponent.
Beyond that? Well your problem is closely related to subset sum and the literature for the big name NP complete problems is rather extensive. And as a general principle algorithms can come in all shapes and sizes -- it's not impossible for me to imagine doing say, randomization, approximation, (just choose the error parameter to be sufficiently small!) plain old reductions to other NP complete problems (convert your problem into a giant boolean circuit and run a SAT solver). Yes these are different algorithms. Are they faster than a dynamic programming solution? Some of them, probably. Are they as simple to understand or implement, without say training beyond standard introduction to algorithms material? Probably not.
This is a variant of the knapsack / subset-sum problem, where, in terms of time (at the cost of exponentially growing space requirements as the input size grows), dynamic programming is the most efficient method that CORRECTLY solves this problem. See Is this variant of the subset sum problem easier to solve? for a similar question to yours.
However, since your problem is not exactly the same, I'll provide an explanation anyways. Let dp[i][j] = true, if there is a subset of length i that sums to j and false if there isn't. The idea is that dp[][] will encode the sums of all possible subsets for every possible length. We can then simply find the largest j <= M such that dp[K][j] is true. Our base case dp[0][0] = true because we can always make a subset that sums to 0 by picking one of size 0.
The recurrence is also fairly straightforward. Suppose we've calculated the values of dp[][] using the first n values of the array. To find all possible subsets of the first n+1 values of the array, we can simply take the (n+1)-th value and add it to all the subsets we've seen before. More concretely, we have the following code:
initialize dp[0..K][0..M] to false
dp[0][0] = true
for i = 0 to N:
    for s = 0 to K - 1:
        for j = M to 0:
            if dp[s][j] && A[i] + j < M:
                dp[s + 1][j + A[i]] = true
for j = M to 0:
    if dp[K][j]:
        print j
        break
We're looking for a subset of K elements for which the sum of the elements is a maximum, but less than M.
We can place bounds [X, Y] on the largest element in the subset as follows.
First we sort the N integers, values[0] ... values[N-1], so that values[0] is the smallest.
The lower bound X is the largest integer for which
values[X] + values[X-1] + .... + values[X-(K-1)] < M.
(If X is N-1, then we've found the answer.)
The upper bound Y is the largest integer less than N for which
values[0] + values[1] + ... + values[K-2] + values[Y] < M.
With this observation, we can now bound the second-highest term for each value of the highest term Z, where
X <= Z <= Y.
We can use exactly the same method, since the form of the problem is exactly the same. The reduced problem is finding a subset of K-1 elements, taken from values[0] ... values[Z-1], for which the sum of the elements is a maximum, but less than M - values[Z].
Once we've bound that value in the same way, we can put bounds on the third-largest value for each pair of the two highest values. And so on.
This gives us a tree structure to search, hopefully with much fewer combinations to search than N choose K.
Felix is correct that this is a special case of the knapsack problem. His dynamic programming algorithm takes O(K*M) space and O(K*K*M) time. I believe his use of the variable N really should be K.
There are two books devoted to the knapsack problem. The more recent one, by Kellerer, Pferschy and Pisinger [2004, Springer-Verlag, ISBN 3-540-40286-1], gives an improved dynamic programming algorithm on page 76, Figure 4.2, that takes O(K + M) space and O(KM) time, which is a huge reduction compared to the dynamic programming algorithm given by Felix. Note that there is a typo on the book's last line of the algorithm, where it should be c-bar := c-bar - w_(r(c-bar)).
My C# implementation is below. I cannot say that I have extensively tested it, and I welcome feedback on this. I used BitArray to implement the concept of the sets given in the algorithm in the book. In my code, c is the capacity (which in the original post was called M), and I used w instead of A as the array that holds the weights.
An example of its use is:
int[] optimal_indexes_for_ssp = new SubsetSumProblem(12, new List<int> { 1, 3, 5, 6 }).SolveSubsetSumProblem();
where the array optimal_indexes_for_ssp contains [0,2,3] corresponding to the elements 1, 5, 6.
using System;
using System.Collections.Generic;
using System.Collections;
using System.Linq;

public class SubsetSumProblem
{
    private int[] w;
    private int c;

    public SubsetSumProblem(int c, IEnumerable<int> w)
    {
        if (c < 0) throw new ArgumentOutOfRangeException("Capacity for subset sum problem must be at least 0, but input was: " + c.ToString());
        int n = w.Count();
        this.w = new int[n];
        this.c = c;
        IEnumerator<int> pwi = w.GetEnumerator();
        pwi.MoveNext();
        for (int i = 0; i < n; i++, pwi.MoveNext())
            this.w[i] = pwi.Current;
    }

    public int[] SolveSubsetSumProblem()
    {
        int n = w.Length;
        int[] r = new int[c + 1];
        BitArray R = new BitArray(c + 1);
        R[0] = true;
        BitArray Rp = new BitArray(c + 1);
        for (int d = 0; d <= c; d++) r[d] = 0;
        for (int j = 0; j < n; j++)
        {
            Rp.SetAll(false);
            for (int k = 0; k <= c; k++)
                if (R[k] && k + w[j] <= c) Rp[k + w[j]] = true;
            for (int k = w[j]; k <= c; k++) // since Rp[k] = false for k < w[j]
                if (Rp[k])
                {
                    if (!R[k]) r[k] = j;
                    R[k] = true;
                }
        }
        int capacity_used = 0;
        for (int d = c; d >= 0; d--)
            if (R[d])
            {
                capacity_used = d;
                break;
            }
        List<int> result = new List<int>();
        while (capacity_used > 0)
        {
            result.Add(r[capacity_used]);
            capacity_used -= w[r[capacity_used]];
        }
        if (capacity_used < 0) throw new Exception("Subset sum program has an internal logic error");
        return result.ToArray();
    }
}

Coin changing algorithm

Suppose I have a set of coins having denominations a1, a2, ... ak.
One of them is known to be equal to 1.
I want to make change for all integers 1 to n using the minimum number of coins.
Any ideas for an algorithm?
eg. 1, 3, 4 coin denominations
n = 11
optimal selection is 3, 0, 2 in the order of coin denominations.
n = 12
optimal selection is 2, 2, 1.
Note: not homework just a modification of this problem
This is a classic dynamic programming problem (note first that the greedy algorithm does not always work here!).
Assume the coins are ordered so that a_1 > a_2 > ... > a_k = 1. We define a new problem. We say that the (i, j) problem is to find the minimum number of coins making change for j using coins a_i > a_(i + 1) > ... > a_k. The problem we wish to solve is (1, j) for any j with 1 <= j <= n. Say that C(i, j) is the answer to the (i, j) problem.
Now, consider an instance (i, j). We have to decide whether or not we are using one of the a_i coins. If we are not, we are just solving a (i + 1, j) problem and the answer is C(i + 1, j). If we are, we complete the solution by making change for j - a_i. To do this using as few coins as possible, we want to solve the (i, j - a_i) problem. We arrange things so that these two problems are already solved for us and then:
C(i, j) = C(i + 1, j) if a_i > j
= min(C(i + 1, j), 1 + C(i, j - a_i)) if a_i <= j
Now figure out what the initial cases are and how to translate this to the language of your choice and you should be good to go.
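Here is one way the C(i, j) recurrence above could be memoized in Python (a sketch; the helper name `min_coins` is mine):

```python
from functools import lru_cache

def min_coins(coins, n):
    """Minimum number of coins summing to n, following the C(i, j) recurrence.
    Assumes the smallest denomination is 1, so a solution always exists."""
    a = sorted(coins, reverse=True)   # a[0] > a[1] > ... > a[-1] = 1
    k = len(a)

    @lru_cache(maxsize=None)
    def C(i, j):
        if j == 0:
            return 0
        if i == k - 1:
            return j                  # only the 1-coin remains
        best = C(i + 1, j)            # don't use coin a[i]
        if a[i] <= j:
            best = min(best, 1 + C(i, j - a[i]))  # use one a[i] coin
        return best

    return C(0, n)
```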
If you want to try your hand at another interesting problem that requires dynamic programming, look at Project Euler Problem 67.
Here's a sample implementation of a dynamic programming algorithm in Python. It is simpler than the algorithm that Jason describes, because it only calculates 1 row of the 2D table he describes.
Please note that using this code to cheat on homework will make Zombie Dijkstra cry.
import sys

def get_best_coins(coins, target):
    costs = [0]
    coins_used = [None]
    for i in range(1, target + 1):
        if i % 1000 == 0:
            print('...', end='')
        bestCost = sys.maxsize
        bestCoin = -1
        for coin in coins:
            if coin <= i:
                cost = 1 + costs[i - coin]
                if cost < bestCost:
                    bestCost = cost
                    bestCoin = coin
        costs.append(bestCost)
        coins_used.append(bestCoin)
    ret = []
    while target > 0:
        ret.append(coins_used[target])
        target -= coins_used[target]
    return ret

coins = [1, 10, 25]
target = 100033
print(get_best_coins(coins, target))
Solution in C#:
public static long findPermutations(int n, List<long> c)
{
    // The 2-dimensional buffer will contain answers to this question:
    // "how many permutations are there for an amount of `i` cents, and `j`
    // remaining coins?" e.g. `buffer[10][2]` will tell us how many permutations
    // there are when giving back 10 cents using only the first two coin types
    // [ 1, 2 ].
    long[][] buffer = new long[n + 1][];
    for (var i = 0; i <= n; ++i)
        buffer[i] = new long[c.Count + 1];

    // For all the cases where we need to give back 0 cents, there's exactly
    // 1 permutation: the empty set. Note that buffer[0][0] won't ever be
    // needed.
    for (var j = 1; j <= c.Count; ++j)
        buffer[0][j] = 1;

    // We process each case: 1 cent, 2 cents, etc. up to `n` cents, included.
    for (int i = 1; i <= n; ++i)
    {
        // No more coins? No permutation is possible to attain `i` cents.
        buffer[i][0] = 0;

        // Now we consider the cases where we have J coin types available.
        for (int j = 1; j <= c.Count; ++j)
        {
            // First, we take into account all the known permutations possible
            // _without_ using the J-th coin (actually computed at the previous
            // loop step).
            var value = buffer[i][j - 1];

            // Then, we add all the permutations possible by consuming the J-th
            // coin itself, if we can.
            if (c[j - 1] <= i)
                value += buffer[i - c[j - 1]][j];

            // We now know the answer for this specific case.
            buffer[i][j] = value;
        }
    }

    // Return the bottom-right answer, the one we were looking for in the
    // first place.
    return buffer[n][c.Count];
}
Following is the bottom-up approach of dynamic programming, in C#:
public int CoinChange(int[] coins, int amount)
{
    int[] dp = new int[amount + 1];
    Array.Fill(dp, amount + 1);
    dp[0] = 0;
    for (int i = 1; i <= amount; i++)
    {
        for (int j = 0; j < coins.Length; j++)
        {
            if (coins[j] <= i) // if the amount is greater than or equal to the current coin
            {
                // refer to the already calculated subproblem dp[i - coins[j]]
                dp[i] = Math.Min(dp[i], dp[i - coins[j]] + 1);
            }
        }
    }
    if (dp[amount] > amount)
        return -1;
    return dp[amount];
}
