counting boolean parenthesizations implementation - algorithm

Given a boolean expression containing the symbols {true, false, and, or, xor}, count the number of ways to parenthesize the expression such that it evaluates to true.
For example, there are 2 ways to parenthesize 'true and false xor true' so that it evaluates to true: (true and false) xor true, and true and (false xor true).
Here is my algorithm.
We can calculate the total number of parenthesizations of a string.
Definitions:
N - the total number of parenthesizations
True - the number of parenthesizations that evaluate to true
False - the number of parenthesizations that evaluate to false
True + False = N
Left_True - the number of parenthesizations of the left part that evaluate to true
similarly for Left_False, Right_True, Right_False
We consider each operator in the string as the operator applied last; it splits the string into a left part and a right part, and we handle it as follows:
if it is "and", the number of parenthesization leads to true is
Left_True * Right_True;
if it is "xor", the number of parenthesization leads to true
Left_True * Right_False + Left_False * Right_True
if it is 'or', the number is
N - Left_False * Right_False
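To make these rules concrete, here is a minimal Python sketch of the combination step (the function name and argument order are my own, not part of the problem):

def combine(op, left_true, left_false, right_true, right_false):
    # number of parenthesizations that evaluate to true when `op` is the
    # operator applied last, given the true/false counts of both sides
    total = (left_true + left_false) * (right_true + right_false)  # N for this split
    if op == 'and':
        return left_true * right_true
    if op == 'xor':
        return left_true * right_false + left_false * right_true
    if op == 'or':
        return total - left_false * right_false

# left part 'true and false' has 0 true / 1 false parenthesizations,
# right part 'true' has 1 / 0, and the last operator is 'xor':
print(combine('xor', 0, 1, 1, 0))  # 1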
Here is my pseudocode:
n = number of operands (literals) in the string
int[n][n] M; // M[i][j] stores the number of ways the substring from i to j evaluates to true
for l = 2 to n
    for i = 1 to n-l+1
        j = i+l-1
        // here we consider substrings of length l, starting at i and ending at j
        for k = i to j-1
            // (i, k) is the left part
            // (k+1, j) is the right part
            switch (operator between position k and k+1) {
                case 'and': // calculate, update array M
                case 'or':  // same
                case 'xor': // same
            }
We save the solutions to all subproblems and reuse them when we meet them again, thus saving time.
Can we have a better solution?

Your pseudocode gives an algorithm in O(2^n). I think you can have something in O(n^3).
First of all, let's see the complexity of your algorithm. Let's say that the number of operations needed to check the parenthesization is T(n). If I understood well, your algorithm consists of :
Cut the expression in two (n-1 possibilities)
Check if the left and the right part have appropriate parenthesization.
So T(n) = checking if you cut at the first place + checking if you cut at the second place + ... + checking if you cut at the last place
T(n) = T(1)+T(n-1) + T(2)+T(n-2) + ... + T(n-1)+T(1) + n
Since T(n) >= 2*T(n-1), a bit of computation will tell you that T(n) grows at least like 2^n, i.e. the running time is exponential.
My idea is that all you need to check for parenthesization are the "subwords". The "subword_i_j" consists of all the literals between position i and position j. Since i <= j, there are N*(N+1)/2 subwords. Let's say that L[i][j] is the number of parenthesizations of subword_i_j that evaluate to true. For the sake of convenience, I'll omit the other table M[i][j] that stores the number of parenthesizations that evaluate to false, but don't forget that it's there!
You want to compute all the possible subwords starting from the smallest ones (size 1) to the biggest one (size N).
You begin by computing L[i][i] for all i. There are N such values. It's easy: if the i-th literal is true then L[i][i] = 1, else L[i][i] = 0. Now you know the number of parenthesizations for all subwords of size 1.
Let's say that you know the parenthesizations for all subwords of size at most S.
Then compute L[i][i+S] for i between 1 and N-S. These are the subwords of size S+1. Computing one of them consists of splitting the subword in all possible ways (S ways) and combining, for each split, the left part (a subword of size S1 <= S), the right part (a subword of size S2 <= S) and the operator in between (or, xor, and). There are N-S such values, each taking S splits to compute, i.e. about S*(N-S) operations per size.
Finally, you'll end up with L[1][N], which tells you how many parenthesizations evaluate to true (a short sketch appears at the end of this answer).
The cost is:
checking subwords of size 1 + checking subwords of size 2 + ... + checking subwords of size N
= N + 1*(N-1) + 2*(N-2) + ... + (N-1)*1
= O(N^3)
The reason the complexity is better is that, in your pseudocode, you check the same subwords multiple times without storing the result in memory.
Edit: Argh, I overlooked the sentence "we save all the solutions to subproblems and read them when we meet them again, thus saving time". Well, it seems that if you do, you also have an algorithm in worst-case O(N^3). I don't think you can do much better than that...
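Here is a short bottom-up Python sketch of the subword DP just described (the names count_true_parenthesizations, L and M are mine, and indices are 0-based rather than the 1-based ones above):

def count_true_parenthesizations(literals, operators):
    n = len(literals)
    L = [[0] * n for _ in range(n)]   # L[i][j]: ways literals[i..j] evaluates to true
    M = [[0] * n for _ in range(n)]   # M[i][j]: ways literals[i..j] evaluates to false
    for i, lit in enumerate(literals):
        L[i][i] = 1 if lit else 0
        M[i][i] = 0 if lit else 1
    for size in range(2, n + 1):                  # subword length, smallest first
        for i in range(n - size + 1):
            j = i + size - 1
            for k in range(i, j):                 # operators[k] joins [i..k] and [k+1..j]
                lt, lf = L[i][k], M[i][k]
                rt, rf = L[k + 1][j], M[k + 1][j]
                total = (lt + lf) * (rt + rf)
                if operators[k] == 'and':
                    t = lt * rt
                elif operators[k] == 'or':
                    t = total - lf * rf
                else:                             # 'xor'
                    t = lt * rf + lf * rt
                L[i][j] += t
                M[i][j] += total - t
    return L[0][n - 1]

print(count_true_parenthesizations([True, False, True], ['and', 'xor']))  # 2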

This problem can be solved by dynamic programming and is similar to the matrix chain multiplication problem. The detailed answer follows:
1. Let the expression consist of operands a_i and operators b_j (1 <= i <= n, 1 <= j <= n-1, where n is the number of operands); substitute 1 for true and 0 for false.
2. Let DPone[i][j] be the number of ways to parenthesize {a_i b_i a_i+1 ... b_j-1 a_j} such that the result is 1, and let DPzero[i][j] be the number of ways to parenthesize {a_i b_i a_i+1 ... b_j-1 a_j} such that the result is 0.
3. Build a function oper(i,j,k) whose return value is the number of ways the result is 1 when b_k is the last operator applied in {a_i b_i a_i+1 ... b_j-1 a_j}; the combination rule depends on b_k. For example, if b_k is 'and', the return value is DPone[i][k]*DPone[k+1][j].
4. Now the DP equation is as follows:
DPone[i][j] = sum( oper(i,j,k) ) for i <= k <= j-1
so we just need to determine DPone[1][n]. The complexity is O(n^3).
Notes:
1. We should determine DPzero[i][j] after determining DPone[i][j], but that's simple: DPzero[i][j] = total_Parenthesize_Ways[i][j] - DPone[i][j].
2. The order in which to fill DPone is [1][1],[2][2],...,[n][n], [1][2],[2][3],...,[n-1][n], [1][3],[2][4], ..., [2][n], [1][n]; of course, [1][1]~[n][n] must be initialized by ourselves.

Here is the code for counting parenthesizations for an array of booleans and operators.
Time complexity O(N^3) and space complexity O(N^2)
public static int CountingBooleanParenthesizations(bool[] boolValues, string[] operators)
{
    int[,] trueTable = new int[boolValues.Length, boolValues.Length];
    int[,] falseTable = new int[boolValues.Length, boolValues.Length];
    for (int j = 0; j < boolValues.Length; j++)
    {
        for (int i = j; i >= 0; i--)
        {
            if (i == j)
            {
                trueTable[i, j] = boolValues[i] ? 1 : 0;
                falseTable[i, j] = boolValues[i] ? 0 : 1;
            }
            else
            {
                int trueSum = 0;
                int falseSum = 0;
                for (int k = i; k < j; k++)
                {
                    int total1 = trueTable[i, k] + falseTable[i, k];
                    int total2 = trueTable[k + 1, j] + falseTable[k + 1, j];
                    switch (operators[k])
                    {
                        case "or":
                        {
                            // false only when both sides are false
                            int or = falseTable[i, k] * falseTable[k + 1, j];
                            falseSum += or;
                            or = total1 * total2 - or;
                            trueSum += or;
                        }
                        break;
                        case "and":
                        {
                            // true only when both sides are true
                            int and = trueTable[i, k] * trueTable[k + 1, j];
                            trueSum += and;
                            and = total1 * total2 - and;
                            falseSum += and;
                        }
                        break;
                        case "xor":
                        {
                            // true when exactly one side is true
                            int xor = trueTable[i, k] * falseTable[k + 1, j] + falseTable[i, k] * trueTable[k + 1, j];
                            trueSum += xor;
                            xor = total1 * total2 - xor;
                            falseSum += xor;
                        }
                        break;
                    }
                }
                trueTable[i, j] = trueSum;
                falseTable[i, j] = falseSum;
            }
        }
    }
    return trueTable[0, boolValues.Length - 1];
}

Related

Length of Longest Subarray with all same elements

I have this problem:
You are given an array of integers A and an integer k.
You can decrement elements of A up to k times, with the goal of producing a consecutive subarray whose elements are all equal. Return the length of the longest possible consecutive subarray that you can produce in this way.
For example, if A is [1,7,3,4,6,5] and k is 6, then you can produce [1,7,3,4-1,6-1-1-1,5-1-1] = [1,7,3,3,3,3], so you will return 4.
What is the optimal solution?
The subarray must be made equal to its lowest member since the only allowed operation is reduction (and reducing the lowest member would add unnecessary cost). Given:
a1, a2, a3...an
the cost to reduce is:
sum(a1..an) - n * min(a1..an)
For example,
3, 4, 6, 5
sum = 18
min = 3
cost = 18 - 4 * 3 = 6
One way to reduce the complexity from O(n^2) to a log factor is: for each element as the rightmost (or leftmost) element of the candidate best subarray, binary search the longest length within cost. To do that, we only need the range sum, which we can get from a prefix sum in O(1), the length (which we are searching on already), and a range-minimum query, which is well-studied (a sketch of this approach follows the demonstration below).
In response to comments below this post, here is a demonstration that the sequence of costs as we extend a subarray from each element as rightmost increases monotonically and can therefore be queried with binary search.
JavaScript code:
function cost(A, i, j){
  const n = j - i + 1;
  let sum = 0;
  let min = Infinity;
  for (let k=i; k<=j; k++){
    sum += A[k];
    min = Math.min(min, A[k]);
  }
  return sum - n * min;
}

function f(A){
  for (let j=0; j<A.length; j++){
    const rightmost = A[j];
    const sequence = [];
    for (let i=j; i>=0; i--)
      sequence.push(cost(A, i, j));
    console.log(rightmost + ': ' + sequence);
  }
}

var A = [1,7,3,1,4,6,5,100,1,4,6,5,3];
f(A);
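And here is a Python sketch of the binary-search approach itself (my own code, not from either post above; it assumes a prefix-sum array for range sums and a sparse table for the range-minimum queries):

def longest_equalizable(A, k):
    n = len(A)
    # prefix sums for O(1) range-sum queries
    pre = [0] * (n + 1)
    for i, v in enumerate(A):
        pre[i + 1] = pre[i] + v

    # sparse table for O(1) range-minimum queries
    sp = [A[:]]
    j = 1
    while (1 << j) <= n:
        prev = sp[-1]
        sp.append([min(prev[i], prev[i + (1 << (j - 1))])
                   for i in range(n - (1 << j) + 1)])
        j += 1

    def range_min(i, j):   # min of A[i..j], inclusive
        t = (j - i + 1).bit_length() - 1
        return min(sp[t][i], sp[t][j - (1 << t) + 1])

    def cost(i, j):        # cost to make A[i..j] all equal to its minimum
        return pre[j + 1] - pre[i] - (j - i + 1) * range_min(i, j)

    best = 1 if n else 0
    for j in range(n):
        # cost(i, j) is non-decreasing as i moves left, so binary search
        # for the smallest feasible left endpoint i
        lo, hi = 0, j
        while lo < hi:
            mid = (lo + hi) // 2
            if cost(mid, j) <= k:
                hi = mid
            else:
                lo = mid + 1
        if cost(lo, j) <= k:
            best = max(best, j - lo + 1)
    return best

print(longest_equalizable([1, 7, 3, 4, 6, 5], 6))  # 4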
def cost(a, i, j):
    n = j - i
    s = 0
    m = a[i]
    for k in range(i, j):
        s += a[k]
        m = min(m, a[k])
    return s - n * m

def solve(n, k, a):
    m = 1
    for i in range(n):
        for j in range(i, n+1):
            if cost(a, i, j) <= k:
                x = j - i
                if x > m:
                    m = x
    return m
This is my python3 solution as per your specifications.

dynamic programming reduction of brute force

An emoticon consists of an arbitrary positive number of underscores between two semicolons. Hence, the shortest possible emoticon is ;_;. The strings ;__; and ;_____________; are also valid emoticons.
Given a string containing only ';' and '_', the problem is to divide the string into one or more emoticons and count how many divisions are possible. Each emoticon must be a subsequence of the message, and each character of the message must belong to exactly one emoticon. Note that the subsequences are not required to be contiguous.
The approach I thought of is to write a recursive method as follows:
countDivision(string s){
    // base cases
    if(s.empty()) return 1;
    if(s.length() <= 3){
        if(s.length() != 3) return 0;
        return s[0]==';' && s[1]=='_' && s[2]==';';
    }
    result = 0;
    // subproblems:
    // generate every valid emoticon that is a subsequence of s; for each one,
    // remove it from s, call the remainder w, and accumulate
    result += countDivision(w);
    return result;
}
The solution above will easily time out when n is large, such as 100. What kind of approach should I use to convert this brute-force solution to a dynamic programming solution?
A few examples:
1. ";_;;_____;" - the answer is 2
2. ";;;___;;;" - the answer is 36
Example 1.
";_;;_____;" Returns: 2
There are two ways to divide this string into two emoticons.
One looks as follows: ;_;|;_____; and the other looks like
this (remember we can pick a subsequence, it need not be contiguous): ;_ ;|; _____;
I'll describe an O(n^4)-time and -space dynamic programming solution (that can easily be improved to use just O(n^3) space) that should work for up to n=100 or so.
Call a subsequence "fresh" if consists of a single ;.
Call a subsequence "finished" if it corresponds to an emoticon.
Call a subsequence "partial" if it has nonzero length and is a proper prefix of an emoticon. (So for example, ;, ;_, and ;___ are all partial subsequences, while the empty string, _, ;; and ;___;; are not.)
Finally, call a subsequence "admissible" if it is fresh, finished or partial.
Let f(i, j, k, m) be the number of ways of partitioning the first i characters of the string into exactly j+k+m admissible subsequences, of which exactly j are fresh, k are partial and m are finished. Notice that any prefix of a valid partition into emoticons determines i, j, k and m uniquely -- this means that no prefix of a valid partition will be counted by more than one tuple (i, j, k, m), so if we can guarantee that, for each tuple (i, j, k, m), the partition prefixes within that tuple are all counted once and only once, then we can add together the counts for tuples to get a valid total. Specifically, the answer to the question will then be the sum over all 1 <= j <= n of f(n, 0, j, 0).
If s[i] = "_":
f(i, j, k, m) =
(j+1) * f(i-1, j+1, k, m-1) // Convert any of the j+1 fresh subsequences to partial
+ m * f(i-1, j, k, m) // Add _ to any of the m partial subsequences
Else if s[i] = ";":
f(i, j, k, m) =
f(i-1, j-1, k, m) // Start a fresh subsequence
+ (m+1) * f(i-1, j, k-1, m+1) // Finish any of the m+1 partial subsequences
We also need the base cases
f(0, 0, 0, 0) = 1
f(0, _, _, _) = 0
f(i, j, k, m) = 0 if any of i, j, k or m are negative
My own C++ implementation gives the correct answer of 36 for ;;;___;;; in a few milliseconds, and e.g. for ;;;___;;;_;_; it gives an answer of 540 (also in a few milliseconds). For a string consisting of 66 ;s followed by 66 _s followed by 66 ;s, it takes just under 2s and reports an answer of 0 (probably due to overflow of the long long).
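Here is a short memoized Python transcription of the f(i, j, k, m) recurrence above (a sketch only, with 0-based string indexing; the helper names are mine):

from functools import lru_cache
import sys

def count_partitions(s):
    # f(i, j, k, m): ways to partition the first i characters into
    # j fresh, k finished and m partial admissible subsequences
    sys.setrecursionlimit(10000)
    n = len(s)

    @lru_cache(maxsize=None)
    def f(i, j, k, m):
        if j < 0 or k < 0 or m < 0:
            return 0
        if i == 0:
            return 1 if (j, k, m) == (0, 0, 0) else 0
        if s[i - 1] == '_':
            # the '_' either turns a fresh ';' into a partial ';_',
            # or extends one of the m partial subsequences
            return (j + 1) * f(i - 1, j + 1, k, m - 1) + m * f(i - 1, j, k, m)
        else:  # ';'
            # the ';' either starts a fresh subsequence,
            # or closes one of the partial subsequences into an emoticon
            return f(i - 1, j - 1, k, m) + (m + 1) * f(i - 1, j, k - 1, m + 1)

    return sum(f(n, 0, e, 0) for e in range(1, n + 1))

print(count_partitions(';_;;_____;'))  # 2
print(count_partitions(';;;___;;;'))   # 36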
Here's a fairly straightforward memoized recursion that returns an answer immediately for a string of 66 ;s followed by 66 _s followed by 66 ;s. The function has three parameters: i = index in the string, j = number of accumulating emoticons with only a left semi-colon, and k = number of accumulating emoticons with a left semi-colon and one or more underscores.
An array is also constructed for how many underscores and semi-colons are available to the right of each index, to help decide on the next possibilities.
Complexity is O(n^3) and the problem constrains the search space, where j is at most n/2 and k at most n/4.
Commented JavaScript code:
var s = ';_;;__;_;;';

// record the number of semi-colons and
// underscores to the right of each index
var cs = new Array(s.length);
cs.push(0);
var us = new Array(s.length);
us.push(0);

for (var i=s.length-1; i>=0; i--){
  if (s[i] == ';'){
    cs[i] = cs[i+1] + 1;
    us[i] = us[i+1];
  } else {
    us[i] = us[i+1] + 1;
    cs[i] = cs[i+1];
  }
}

// memoize
var h = {};

function f(i,j,k){
  // memoization
  var key = [i,j,k].join(',');
  if (h[key] !== undefined){
    return h[key];
  }
  // base case: a valid partition only if no emoticons are left open
  if (i == s.length){
    return (j == 0 && k == 0) ? 1 : 0;
  }
  var a = 0,
      b = 0;
  if (s[i] == ';'){
    // if there are still enough colons to start an emoticon
    if (cs[i] > j + k){
      // start a new emoticon
      a = f(i+1,j+1,k);
    }
    // close any of k partial emoticons
    if (k > 0){
      b = k * f(i+1,j,k-1);
    }
  }
  if (s[i] == '_'){
    // if there are still extra underscores
    if (j < us[i] && k > 0){
      // apply them to partial emoticons
      a = k * f(i+1,j,k);
    }
    // convert started emoticons to partial
    if (j > 0){
      b = j * f(i+1,j-1,k+1);
    }
  }
  return h[key] = a + b;
}

console.log(f(0,0,0)); // 52

Find minimum sum that cannot be formed

Given are the positive integers from 1 to N, where N can go up to 10^9. Some K of these integers are missing; K can be at most 10^5. I need to find the minimum sum that can't be formed from the remaining N-K elements in an efficient way.
Example: say we have N=5, so the integers are {1,2,3,4,5}; let K=2 and the missing elements be {3,5}. The remaining array is then {1,2,4}, and the minimum sum that can't be formed from these remaining elements is 8, because:
1=1
2=2
3=1+2
4=4
5=1+4
6=2+4
7=1+2+4
So how to find this un-summable minimum?
I know how to find this if I can store all the remaining elements, using this approach:
We can use something similar to Sieve of Eratosthenes, used to find primes. Same idea, but with different rules for a different purpose.
Store the numbers from 0 to the sum of all the numbers, and cross off 0.
Then take numbers, one at a time, without replacement.
When we take the number Y, then cross off every number that is Y plus some previously-crossed off number.
When we have done this for every number that is remaining, the smallest un-crossed-off number is our answer.
However, its space requirement is high. Can there be a better and faster way to do this?
Here's an O(sort(K))-time algorithm.
Let 1 ≤ x_1 ≤ x_2 ≤ … ≤ x_m be the integers not missing from the set. For all i from 0 to m, let y_i = x_1 + x_2 + … + x_i be the partial sum of the first i terms. If it exists, let j be the least index such that y_j + 1 < x_(j+1); otherwise, let j = m. It is possible to show via induction that the minimum sum that cannot be made is y_j + 1 (the hypothesis is that, for all i from 0 to j, the numbers x_1, x_2, …, x_i can make all of the sums from 0 to y_i and no others).
To handle the fact that the missing numbers are specified, there is an optimization that handles several consecutive numbers in constant time. I'll leave it as an exercise.
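Here is a direct Python sketch of that rule (my own code; it processes the present numbers one by one and does not include the constant-time handling of consecutive runs left as an exercise):

def min_unmakable_sum(xs):
    # xs: the numbers that are present, in sorted order
    reachable = 0            # every sum in [0, reachable] can currently be made
    for x in xs:
        if x > reachable + 1:
            break            # reachable + 1 can never be made
        reachable += x
    return reachable + 1

print(min_unmakable_sum([1, 2, 4]))  # 8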
Let X be a bitvector initialized to zero. For each number Ni you set X = (X | X << Ni) and also set the bit that represents Ni itself (i.e. you can make Ni, and you can increase any value you could make previously by Ni).
This will set a '1' for every value you can make.
Running time is linear in N, and bitvector operations are fast.
process 1: X = 00000001
process 2: X = (00000001 | 00000001 << 2) | (00000010) = 00000111
process 4: X = (00000111 | 00000111 << 4) | (00001000) = 01111111
First number you can't make is 8.
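A small Python sketch of this bitset idea (my own code; bit b of X stands for the sum b+1, so it is only practical for small totals, not for N up to 10^9):

def first_unmakable(values):
    X = 0
    for v in values:
        X = (X | (X << v)) | (1 << (v - 1))   # shift old sums up by v, add v itself
    s = 1
    while X & 1:                              # find the lowest unset bit
        X >>= 1
        s += 1
    return s

print(first_unmakable([1, 2, 4]))  # 8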
Here is my O(K lg K) approach. I didn't test it very much because of lazy-overflow, sorry about that. If it works for you, I can explain the idea:
#include <bits/stdc++.h>
using namespace std;

const int MAXK = 100003;
int n, k;
int a[MAXK];

long long sum(long long a, long long b) { // sum of elements from a to b
    return max(0ll, b * (b + 1) / 2 - a * (a - 1) / 2);
}

void answer(long long ans) {
    cout << ans << endl;
    exit(0);
}

int main()
{
    cin >> n >> k;
    for (int i = 1; i <= k; ++i) {
        cin >> a[i];
    }
    a[0] = 0;
    a[k+1] = n+1;
    sort(a, a+k+2);
    long long ans = 0;
    for (int i = 1; i <= k+1; ++i) {
        // interval of existing numbers [lo, hi]
        int lo = a[i-1] + 1;
        int hi = a[i] - 1;
        if (lo <= hi && lo > ans + 1)
            break;
        ans += sum(lo, hi);
    }
    answer(ans + 1);
}
EDIT: well, thank God, @DavidEisenstat in his answer wrote the description of the approach I used, so I don't have to write it. Basically, what he mentions as an exercise is not adding the "existing numbers" one by one, but all at the same time. Before this, you just need to check whether some of them breaks the invariant, which can be done using binary search. Hope it helped.
EDIT2: as @DavidEisenstat pointed out in the comments, the binary search is not needed, since only the first number in every interval of existing numbers can break the invariant. Modified the code accordingly.

Find elements in an integer array such that their sum is X and sum of their square is least

Given an array arr of length n, find any elements within arr such that their sum is x and the sum of their squares is least. I'm trying to find the algorithm with the least complexity. So far, I've written a simple recursive algorithm that finds all the subsets within the array and puts the sum check as the base condition. I've written my code in JavaScript as below:
var arr = [3, 4, 2, 1];
var arr2 = arr.map(function(n) { return n*n; });
var max_sum = 5;
var most_min = -1;

function _rec(i, _sum, _square) {
  if(_sum >= max_sum) {
    if(most_min == -1 || _square < most_min) {
      most_min = _square;
      console.log("MIN: " + most_min);
    }
    console.log("END");
    return;
  }
  if(i >= arr.length)
    return;
  console.log(i);
  var n = arr[i];
  // square of above number
  var n2 = arr2[i];
  _sum = _sum + n;
  _square = _square + n2;
  _rec(i+1, _sum, _square);
  _sum = _sum - n;
  _square = _square - n2;
  _rec(i+1, _sum, _square);
}

_rec(0, 0, 0);
Visit http://jsfiddle.net/1dxgq6d5/6/ to see the output of the above algorithm. The algorithm is quite simple: it finds all subsets by evaluating two choices at every recursive step; 1) choose the current number, or 2) reject it, and then carry on with the recursion.
I'm trying to find an algorithm which is more efficient than the simple recursion above. Any suggestion or help would be appreciated.
One more hypothesis
I'm thinking that if I sort the array and find the subset of elements with the least variance (smallest separations between each other) such that their sum is x, that would fulfill my requirements. I'm not sure if this is going to be very helpful, but I'm currently working on it in the hope of improving my current blind recursive approach.
First off, you're finding subsets, not permutations, because you don't care about the order of the elements in each set.
Secondly, even without trying to minimize the sum of the squares, just finding whether there's a subset that sums to a target number is NP-complete -- this is the subset sum problem. It's currently believed by most computer scientists that P != NP, so there's no efficient (polynomial-time) algorithm for this.
Subset sum is only weakly NP-hard, so it's possible to get an efficient solution with dynamic programming (assuming that the input array consists of integers having a relatively small sum). Switch from trying all possibilities recursively and depth-first to trying all possibilities iteratively and breadth-first by storing the possibilities for the first k elements in an array. Before considering element k + 1, filter this array by discarding all but the lowest sum of squares for each total that can be made.
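A small Python sketch of that breadth-first filtering (my own code; it assumes the target x must be hit exactly and keeps, for every reachable total, only the smallest sum of squares seen so far):

def min_square_sum(arr, x):
    best = {0: 0}                      # total -> smallest sum of squares achieving it
    for v in arr:
        updated = dict(best)
        for total, sq in best.items():
            cand = sq + v * v
            if cand < updated.get(total + v, float('inf')):
                updated[total + v] = cand
        best = updated
    return best.get(x)                 # None if no subset sums to x

print(min_square_sum([3, 4, 2, 1], 5))  # 13 (3^2 + 2^2)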
I solved the problem in a more efficient way than the simple recursion, using a dynamic programming approach. Below is the Python code I wrote:
_sum = 7
_set = [1, 1, 2, 3, 4, 6]
current_idx = 0
sum_mapping = [[-1 for i in range(len(_set) + 1)] for i in range(_sum)]
max_sum = _set[current_idx]
for i in range(0, _sum):
    current_sum = i + 1
    for j in [i for i in range(0, current_idx+1)][::-1] + \
             [i for i in range(current_idx + 1, len(_set))]:
        required_value = current_sum - _set[j]
        if required_value < 0:
            break
        if required_value == 0 or sum_mapping[required_value - 1][j] != -1:
            _j = j + 1
            sum_mapping[i][_j:] = [j]*(len(_set) - j)
            break
    if max_sum == current_sum:
        current_idx = current_idx + 1
        max_sum = max_sum + _set[current_idx]
_cur = sum_mapping[_sum-1][len(_set)]
if _cur != -1:
    _l_sum = _sum
    while _l_sum != 0:
        print(_set[_cur])
        _l_sum = _l_sum - _set[_cur]
        _cur = sum_mapping[_l_sum - 1][len(_set)]
Here is ideone output: http://ideone.com/OgGN2f

Represent natural number as sum of squares using dynamic programming

The problem is to find the minimum number of squares required to sum to a number n.
Some examples:
min[ 1] = 1 (1²)
min[ 2] = 2 (1² + 1²)
min[ 4] = 1 (2²)
min[13] = 2 (3² + 2²)
I'm aware of Lagrange's four-square theorem which states that any natural number can be represented as the sum of four squares.
I'm trying to solve this using DP.
This is what I came up with (it's not correct):
min[i] = 1 where i is a square number
min[i] = min(min[i - 1] + 1, 1 + min[i - prev]) where prev is a square number < i
What is the correct DP way to solve this?
I'm not sure if DP is the most efficient way to solve this problem, but you asked for DP.
min[i] = min(min[i - 1] + 1, 1 + min[i - prev]) where prev is a square number < i
This is close; I would write the condition as
min[i] = min(1 + min[i - prev]) for each square number prev <= i
Note that for each i you need to check all possible values of prev.
Here's a simple implementation in Java.
int[] min = new int[n + 1]; // min[i] = minimum number of squares summing to i
Arrays.fill(min, Integer.MAX_VALUE);
min[0] = 0;
for (int i = 1; i <= n; ++i) {
    for (int j = 1; j*j <= i; ++j) {
        min[i] = Math.min(min[i], min[i - j*j] + 1);
    }
}
Seems to me that you're close...
You're taking the min() of two terms, each of which is min[i - p] + 1, where p is either 1 or some other square < i.
To fix this, just take the min() of min[i - p] + 1 over all p (where p is a square < i).
That would be a correct way. There may be a faster way.
Also, it might aid readability if you give min[] and min() different names. :-)
P.S. the above approach requires that you memoize min[], either explicitly, or as part of your DP framework. Otherwise, the complexity of the algorithm, due to recursion, would be something like O(sqrt(n)!) :-p though the average case might be a lot better.
P.P.S. See @Nikita's answer for a nice implementation. To which I would add the following optimizations... (I'm not nitpicking his implementation -- he presented it as a simple one.)
Check whether n is a perfect square, before entering the outer loop: if so, min[n] = 1 and we're done.
Check whether i is a perfect square before entering the inner loop: if so, min[i] = 1, and skip the inner loop.
Break out of the inner loop if min[i] has been set to 2, because it won't get better (if it could be done with one square, we would never have entered the inner loop, thanks to the previous optimization).
I wonder if the termination condition on the inner loop can be changed to reduce the number of iterations, e.g. j*j*2 <= i or even j*j*4 <= i. I think so but I haven't got my head completely around it.
For large i, it would be faster to compute a limit for j before the inner loop, and compare j directly to it for the loop termination condition, rather than squaring j on every inner loop iteration. E.g.
double sqrti = Math.sqrt(i);
for (int j = 1; j <= sqrti; ++j) {
On the other hand, you need j^2 for the recursion step anyway, so as long as you store it, you might as well use it.
For variety, here's another answer:
Define minsq[i, j] as the minimum number of squares from {1^2, 2^2, ..., j^2} that sum up to i. Then the recursion is:
minsq[i, j] = min(minsq[i - j*j, j] + 1, minsq[i, j - 1])
i.e., to compute minsq[i, j] we either use j^2 or we don't. Our answer for n is then:
minsq[n, floor(sqrt(n))]
This answer is perhaps conceptually simpler than the one presented earlier, but code-wise it is more difficult since one needs to be careful with the base cases. The time complexity for both answers is asymptotically the same.
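A small Python sketch of this minsq recurrence (my own code; INF marks totals that cannot be represented with squares up to j^2):

import math

def min_squares(n):
    INF = float('inf')
    r = math.isqrt(n)                      # floor(sqrt(n))
    # minsq[i][j]: minimum number of squares from {1^2..j^2} summing to i
    minsq = [[INF] * (r + 1) for _ in range(n + 1)]
    for j in range(r + 1):
        minsq[0][j] = 0
    for i in range(1, n + 1):
        for j in range(1, r + 1):
            best = minsq[i][j - 1]                        # don't use j^2 at all
            if j * j <= i:
                best = min(best, minsq[i - j * j][j] + 1) # use j^2 (possibly again)
            minsq[i][j] = best
    return minsq[n][r]

for n in (1, 2, 4, 13):
    print(n, min_squares(n))   # 1, 2, 1, 2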
I present a generalized, very efficient dynamic programming algorithm in JavaScript to find the minimum number of positive integers of a given power needed to reach a given target.
For example, to reach 50000 with integers of 4th power the result would be [10,10,10,10,10], and to reach 18571 with integers of 7th power the result would be [3,4]. This algorithm even works with rational powers: to reach 222 with integers of 3/5th power the result would be [ 32, 32, 243, 243, 243, 3125 ].
function getMinimumCubes(tgt,p){
  var maxi = Math.floor(Math.fround(Math.pow(tgt,1/p))),
      hash = {0:[]},
      pow = 0,
      t = 0;
  for (var i = 1; i <= maxi; i++){
    pow = Math.fround(Math.pow(i,p));
    for (var j = 0; j <= tgt - pow; j++){
      t = j + pow;
      hash[t] = hash[t] ? hash[t].length <= hash[j].length ? hash[t]
                                                           : hash[j].concat(i)
                        : hash[j].concat(i);
    }
  }
  return hash[tgt];
}
var target = 729,
result = [];
console.time("Done in");
result = getMinimumCubes(target,2);
console.timeEnd("Done in");
console.log("Minimum number of integers to square and add to reach", target, "is", result.length, "as", JSON.stringify(result));
console.time("Done in");
result = getMinimumCubes(target,6);
console.timeEnd("Done in");
console.log("Minimum number of integers to take 6th power and add to reach", target, "is", result.length, "as", JSON.stringify(result));
target = 500;
console.time("Done in");
result = getMinimumCubes(target,3);
console.timeEnd("Done in");
console.log("Minimum number of integers to cube and add to reach", target, "is", result.length, "as", JSON.stringify(result));
target = 2017;
console.time("Done in");
result = getMinimumCubes(target,4);
console.timeEnd("Done in");
console.log("Minimum number of integers to take 4th power and add to reach", target, "is", result.length, "as", JSON.stringify(result));
target = 99;
console.time("Done in");
result = getMinimumCubes(target,2/3);
console.timeEnd("Done in");
console.log("Minimum number of integers to take 2/3th power and add to reach", target, "are", result);
