N number of smileys using minimum keystrokes - algorithm

This is an interview question I wasn't able to solve myself. Can somebody shed some light?
Question: Assume you have a chat client, and in the text box there is one smiley (it's already there). Now you want that smiley N times, and only two operations are allowed: Copy and Paste. The result must be exactly N smileys, no more, no less. Copy is basically ^a + ^c: you always have to copy all the smileys that are in the text box at a given time. Copy is one keystroke and paste is one keystroke. Write an algorithm / program for this problem. What will be the complexity?

Whatever the number of smileys N is, factor N into a product of primes. Then for each prime p in the factorization you need to do one copy and p-1 pastes. This is the minimal number of operations possible if "copy" means that you have to copy the entire string of smileys. The total number of operations is the sum of all the prime factors in the factorization of N (with multiplicity). For example, N = 12 = 2 * 2 * 3 takes 2 + 2 + 3 = 7 keystrokes: copy+paste to reach 2, copy+paste to reach 4, then copy+paste+paste to reach 12.

user2566092's answer is correct, but the statement "This is the minimal number of operations possible" is claimed without proof. Here's a formal justification.
Let n be the target number of smileys. A configuration (r, k) consists of the number r of smileys in the result and the number k of smileys on the clipboard. The initial configuration is (1, 0). From (r, k), the configurations (r, r) [copy] and (r + k, k) [paste] are reachable in one keystroke.
The following lemma implies that, after a copy, the number of smileys in the result divides n evenly.
Lemma Suppose that g divides both r and k (g|r and g|k). Every configuration (r', k') reachable from (r, k) satisfies g|r' and g|k'.
Proof by induction. If the next step is a copy, to (r, r), then g|r and g|r. If the next step is a paste, to (r + k, k), then g|r + k and g|k.
Now we can prove the main claim.
Theorem The minimum number of steps from (1, 0) to some (n, k) where k is arbitrary is p1 + p2 + ... + pm, where n = p1 p2 ... pm is a prime factorization.
Proof user2566092 covered the upper bound: for i from 1 to m, copy once and paste pi - 1 times. The lower bound is obvious when n = 1. Otherwise, the best first action clearly is a copy, to (1, 1). We show by induction that the minimum number of steps from (1, 1) to (n, k) is p1 + p2 + ... + pm - 1.
From (1, 1), by the lemma, all we can do is paste until r divides n, then copy. Fix a particular r|n with r > 1. The best action sequences from (r, r) to (n, n) and from (1, 1) to (n/r, n/r) clearly are the same. Without loss of generality, by permuting the primes, assume that r = p1 p2 ... pj. Since a b >= a + b for a, b >= 2, we have r >= p1 + p2 + ... + pj. If r < n, then the total cost (r - 1 pastes, one copy, and then the optimal continuation from (r, r)) is, by the inductive hypothesis, at least (r - 1) + 1 + (pj+1 + pj+2 + ... + pm - 1) = r + pj+1 + ... + pm - 1 >= p1 + p2 + ... + pm - 1. If r = n, then the total cost is n - 1 >= p1 + p2 + ... + pm - 1.
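To double-check the theorem for small n, one can brute-force the state space directly. Here is a minimal C++ sketch (the function names are my own) that BFSes over configurations (r, k) and compares the result against the sum of prime factors:

#include <iostream>
#include <map>
#include <queue>
#include <utility>

// BFS over configurations (r, k): r smileys in the box, k on the clipboard.
// Copy -> (r, r), paste -> (r + k, k). Returns the minimum keystrokes to r == n.
int minKeystrokes(int n) {
    std::map<std::pair<int, int>, int> dist;
    std::queue<std::pair<int, int>> q;
    dist[{1, 0}] = 0;
    q.push({1, 0});
    while (!q.empty()) {
        auto [r, k] = q.front();
        q.pop();
        int d = dist[{r, k}];
        if (r == n) return d;
        std::pair<int, int> next[2] = {{r, r}, {r + k, k}};  // copy, paste
        for (auto s : next)
            if (s.first <= n && !dist.count(s)) {
                dist[s] = d + 1;
                q.push(s);
            }
    }
    return -1;  // not reached for n >= 1
}

int sumOfPrimeFactors(int n) {
    int s = 0;
    for (int p = 2; n > 1; ++p)
        while (n % p == 0) { s += p; n /= p; }
    return s;
}

int main() {
    for (int n = 2; n <= 60; ++n)
        if (minKeystrokes(n) != sumOfPrimeFactors(n))
            std::cout << "mismatch at n = " << n << "\n";
    std::cout << "done\n";
    return 0;
}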

2 · log2(N), roughly.
Each time duplicate everything, until you don't need the complete set anymore. Then you could duplicate only part of the list.
E.g. 7 smileys:
1 smiley: copy everything, paste (gives 2)
2 smileys: copy everything, paste (gives 4)
4 smileys: copy only 3, paste
Have 7.

The program for the above problem is given below. The time complexity of this solution is O(n) in the worst case; the worst case occurs when the given number is prime.
#include <iostream>
using namespace std;

int main()
{
    int smiles;
    cin >> smiles;
    int ans = 0;
    // Trial division: each prime factor p costs p keystrokes (1 copy + p-1 pastes).
    for (int i = 2; i <= smiles; i++)
    {
        while (smiles % i == 0)
        {
            ans += i;
            smiles /= i;
        }
    }
    cout << ans << "\n";
    return 0;
}


Fast algorithm for sum of steps taken by the Euclidean algorithm over pairs of numbers under an upper bound

Note: This may involve a good deal of number theory, but the formula I found online is only an approximation, so I believe an exact solution requires some sort of iterative calculation by a computer.
My goal is to find an efficient algorithm (in terms of time complexity) to solve the following problem for large values of n:
Let R(a,b) be the amount of steps that the Euclidean algorithm takes to find the GCD of nonnegative integers a and b. That is, R(a,b) = 1 + R(b,a%b), and R(a,0) = 0. Given a natural number n, find the sum of R(a,b) for all 1 <= a,b <= n.
For example, if n = 2, then the solution is R(1,1) + R(1,2) + R(2,1) + R(2,2) = 1 + 2 + 1 + 1 = 5.
Since there are n^2 pairs corresponding to the numbers to be added together, simply computing R(a,b) for every pair can do no better than O(n^2), regardless of the efficiency of R. Thus, to improve the efficiency of the algorithm, a faster method must somehow calculate the sum of R(a,b) over many values at once. There are a few properties that I suspect might be useful:
If a = b, then R(a,b) = 1
If a < b, then R(a,b) = 1 + R(b,a)
R(a,b) = R(ka,kb) where k is some natural number
If b <= a, then R(a,b) = R(a+b,b)
If b <= a < 2b, then R(a,b) = R(2a-b,a)
Because of the first two properties, it is only necessary to find the sum of R(a,b) over pairs where a > b. I tried using this in addition to the third property in a method that computes R(a,b) only for pairs where a and b are also coprime in addition to a being greater than b. The total sum is then n plus the sum of (n / a) * ((2 * R(a,b)) + 1) over all such pairs (using integer division for n / a). This algorithm still had time complexity O(n^2), I discovered, due to Euler's totient function being roughly linear.
I don't need any specific code solution, I just need to figure out the procedure for a more efficient algorithm. But if the programming language matters, my attempts to solve this problem have used C++.
Side note: I have found that a formula has been discovered that nearly solves this problem, but it is only an approximation. Note that the formula calculates the average rather than the sum, so it would just need to be multiplied by n^2. If the formula could be expanded to reduce the error, it might work, but from what I can tell, I'm not sure if this is possible.
Using the Stern–Brocot tree, due to symmetry, we can look at just one of the four subtrees rooted at 1/3, 2/3, 3/2 or 3/1. The time complexity is still O(n^2) but it obviously performs fewer calculations. The version below uses the subtree rooted at 2/3 (or at least that's the one I looked at to think it through :). Also note that we only care about the denominators there, since the numerators are lower. The code relies on rules 2 and 3 as well.
C++ code (takes about a tenth of a second for n = 10,000):
#include <iostream>
using namespace std;

long g(int n, int l, int mid, int r, int fromL, int turns) {
    long right = 0;
    long left = 0;
    if (mid + r <= n)
        right = g(n, mid, mid + r, r, 1, turns + (1 ^ fromL));
    if (mid + l <= n)
        left = g(n, l, mid + l, mid, 0, turns + fromL);
    // Multiples
    int k = n / mid;
    // This subtree is rooted at 2/3
    return 4 * k * turns + left + right;
}

long f(int n) {
    // 1/1, 2/2, 3/3 etc.
    long total = n;
    // 1/2, 2/4, 3/6 etc.
    if (n > 1)
        total += 3 * (n >> 1);
    if (n > 2)
        // Technically 3 turns for 2/3 but we can avoid a subtraction
        // per call by starting with 2. (I guess that means it could be
        // another subtree, but I haven't thought it through.)
        total += g(n, 2, 3, 1, 1, 2);
    return total;
}

int main() {
    cout << f(10000);
    return 0;
}
I think this is a hard problem. We can at least avoid division and reduce the space usage to linear via the Stern–Brocot tree.

def f(n, a, b, r):
    # Sum of R-values over the subtree of pairs reachable from (a, b);
    # r is the number of Euclidean steps for the current (ordered) pair.
    return r if a + b > n else r + f(n, a + b, b, r) + f(n, a + b, a, r + 1)

def R_sum(n):
    return sum(f(n, d, d, 1) for d in range(1, n + 1))

def R(a, b):
    # Direct recursive definition, used for checking.
    return 1 + R(b, a % b) if b else 0

def test(n):
    print(R_sum(n))
    print(sum(R(a, b) for a in range(1, n + 1) for b in range(1, n + 1)))

test(100)

Minimum steps to reach target, given (1, 0) and operations A = 2A - B or B = 2B - A

I had an interview and was not able to come up with a good approach for this problem.
A=1, B=0
- Operation L: A=2A-B
- Operation R: B=2B-A
At each step, exactly one operation (L or R) takes place.
For a given number N, what is the minimum number of operations required to make A or B equals to N?
Efficiency is the most important concern.
Thanks in advance.
In k operations you can get all values of N in [-(2^k)+1, 2^k].
Notice that abs(A) + abs(B) = 2^k for every path of k operations, and that the values of A and B over the set of length-k paths exactly cover the range [-(2^k)+1, 2^k].
k=0: (1,0)
k=1: (1,-1), (2,0)
k=2: (1,-3), (2,-2), (3,-1), (4,0)
etc...
Given N we can find the minimum k via log. Then we know the final pair is (N, N - 2^k) (or (N-2^k, N) if N <= 0). It's easy to follow the path back up to k=0 because one of the two elements will be out of range for the next smaller k.
E.g., N = 35.
Log2(35) = 5.13, so we use k=6.
2^6 = 64, so our final pair is (35, -29)
(35,-29) -> (3,-29) -> (3, -13) -> (3, -5) -> (3,-1) -> (1,-1) -> (1,0)
Figuring out k is O(1), and finding the path is O(k), which is O(log(abs(N))).
It's not likely you need to prove anything in an interview, but if you did, you could use this:
By observation: A - B = 2^k for k steps observed for small k.
Then via induction: we have some valid (A, B) s.t. A-B = 2^k. Then L gets us (2A-B, B), but 2A-B-B = 2A-2B = 2(A-B) = 2^(k+1) as desired. Similarly for R.
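For concreteness, here is a minimal C++ sketch of the above procedure (the function name and 64-bit types are my own):

#include <iostream>

// Find the smallest k with N in [-(2^k)+1, 2^k], then walk the final pair
// back to (1, 0) by halving A + B, picking whichever inverse operation keeps
// the pair in range for the next smaller k.
int minOps(long long n) {
    if (n == 1 || n == 0) return 0;            // (1, 0) already contains N
    int k = 0;
    while (n > (1LL << k) || n < -(1LL << k) + 1) ++k;
    long long a = n > 0 ? n : n + (1LL << k);  // final pair is (N, N - 2^k),
    long long b = a - (1LL << k);              // or (N + 2^k, N) if N <= 0
    for (int j = k - 1; j >= 0; --j) {
        long long mid = (a + b) / 2;           // A + B is always even here
        if (a <= (1LL << j)) b = mid;          // undo R: previous B was (A+B)/2
        else                 a = mid;          // undo L: previous A was (A+B)/2
    }
    // Here (a, b) == (1, 0); reversing the undo sequence gives the real path.
    return k;
}

int main() {
    std::cout << minOps(35) << "\n";           // prints 6
    return 0;
}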
It would be a challenging task for an interview but I would start with the recursion of trying to find the origin from the result. Given valid (A', B), where A' is the target we are after,
A' = 2A - B
for some A, which means that
A = (A' + B) / 2
The latter tells us that all the (A' + B) sums along the path must be divisible by 2; and since the path ends (starts) at (1, 0), the differences A - B along the path are exactly the powers of 2, as the induction in the previous answer shows.
Another property we can observe, although it may not be relevant to the solution, is that once we switch in the first step to (even, even) or (odd, odd), we cannot switch back.

How do you determine the average-case complexity of this algorithm?

It's usually easy to calculate the time complexity for the best case and the worst case, but when it comes to the average case, especially when a probability p is given, I don't know where to start.
Let's look at the following algorithm to compute the product of all the elements in a matrix:
int computeProduct(int[][] A, int m, int n) {
    int product = 1;
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            if (A[i][j] == 0) return 0;  // early exit on a zero entry
            product = product * A[i][j];
        }
    }
    return product;
}
Suppose p is the probability of A[i][j] being 0 (i.e. the probability that the algorithm terminates there and returns 0); how do we derive the average-case time complexity for this algorithm?
Let’s consider a related problem. Imagine you have a coin that flips heads with probability p. How many times, on expectation, do you need to flip the coin before it comes up heads? The answer is 1/p, since
There’s a p chance that you need one flip.
There’s a p(1-p) chance that you need two flips (the first flip has to go tails and the second has to go heads).
There’s a p(1-p)^2 chance that you need three flips (the first two flips need to go tails and the third has to go heads)
...
There’s a p(1-p)^(k-1) chance that you need k flips (the first k-1 flips need to go tails and the kth needs to go heads.)
So this means the expected value of the number of flips is
p + 2p(1 - p) + 3p(1 - p)^2 + 4p(1 - p)^3 + ...
= p(1(1 - p)^0 + 2(1 - p)^1 + 3(1 - p)^2 + ...)
So now we need to work out what this summation is. The general form is
p sum from k = 1 to infinity (k(1 - p)^(k-1)).
Rather than solving this particular summation, let's make this more general. Let x be some variable that, later, we'll set equal to 1 - p, but which for now we'll treat as a free value. Then we can rewrite the above summation as
p sum from k = 1 to infinity (kx^(k-1)).
Now for a cute trick: notice that the inside of this expression is the derivative of x^k with respect to x. Therefore, this sum is
p sum from k = 1 to infinity (d/dx x^k).
The derivative is a linear operator, so we can move it out to the front:
p d/dx sum from k = 1 to infinity (x^k)
That inner sum (x + x^2 + x^3 + ...) is the Taylor series for 1 / (1 - x) - 1, so we can simplify this to get
p d/dx (1 / (1 - x) - 1)
= p / (1 - x)^2
And since we picked x = 1 - p, this simplifies to
p / (1 - (1 - p))^2
= p / p^2
= 1 / p
Whew! That was a long derivation. But it shows that the expected number of coin tosses needed is 1/p.
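As a quick empirical sanity check (a sketch of my own, not part of the derivation), we can simulate the coin and compare the observed average against 1/p:

#include <iostream>
#include <random>

// Flip a p-biased coin until it comes up heads; average the flip counts.
int main() {
    std::mt19937 rng(12345);
    const double p = 0.1;
    std::bernoulli_distribution heads(p);
    const int trials = 1000000;
    long long totalFlips = 0;
    for (int t = 0; t < trials; ++t) {
        int flips = 1;
        while (!heads(rng)) ++flips;  // count flips until the first heads
        totalFlips += flips;
    }
    std::cout << "average: " << static_cast<double>(totalFlips) / trials
              << ", expected: " << 1.0 / p << "\n";  // both close to 10
    return 0;
}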
Now, in your case, your algorithm can be thought of as tossing mn coins that come up heads with probability p and stopping if any of them comes up heads. The expected number of tosses needed certainly won't be more than in the case where you're allowed to flip infinitely often, so your expected runtime is at most O(1 / p) (assuming p > 0).
If we assume that p is independent of m and n, then we can notice that after some initial growth, each term added to our summation as we increase the number of flips is exponentially smaller than the previous ones. More specifically, after adding in roughly logarithmically many terms, the partial sum is within a constant of the full infinite summation. Therefore, provided that mn is at least roughly Θ(log(1/p)), the sum ends up being Θ(1 / p). So in a big-O sense, if mn is independent of p, the runtime is Θ(1 / p).

How to solve the equation sum{max(a_i, x)}=y with variable x? Is there any algorithm with O(n) time complexity?

I am trying to find an algorithm to solve the following equation:
∑ max(a_i, x) = y
in which the a_i are constants and x is the variable.
I can find an algorithm with O(n log n) time complexity as follows:
First of all, sort the a_i in O(n log n) time, and arrange the intervals
(−∞, a_0), (a_0, a_1), …, (a_i, a_{i+1}), …, (a_{n−1}, a_n), (a_n, ∞)
Then, for each interval, assume x belongs to this interval and solve the equation. We get an x̂ and test whether it belongs to this interval or not. If x̂ belongs to the corresponding interval, we assign x̂ to x and return x. Otherwise, we try the next interval until we get the solution.
The above method is an O(n log n) algorithm due to the sort. Given the structure of the equation, I expect an algorithm with O(n) time complexity to exist. Is there any reference for this problem?
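For reference, here is a sketch in C++ of the O(n log n) method described above (my code; it assumes real-valued, non-empty input):

#include <algorithm>
#include <limits>
#include <stdexcept>
#include <vector>

// Sort, precompute suffix sums, then test each interval for x.
double solveSorted(std::vector<double> a, double y) {
    if (a.empty()) throw std::invalid_argument("need at least one coefficient");
    std::sort(a.begin(), a.end());
    const int n = static_cast<int>(a.size());
    std::vector<double> suffix(n + 1, 0.0);  // suffix[i] = a[i] + ... + a[n-1]
    for (int i = n - 1; i >= 0; --i) suffix[i] = suffix[i + 1] + a[i];
    if (y < suffix[0]) throw std::runtime_error("no solution: y < sum of a_i");
    for (int i = 0; i < n; ++i) {
        // If a[i] <= x <= a[i+1], the first i+1 terms contribute x each and the
        // rest contribute a[j], so (i+1) * x + suffix[i+1] = y.
        double x = (y - suffix[i + 1]) / (i + 1);
        double hi = (i + 1 < n) ? a[i + 1] : std::numeric_limits<double>::infinity();
        if (x >= a[i] && x <= hi) return x;
    }
    return a[0];  // not reached when y >= sum of a_i
}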
First of all, this only has a solution if the sum of all a_i is smaller than y. You should check this first, because the algorithm below depends on this property.
Assume that we have chosen some pivot p from all a_i and want to calculate the x that corresponds to the interval [p, q), where q is the next larger a_i. This is:
x = (y − Σ_{a_i > p} a_i) / n
If you move p to the next larger a_i, x changes as follows:
x' = (n · x + p') / (n + 1)
where p' is the new pivot and n is the old number of a_i that are smaller or equal to p. Under the assumption that the sum of all a_i is smaller than y, this clearly leads to a decrease of x. Similarly, if we choose a smaller p, x is increased.
Coming back to the first equation, we can observe the following: if x is smaller than p, we should choose a smaller p. If x is greater than the smallest of the larger a_i, we should choose a larger p. In every other case, we have found the right x.
This can be utilized in a quickselect procedure. #MvG's comment brought me onto this track. All credits for the quickselect idea go to him. Here is some pseudo code (modified version from Wikipedia):
findX(list, y)
    left := 0
    right := length(list) - 1
    sumGreater := 0   // the sum of all a_i greater than the current interval
    numSmaller := 0   // the number of all a_i smaller than the current interval
    minGreater := inf // the minimum of all a_i greater than the current interval
    loop
        if left = right
            return (y - sumGreater) / (numSmaller + 1)
        pivotIndex := medianOfMedians(list, left, right)
        // the partition function will also sum the elements larger than the pivot,
        // count the elements smaller than the pivot, and find the minimum of the
        // larger elements
        (pivotIndex, partialSumGreater, partialNumSmaller, partialMinGreater)
            := partition(list, left, right, pivotIndex)
        x := (y - sumGreater - partialSumGreater) / (numSmaller + partialNumSmaller + 1)
        if (x >= list[pivotIndex] && x < min(partialMinGreater, minGreater))
            return x
        else if x < list[pivotIndex]
            right := pivotIndex - 1
            minGreater := list[pivotIndex]
            sumGreater += partialSumGreater + list[pivotIndex]
        else
            left := pivotIndex + 1
            numSmaller += partialNumSmaller + 1
The key idea is that the partitioning function gathers some additional statistics. This does not change the time complexity of the partitioning function because it requires O(n) additional operations, leaving a total time complexity of O(n) for the partitioning function. The medianOfMedians function is also linear in time. The remaining operations in the loop are constant time. Assuming that the median of medians yields good pivots, the total time of the entire algorithm is approximately O(n + n/2 + n/4 + n/8 ...) = O(n).
Since comments might get deleted, I'm turning my own comments into a coherent answer. Contrary to the original question, I'm using indices 1 through n, avoiding the a_0 originally used. So this is consistent one-based indexing using inclusive indices.
Assume for the moment that the b_i are the coefficients from your input, but in sorted order, so b_i ≤ b_{i+1}. As you essentially already wrote, if b_i ≤ x ≤ b_{i+1} then the result is i · x + b_{i+1} + ⋯ + b_n, since the first i terms will use the x and the other terms will use the b_j. Solving for x you get x = (y − b_{i+1} − ⋯ − b_n) / i, and putting that back into your inequality you have i · b_i ≤ y − b_{i+1} − ⋯ − b_n ≤ i · b_{i+1}. Concentrating on one of the inequalities, you want the largest i such that
i · b_i ≤ y − b_{i+1} − ⋯ − b_n       (subsequently called "the inequality")
But in order to make this work on unsorted ai, you'd need something similar to the median of medians. That is an algorithm which achieves O(n) guaranteed worst-case behavior for the problem of selecting a median, where the typical quickselect would take O(n²) in the worst case although it usually does quite well in practice.
Actually your problem is not that different from quickselect. You can pick a pivot coefficient, and split the remainder into larger and smaller values. Then you evaluate the inequality for the pivot element. If it is satisfied, you recurse into the list of larger elements, otherwise you recurse into the list of smaller elements, until at some point you have two adjacent elements, one which satisfies the inequality and one which does not.
This is O(n²) in the worst case, since you might need O(n) recursive calls, each of them taking O(n) time to process its input, just like the O(n²) quickselect itself is suboptimal. The median of medians shows that the selection problem can indeed be solved in O(n). So we either need to find a similar solution here, or reformulate this problem in terms of finding the median, or write some algorithm which makes use of the median in a reasonable way.
Actually Nico Schertler found a way to achieve that last option: Take the algorithm I outlined above, but choose the pivot element to be the median. That way you can guarantee that each recursive call will process at most half as much input as the previous call. Since the median of medians itself is O(n) this can be done without exceeding the O(n) bound for each recursive call.
So in pseudocode it's like this (using inclusive indices throughout):
# f: Process whole problem with coefficients a_1 through a_n
f(y, a, n) := begin
if y < (sum of a_i for i from 1 through n): # O(n)
throw Error "Cannot satisfy equation" # Or omit check and risk division by zero
return g(a, 1, n, y) # O(n)
end
# g: Recursively process part of the problem, namely a_l through a_r
# Precondition: we know inequality holds for i = l - 1 and fails for i = r + 1
# a: the array as provided to f; will get modified in place
# l: left index (inclusive)
# r: right index (inclusive)
# y: (original y) - (sum of a_j for j from r + 1 through n)
g(a, l, r, y) := begin # process a_l through a_r O(r-l)
if r < l: # inequality holds in r but fails in l O(1)
return y / r # compute x for the case of i = r O(1)
m = median(a, l, r) # computed using median of medians O(r-l)
i = floor((l + r) / 2) # index of median, with same tie breaks O(1)
partition(a, l, r, m) # so a_l…a_(i-1) ≤ a_i=m ≤ a_(i+1)…a_r O(r-l)
rhs = y - (sum of a_j for j from i + 1 to r) # O((r-l)/2)
if i * a_i ≤ rhs: # condition holds, check larger i
return g(a, i + 1, r, y) # recurse in right half of list O((r-l)/2)
else: # condition fails, check smaller i
return g(a, l, i - 1, rhs - m) # recurse in left half of list O((r-l)/2)
end

Pyramids dynamic programming

I encountered this question in an interview and could not figure it out. I believe it has a dynamic programming solution but it eludes me.
Given a number of bricks, output the total number of 2d pyramids possible, where a pyramid is defined as any structure where a row of bricks has strictly less bricks than the row below it. You do not have to use all the bricks.
A brick is simply a square, the number of bricks in a row is the only important bit of information.
Really stuck with this one, I thought it would be easy to solve each problem 1...n iteratively and sum. But coming up with the number of pyramids possible with exactly i bricks is evading me.
Example, n = 6:

1 brick:  X

2 bricks: XX

3 bricks: X
          XX    XXX

4 bricks: X
          XXX   XXXX

5 bricks: XX    X
          XXX   XXXX   XXXXX

6 bricks: X
          XX    XX     X
          XXX   XXXX   XXXXX   XXXXXX

So the answer is 13 possible pyramids from 6 bricks.
Edit:
I am positive this is a dynamic programming problem, because it makes sense (once you've determined the bottom row) to simply look up, at the index of your remaining bricks in your memoized array, how many pyramids fit atop.
It also makes sense to consider bottom rows of width at least n/2, because we can't have more bricks atop than on the bottom row, EXCEPT (and this is where I lose it and my mind falls apart) in certain (few) cases you can, e.g. N = 10:
X
XX
XXX
XXXX
Now the bottom row has 4 bricks, but there are 6 left to place on top.
But with n = 11 we cannot have a bottom row with fewer than n/2 bricks. There is another weird inconsistency like that with n = 4, where we cannot have a bottom row of n/2 = 2 bricks.
Let's choose a suitable definition:
f(n, m) = # pyramids out of n bricks with base of size < m
The answer you are looking for now is (given that N is your input number of bricks):
f(N, N+1) - 1
Let's break that down:
The first N is obvious: that's your number of bricks.
Your bottom row will contain at most N bricks (because that's all you have), so m = N+1 is a sufficient strict upper bound on the base size.
Finally, the - 1 is there because technically the empty pyramid is also a pyramid (and will thus be counted) but you exclude that from your solutions.
The base cases are simple:
f(n, 0) = 1 for any n >= 0
f(0, m) = 1 for any m >= 0
In both cases, it's the empty pyramid that we are counting here.
Now, all we need still is a recursive formula for the general case.
Let's assume we are given n and m and choose to have i bricks on the bottom layer. What can we place on top of this layer? A smaller pyramid, for which we have n - i bricks left and whose base has size < i. This is exactly f(n - i, i).
What is the range for i? We can choose an empty row so i >= 0. Obviously, i <= n because we have only n bricks. But also, i <= m - 1, by definition of m.
This leads to the recursive expression:
f(n, m) = sum f(n - i, i) for 0 <= i <= min(n, m - 1)
You can compute f recursively, but using dynamic programming it will be faster of course. Storing the results matrix is straightforward though, so I leave that up to you.
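For illustration, here is one possible bottom-up C++ version of this recurrence (a minimal O(N³) sketch; the names are mine):

#include <algorithm>
#include <iostream>
#include <vector>

// f[n][m] = number of pyramids buildable from n bricks with base size < m.
long long countPyramids(int N) {
    std::vector<std::vector<long long>> f(N + 1, std::vector<long long>(N + 2, 0));
    for (int n = 0; n <= N; ++n) f[n][0] = 1;      // base case: empty pyramid
    for (int m = 0; m <= N + 1; ++m) f[0][m] = 1;  // base case: empty pyramid
    for (int n = 1; n <= N; ++n)
        for (int m = 1; m <= N + 1; ++m)
            for (int i = 0; i <= std::min(n, m - 1); ++i)
                f[n][m] += f[n - i][i];            // i bricks on the bottom row
    return f[N][N + 1] - 1;                        // subtract the empty pyramid
}

int main() {
    std::cout << countPyramids(6) << "\n";         // prints 13
    return 0;
}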
Coming back to the original claim that f(N, N+1)-1 is the answer you are looking for, it doesn't really matter which value to choose for m as long as it is > N. Based on the recursive formula it's easy to show that f(N, N + 1) = f(N, N + k) for every k >= 1:
f(N, N + k) = sum f(N - i, i) for 0 <= i <= min(N, N + k - 1)
= sum f(N - i, i) for 0 <= i <= N
= sum f(N - i, i) for 0 <= i <= min(N, N + 1 - 1)
In how many ways can you build a pyramid of width n? By putting any pyramid of width n-1 or less anywhere atop the layer of n bricks, or nothing at all. So if p(n) is the number of pyramids of width n, then p(n) = 1 + sum [m=1 to n-1] (p(m) * c(n, m)), where the 1 counts the bare layer and c(n, m) is the number of ways you can place a layer of width m atop a layer of width n (I trust that you can work that one out yourself).
This, however, doesn't place a limitation on the number of bricks. Generally, in DP, any resource limitation must be modeled as a separate dimension. So your problem is now p(n, b): "How many pyramids can you build of width n with a total of b bricks"? In the recursive formula, for each possible way of building a smaller pyramid atop your current one, you need to refer to the correct amount of remaining bricks. I leave it as a challenge for you to work out the recursive formula; let me know if you need any hints.
You can think of your recursion as: given x bricks left where you used n bricks on last row, how many pyramids can you build. Now you can fill up rows from either top to bottom row or bottom to top row. I will explain the former case.
Here the recursion might look something like this (left is number of bricks left and last is number of bricks used on last row)
f(left,last)=sum (1+f(left-i,i)) for i in range [last+1,left] inclusive.
When you use i bricks on the current row, you will have left-i bricks remaining, and i becomes the number of bricks used on the last row.
Code:
int calc(int left, int last) {
    int total = 0;
    if (left <= 0) return 0; // terminal case: no bricks, no new pyramids
    for (int i = last + 1; i <= left; i++) {
        total += 1 + calc(left - i, i);
    }
    return total;
}
I will leave it to you to implement memoized or bottom-up dp version. Also you may want to start from bottom row and fill up upper rows in pyramid.
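For example, a memoized version might look like this (my sketch; the 501 cap is an assumed limit on the number of bricks, not part of the problem):

#include <iostream>

const int MAXN = 501;
long long memo[MAXN][MAXN];  // memo[left][last], valid when seen[left][last]
bool seen[MAXN][MAXN];

long long calc(int left, int last) {
    if (left <= 0) return 0;             // no bricks left, no new pyramids
    if (seen[left][last]) return memo[left][last];
    long long total = 0;
    for (int i = last + 1; i <= left; i++)
        total += 1 + calc(left - i, i);  // put i bricks on the current row
    seen[left][last] = true;
    return memo[left][last] = total;
}

int main() {
    std::cout << calc(6, 0) << "\n";     // prints 13
    return 0;
}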
Since we are asked to count pyramids of any cardinality less than or equal to n, we may consider each cardinality in turn (pyramids of 1 element, 2 elements, 3...etc.) and sum them up. But in how many different ways can we compose a pyramid from k elements? The same number as the count of distinct partitions of k (for example, for k = 6, we can have (6), (1,5), (2,4), and (1,2,3)). A generating function/recurrence for the count of distinct partitions is described in Wikipedia and a sequence at OEIS.
Recurrence, based on the Pentagonal number Theorem:
q(k) = a_k + q(k − 1) + q(k − 2) − q(k − 5) − q(k − 7) + q(k − 12) + q(k − 15) − q(k − 22)...
where a_k is (−1)^(abs(m)) if k = 3m² − m for some integer m and is 0 otherwise.
(The offsets subtracted inside the recurrence, 1, 2, 5, 7, 12, 15, 22, …, are the generalized pentagonal numbers.)
Since the recurrence described in Wikipedia obliges the calculation of all preceding q(n)'s to arrive at a larger q(n), we can simply sum the results along the way to obtain our result.
JavaScript code:
function numPyramids(n){
    var distinctPartitions = [1,1],
        pentagonals = {},
        m = _m = 1,
        pentagonal_m = 2,
        result = 1;
    while (pentagonal_m / 2 <= n){
        pentagonals[pentagonal_m] = Math.abs(_m);
        m++;
        _m = m % 2 == 0 ? -m / 2 : Math.ceil(m / 2);
        pentagonal_m = _m * (3 * _m - 1);
    }
    for (var k=2; k<=n; k++){
        distinctPartitions[k] = pentagonals[k] ? Math.pow(-1,pentagonals[k]) : 0;
        var cs = [1,1,-1,-1],
            c = 0;
        for (var i in pentagonals){
            if (i / 2 > k)
                break;
            distinctPartitions[k] += cs[c]*distinctPartitions[k - i / 2];
            c = c == 3 ? 0 : c + 1;
        }
        result += distinctPartitions[k];
    }
    return result;
}

console.log(numPyramids(6)); // 13
