TAOCP Vol 1: Overflowing multiple stacks proof - algorithm

I am self-studying TAOCP and trying to make sense of the solution to the following problem from Chapter 2.2.2 Linear Lists: Sequential Allocation.
[30] If σ is any sequence of insertions and deletions such as (12), let s0(σ) be the number of stack overflows that occur when the simple method of Fig. 4 is applied to σ with initial conditions (11), and let s1(σ) be the corresponding number of overflows with respect to other initial conditions such as (13). Prove that s0(σ) ≤ s1(σ) + L∞ − L0.
For s0, the initial conditions are that BASE[j] = TOP[j] = L0 for 1 <= j <= n (11), and BASE[n+1] = L∞. In other words, initially all space (L∞ − L0) is given to the last stack (the n-th stack), and all stacks are empty. s1 can be any other initial condition, such as evenly dividing all space among the n empty stacks.
Note that when stack i overflows, the algorithm is to find the nearest subsequent stack that is not at full capacity, and "move things up one notch." If all subsequent stacks (with respect to i) are at full capacity, then find the nearest previous stack (with respect to i) that is not full, and "move things down one notch." Lastly, if we cannot find any room for the new stack entry, then we must give up.
Our intuition is the following:
It is clear that many of the first stack overflows that occur with this method could be eliminated if we chose our initial conditions wisely, instead of allocating all space initially to the nth stack as suggested in (11). ... No matter how well the initial allocation is set up, it can save at most a fixed number of overflows, and the effect is noticeable only in the early stages of a program run.
The given solution is this:
First show that BASE[j]0 ≤ BASE[j]1 at all times. Then observe that each overflow for stack i in s0(σ) that does not also overflow in s1(σ) occurs at a time when stack i has gotten larger than ever before, yet its new size is not more than the original size allocated to stack i in s1(σ).
I don't think that the "larger than ever before" part of the statement is true. A stack's capacity can shrink as a result of another stack overflowing. Say stack i is full, then a deletion from stack i occurs, then stack i shrinks (because a neighboring stack overflows and takes the freed space), and lastly an insertion into stack i occurs. In this case an overflow occurs in stack i when stack i grows larger, but not "larger than ever before." For this reason, I do not understand the solution, or see what I am supposed to see, I suppose.
I am trying to figure out a way to prove that s0(σ) will encounter at most L∞ − L0 more overflows than s1(σ) will, but am currently stumped. Any help would be appreciated :-)

I think you might have a valid objection, but I’m not near my copy of TAoCP, and I’m not sure that I have correctly implemented the algorithm (see Python below).
Let’s suppose we have n = 4 stacks. We start with the “bad” initial conditions (s0), where stacks 0, 1, 2 each have capacity 0 and stack 3 has capacity 4, and the “good” initial conditions (s1), where each stack has capacity 1.
A bad sequence is: push stack 2, pop stack 2, push stack 1, pop stack 1, push stack 0, push stack 1, push stack 2. Each push overflows s0 but not s1.
import random

def push(L, U, i):
    # L is the BASE array (length n + 1), U is the TOP array (length n).
    # Returns True when the push overflows and entries have to be shifted to
    # make room; returns False for a plain push or when there is no room at all.
    if U[i] < L[i + 1]:
        U[i] += 1
        return False
    # Overflow: find the nearest subsequent stack with room and move things up one notch.
    for k in range(i + 1, len(U)):
        if U[k] < L[k + 1]:
            for j in range(k, i, -1):
                U[j] += 1
                L[j] += 1
            U[i] += 1
            return True
    # Otherwise find the nearest previous stack with room and move things down one notch.
    for k in range(i - 1, -1, -1):
        if U[k] < L[k + 1]:
            for j in range(k + 1, i):
                L[j] -= 1
                U[j] -= 1
            L[i] -= 1
            return True
    # No room anywhere: give up.
    return False

def pop(L, U, i):
    if L[i] < U[i]:
        U[i] -= 1

def test():
    for t in range(1000000):
        n = 4
        cap = 1
        L0 = [0] * n + [n * cap]                   # "bad" initial conditions: all space to the last stack
        U0 = L0[:n]
        L1 = list(range(0, (n + 1) * cap, cap))    # "good" initial conditions: even split
        U1 = L1[:n]
        c0 = 0
        c1 = 0
        ops = []
        for r in range(7):
            i = random.randrange(n)
            if random.randrange(2):
                c0 += push(L0, U0, i)
                c1 += push(L1, U1, i)
                ops.append("push {}".format(i))
            else:
                pop(L0, U0, i)
                pop(L1, U1, i)
                ops.append("pop {}".format(i))
        assert L0[-1] == n * cap == L1[-1]
        assert all(U0[i] - L0[i] == U1[i] - L1[i] for i in range(n))
        assert c0 <= c1 + L1[-1], "\n".join(ops + [str(c0), str(c1)])

if __name__ == "__main__":
    test()

Related

Find the max sum of removed elements from head or tail?

You are given a stack of N integers such that the first element represents the top of the stack and the last element represents the bottom of the stack. You need to pop at least one element from the stack. At any one moment, you can convert the stack into a queue. The bottom of the stack represents the front of the queue. You cannot convert the queue back into a stack. Your task is to remove exactly K elements such that the sum of the K removed elements is maximised.
Print the maximum possible sum M of the K removed elements
Input: N=10, K=4
stack = [10 9 1 2 3 4 5 6 7 8]
Output: 34
Explanation:
Pop two elements from the stack. i.e {10,9}
Then convert the stack into queue and remove first 2 elements from the
queue. i.e {8,7}.
The maximum possible sum is 10+9+8+7 = 34
I was thinking of solving it with a greedy algorithm, with code like the following:
stk = [10, 9, 1, 30, 3, 4, 5, 100, 1, 8]
n = 10
k = 4
removed = 0
top = 0
sum = 0
bottom = len(stk)-1
while removed < k:
    if stk[top] >= stk[bottom]:
        sum += stk[top]
        top += 1
    else:
        sum += stk[bottom]
        bottom -= 1
    removed += 1
print(sum)
Under the normal test case (the given one) it'll work, but it'll fail in many other scenarios, like:
[10, 9, 1, 30, 3, 4, 5, 100, 1, 8]
Any suggestions on how to improve on this?
The data structure gives you the option to select 1, ..., n values from the top of the structure and then m elements from the bottom of the structure, where K = m + n. You can find the maximum by starting out with the sum of the first K elements of the structure and then working your way backwards, replacing the n'th element with the first element from the back. Work backwards until only one stack element is taken. Keep track of the maximum along the way.
In python:
lst = [10, 9, 1, 30, 3, 4, 5, 100, 1, 8]
K = 4
sum_ = sum(lst[0:K])
max_so_far = sum_
for i in range(K-1):
    sum_ = sum_ - lst[K-1-i] + lst[-i-1]
    max_so_far = max(sum_, max_so_far)
print(max_so_far)
The running time is O(n).
If you carefully look at the problem, it basically boils down to:
Select x elements from the start and y elements from the end of the array, such that the sum is maximum and x+y = K.
That is a pretty simple problem to solve, which basically requires this algorithm:
cur = sum(last K elements)
ans = cur
for i in range(0, K):
    cur = cur + array[i] - array[n - K + i] #picking elements from the start and removing elements from the end
    ans = max(ans, cur)
// code in C language
/*
Idea:
1. We know that whether it is a stack or a queue, pops/deletes happen at the end or the start of the array; we make use of that.
2. K elements have to be removed from the stack such that their SUM is maximal.
3. Popping of elements has to happen at the start or end of the stack, and the number of elements to be removed is K.
4. We take all possibilities of K, i.e. if we take n elements from the front part of the stack then K-n elements have to be selected from the back of the stack.
   e.g.: K = 5 --> possibilities:
   (0 5) (1 4) (2 3) (3 2) (4 1) (5 0) --> as you can observe, the first index increases to K and the second index decays from K to 0.
   (0 5) means selecting 0 elements from the front of the stack and 5 elements from the back of the stack;
   similarly for the other possibilities.
We take each possibility, calculate its sum, store the sums in a separate array sum[], and the max of sum[] will be the answer. The possibility (suppose a b) at which we get the max will be the moment where you can convert the stack into a queue: before the stack is converted we have popped a elements from the stack, i.e. from the front of the array, and after the stack is converted to a queue we delete b elements at the front of the queue, i.e. the back of the stack.
*/
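Since the actual C code is not included above, here is a rough Python sketch of the same enumerate-all-splits idea (function and variable names are my own; it assumes, per the problem statement, that at least one element is popped from the top):

def max_removed_sum(stack, K):
    n = len(stack)
    best = None
    # a elements from the front (top of the stack), K - a from the back (front of the queue)
    for a in range(1, K + 1):
        b = K - a
        s = sum(stack[:a]) + sum(stack[n - b:])   # stack[n:] is empty when b == 0
        best = s if best is None else max(best, s)
    return best

print(max_removed_sum([10, 9, 1, 2, 3, 4, 5, 6, 7, 8], 4))  # 34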

Find final square in matrix walking like a spiral

Given an A x A matrix and a number of movements N.
And walking like a spiral:
right while possible, then
down while possible, then
left while possible, then
up while possible, repeat until got N.
Image with example (A = 8; N = 36)
In this example case, the final square is (4; 7).
My question is: Is it possible to use a generic formula to solve this?
Yes, it is possible to calculate the answer.
To do so, it will help to split up the problem into three parts.
(Note: I start counting at zero to simplify the math. This means that you'll have to add 1 to some parts of the answer. For instance, my answer to A = 8, N = 36 would be the final square (3; 6), which has the label 35.)
(Another note: this answer is quite similar to Nyavro's answer, except that I avoid the recursion here.)
In the first part, you calculate the labels on the diagonal:
(0; 0) has label 0.
(1; 1) has label 4*(A-1). The cycle can be evenly split into four parts (with your labels: 1..7, 8..14, 15..21, 22..27).
(2; 2) has label 4*(A-1) + 4*(A-3). After taking one cycle around the A x A matrix, your next cycle will be around a (A - 2) x (A - 2) matrix.
And so on. There are plenty of ways to now figure out the general rule for (K; K) (when 0 < K < A/2). I'll just pick the one that's easiest to show:
4*(A-1) + 4*(A-3) + 4*(A-5) + ... + 4*(A-(2*K-1)) =
4*A*K - 4*(1 + 3 + 5 + ... + (2*K-1)) =
4*A*K - 4*(K + (0 + 2 + 4 + ... + (2*K-2))) =
4*A*K - 4*(K + 2*(0 + 1 + 2 + ... + (K-1))) =
4*A*K - 4*(K + 2*(K*(K-1)/2)) =
4*A*K - 4*(K + K*(K-1)) =
4*A*K - 4*(K + K*K - K) =
4*A*K - 4*K*K =
4*(A-K)*K
(Note: check that 4*(A-K)*K = 28 when A = 8 and K = 1. Compare this to the label at (2; 2) in your example.)
Now that we know what labels are on the diagonal, we can figure out how many layers (say K) we have to remove from our A x A matrix so that the final square is on the edge. If we do this, then answering our question
What are the coordinates (X; Y) when I take N steps in an A x A matrix?
can be done by calculating this K and instead solve the question
What are the coordinates (X - K; Y - K) when I take N - 4*(A-K)*K steps in an (A - 2*K) x (A - 2*K) matrix?
To do this, we should find the largest integer K such that K < A/2 and 4*(A-K)*K <= N.
The solution to this is K = floor(A/2 - sqrt(A*A-N)/2), which you get by solving 4*(A-K)*K <= N as a quadratic in K and taking the branch with K < A/2.
All that remains is to find out the coordinates of a square that is N along the edge of some A x A matrix:
if 0*E <= N < 1*E, the coordinates are (0; N);
if 1*E <= N < 2*E, the coordinates are (N - E; E);
if 2*E <= N < 3*E, the coordinates are (E; 3*E - N); and
if 3*E <= N < 4*E, the coordinates are (4*E - N; 0).
Here, E = A - 1.
To conclude, here is a naive (layerNumber gives incorrect answers for large values of a due to float inaccuracy) Haskell implementation of this answer:
finalSquare :: Integer -> Integer -> Maybe (Integer, Integer)
finalSquare a n
  | Just (x', y') <- edgeSquare a' n' = Just (x' + k, y' + k)
  | otherwise                         = Nothing
  where
    k  = layerNumber a n
    a' = a - 2*k
    n' = n - 4*(a-k)*k

edgeSquare :: Integer -> Integer -> Maybe (Integer, Integer)
edgeSquare a n
  | n < 1*e   = Just (0, n)
  | n < 2*e   = Just (n - e, e)
  | n < 3*e   = Just (e, 3*e - n)
  | n < 4*e   = Just (4*e - n, 0)
  | otherwise = Nothing
  where
    e = a - 1

layerNumber :: Integer -> Integer -> Integer
layerNumber a n = floor $ aa/2 - sqrt(aa*aa-nn)/2
  where
    aa = fromInteger a
    nn = fromInteger n
Here is a possible solution:
f a n | n < (a-1)*1 = (0, n)
      | n < (a-1)*2 = (n-(a-1), a-1)
      | n < (a-1)*3 = (a-1, 3*(a-1)-n)
      | n < (a-1)*4 = (4*(a-1)-n, 0)
      | otherwise   = add (1,1) (f (a-2) (n - 4*(a-1)))
  where
    add (x1, y1) (x2, y2) = (x1+x2, y1+y2)
This is a basic solution; it may be generalized further - I just don't know how much generalization you need. But you can get the idea.
Edit
Notes:
The solution is for 0-based index
Some check for existence is required (n >= a*a)
I'm going to propose a relatively simple workaround here which generates all the indices in O(A^2) time so that they can later be accessed in O(1) for any N. If A changes, however, we would have to execute the algorithm again, which would once more consume O(A^2) time.
I suggest you use a structure like this to store the indices to access your matrix:
Coordinate[] indices = new Coordinate[A*A]
Where Coordinate is just a pair of int.
You can then fill your indices array by using some loops:
(This implementation uses 1-based array access. Correct expressions containing i, sentinel and currentDirection accordingly if this is an issue.)
Coordinate[] directions = { {1, 0}, {0, 1}, {-1, 0}, {0, -1} };
Coordinate c = new Coordinate(1, 1);
int currentDirection = 1;
int i = 1;
int sentinel = A;
int sentinelIncrement = A - 1;
boolean sentinelToggle = false;
while (i <= A * A) {
    indices[i] = c;
    if (i >= sentinel) {
        sentinel += sentinelIncrement;
        if (sentinelToggle) {
            sentinelIncrement -= 1;
        }
        sentinelToggle = !sentinelToggle;
        currentDirection = currentDirection mod 4 + 1;
    }
    c += directions[currentDirection];
    i++;
}
Alright, off to the explanation: I'm using a variable called sentinel to keep track of where I need to switch directions (directions are simply switched by cycling through the array directions).
The value of sentinel is incremented in such a way that it always has the index of a corner in our spiral. In your example the sentinel would take on the values 8, 15, 22, 28, 34, 39... and so on.
Note that the index of "sentinel" increases twice by 7 (8, 15 = 8 + 7, 22 = 15 + 7), then by 6 (28 = 22 + 6, 34 = 28 + 6), then by 5 and so on. In my while loop I used the boolean sentinelToggle for this. Each time we hit a corner of the spiral (this is exactly iff i == sentinel, which is where the if-condition comes in) we increment the sentinel by sentinelIncrement and change the direction we're heading. If sentinel has been incremented twice by the same value, the if-condition if (sentinelToggle) will be true, so sentinelIncrement is decreased by one. We have to decrease sentinelIncrement because our spiral gets smaller as we go on.
This goes on as long as i <= A*A, that is, as long as our array indices still has unfilled entries.
Note that this does not give you a closed formula for a spiral coordinate in respect to N (which would be O(1) ); instead it generates the indices for all N which takes up O(A^2) time and after that guarantees access in O(1) by simply calling indices[N].
The O(A^2) cost hopefully shouldn't hurt too badly, because I'm assuming that you'll also need to fill your matrix at some point, which also takes O(A^2).
If efficiency is a problem, consider getting rid of sentinelToggle so it doesn't mess up branch prediction. Instead, decrement sentinelIncrement every time the if-condition is met. To get the same effect for your sentinel value, simply start sentinelIncrement at (A - 1) * 2 + 1 and every time the if-condition is met, execute:
sentinel += sentinelIncrement / 2
The integer division will have the same effect as only decreasing sentinelIncrement every second time. I didn't do this whole thing in my version because I think it might be more easily understandable with just a boolean value.
Hope this helps!
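For reference, here is a simpler Python variant of the same precompute-then-look-up idea (my own sketch, not the answer's Java): walk the spiral once, turning whenever the next cell would leave the grid or revisit a cell, store every coordinate in O(A^2), and answer any N in O(1) afterwards. Coordinates are 0-based.

def spiral_indices(A):
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # right, down, left, up
    visited = [[False] * A for _ in range(A)]
    indices = []
    r = c = d = 0
    for _ in range(A * A):
        indices.append((r, c))
        visited[r][c] = True
        dr, dc = directions[d]
        nr, nc = r + dr, c + dc
        if not (0 <= nr < A and 0 <= nc < A) or visited[nr][nc]:
            d = (d + 1) % 4                            # turn clockwise
            dr, dc = directions[d]
            nr, nc = r + dr, c + dc
        r, c = nr, nc
    return indices

idx = spiral_indices(8)
print(idx[35])   # (3, 6): the question's N = 36 square, in 0-based coordinates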

Number of unique sequences of 3 digits (-1,0,1) given a length that matches a sum

Say you have a vertical game board of length n (being the number of spaces). And you have a three-sided die that has the options: go forward one, stay, and go back one. If you go below or above the number of board game spaces it is an invalid game. The only valid move once you reach the end of the board is "stay". Given an exact number of die rolls t, is it possible to algorithmically work out the number of unique sequences of die rolls that result in a winning game?
So far I've tried producing a list of every possible combination of (-1,0,1) for the given number of die rolls and sorting through the list to see if any add up to the length of the board and also meet all the requirements for being a valid game. But this is impractical for dice rolls above 20.
For example:
t=1, n=2; Output=1
t=3, n=2; Output=3
You can use a dynamic programming approach. The sketch of a recurrence is:
M(0, 1) = 1
M(t, n) = M(t-1, n-1) + M(t-1, n) + M(t-1, n+1)
Of course you have to consider the border cases (like going off the board or not allowing to exit the end of the board, but it's easy to code that).
Here's some Python code:
def solve(N, T):
    M, M2 = [0]*N, [0]*N
    M[0] = 1
    for i in xrange(T):
        M, M2 = M2, M
        for j in xrange(N):
            M[j] = (j>0 and M2[j-1]) + M2[j] + (j+1<N-1 and M2[j+1])
    return M[N-1]

print solve(3, 2) #1
print solve(2, 1) #1
print solve(2, 3) #3
print solve(5, 20) #19535230
Bonus: fancy "one-liner" with list comprehension and reduce
def solve(N, T):
    return reduce(
        lambda M, _: [(j>0 and M[j-1]) + M[j] + (j<N-2 and M[j+1]) for j in xrange(N)],
        xrange(T), [1]+[0]*N)[-1]
Let M be an N by N matrix with M[i, j] = 1 if |i-j| <= 1 and 0 otherwise (and the special case, for the "stay" rule, of M[N, N-1] = 0).
This matrix counts paths of length 1 from position i to position j.
To find paths of length t, simply raise M to the t'th power. This can be performed efficiently by linear algebra packages.
The solution can be read off: M^t[1, N].
For example, computing paths of length 20 on a board of size 5 in an interactive Python session:
>>> import numpy
>>> M = numpy.matrix('1 1 0 0 0;1 1 1 0 0; 0 1 1 1 0; 0 0 1 1 1; 0 0 0 0 1')
>>> M
matrix([[1, 1, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 1, 1, 1, 0],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 1]])
>>> M ** 20
matrix([[31628466, 51170460, 51163695, 31617520, 19535230],
[51170460, 82792161, 82787980, 51163695, 31617520],
[51163695, 82787980, 82792161, 51170460, 31628465],
[31617520, 51163695, 51170460, 31628466, 19552940],
[ 0, 0, 0, 0, 1]])
So there's M^20[1, 5], or 19535230 paths of length 20 from start to finish on a board of size 5.
Try a backtracking algorithm. Recursively "dive down" to depth t and only continue with dice values that could still result in a valid state. Probably by passing a "remaining budget" around.
For example, n=10, t=20: when you have reached depth 10 of 20 and your budget is still 10 (= the steps forward and backwards seem to have cancelled out), the next recursion steps until depth t would discontinue the 0 and -1 possibilities, because they could not result in a valid state at the end.
A backtracking algorithm for this case is still very heavy (exponential), but better than first blowing up a bubble with all possibilities and then filtering.
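A minimal sketch of that pruned backtracking (my own code, under the stated rules: start on space 1, finish on space n, and only "stay" is allowed once the end is reached):

def count_paths(n, t):
    def rec(pos, rolls_left):
        if n - pos > rolls_left:          # prune: the end is no longer reachable
            return 0
        if pos == n:
            return 1                      # only "stay" remains, exactly one way
        if rolls_left == 0:
            return 0
        total = rec(pos, rolls_left - 1)          # stay
        total += rec(pos + 1, rolls_left - 1)     # forward
        if pos > 1:                               # never step off the board
            total += rec(pos - 1, rolls_left - 1)
        return total
    return rec(1, t)

print(count_paths(2, 1))  # 1
print(count_paths(2, 3))  # 3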
Since zeros can be added anywhere, we'll multiply those possibilities by the different arrangements of (-1)'s:
X (space 1) X (space 2) X (space 3) X (space 4) X
(-1)'s can only appear in spaces 1,2 or 3, not in space 4. I got help with the mathematical recurrence that counts the number of ways to place minus ones without skipping backwards.
JavaScript code:
function C(n,k){if(k==0||n==k)return 1;var p=n;for(var i=2;i<=k;i++)p*=(n+1-i)/i;return p}

function sumCoefficients(arr,cs){
  var s = 0, i = -1;
  while (arr[++i]){
    s += cs[i] * arr[i];
  }
  return s;
}

function f(n,t){
  var numMinusOnes = (t - (n-1)) >> 1,
      result = C(t,n-1),
      numPlaces = n - 2,
      cs = [];

  for (var i=1; numPlaces-i>=i-1; i++){
    cs.push(-Math.pow(-1,i) * C(numPlaces + 1 - i,i));
  }

  var As = new Array(cs.length),
      An;

  As[0] = 1;

  for (var m=1; m<=numMinusOnes; m++){
    var zeros = t - (n-1) - 2*m;

    An = sumCoefficients(As,cs);
    As.unshift(An);
    As.pop();

    result += An * C(zeros + 2*m + n-1,zeros);
  }

  return result;
}
Output:
console.log(f(5,20))
19535230

Number of Paths in a Triangle

I recently encountered a much more difficult variation of this problem, but realized I couldn't generate a solution for this very simple case. I searched Stack Overflow but couldn't find a resource that previously answered this.
You are given a triangle ABC, and you must compute the number of paths of a certain length that start and end at 'A'. Say our function f(3) is called; it must return the number of paths of length 3 that start and end at A: 2 (ABA, ACA).
I'm having trouble formulating an elegant solution. Right now, I've written a solution that generates all possible paths, but for larger lengths, the program is just too slow. I know there must be a nice dynamic programming solution that reuses sequences that we've previously computed but I can't quite figure it out. All help greatly appreciated.
My dumb code:
def paths(n,sequence):
    t = ['A','B','C']
    if len(sequence) < n:
        for node in set(t) - set(sequence[-1]):
            paths(n,sequence+node)
    else:
        if sequence[0] == 'A' and sequence[-1] == 'A':
            print sequence
Let PA(n) be the number of paths from A back to A in exactly n steps.
Let P!A(n) be the number of paths from B (or C) to A in exactly n steps.
Then:
PA(1) = 1
PA(n) = 2 * P!A(n - 1)
P!A(1) = 0
P!A(2) = 1
P!A(n) = P!A(n - 1) + PA(n - 1)
= P!A(n - 1) + 2 * P!A(n - 2) (for n > 2) (substituting for PA(n-1))
We can solve the difference equations for P!A analytically, as we do for Fibonacci, by noting that (-1)^n and 2^n are both solutions of the difference equation, and then finding coefficients a, b such that P!A(n) = a*2^n + b*(-1)^n.
We end up with the equation P!A(n) = 2^n/6 + (-1)^n/3, and PA(n) being 2^(n-1)/3 - 2(-1)^n/3.
This gives us code:
def PA(n):
    return (pow(2, n-1) + 2*pow(-1, n-1)) / 3

for n in xrange(1, 30):
    print n, PA(n)
Which gives output:
1 1
2 0
3 2
4 2
5 6
6 10
7 22
8 42
9 86
10 170
11 342
12 682
13 1366
14 2730
15 5462
16 10922
17 21846
18 43690
19 87382
20 174762
21 349526
22 699050
23 1398102
24 2796202
25 5592406
26 11184810
27 22369622
28 44739242
29 89478486
The trick is not to try to generate all possible sequences. The number of them increases exponentially, so the memory required would be too great.
Instead, let f(n) be the number of sequences of length n beginning and ending with A, and let g(n) be the number of sequences of length n beginning with A but ending with B. To get things started, clearly f(1) = 1 and g(1) = 0. For n > 1 we have f(n) = 2g(n - 1), because the penultimate letter will be B or C and there are equal numbers of each. We also have g(n) = f(n - 1) + g(n - 1), because if a sequence begins with A and ends with B, the penultimate letter is either A or C.
These rules allow you to compute the numbers really quickly using memoization.
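For illustration, here is a small iterative sketch of that f/g recurrence (my own code, not the answerer's):

def f(n):
    fn, gn = 1, 0                  # f(1) = 1, g(1) = 0
    for _ in range(n - 1):
        fn, gn = 2 * gn, fn + gn   # f(n) = 2*g(n-1), g(n) = f(n-1) + g(n-1)
    return fn

print(f(3))  # 2
print(f(5))  # 6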
My method is like this:
Define DP(l, end) = # of paths that end at end and have length l
Then DP(l,'A') = DP(l-1,'B') + DP(l-1,'C'), and similarly for DP(l,'B') and DP(l,'C')
Then for the base case, i.e. l = 1, I check whether the end is 'A': if it is not, I return 0, otherwise 1, so that all bigger states only count paths that start at 'A'
The answer is simply DP(n, 'A'), where n is the length
Below is sample code in C++; call it with 3, which gives you 2 as the answer; call it with 5, which gives you 6 as the answer:
ABCBA, ACBCA, ABABA, ACACA, ABACA, ACABA
#include <bits/stdc++.h>
using namespace std;

int dp[500][500], n;

int DP(int l, int end){
    if(l<=0) return 0;
    if(l==1){
        if(end != 'A') return 0;
        return 1;
    }
    if(dp[l][end] != -1) return dp[l][end];
    if(end == 'A') return dp[l][end] = DP(l-1, 'B') + DP(l-1, 'C');
    else if(end == 'B') return dp[l][end] = DP(l-1, 'A') + DP(l-1, 'C');
    else return dp[l][end] = DP(l-1, 'A') + DP(l-1, 'B');
}

int main() {
    memset(dp,-1,sizeof(dp));
    scanf("%d", &n);
    printf("%d\n", DP(n, 'A'));
    return 0;
}
EDITED
To answer OP's comment below:
Firstly, DP (dynamic programming) is always about state.
Remember that here our state is DP(l, end), which represents the # of paths having length l and ending at end. So to implement the states in a program, we usually use an array, so DP[500][500] is nothing special but the space to store the states DP(l, end) for all possible l and end (that's why I said: if you need a bigger length, change the size of the array).
But then you may ask: I understand the first dimension, which is for l, 500 means l can be as large as 500, but how about the second dimension? I only need 'A', 'B', 'C', so why use 500?
Here is another trick (of C/C++): the char type can indeed be used as an int type by default, whose value is equal to its ASCII number. I do not remember the ASCII table of course, but I know that around 300 will be enough to represent all the ASCII characters, including A (65), B (66), C (67).
So I just declare any size large enough to represent 'A', 'B', 'C' in the second dimension (that means 100 would actually be more than enough, but I just do not think that hard and declare 500, as they are almost the same in terms of order).
So you asked what DP[3][1] means: it means nothing, as I do not need / calculate the second dimension when it is 1. (Or one can think that the state dp(3,1) does not have any physical meaning in our problem.)
In fact, I am always using 65, 66, 67.
So DP[3][65] means the # of paths of length 3 that end at char(65) = 'A'
You can do better than the dynamic programming/recursion solution others have posted, for the given triangle and more general graphs. Whenever you are trying to compute the number of walks in a (possibly directed) graph, you can express this in terms of the entries of powers of a transfer matrix. Let M be a matrix whose entry m[i][j] is the number of paths of length 1 from vertex i to vertex j. For a triangle, the transfer matrix is
0 1 1
1 0 1
1 1 0
Then M^n is a matrix whose i,j entry is the number of paths of length n from vertex i to vertex j. If A corresponds to vertex 1, you want the 1,1 entry of M^n.
Dynamic programming and recursion for the counts of paths of length n in terms of the paths of length n-1 are equivalent to computing M^n with n multiplications, M * M * M * ... * M, which can be fast enough. However, if you want to compute M^100, instead of doing 100 multiplies, you can use repeated squaring: Compute M, M^2, M^4, M^8, M^16, M^32, M^64, and then M^64 * M^32 * M^4. For larger exponents, the number of multiplies is about c log_2(exponent).
Instead of using that a path of length n is made up of a path of length n-1 and then a step of length 1, this uses that a path of length n is made up of a path of length k and then a path of length n-k.
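As an illustration (my own sketch, not part of the original answer), here is the transfer-matrix idea with repeated squaring in plain Python. Note that in the question's convention a path of n letters uses n - 1 edges, so f(n) is the (1,1) entry of M^(n-1):

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(M, e):
    # exponentiation by repeated squaring: about log2(e) multiplications
    R = [[int(i == j) for j in range(3)] for i in range(3)]   # identity matrix
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

M = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

print(mat_pow(M, 2)[0][0])   # 2 = f(3)
print(mat_pow(M, 4)[0][0])   # 6 = f(5)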
We can solve this with a for loop, although Anonymous described a closed form for it.
function f(n){
  var as = 0, abcs = 1;
  for (n=n-3; n>0; n--){
    as = abcs - as;
    abcs *= 2;
  }
  return 2*(abcs - as);
}
Here's why:
Look at one strand of the decision tree (the other one is symmetrical):
A
B                               C...
A               C
B       C       A       B
A   C   A   B   B   C   A   C
B C A B B C A C A C A B B C A B
Num A's      Num ABC's (starting with first B on the left)
 0            1
 1 (1-0)      2
 1 (2-1)      4
 3 (4-1)      8
 5 (8-3)     16
11 (16-5)    32
Clearly, we can't use the strands that end with the A's...
You can write a recursive brute force solution and then memoize it (aka top down dynamic programming). Recursive solutions are more intuitive and easy to come up with. Here is my version:
from functools import lru_cache

# search space (we have a triangle with nodes)
nodes = ["A", "B", "C"]

n = 3  # the requested path length (number of nodes); f(3) should be 2

@lru_cache(maxsize=None)  # memoize!
def recurse(length, steps):
    # `steps` is the net displacement around the triangle, so we are back at
    # "A" exactly when steps is a multiple of 3. If the length of the path is
    # n and the last node is "A", then it's a valid path and we can count it.
    if length == n and steps % 3 == 0:
        return 1
    # we don't want paths having len > n.
    if length > n:
        return 0
    # from each position, we have two possibilities, either go to the next
    # node or the previous node. Total paths will be the sum of both the
    # possibilities. We do this recursively.
    return recurse(length+1, steps+1) + recurse(length+1, steps-1)

print(recurse(1, 0))  # the path starts as just "A" (length 1, displacement 0); prints 2

Find the minimum number of operations required to compute a number using a specified range of numbers

Let me start with an example -
I have a range of numbers from 1 to 9. And let's say the target number that I want is 29.
In this case the minimum number of operations required would be 2: (9*3)+2 = 29. Similarly, for 18 the minimum number of operations is 1 (9*2 = 18).
I can use any of the 4 arithmetic operators - +, -, / and *.
How can I programmatically find out the minimum number of operations required?
Thanks in advance for any help provided.
clarification: integers only, no decimals allowed mid-calculation. i.e. the following is not valid (from comments below): ((9/2) + 1) * 4 == 22
I must admit I didn't think about this thoroughly, but for my purpose it doesn't matter if decimal numbers appear mid-calculation. ((9/2) + 1) * 4 == 22 is valid. Sorry for the confusion.
For the special case where set Y = [1..9] and n > 0:
n <= 9 : 0 operations
n <= 18 : 1 operation (+)
otherwise : Remove any divisor found in Y. If this is not enough, do a recursion on the remainder for all offsets -9 .. +9. Offset 0 can be skipped as it has already been tried.
Notice how division is not needed in this case. For other Y this does not hold.
This algorithm is exponential in log(n). The exact analysis is a job for somebody with more knowledge about algebra than I.
For more speed, add pruning to eliminate some of the search for larger numbers.
Sample code:
def findop(n, maxlen=9999):
    # Return a short postfix list of numbers and operations

    # Simple solution to small numbers
    if n<=9: return [n]
    if n<=18: return [9,n-9,'+']

    # Find direct multiply
    x = divlist(n)
    if len(x) > 1:
        mults = len(x)-1
        x[-1:] = findop(x[-1], maxlen-2*mults)
        x.extend(['*'] * mults)
        return x

    shortest = 0
    for o in range(1,10) + range(-1,-10,-1):
        x = divlist(n-o)
        if len(x) == 1: continue
        mults = len(x)-1
        # We spent len(divlist) + mults + 2 fields for offset.
        # The last number is expanded by the recursion, so it doesn't count.
        recursion_maxlen = maxlen - len(x) - mults - 2 + 1
        if recursion_maxlen < 1: continue
        x[-1:] = findop(x[-1], recursion_maxlen)
        x.extend(['*'] * mults)
        if o > 0:
            x.extend([o, '+'])
        else:
            x.extend([-o, '-'])
        if shortest == 0 or len(x) < shortest:
            shortest = len(x)
            maxlen = shortest - 1
            solution = x[:]

    if shortest == 0:
        # Fake solution, it will be discarded
        return '#' * (maxlen+1)

    return solution

def divlist(n):
    l = []
    for d in range(9,1,-1):
        while n%d == 0:
            l.append(d)
            n = n/d
    if n>1: l.append(n)
    return l
The basic idea is to test all possibilities with k operations, for k starting from 0. Imagine you create a tree of height k that branches for every possible new operation with operand (4*9 branches per level). You need to traverse and evaluate the leaves of the tree for each k before moving to the next k.
I didn't test this pseudo-code:
for every k from 0 to infinity
    for every n from 1 to 9
        if compute(n,0,k):
            return k

boolean compute(n,j,k):
    if (j == k):
        return (n == target)
    else:
        for each operator in {+,-,*,/}:
            for every i from 1 to 9:
                if compute((n operator i),j+1,k):
                    return true
        return false
It doesn't take into account arithmetic operator precedence and parentheses; that would require some rework.
Really cool question :)
Notice that you can start from the end! From your example (9*3)+2 = 29 is equivalent to saying (29-2)/3=9. That way we can avoid the double loop in cyborg's answer. This suggests the following algorithm for set Y and result r:
nextleaves = {r}
nops = 0
while(true):
    nops = nops+1
    leaves = nextleaves
    nextleaves = {}
    for leaf in leaves:
        for y in Y:
            if (leaf+y) or (leaf-y) or (leaf*y) or (leaf/y) is in Y:
                return(nops)
            else:
                add (leaf+y) and (leaf-y) and (leaf*y) and (leaf/y) to nextleaves
This is the basic idea; performance can certainly be improved, for instance by avoiding "backtracks", such as r+a-a or r*a*b/a.
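Here is a rough Python sketch of that backward search (my own code; for simplicity it only follows exact integer divisions, although the OP later allows intermediate decimals):

def min_ops(r, Y=range(1, 10)):
    if r in Y:
        return 0
    frontier = {r}
    seen = {r}
    nops = 0
    while frontier:
        nops += 1
        nxt = set()
        for leaf in frontier:
            for y in Y:
                candidates = [leaf + y, leaf - y, leaf * y]
                if leaf % y == 0:
                    candidates.append(leaf // y)   # only exact divisions
                for c in candidates:
                    if c in Y:
                        return nops
                    if c not in seen:
                        seen.add(c)
                        nxt.add(c)
        frontier = nxt

print(min_ops(29))  # 2  (29 = 9*3 + 2)
print(min_ops(18))  # 1  (18 = 9*2)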
I guess my idea is similar to the one of Peer Sommerlund:
For big numbers, you advance fast, by multiplying by big digits.
Is Y=29 prime? If not, divide it by the largest divisor in (2 to 9).
Else you could subtract a number to reach a divisible number. 27 is fine, since it is divisible by 9, so
(29-2)/9=3 =>
3*9+2 = 29
So maybe - I didn't think about this to the end: search for the next number below Y that is divisible by 9. If you don't reach a number which is a digit, repeat.
The formula is the steps reversed.
(I'll try it for some numbers. :) )
I tried with 2551, which is
echo $((((3*9+4)*9+4)*9+4))
But I didn't test whether every intermediate result is prime.
But
echo $((8*8*8*5-9))
is 2 operations less. Maybe I can investigate this later.
