Find the number of all possible paths in a grid, from (0, 0) to (n, n) - algorithm

I don't know how to find the number of all possible paths in a grid from a point A to a point B.
Point A is at (0,0) and point B is at (n,n).
A can move up, down, right, and left, and can't move onto visited points.
While moving, A must stay inside the grid: A = (x,y) with 0 <= x <= n and 0 <= y <= n.

You can solve this problem with recursive backtracking, but there's another approach which I think is more interesting.
If we work out the first few cases by hand we find that:
A 1x1 square has 1 path
A 2x2 square has 2 paths
A 3x3 square has 12 paths
If we then go to OEIS (the Online Encyclopedia of Integer Sequences) and put in the search phrase "1,2,12 paths", the very first result is A007764 which is entitled "Number of nonintersecting (or self-avoiding) rook paths joining opposite corners of an n X n grid".
Knowing what integer sequence you're looking for unlocks significant mathematical resources, including source code to generate the sequence, related sequences, and best-known values.
The known values of the sequence (n followed by the number of paths) are:
1 1
2 2
3 12
4 184
5 8512
6 1262816
7 575780564
8 789360053252
9 3266598486981642
10 41044208702632496804
11 1568758030464750013214100
12 182413291514248049241470885236
13 64528039343270018963357185158482118
14 69450664761521361664274701548907358996488
15 227449714676812739631826459327989863387613323440
16 2266745568862672746374567396713098934866324885408319028
17 68745445609149931587631563132489232824587945968099457285419306
18 6344814611237963971310297540795524400449443986866480693646369387855336
19 1782112840842065129893384946652325275167838065704767655931452474605826692782532
20 1523344971704879993080742810319229690899454255323294555776029866737355060592877569255844
21 3962892199823037560207299517133362502106339705739463771515237113377010682364035706704472064940398
22 31374751050137102720420538137382214513103312193698723653061351991346433379389385793965576992246021316463868
23 755970286667345339661519123315222619353103732072409481167391410479517925792743631234987038883317634987271171404439792
24 55435429355237477009914318489061437930690379970964331332556958646484008407334885544566386924020875711242060085408513482933945720
25 12371712231207064758338744862673570832373041989012943539678727080484951695515930485641394550792153037191858028212512280926600304581386791094
26 8402974857881133471007083745436809127296054293775383549824742623937028497898215256929178577083970960121625602506027316549718402106494049978375604247408
27 17369931586279272931175440421236498900372229588288140604663703720910342413276134762789218193498006107082296223143380491348290026721931129627708738890853908108906396
You can generate the first few terms yourself on paper or via recursive backtracking, per the other answer.

I would suggest solving this with naive recursion.
Keep a set visited of places that you have visited. In pseudo-code that is deliberately not any particular language:
function recursive_call(i, j, visited = none)
    if visited is none then
        visited = set()
    end if
    if i = n and j = n then
        return 1
    else if (i, j) in visited or (i, j) not in grid then
        return 0
    else
        total = 0
        add (i, j) to visited
        for direction in directions
            (new_i, new_j) = move(i, j, direction)
            total += recursive_call(new_i, new_j, visited)
        end for
        remove (i, j) from visited
        return total
    end if
end function
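Here is a direct Python translation of that pseudocode, as a sketch; it counts paths on an n x n grid of points (matching the OEIS indexing above), and the move set and bounds check are filled in as the obvious assumptions:

def count_paths(n):
    # Count self-avoiding paths from (0, 0) to (n-1, n-1) on an
    # n x n grid of points, moving up, down, left, or right.
    visited = set()

    def walk(i, j):
        if not (0 <= i < n and 0 <= j < n) or (i, j) in visited:
            return 0
        if i == n - 1 and j == n - 1:
            return 1
        visited.add((i, j))
        total = sum(walk(i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        visited.discard((i, j))
        return total

    return walk(0, 0)

for n in range(1, 6):
    print(n, count_paths(n))  # 1 1, 2 2, 3 12, 4 184, 5 8512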

Related

Getting minimum possible number after performing operations on array elements

Question: Given an integer n denoting the number of particles initially, and an array of the sizes of these particles.
These particles can go into any number of simulations (possibly none).
In one simulation, two particles combine to give another particle whose size is the difference between their sizes (possibly 0).
Find the smallest particle that can be formed.
Constraints:
n <= 1000
size <= 1e9
Example 1
3
30 10 8
Output
2
Explanation: 10 - 8 = 2 is the smallest we can achieve.
Example 2
4
1 2 4 8
Output
1
Explanation: We cannot make another 1 so as to get 0, so the smallest without any simulation is 1.
Example 3
5
30 27 26 10 6
Output
0
Explanation: 30 - 26 = 4, 10 - 6 = 4, 4 - 4 = 0.
My thinking: I can only think of the brute-force solution, which will obviously time out. Can anyone help me out here with just the approach? I think it's related to dynamic programming.
I think this can be solved in O(n^2 log n).
Consider your third example: 30 27 26 10 6.
Sort the input to make it: 6 10 26 27 30.
Build a list of differences for each (i, j) combination:
i = 1 -> 4 20 21 24
i = 2 -> 16 17 20
i = 3 -> 1 4
i = 4 -> 3
Why is there no list for i = 5? Because particle 5 has already been considered for combination with every other particle in the earlier lists.
Now consider the cases below.
Case 1
Particle i is not combined with any other particle yet. This means the other particles must have been combined among themselves.
This suggests that we search for A[i] in the lists j = 1 to N, except for j = i.
Get the nearest value. This can be done using binary search, because the difference lists are sorted! The candidate result is then |A[i] - NearestValueFound|.
Case 2
Particle i is combined with some other particle.
Take the example i = 1 above and suppose it is combined with particle 2; the resulting difference is 4.
So search for 4 in all the lists except list 2, because we consider particle 2 already combined with particle 1 and its list must not be reused.
Do we have a good match? Yes: 4 is found in list 3. The match needn't give 0 in general; here |4 - 4| = 0, so we just return 0.
Repeat Cases 1 and 2 for all particles. The time complexity is O(n^2 log n), because for each i you do a binary search on every list except list i.
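Below is a sketch of that search in Python; the names smallest_particle and nearest_gap are mine, not from the answer. Note that, taken literally, the Case 2 loop visits every pair, which costs a factor of n more than the claimed bound, and (like the description above) the sketch can pair differences that reuse a particle, so treat it as a heuristic:

from bisect import bisect_left

def nearest_gap(sorted_list, x):
    # Smallest |x - e| over elements e of sorted_list (inf if empty).
    k = bisect_left(sorted_list, x)
    cands = []
    if k < len(sorted_list):
        cands.append(sorted_list[k] - x)
    if k > 0:
        cands.append(x - sorted_list[k - 1])
    return min(cands) if cands else float("inf")

def smallest_particle(sizes):
    a = sorted(sizes)
    n = len(a)
    best = a[0]  # doing no simulation at all is always allowed
    # diffs[i] holds a[j] - a[i] for j > i; sorted input keeps each list sorted.
    diffs = [[a[j] - a[i] for j in range(i + 1, n)] for i in range(n)]
    # Case 1: particle i stays whole and meets some pairwise difference.
    for i in range(n):
        for k in range(n):
            if k != i:
                best = min(best, nearest_gap(diffs[k], a[i]))
    # Case 2: the difference of pair (i, j) meets a difference from another list.
    for i in range(n):
        for j in range(i + 1, n):
            d = a[j] - a[i]
            best = min(best, d)
            for k in range(n):
                if k not in (i, j):
                    best = min(best, nearest_gap(diffs[k], d))
    return best

print(smallest_particle([30, 27, 26, 10, 6]))  # 0, matching example 3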
import itertools as it

N = int(input())
nums = list()
for i in range(N):
    nums.append(int(input()))
_min = min(nums)

def go(li):
    # Brute force: try every pair, replace it by its difference, recurse.
    global _min
    if len(li) > 1:
        for i in it.combinations(li, 2):
            temp = abs(i[0] - i[1])
            if _min > temp:
                _min = temp
            k = li.copy()
            k.remove(i[0])
            k.remove(i[1])
            k.append(temp)
            go(k)

go(nums)
print(_min)

How do I find the lowest numbers in a square table, only one per column and one per row [duplicate]

Suppose we have a table of numbers like this (we can assume it is a square table):
20 2 1 3 4
5 1 14 8 9
15 12 17 17 11
16 1 1 15 18
20 13 15 5 11
Your job is to calculate the maximum sum of n numbers where n is the number of rows or columns in the table. The catch is each number must come from a unique row and column.
For example, selecting the numbers at (0,0), (1,1), (2,2), (3,3), and (4,4) is acceptable, but (0,0), (0,1), (2,2), (3,3), and (4,4) is not because the first two numbers were pulled from the same row.
My (laughable) solution to this problem is iterating through all the possible permutations of the rows and columns. This works for small grids but, of course, it is incredibly slow as n gets big. It has O(n!) time complexity, if I'm not mistaken (sample Python code below).
I really think this can be solved in better time, but I'm not coming up with anything sufficiently clever.
So my question is, what algorithm should be used to solve this?
If it helps, this problem seems similar to the knapsack problem.
import itertools
import re

grid = """20 2 1 3 4
5 1 14 8 9
15 12 17 17 11
16 1 1 15 18
20 13 15 5 11"""
grid = [[int(x) for x in re.split(r"\s+", line)] for line in grid.split("\n")]

possible_column_indexes = itertools.permutations(range(len(grid)))

max_sum = 0
max_positions = []
for column_indexes in possible_column_indexes:
    current_sum = 0
    current_positions = []
    for row, col in enumerate(column_indexes):
        current_sum += grid[row][col]
        current_positions.append("(%d, %d)" % (row, col))
    if current_sum > max_sum:
        max_sum = current_sum
        max_positions = current_positions

print("Max sum is", max_sum)
for position in max_positions:
    print(position)
This is the maximum cost bipartite matching problem. The classical way to solve it is by using the Hungarian algorithm.
Basically you have a bipartite graph: the left set is the rows and the right set is the columns. Each edge from row i to column j has cost matrix[i, j]. Find the matching that maximizes the costs.
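If you are working in Python, you don't have to hand-roll the Hungarian algorithm: SciPy ships a solver for exactly this problem, scipy.optimize.linear_sum_assignment. A minimal sketch on the grid from the question (the maximize=True flag requires a reasonably recent SciPy):

import numpy as np
from scipy.optimize import linear_sum_assignment

grid = np.array([[20, 2, 1, 3, 4],
                 [5, 1, 14, 8, 9],
                 [15, 12, 17, 17, 11],
                 [16, 1, 1, 15, 18],
                 [20, 13, 15, 5, 11]])

rows, cols = linear_sum_assignment(grid, maximize=True)  # one column per row
print(grid[rows, cols].sum())  # 82
print(list(zip(rows, cols)))   # the chosen (row, column) pairs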
For starters, you can use dynamic programming.
In your straightforward approach, you are doing exactly the same computation many, many times.
For example, at some point you answer the question: "For the last three columns with rows 1 and 2 already taken, how do I maximize the sum?" You compute the answer to this question twice, once when you pick row 1 from column 1 and row 2 from column 2, and once when you pick them vice-versa.
So don't do that. Cache the answer -- and also cache all similar answers to all similar questions -- and re-use them.
I do not have time right now to analyze the running time of this approach. I think it is O(2^n) or thereabouts. More later maybe...
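To make the caching idea concrete, here is a sketch of that dynamic program in Python (the names are mine): the set of already-used columns is encoded as a bitmask, the current row is implied by how many columns are used, and lru_cache does the memoization. There are 2^n states with n transitions each, so it runs in O(n * 2^n) time rather than O(n!):

from functools import lru_cache

def max_assignment_sum(grid):
    n = len(grid)

    @lru_cache(maxsize=None)
    def best(used):
        # 'used' is a bitmask of the columns already taken; since rows are
        # filled top to bottom, the current row is the number of set bits.
        row = bin(used).count("1")
        if row == n:
            return 0
        return max(grid[row][col] + best(used | (1 << col))
                   for col in range(n) if not used & (1 << col))

    return best(0)

grid = [[20, 2, 1, 3, 4],
        [5, 1, 14, 8, 9],
        [15, 12, 17, 17, 11],
        [16, 1, 1, 15, 18],
        [20, 13, 15, 5, 11]]
print(max_assignment_sum(grid))  # 82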

Maximum path cost in matrix

Can anyone tell me an algorithm for finding the maximum path cost in an NxM matrix, starting from the top-left corner and ending at the bottom-right corner, where left, right, and down moves are allowed and the matrix may contain negative costs? A cell can be visited any number of times, and after a cell is visited its cost is replaced with 0.
Constraints
1 <= n*m <= 4*10^6
INPUT
4 5
1 2 3 -1 -2
-5 -8 -1 2 -150
1 2 3 -250 100
1 1 1 1 20
OUTPUT
37
(An explanation of the output was given as an image in the original post.)
Since you also have negative costs, use Bellman-Ford. Change the sign of all the costs (make negative values positive and positive values negative), then find the shortest path; that path will be the longest one, because you changed the signs.
If no cost is ever negative, use Dijkstra's shortest-path algorithm instead; again make all values negative first, and it will return the longest path with its cost.
Your matrix is a directed graph. You are trying to find a path (max or min) from index (0,0) to (n-1,n-1).
You need the following things to represent it as a graph.
You need a linked list where each node holds first_node, second_node, and the cost to move from the first node to the second.
Then an array of linked lists, one list per array index. If, for example, there is an edge from 0 to 5 and from 0 to 1 in an undirected graph, then adj[0] contains 5 and 1, and adj[5] and adj[1] each contain 0.
If you want a directed graph, then simply add 5 to adj[0] and do not add 0 to adj[5]; this means there is a path from 0 to 5 but not from 5 to 0.
So far the linked list records only which nodes are connected, not their cost. You have to add an extra variable to each node which keeps the cost between the two nodes.
Now put these cost-carrying lists into your array, and you have a graph on which to run a shortest- or longest-path algorithm.
If you want an intelligent algorithm, you can use A* with a heuristic; I guess Manhattan distance will be best.
If the cost of your edges is never negative, use Dijkstra.
If costs can be negative, use the Bellman-Ford algorithm.
You can always find the longest path by converting minus signs to plus and plus to minus and then running a shortest-path algorithm. The path found will be the longest.
I answered this question, and as you said in the comments, look at point two. If this is an assignment, then the main idea is to ensure monotonicity (consistency).
h stands for the heuristic cost; c(A,B) stands for the accumulated cost of moving from A to B.
Consistency says that at each node, h(A) <= c(A,B) + h(B): if you move from A to B, the estimated total cost must not decrease (can you do something with your values so that this property holds?). Once you satisfy this condition, every node that A* chooses will be part of your path from source to goal, because that is the path with the shortest (after negation, longest) value.
pathmax: you can enforce monotonicity. If there is a path from the source S through A to B such that f(S...AB) < f(S...A), then set f(S...AB) = max(f(S...AB), f(S...A)).
Since moving up is not allowed, paths always look like a set of horizontal intervals that share at least 1 position (for the down move). Answers can be characterized as, say
struct Answer {
int layer[N][2]; // layer[i][0] and [i][1] represent interval start&end
// with 0 <= layer[i][0] <= layer[i][1] < M
// layer[0][0] = 0, layer[N-1][1] = M-1
// and non-empty intersection of layers i and i+1
};
An alternative encoding is to note only layer widths and offsets to each other; but you would still have to make sure that the last layer includes the exit cell.
Assuming that you have a maxLayer routine that finds the highest-scoring interval in each layer (cost O(M) per layer), and that all such layers overlap, this would yield an O(N+M) optimal answer. However, it may be necessary to expand intervals to ensure that overlap occurs; and there may be multiple highest-scoring intervals in a given layer. At this point I would model the problem as a directed graph:
each layer has one node per score-maximizing horizontal continuous interval.
nodes from one layer are connected to nodes in the next layer according to the cost of expanding both intervals to achieve at least 1 overlap. If they already overlap, the cost is 0. Edge costs will always be zero or negative (otherwise, either source or target intervals could have scored higher by growing bigger). Add the (expanded) source-node interval value to the connection cost to get an "edge weight".
You can then run Dijkstra on this graph (negate edge weights so that the "longest path" is returned) to find the optimal path. Even better, since all paths pass once and only once through each layer, you only need to keep track of the best route to each node, and only need to build nodes and edges for the layer you are working on.
Implementation details ahead
to calculate maxLayer in O(M), use Kadane's Algorithm, modified to return all maximal intervals instead of only the first. Where the classic algorithm discards an interval and starts anew, you instead keep a copy of that contender to use later.
given the sample input, the maximal intervals would look like this:
[0]
1 2 3 -1 -2 [1 2 3]
-5 -8 -1 2 -150 => [2]
1 2 3 -250 100 [1 2 3] [100]
1 1 1 1 20 [1 1 1 1 20]
[0]
given those intervals, they would yield the following graph:
(0)
| =>0
(+6)
\ -1=>5
\
(+2)
=>7/ \ -150=>-143
/ \
(+7) (+100)
=>12 \ / =>-43
\ /
(+24)
| =>37
(0)
when two edges arrive at a single node (row 1 1 1 1 20), carry forward only the highest incoming value.
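Here is a sketch in Python of that modified Kadane routine; the filtering rule at the end (keep positive-sum contenders, or the single best interval if none is positive) is my assumption, chosen to reproduce the intervals shown above:

def maximal_intervals(row):
    # Kadane's algorithm, modified: wherever the classic version would
    # discard the running interval and restart, keep the finished
    # contender as ((start, end), sum) instead of throwing it away.
    intervals, best, total, start = [], None, 0, 0
    for i, v in enumerate(row):
        total += v
        if best is None or total > best:
            best, current = total, (start, i)
        if total < 0:                      # restart after this position
            intervals.append((current, best))
            best, total, start = None, 0, i + 1
    if best is not None:
        intervals.append((current, best))
    positive = [iv for iv in intervals if iv[1] > 0]
    return positive or [max(intervals, key=lambda iv: iv[1])]

for row in [[1, 2, 3, -1, -2], [-5, -8, -1, 2, -150],
            [1, 2, 3, -250, 100], [1, 1, 1, 1, 20]]:
    print(maximal_intervals(row))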
For each element in a row, find the maximum cost that can be obtained if we move horizontally across the row, given that we go through that element.
E.g., for the row
1 2 3 -1 -2
the maximum cost for each element, obtained by moving horizontally while passing through that element, will be
6 6 6 5 3
Explanation:
For element 3: we can move backwards horizontally, picking up 1 and 2. We do not move forward horizontally, since the values -1 and -2 would reduce the cost.
So the maximum cost for 3 = 1 + 2 + 3 = 6.
The matrix of maximum horizontal costs for each element, for the input you have given in the description, will be
6 6 6 5 3
-5 -7 1 2 -148
6 6 6 -144 100
24 24 24 24 24
Since we can move vertically from one row to the row below, update the maximum cost for each element as follows:
cost[i][j] = cost[i][j] + cost[i-1][j]
So the final cost matrix will be:
6 6 6 5 3
1 -1 7 7 -145
7 5 13 -137 -45
31 29 37 -113 -21
The maximum value in the last row of the above matrix gives you the required output, i.e. 37.
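A sketch of this procedure in Python (the function name is mine). Each row's through-costs are built from the best prefix and suffix stretches; note that the recurrence, taken literally, chains all the down moves through the same column j, which is enough for this instance:

def max_path_cost(matrix):
    # For each row: left[j]/right[j] are the best sums of a horizontal
    # stretch ending/starting at j; through[j] = left[j] + right[j] - row[j]
    # is the best horizontal cost passing through column j.
    prev = None
    for row in matrix:
        m = len(row)
        left, right = row[:], row[:]
        for j in range(1, m):
            left[j] += max(0, left[j - 1])
        for j in range(m - 2, -1, -1):
            right[j] += max(0, right[j + 1])
        through = [left[j] + right[j] - row[j] for j in range(m)]
        # cost[i][j] = through[i][j] + cost[i-1][j], as described above
        prev = through if prev is None else [t + p for t, p in zip(through, prev)]
    return max(prev)

matrix = [[1, 2, 3, -1, -2],
          [-5, -8, -1, 2, -150],
          [1, 2, 3, -250, 100],
          [1, 1, 1, 1, 20]]
print(max_path_cost(matrix))  # 37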

MATLAB: Fast creation of random symmetric Matrix with fixed degree (sum of rows)

I am searching for a method to create, in a fast way, a random matrix A with the following properties:
A = transpose(A)
A(i,i) = 0 for all i
A(i,j) >= 0 for all i, j
sum(A) =~ degree; the row sums are distributed according to a distribution I want to specify (here =~ means approximate equality).
The distribution degree comes from a matrix orig, specifically degree=sum(orig), thus I know that matrices with this distribution exist.
For example: orig=[0 12 7 5; 12 0 1 9; 7 1 0 3; 5 9 3 0]
orig =
0 12 7 5
12 0 1 9
7 1 0 3
5 9 3 0
sum(orig)=[24 22 11 17];
Now one possible matrix is A = [0 11 5 8; 11 0 4 7; 5 4 0 2; 8 7 2 0]:
A =
0 11 5 8
11 0 4 7
5 4 0 2
8 7 2 0
with sum(A)=[24 22 11 17].
I have been trying this for quite some time, but unfortunately my two ideas didn't work.
Version 1:
I switch two random elements Nswitch times: A(k1,k3)--; A(k1,k4)++; A(k2,k3)++; A(k2,k4)--; (and the transposed elements as well).
Unfortunately, Nswitch = log(E)*E (with E = sum(sum(nn))) is required for the matrices to be sufficiently uncorrelated. As my E > 5,000,000, this is not feasible (in particular, as I need at least 10 such matrices).
Version 2:
I create the matrix from scratch, according to the distribution. The idea is to fill every row i with degree(i) entries, based on the distribution of degree:
nn = orig;
nnR = zeros(size(nn));
for i = 1:length(nn)
    degree = sum(nn);
    howmany = degree(i);
    degree(i) = 0;
    full = rld_cumsum(degree, 1:length(degree)); % run-length decoding helper (defined elsewhere)
    rr = randi(length(full), [1, howmany]);
    ff = full(rr);
    xx = i*ones([1, length(ff)]);
    nnR = nnR + accumarray([xx(:), ff(:)], 1, size(nnR));
end
A = nnR;
However, while sum(A') equals degree, sum(A) systematically deviates from degree, and I am not able to find the reason for that.
Small deviations from degree are fine, of course, but there seem to be systematic deviations, in particular when the matrices contain large numbers in some places.
I would be very happy if somebody could either show me a fast method for version 1, or a reason for the systematic deviation of the distribution in version 2, or a method to create such matrices in a different way. Thank you!
Edit:
This is the problem in matsmath's proposed solution:
Imagine you have the matrix:
orig =
0 12 3 1
12 0 1 9
3 1 0 3
1 9 3 0
with r(i)=[16 22 7 13].
Step 1: r(1)=16, my random integer partition is p(i)=[0 7 3 6].
Step 2: Check that all p(i)<=r(i), which is the case.
Step 3:
My random matrix starts out looking like
A =
0 7 3 6
7 0 . .
3 . 0 .
6 . . 0
with the new row sum vector rnew=[r(2)-p(2),...,r(n)-p(n)]=[15 4 7]
Second iteration (here the problem occurs):
Step 1: rnew(1)=15, my random integer partition is p(i)=[0 A B]: rnew(1)=15=A+B.
Step 2: Check that all p(i) <= rnew(i), which gives A <= 4 and B <= 7. So A+B <= 11, but A+B has to be 15. Contradiction :-/
Edit 2:
This is the code representing (to the best of my knowledge) the solution posted by David Eisenstat:
orig = [0 12 3 1; 12 0 1 9; 3 1 0 3; 1 9 3 0];
w = [2.2406 4.6334 0.8174 1.6902];
xfull = zeros(4);
for ii = 1:1000
    rndmat = [poissrnd(w(1),1,4); poissrnd(w(2),1,4); poissrnd(w(3),1,4); poissrnd(w(4),1,4)];
    kkk = rndmat.*(ones(4)-eye(4)); % remove diagonal
    hhh = sum(sum(orig))/sum(sum(kkk))*kkk; % normalisation
    xfull = xfull + hhh;
end
xf = xfull/ii;
disp(sum(orig)); % gives [16 22 7 13]
disp(sum(xf)); % gives [14.8337 9.6171 18.0627 15.4865] (obvious systematic problem)
disp(sum(xf')) % gives [13.5230 28.8452 4.9635 10.6683] (also systematically different from [16 22 7 13])
Since it's enough to approximately preserve the degree sequence, let me propose a random distribution where each entry above the diagonal is chosen according to a Poisson distribution. My intuition is that we want to find weights w_i such that the i,j entry for i != j has mean w_i*w_j (all of the diagonal entries are zero). This gives us a nonlinear system of equations:
for all i, (sum_{j != i} w_i*w_j) = d_i,
where d_i is the degree of i. Equivalently,
for all i, w_i * (sum_j w_j) - w_i^2 = d_i.
The latter can be solved by applying Newton's method as described below from a starting solution of w_i = d_i / sqrt(sum_j d_j).
Once we have the w_i's, we can sample repeatedly using poissrnd to generate samples of multiple Poisson distributions at once.
(If I have time, I'll try implementing this in numpy.)
The Jacobian matrix of the equation system for a 4 by 4 problem is
(w_2 + w_3 + w_4) w_1 w_1 w_1
w_2 (w_1 + w_3 + w_4) w_2 w_2
w_3 w_3 (w_1 + w_2 + w_4) w_3
w_4 w_4 w_4 (w_1 + w_2 + w_3).
In general, let A be a diagonal matrix where A_{i,i} = sum_j w_j - 2*w_i. Let u = [w_1, ..., w_n]' and v = [1, ..., 1]'. The Jacobian can be written J = A + u*v'. The inverse is given by the Sherman--Morrison formula
J^-1 = (A + u*v')^-1 = A^-1 - (A^-1*u*v'*A^-1) / (1 + v'*A^-1*u).
For the Newton step, we need to compute J^-1*y for some given y. This can be done straightforwardly in time O(n) using the above equation. I'll add more detail when I get the chance.
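Since the answer mentions a possible numpy implementation, here is a sketch of one (my code, not the answerer's): a Newton iteration whose linear solve uses the Sherman-Morrison formula, so each step is O(n), followed by the Poisson sampling. There is no safeguarding against a zero diagonal in A, so treat it as illustrative:

import numpy as np

def solve_weights(d, iters=50, tol=1e-12):
    # Newton's method for w_i*(sum_j w_j) - w_i^2 = d_i, starting from
    # w_i = d_i / sqrt(sum_j d_j). The Jacobian is J = A + w*1' with
    # A = diag(sum_j w_j - 2*w_i), inverted via Sherman-Morrison in O(n).
    d = np.asarray(d, dtype=float)
    w = d / np.sqrt(d.sum())
    for _ in range(iters):
        S = w.sum()
        F = w * S - w**2 - d            # residual of the system
        a = S - 2.0 * w                 # diagonal of A
        Ainv_F, Ainv_w = F / a, w / a
        w -= Ainv_F - Ainv_w * (Ainv_F.sum() / (1.0 + Ainv_w.sum()))
        if np.abs(F).max() < tol:
            break
    return w

def sample_matrix(w, rng=np.random.default_rng()):
    # Symmetric, zero diagonal, entry (i, j) ~ Poisson(w_i * w_j) for i != j,
    # so row i has expected sum d_i.
    n = len(w)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.poisson(w[i] * w[j])
    return A

w = solve_weights([16, 22, 7, 13])   # a root is w ~ [2.2406 4.6334 0.8174 1.6902]
print(sample_matrix(w).sum(axis=0))  # row sums scatter around [16 22 7 13]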
First approach (based on version 2)
Let your row sum vector, given by the matrix orig, be [r(1), r(2), ..., r(n)].
Step 1. Take a random integer partition of the integer r(1) into exactly n-1 parts, say p(2), p(3), ..., p(n)
Step 2. Check if p(i)<=r(i) for all i=2...n. If not, go to Step 1.
Step 3. Fill out the first row and column of your random matrix with the entries 0, p(2), ..., p(n), and consider the new row sum vector [r(2)-p(2), ..., r(n)-p(n)].
Repeat these steps with a matrix of order n-1.
The point is, that you randomize one row at a time, and reduce the problem to searching for a matrix of size one less.
As pointed out by OP in the comment, this naive algorithm fails. The reason is that the matrices in question have a further necessary condition on their entries as follows:
FACT:
If A is an orig matrix with row sums [r(1), r(2), ..., r(n)] then necessarily for every i=1..n it holds that r(i)<=-r(i)+sum(r(j),j=1..n).
That is, any row sum, say the ith, r(i), is necessarily at most as big as the sum of the other row sums (not including r(i)).
In light of this, a revised algorithm is possible. Note that in Step 2b. we check if the new row sum vector has the property discussed above.
Step 1. Take a random integer partition of the integer r(1) into exactly n-1 parts, say p(2), p(3), ..., p(n)
Step 2a. Check if p(i)<=r(i) for all i=2...n. If not, go to Step 1.
Step 2b. Check if r(i)-p(i)<=-r(i)+p(i)+sum(r(j)-p(j),j=2..n) for all i=2..n. If not, go to Step 1.
Step 3. Fill out the first row and column of your random matrix with the entries 0, p(2), ..., p(n), and consider the new row sum vector [r(2)-p(2), ..., r(n)-p(n)].
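For concreteness, a sketch of this revised construction in Python/numpy (the multinomial draw for the random integer partition and the retry limit are my assumptions):

import numpy as np

def random_row_matrix(r, rng=np.random.default_rng(), max_tries=10000):
    # Build a random symmetric nonnegative integer matrix with zero
    # diagonal and row sums r, one row at a time, per the steps above.
    n = len(r)
    if n == 1:
        return np.zeros((1, 1)) if r[0] == 0 else None
    for _ in range(max_tries):
        # Step 1: random composition of r[0] into n-1 nonnegative parts.
        p = rng.multinomial(r[0], [1.0 / (n - 1)] * (n - 1))
        rest = [r[i + 1] - p[i] for i in range(n - 1)]
        if any(x < 0 for x in rest):
            continue                          # Step 2a failed
        total = sum(rest)
        if any(2 * x > total for x in rest):
            continue                          # Step 2b failed
        sub = random_row_matrix(rest, rng, max_tries)
        if sub is None:
            continue
        A = np.zeros((n, n))                  # Step 3: fill row/column 1
        A[0, 1:] = A[1:, 0] = p
        A[1:, 1:] = sub
        return A
    return None

A = random_row_matrix([16, 22, 7, 13])
if A is not None:
    print(A.sum(axis=0))  # [16. 22. 7. 13.]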
Second approach (based on version 1)
I am not sure if this approach gives you random matrices, but it certainly gives you different matrices.
The idea here is to change some parts of your orig matrix locally, in a way which maintains all of its properties.
You should look for a random 2x2 submatrix below the main diagonal which contains strictly positive entries, say [[a,b],[c,d]], and perturb its contents by a random value r to [[a+r,b-r],[c-r,d+r]]. Make the same change above the main diagonal too, to keep your new matrix symmetric. The point is that the changes within the entries cancel each other out.
Of course, r should be chosen such that b-r >= 0 and c-r >= 0.
You can pursue this idea to modify larger submatrices too. For example, you might choose 3 random row coordinates r1, r2, r3 and 3 random column coordinates c1, c2, c3 and then change your orig matrix at the 9 positions (ri,cj) as follows: the 3x3 submatrix [[a b c],[d e f],[g h i]] becomes [[a-r b+r c],[d+r e f-r],[g h-r i+r]]. Do the same at the transposed places. Again, the random value r must be chosen so that a-r >= 0, f-r >= 0 and h-r >= 0. Moreover, c1 and r1, and c3 and r3, must be distinct, as you can't change the 0 entries on the main diagonal of the matrix orig. A sketch of the 2x2 variant follows below.
You can repeat such changes over and over again, say 100 times, until you find something which looks random. Note that this idea uses the fact that you have existing knowledge of a solution, namely the matrix orig, while the first approach does not use such knowledge at all.
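Here is a sketch of one such 2x2 switch in Python/numpy (my helper; the retry loop, the all-distinct index choice, and the integer step size are assumptions). It preserves symmetry, the zero diagonal, and every row and column sum:

import numpy as np

def switch_once(A, rng=np.random.default_rng()):
    # Pick rows i, j and columns k, l (all distinct, so the diagonal is
    # never touched), then add +r/-r in a 2x2 pattern and mirror it above
    # the diagonal. Row and column sums are unchanged by construction.
    n = A.shape[0]
    for _ in range(100):                      # retry until a legal pick
        i, j, k, l = rng.permutation(n)[:4]   # needs n >= 4
        if A[i, l] == 0 or A[j, k] == 0:
            continue                          # the entries to be decremented
        r = rng.integers(1, min(A[i, l], A[j, k]) + 1)
        for (p, q), s in (((i, k), r), ((i, l), -r),
                          ((j, k), -r), ((j, l), r)):
            A[p, q] += s
            A[q, p] += s                      # keep A symmetric
        return True
    return False

A = np.array([[0, 12, 7, 5],
              [12, 0, 1, 9],
              [7, 1, 0, 3],
              [5, 9, 3, 0]])
before = A.sum(axis=0)
for _ in range(100):
    switch_once(A)
print((A == A.T).all(), (A.sum(axis=0) == before).all())  # True True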
