Solving a constrained system of linear equations - matrix

I have a system of equations of the form y = Ax + b, where y, x and b are n×1 vectors and A is an n×n (symmetric) matrix.
So here is the wrinkle: not all of x is unknown. Certain rows of x are specified, and the corresponding rows of y are unknown. Below is an example
| 10 | | 5 -2 1 | | * | | -1 |
| * | = | -2 2 0 | | 1 | + | 1 |
| 1 | | 1 0 1 | | * | | 2 |
where * designates unknown quantities.
I have built a solver for problems such as the above in Fortran, but I wanted to know whether there is a decent, robust solver for these types of problems out there as part of LAPACK or MKL?
My solver is based on a sorting (permutation) vector, pivot = [1,3,2], which rearranges the x and y vectors according to known and unknown entries
| 10 | | 5 1 -2 | | * | | -1 |
| 1 | = | 1 1 0 | | * | + | 2 |
| * | | -2 0 2 | | 1 | | 1 |
and then solving using a block-matrix solution and LU decomposition
! solves an n×n system of equations where k values are known from the 'x' vector
function solve_linear_system(A,b,x_known,y_known,pivot,n,k) result(x)
    use iso_c_binding   ! for the c_int / c_double kinds
    use lu
    integer(c_int),intent(in) :: n, k, pivot(n)
    real(c_double),intent(in) :: A(n,n), b(n), x_known(k), y_known(n-k)
    real(c_double) :: x(n), y(n), r(n-k), A1(n-k,n-k), A3(n-k,k), b1(n-k)
    integer(c_int) :: i, j, u, code, d, indx(n-k)
    u = n-k
    ! store known 'x' and 'y' values
    x(pivot(u+1:n)) = x_known
    y(pivot(1:u))   = y_known
    ! define block matrices
    ! | y_known |   | A1  A3 | | *       |   | b1 |
    ! | *       | = | A3' A2 | | x_known | + | b2 |
    A1 = A(pivot(1:u), pivot(1:u))
    A3 = A(pivot(1:u), pivot(u+1:n))
    b1 = b(pivot(1:u))
    ! define new rhs vector
    r = y_known - matmul(A3, x_known) - b1
    ! solve 'A1*x = r' for 'x' with the LU decomposition from the NR book
    call ludcmp(A1,u,indx,d,code)
    call lubksb(A1,u,indx,r)
    ! store unknown 'x' values (written into 'r' by 'lubksb')
    x(pivot(1:u)) = r
end function
For the example above the solution is
| 10.0 | | 3.5 |
y = | -4.0 | x = | 1.0 |
| 1.0 | | -4.5 |
P.S. The linear systems typically have n <= 20 equations.

The problem with only unknowns is a linear least-squares problem.
Your a priori knowledge can be introduced with equality constraints (fixing some variables), transforming it into a linear equality-constrained least-squares problem.
There is indeed an algorithm within LAPACK solving the latter, called xGGLSE.
Here is an overview.
(It also seems you need to multiply b by -1 in your case to be compatible with the definition.)
Edit: On further inspection, I missed the unknowns within y. Ouch. This is bad.

First, I would rewrite your system into an AX = b form where A and b are known. In your example, provided that I didn't make any mistakes, it would give:
    | 5 0 1 |       | x1 |           | 13 |
A = | 2 1 0 |   X = | x2 |   and b = |  3 |
    | 1 0 1 |       | x3 |           | -1 |
Then you can use plenty of methods from various libraries, like LAPACK or BLAS, depending on the properties of your matrix A (positive definite, ...). As a starting point, I would suggest a simple method with a direct solve of the matrix A, especially since your matrices are small. There are also many iterative approaches (Jacobi, gradient methods, Gauss-Seidel, ...) that you can use for bigger cases.
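As a minimal illustration of that direct route (my own sketch, not from the original answer), NumPy solves the rewritten system above in one call:

import numpy as np

# Rewritten system from above; the second entry of X is really the unknown y2
A = np.array([[5., 0., 1.],
              [2., 1., 0.],
              [1., 0., 1.]])
b = np.array([13., 3., -1.])

print(np.linalg.solve(A, b))  # [ 3.5 -4.  -4.5], i.e. x1 = 3.5, y2 = -4, x3 = -4.5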
Edit: an idea to solve it in two steps.
First step: you can rewrite your system as two subsystems that have X and Y as unknowns, where each subsystem's dimension equals the number of unknowns in the corresponding vector.
The first subsystem, in X, will be AX = b, which can be solved by direct or iterative methods.
Second step: the second system, in Y, can be resolved directly once you know X, because Y will be expressed in the form Y = A'X + b'.
I think this approach is more general. A minimal NumPy sketch of the two-step idea is below.
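Here is that sketch (the index names are mine), checked against the example at the top of the page:

import numpy as np

A = np.array([[5., -2., 1.],
              [-2., 2., 0.],
              [1., 0., 1.]])
b = np.array([-1., 1., 2.])

ux = [0, 2]                  # rows where x is unknown (y known there)
kx = [1]                     # rows where x is known (y unknown there)
x_known = np.array([1.])
y_known = np.array([10., 1.])

# Step 1: solve the subsystem in the unknown x entries
x_unknown = np.linalg.solve(A[np.ix_(ux, ux)],
                            y_known - A[np.ix_(ux, kx)] @ x_known - b[ux])

# Step 2: the unknown y entries follow directly (the Y = A'X + b' form)
y_unknown = A[np.ix_(kx, ux)] @ x_unknown + A[np.ix_(kx, kx)] @ x_known + b[kx]

print(x_unknown, y_unknown)  # [ 3.5 -4.5] [-4.]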

Related

Max sum in the vector, according to the condition, which is defined by another vector

I have 2 variables in the data.frame: a, b.
I need to find the max sum of a, where sum of b = x.
Ok, for example:
| a | b |
|401| 2 |
|380| 3 |
|380| 2 |
|370| 1 |
So, for sum(b) = 1, max(sum(a)) = 370; for sum(b) = 2, max(sum(a)) = 401; etc.
How can I find a solution to this problem?
I'm not sure that this problem can be solved using linear programming.
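Reading the task as "choose a subset of rows whose b values sum to exactly x, maximizing the sum of the a values" (my interpretation of the example), this is a small knapsack-style dynamic program rather than an LP; a sketch in Python:

a = [401, 380, 380, 370]
b = [2, 3, 2, 1]

# best[s] = maximum sum of a over subsets whose b values sum to exactly s
best = {0: 0}
for ai, bi in zip(a, b):
    for s, total in list(best.items()):   # snapshot: each row used at most once
        if best.get(s + bi, -1) < total + ai:
            best[s + bi] = total + ai

print(best.get(1), best.get(2))  # 370 401, matching the examples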

Calculating final market distribution - competitive programming

I came across the following question while practicing competitive programming. I solved it manually, kind of designing an approach as I went, but my answer is wrong and I cannot imagine how to scale my approach.
Question:
N coffee chains are competing for market share in a fierce advertising battle. Each day, a percentage of customers will be convinced to switch from one chain to another.
Current market share and daily probability of customer switching is given. If the advertising runs forever, what will be the final distribution of market share?
Assumptions: Total market share is 1.0, probability that a customer switches is independent of other customers and days.
Example: 2 coffee chains, A and B. Market share of A: 0.4; market share of B: 0.6.
Each day, there is a 0.2 probability that a customer switches from A to B, and a 0.1 probability that a customer switches from B to A.
input: market_share=[0.4,0.6],
switch_prob = [[.8,.2],[.1,.9]]
output: [0.3333 0.6667]
Everything till here is part of a question, I did not form the example or assumptions, they were given with the question.
My_attempt: In my understanding, the switch probabilities indicate the probability of switching from A to B.
Hence,
market_share_of_A = current_market_share - lost_customers + gained_customers and
market_share_of_B = (1 - market_share_of_A)
iter_1:
lost_customers = 0.4 * 0.8 * 0.2 = 0.064
gained_customers = 0.6 * 0.2 * 0.1 = 0.012
market_share_of_A = 0.4 - 0.064 + 0.012 = 0.348
market_share_of_B = 1 - 0.348 = 0.652
iter_2:
lost_customers = 0.348 * 0.1 * 0.2 = 0.00696
gained_customers = 0.652 * 0.9 * 0.1 = 0.05868
market_share_of_A = 0.348 - 0.00696 + 0.05868 = 0.39972
market_share_of_B = 1 - 0.39972 = 0.60028
my answer: [0.39972, 0.60028]
As stated earlier, expected answers are [0.3333 0.6667].
I do not understand where I am wrong. If something is wrong, it has to be my understanding of the question. Please provide your thoughts.
In the example, they demonstrated an easy case with only two competitors. What if there are more? Let us say three: A, B, C. I think the input has to provide switch probabilities in the form [[0.1, 0.3, 0.6], ...] because A can lose its customers to B as well as C, and there would be many instances of that. Now I will have to compute at least two companies' market shares; the third one will be (1 - sum_of_all). And while computing B's market share, I will have to compute its lost customers as well as gained, and the formula would be (current - lost + gained). Gained will be the sum of gain_from_A and gain_from_C. Is this correct?
Following on from my comment, this problem can be expressed as a matrix equation.
The elements of the "transition" matrix, T(i, j) (dimensions N x N) are defined as follows:
i = j (diagonal): the probability of a customer staying with chain i
i != j (off-diagonal): the probability of a customer of chain j transferring to chain i
What is the physical meaning of this matrix? Let the market share state be represented by a vector P(i) of size N, whose i-th value is the market share of chain i. The vector P' = T * P is the next share state after each day.
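As a quick sanity check of this picture (my own sketch, not part of the original answer), simply iterating P' = T * P from the question's starting shares converges to the expected output:

import numpy as np

# Transition matrix of the toy example: columns are "from", rows are "to"
T = np.array([[0.8, 0.1],
              [0.2, 0.9]])
P = np.array([0.4, 0.6])   # initial market shares from the question

for _ in range(200):       # repeat the daily update many times
    P = T @ P
print(P)                   # ~[0.3333 0.6667]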
With that in mind, the equilibrium equation is given by T * P = P, i.e. the final state is invariant under transition T:
| T(1, 1) T(1, 2) T(1, 3) ... T(1, N) | | P(1) | | P(1) |
| T(2, 1) T(2, 2) ... | | P(2) | | P(2) |
| T(3, 1) ... | | P(3) | | P(3) |
| . . | * | . | = | . |
| . . | | . | | . |
| . . | | . | | . |
| T(N, 1) T(N, N) | | P(N) | | P(N) |
However, this is unsolvable by itself - P can only be determined up to a number of ratios between its elements (the technical name for this situation escapes me - as MBo suggests it is due to degeneracy). There is an additional constraint that the shares add up to 1:
P(1) + P(2) + ... P(N) = 1
We can choose an arbitrary share value (say, the Nth one) and replace it with this expression. Multiplying out, the first row of the equation is:
T(1, 1) P(1) + T(1, 2) P(2) + ... T(1, N) (1 - [P(1) + P(2) + ... P(N - 1)]) = P(1)
--> [T(1, 1) - T(1, N) - 1] P(1) + [T(1, 2) - T(1, N)] P(2) + ... + [T(1, N-1) - T(1, N)] P(N - 1) = -T(1, N)
The equivalent equation for the second row is:
[T(2, 1) - T(2, N)] P(1) + [T(2, 2) - T(2, N) - 1] P(2) + ... = -T(2, N)
To summarize the general pattern, we define:
A matrix S(i, j) (dimensions [N - 1] x [N - 1]):
- S(i, i) = T(i, i) - T(i, N) - 1
- S(i, j) = T(i, j) - T(i, N) (i != j)
A vector Q(i) of size N - 1 containing the first N - 1 elements of P(i)
A vector R(i) of size N - 1, such that R(i) = -T(i, N)
The equation then becomes S * Q = R:
| S(1, 1) S(1, 2) S(1, 3) ... S(1, N-1) | | Q(1) | | R(1) |
| S(2, 1) S(2, 2) ... | | Q(2) | | R(2) |
| S(3, 1) ... | | Q(3) | | R(3) |
| . . | * | . | = | . |
| . . | | . | | . |
| . . | | . | | . |
| S(N-1, 1) S(N-1, N-1) | | Q(N-1) | | R(N-1) |
Solving the above equation gives Q, which gives the first N - 1 share values (and of course the last one too from the constraint). Methods for doing so include Gaussian elimination and LU decomposition, both of which are more efficient than the naive route of directly computing Q = inv(S) * R.
Note that you can flip the signs in S and R for slightly more convenient evaluation.
The toy example given above turns out to be quite trivial:
| 0.8 0.1 | | P1 | | P1 |
| | * | | = | |
| 0.2 0.9 | | P2 | | P2 |
--> S = | -0.3 |, R = | -0.1 |
--> Q1 = P1 = -0.1 / -0.3 = 0.3333
P2 = 1 - P1 = 0.6667
An example for N = 3:
    | 0.1 0.2 0.3 |            | -1.2 -0.1 |        | -0.3 |
T = | 0.4 0.7 0.3 |   -->  S = |  0.1 -0.6 | ,  R = | -0.3 |
    | 0.5 0.1 0.4 |

--> Q = | 0.205479 | ,  P3 = 0.260274
        | 0.534247 |
Please forgive the Robinson Crusoe style formatting - I'll try to write these in LaTeX later for readability.
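For completeness, here is a small NumPy sketch of the whole construction (the function name and layout are mine); it reproduces both examples above:

import numpy as np

def equilibrium(T):
    # T is the column-stochastic transition matrix described above
    T = np.asarray(T, dtype=float)
    N = T.shape[0]
    S = T[:-1, :-1] - T[:-1, [-1]] - np.eye(N - 1)  # S(i,j) = T(i,j) - T(i,N), minus 1 on the diagonal
    R = -T[:-1, -1]                                 # R(i) = -T(i,N)
    Q = np.linalg.solve(S, R)                       # solve S * Q = R
    return np.append(Q, 1.0 - Q.sum())              # last share from the sum constraint

print(equilibrium([[0.8, 0.1],
                   [0.2, 0.9]]))          # [0.3333... 0.6666...]
print(equilibrium([[0.1, 0.2, 0.3],
                   [0.4, 0.7, 0.3],
                   [0.5, 0.1, 0.4]]))     # [0.205479 0.534247 0.260274]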

Kernel density estimation julia

I am trying to implement a kernel density estimation. However, my code does not provide the answer it should. It is written in Julia, but the code should be self-explanatory.
Here is the algorithm:
f_h(x) = 1/(n*h) * sum_{i=1}^{n} K((x - X_i)/h), with the uniform kernel K(u) = 1/2 for |u| <= 1, and 0 otherwise.
So the algorithm tests whether the distance between x and an observation X_i, weighted by some constant factor (the binwidth h), is less than one. If so, it assigns 0.5 / (n * h) to that value, where n = number of observations.
Here is my implementation:
#Kernel density function.
#Purpose: estimate the probability density function (pdf)
#of given observations
##param data: observations for which the pdf should be estimated
##return: returns an array with the estimated densities
function kernelDensity(data)

    #Uniform kernel function.
    ##param x: current x value
    ##param observation: x value of observation i
    ##param width: binwidth
    ##return: returns 1 if the absolute distance from
    #x (current) to x (observation), weighted by the binwidth,
    #is less than 1; else returns 0
    function uniformKernel(x, observation, width)
        u = (x - observation) / width
        abs(u) <= 1 ? 1 : 0
    end

    #number of observations in the data set
    n = length(data)

    #binwidth (set arbitrarily to 0.1)
    h = 0.1

    #vector that stores the pdf
    res = zeros(Real, n)

    #counter variable for the loop
    counter = 0

    #lower and upper limit of the x axis
    start = floor(minimum(data))
    stop  = ceil(maximum(data))

    #main loop
    ##linspace: divides the space from start to stop into n
    #equally spaced intervals
    for x in linspace(start, stop, n)
        counter += 1
        for observation in data
            #count all observations for which the kernel
            #returns 1 and multiply by 0.5, because the
            #kernel uses the absolute difference, which can be
            #either positive or negative
            res[counter] += 0.5 * uniformKernel(x, observation, h)
        end
        #divide by n times h
        res[counter] /= n * h
    end
    #return results
    res
end
#run function
##rand: generates 10 uniform random numbers between 0 and 1
kernelDensity(rand(10))
and this is being returned:
> 0.0
> 1.5
> 2.5
> 1.0
> 1.5
> 1.0
> 0.0
> 0.5
> 0.5
> 0.0
the sum of which is 8.5 (the cumulative distribution function; it should be 1).
So there are two bugs:
The values are not properly scaled: each number should be around one tenth of its current value. In fact, if the number of observations increases by a factor of 10^n, n = 1, 2, ..., then the cdf also increases by a factor of 10^n.
For example:
> kernelDensity(rand(1000))
> 953.53
They don't sum up to 10 (or to one, were it not for the scaling error). The error becomes more evident as the sample size increases: approximately 5% of the observations are not being included.
I believe that I implemented the formula 1:1, hence I really don't understand where the error is.
I'm not an expert on KDEs, so take all of this with a grain of salt, but a very similar (but much faster!) implementation of your code would be:
function kernelDensity{T<:AbstractFloat}(data::Vector{T}, h::T)
    n = length(data)
    res = zeros(T, n)
    lb = minimum(data); ub = maximum(data)
    for (i, x) in enumerate(linspace(lb, ub, n))
        for obs in data
            res[i] += abs((obs - x) / h) <= 1. ? 0.5 : 0.
        end
        res[i] /= (n * h)
    end
    sum(res)
end
If I'm not mistaken, the density estimate should integrate to 1; that is, we would expect kernelDensity(rand(100), 0.1)/100 to get at least close to 1. In the implementation above I'm getting there, give or take 5%, but then again we don't know that 0.1 is the optimal bandwidth (using h = 0.135 instead, I get there to within 0.1%), and the uniform kernel is known to be only about 93% "efficient".
In any case, there's a very good Kernel Density package in Julia available here, so you probably should just do Pkg.add("KernelDensity") instead of trying to code your own Epanechnikov kernel :)
To point out the mistake: you have n bins B_i of size 2h covering [0,1], so a random point X lands in an expected number of 2nh bins, and you divide by 2nh; each observation therefore contributes about 1 to the total. For n points, the expected value of your function's sum is n.
Actually, you have some bins of size < 2h (for example, if start = 0, half of the first bin is outside of [0,1]); factoring this in gives the bias.
Edit: By the way, the bias is easy to calculate if you assume that the bins have random locations in [0,1]. Then the bins are on average missing h/2 = 5% of their size.
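A quick numerical check of this argument (a sketch of mine in Python; the grid and bandwidth choices mirror the question):

import numpy as np

rng = np.random.default_rng(0)
n, h = 1000, 0.1
data = rng.random(n)

# res[i] = (number of observations within h of grid point i) / (2nh)
grid = np.linspace(data.min(), data.max(), n)
counts = (np.abs(grid[:, None] - data[None, :]) <= h).sum(axis=1)
print((counts / (2 * n * h)).sum())  # about n minus the ~5% edge bias, i.e. near 950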

How to find the maximum number of matching data?

Given a two-dimensional array such as:
-------------------------
|   | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 | X | X | O | O | X |
|---|---|---|---|---|---|
| 2 | O | O | O | X | X |
|---|---|---|---|---|---|
| 3 | X | X | O | X | X |
|---|---|---|---|---|---|
| 4 | X | X | O | X | X |
-------------------------
I have to find the largest set of cells currently containing O with a maximum of one cell per row and one per column.
For instance, in the previous example, the optimal answer is 3, when:
row 1 goes with column 4;
row 2 goes with column 1 (or 2);
row 3 (or 4) goes with column 3.
It seems that I have to find an algorithm in O(CR) (where C is the number of columns and R the number of rows).
My first idea was to sort the rows in ascending order according to their number of O's. Here is what the algorithm would look like:
For i From 0 To R
    For j From 0 To N
        If compatible(i, j)
            add(a[j], i)
Sort a according to a[j].size
result = 0
For i From 0 To N
    For j From 0 To a[i].size
        if used[a[i][j]] = false
            used[a[i][j]] = true
            result = result + 1
            break
Print result
Although I didn't find any counterexample, I don't know whether it always gives the optimal answer.
Is this algorithm correct? Is there any better solution?
Going off Billiska's suggestion, I found a nice implementation of the "Hopcroft-Karp" algorithm in Python here:
http://code.activestate.com/recipes/123641-hopcroft-karp-bipartite-matching/
This algorithm is one of several that solve the maximum bipartite matching problem. Using that code exactly as-is, here's how I solved the example problem in your post (in Python):
from collections import defaultdict

X = 0; O = 1
patterns = [[X, X, O, O, X],
            [O, O, O, X, X],
            [X, X, O, X, X],
            [X, X, O, X, X]]

G = defaultdict(list)
for i, row in enumerate(patterns):
    for j, cell in enumerate(row):
        if cell:
            G['Row ' + str(i)].append('Col ' + str(j))

solution = bipartiteMatch(G)  ### function defined in provided link
print len(solution[0]), solution[0]
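Alternatively (not part of the original answer), SciPy's csgraph module has a maximum bipartite matching routine that can be applied to the 0/1 grid directly; a minimal sketch:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

X, O = 0, 1
patterns = np.array([[X, X, O, O, X],
                     [O, O, O, X, X],
                     [X, X, O, X, X],
                     [X, X, O, X, X]])

# Entries of -1 mark unmatched rows/columns; counting the rest gives the matching size
match = maximum_bipartite_matching(csr_matrix(patterns), perm_type='column')
print((match != -1).sum())  # 3, as in the question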

Counting the ways to build a wall with two tile sizes [closed]

You are given a set of blocks to build a panel using 3"×1" and 4.5"×1" blocks.
For structural integrity, the spaces between the blocks must not line up in adjacent rows.
There are 2 ways to build a 7.5"×1" panel, 2 ways to build a 7.5"×2" panel, 4 ways to build a 12"×3" panel, and 7958 ways to build a 27"×5" panel. How many different ways are there to build a 48"×10" panel?
This is what I understand so far:
with the blocks 3 x 1 and 4.5 x 1
I've used the combination formula to find all possible combinations in which the 2 block types can be arranged in a panel of a given size:
C = choose --> C(n, k) = n!/(k!(n-k)!), the number of combinations of a group of n taken k at a time
Panel: 7.5 x 1 = 2 ways -->
1 (3 x 1 block) and 1 (4.5 x 1 block) --> Only 2 blocks are used--> 2 C 1 = 2 ways
Panel: 7.5 x 2 = 2 ways
I used combination here as well
1(3 x 1 block) and 1 (4.5 x 1 block) --> 2 C 1 = 2 ways
Panel: 12 x 3 panel = 4 ways -->
2(4.5 x 1 block) and 1(3 x 1 block) --> 3 C 1 = 3 ways
0(4.5 x 1 block) and 4(3 x 1 block) --> 4 C 0 = 1 way
3 ways + 1 way = 4 ways
(This is where I get confused)
Panel 27 x 5 panel = 7958 ways
6(4.5 x 1 block) and 0(3 x 1) --> 6 C 0 = 1 way
4(4.5 x 1 block) and 3(3 x 1 block) --> 7 C 3 = 35 ways
2(4.5 x 1 block) and 6(3 x 1 block) --> 8 C 2 = 28 ways
0(4.5 x 1 block) and 9(3 x 1 block) --> 9 C 0 = 1 way
1 way + 35 ways + 28 ways + 1 way = 65 ways
As you can see, the number of ways is nowhere near 7958. What am I doing wrong here?
Also, how would I find how many ways there are to construct a 48 x 10 panel?
It's a little difficult to do by hand, especially when trying to find 7958 ways.
How would I write a program to calculate the number of ways (e.g. to reproduce the 7958)?
Would it be easier to construct a program to calculate the result? Any help would be greatly appreciated.
I don't think the "choose" function is directly applicable, given your "the spaces between the blocks must not line up in adjacent rows" requirement. I also think this is where your analysis starts breaking down:
Panel: 12 x 3 panel = 4 ways -->
2(4.5 x 1 block) and 1(3 x 1 block)
--> 3 C 1 = 3 ways
0(4.5 x 1 block) and 4(3 x 1 block)
--> 4 C 0 = 1 way
3 ways + 1 way = 4 ways
...let's build some panels (1 line = 1 row, 2 characters = 1 column):
+-----------------------+
|     |     |     |     |
|     |     |     |     |
|     |     |     |     |
+-----------------------+
+-----------------------+
|        |        |     |
|        |        |     |
|        |        |     |
+-----------------------+
+-----------------------+
|        |     |        |
|        |     |        |
|        |     |        |
+-----------------------+
+-----------------------+
|     |        |        |
|     |        |        |
|     |        |        |
+-----------------------+
Here we see that there are 4 different basic row types, but none of these are valid panels (they all violate the "blocks must not line up" rule). But we can use these row types to create several panels:
+-----------------------+
|     |     |     |     |
|     |     |     |     |
|        |        |     |
+-----------------------+
+-----------------------+
|     |     |     |     |
|     |     |     |     |
|        |     |        |
+-----------------------+
+-----------------------+
|     |     |     |     |
|     |     |     |     |
|     |        |        |
+-----------------------+
+-----------------------+
|        |        |     |
|        |     |        |
|     |     |     |     |
+-----------------------+
+-----------------------+
|        |        |     |
|     |        |        |
|        |     |        |
+-----------------------+
+-----------------------+
|        |     |        |
|        |        |     |
|     |        |        |
+-----------------------+
...
But again, none of these are valid. The valid 12x3 panels are:
+-----------------------+
|     |     |     |     |
|        |     |        |
|     |     |     |     |
+-----------------------+
+-----------------------+
|        |     |        |
|     |     |     |     |
|        |     |        |
+-----------------------+
+-----------------------+
|        |        |     |
|     |        |        |
|        |        |     |
+-----------------------+
+-----------------------+
|     |        |        |
|        |        |     |
|     |        |        |
+-----------------------+
So there are in fact 4 of them, but in this case it's just a coincidence that this matches what you got using the "choose" function. In terms of total panel configurations, there are quite a few more than 4.
Find all ways to form a single row of the given width. I call these "row types". Example for 12x3: there are 4 row types of width 12: (3 3 3 3), (4.5 4.5 3), (4.5 3 4.5), (3 4.5 4.5). I would represent each of these as the list of its interior gap positions. Example: (3 6 9), (4.5 9), (4.5 7.5), (3 7.5).
For each of these row types, find which other row types could fit on top of it.
Example:
a. On (3 6 9) fits (4.5 7.5).
b. On (4.5 9) fits (3 7.5).
c: On (4.5 7.5) fits (3 6 9).
d: On (3 7.5) fits (4.5 9).
Enumerate the ways to build stacks of the given height from these rules. Dynamic programming is applicable to this, as at each level, you only need the last row type and the number of ways to get there.
Edit: I just tried this out on my coffee break, and it works. The solution for 48x10 has 15 decimal digits, by the way.
Edit: Here is more detail of the dynamic programming part:
Your rules from step 2 translate to an array of possible neighbours. Each element of the array corresponds to a row type, and holds that row type's possible neighbouring row types' indices.
0: (2)
1: (3)
2: (0)
3: (1)
In the case of 12×3, each row type has only a single possible neighbouring row type, but in general, it can be more.
The dynamic programming starts with a single row, where each row type has exactly one way of appearing:
1 1 1 1
Then, the next row is formed by adding for each row type the number of ways that possible neighbours could have formed on the previous row. In the case of a width of 12, the result is 1 1 1 1 again. At the end, just sum up the last row.
Complexity:
Finding the row types corresponds to enumerating the leaves of a tree; there are about w/3 levels in this tree, so this takes a time of O(2^(w/3)) = O(2^w).
Checking whether two row types fit takes time proportional to their length, O(w/3). Building the cross table is proportional to the square of the number of row types. This makes step 2 O((w/3)·2^(2w/3)) = O(2^w).
The dynamic programming takes height times the number of row types times the average number of neighbours (which I estimate to be logarithmic in the number of row types), O(h·2^(w/3)·(w/3)) = O(2^w).
As you can see, this is all dominated by the number of row types, which grows exponentially with the width. Fortunately, the constant factors are rather low, so that 48×10 can be solved in a few seconds.
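Here is a short Python sketch of these three steps (my own translation of the outline above; a half-unit grid keeps 4.5 integral):

def count_panels(width, height):
    w = int(width * 2)                         # half-unit grid: 3 -> 6, 4.5 -> 9

    # Step 1: enumerate row types as sets of interior gap positions
    rows = []
    def build(pos, gaps):
        if pos == w:
            rows.append(frozenset(gaps[:-1]))  # drop the gap at the final edge
        for block in (6, 9):
            if pos + block <= w:
                build(pos + block, gaps + [pos + block])
    build(0, [])

    # Step 2: two row types may be neighbours iff their gap sets are disjoint
    compat = [[i for i, gi in enumerate(rows) if not (gi & gj)] for gj in rows]

    # Step 3: dynamic programming over layers, one count per row type
    counts = [1] * len(rows)
    for _ in range(height - 1):
        counts = [sum(counts[i] for i in compat[j]) for j in range(len(rows))]
    return sum(counts)

print(count_panels(12, 3))   # 4
print(count_panels(27, 5))   # 7958
print(count_panels(48, 10))  # the 15-digit answer mentioned above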
This looks like the type of problem you could solve recursively. Here's a brief outline of an algorithm you could use, with a recursive method that accepts the previous layer and the number of remaining layers as arguments:
Start with the initial number of layers (e.g. 27x5 starts with remainingLayers = 5) and an empty previous layer
Test all possible layouts of the current layer
Try adding a 3x1 in the next available slot in the layer we are building. Check that (a) it doesn't go past the target width (e.g. doesn't go past 27 width in a 27x5) and (b) it doesn't violate the spacing condition given the previous layer
Keep trying to add 3x1s to the current layer until we have built a valid layer that is exactly (e.g.) 27 units wide
If we cannot use a 3x1 in the current slot, remove it and replace with a 4.5x1
Once we have a valid layer, decrement remainingLayers and pass it back into our recursive algorithm along with the layer we have just constructed
Once we reach remainingLayers = 0, we have constructed a valid panel, so increment our counter
The idea is that we build all possible combinations of valid layers. Once we have (in the 27x5 example) 5 valid layers on top of each other, we have constructed a complete valid panel. So the algorithm should find (and thus count) every possible valid panel exactly once.
This is a "2D bin packing" problem. Someone with decent mathematical knowledge will be able to help, or you could try a book on computational algorithms. It is known as a "combinatorial NP-hard problem". I don't know what that means, but the "hard" part grabs my attention :)
I have had a look at steel-cutting programs, and they mostly use a best guess. In this case, though, two 4.5" blocks side by side span the same width as three 3" blocks, so you could possibly get away with no waste. It gets rather tricky when you have to figure out the best solution, the one with minimal waste.
Here's a solution in Java, some of the array length checking etc is a little messy but I'm sure you can refine it pretty easily.
In any case, I hope this helps demonstrate how the algorithm works :-)
import java.util.Arrays;

public class Puzzle
{
    // Initial solve call
    public static int solve(int width, int height)
    {
        // Double the widths so we can use integers (6x1 and 9x1)
        int[] prev = {-1}; // Make sure we don't get any collisions on the first layer
        return solve(prev, new int[0], width * 2, height);
    }

    // Build the current layer recursively given the previous layer and the current layer
    private static int solve(int[] prev, int[] current, int width, int remaining)
    {
        // Check whether we have a valid frame
        if(remaining == 0)
            return 1;

        if(current.length > 0)
        {
            // Check for overflows
            if(current[current.length - 1] > width)
                return 0;

            // Check for aligned gaps
            for(int i = 0; i < prev.length; i++)
                if(prev[i] < width)
                    if(current[current.length - 1] == prev[i])
                        return 0;

            // If we have a complete valid layer
            if(current[current.length - 1] == width)
                return solve(current, new int[0], width, remaining - 1);
        }

        // Try adding a 6x1
        int total = 0;
        int[] newCurrent = Arrays.copyOf(current, current.length + 1);
        if(current.length > 0)
            newCurrent[newCurrent.length - 1] = current[current.length - 1] + 6;
        else
            newCurrent[0] = 6;
        total += solve(prev, newCurrent, width, remaining);

        // Try adding a 9x1
        if(current.length > 0)
            newCurrent[newCurrent.length - 1] = current[current.length - 1] + 9;
        else
            newCurrent[0] = 9;
        total += solve(prev, newCurrent, width, remaining);

        return total;
    }

    // Main method
    public static void main(String[] args)
    {
        // e.g. 27x5, outputs 7958
        System.out.println(Puzzle.solve(27, 5));
    }
}
