Design and explain a recursive divide-and-conquer algorithm. Does anyone have ideas?
Given an isosceles right triangular grid for some k ≥ 2 as shown in Figure 1(b), this problem asks you to completely cover it using the tiles given in Figure 1(a). The bottom-left corner of the grid must not be covered. No two tiles can overlap and all tiles must remain completely inside the given triangular grid. You must use all four types of tiles shown in Figure 1(a), and no tile type can be used to cover more than 40% of the total grid area. You are allowed to rotate the tiles, as needed, before putting them on the grid.
This is indeed the idea of induction, and it is similar to the famous "L-tile" covering example.
As you said, you have solved the problem for k = 2, and solving a small example first is a good and correct starting point. Still, I think the k = 2 case of this problem is a bit tricky, mainly due to the constraint that each type cannot cover more than 40% of the area.
Then for k > 2, say k = 3 in your example, we try to make use of what we have already solved, i.e. the case k = 2.
With a very simple observation, one may notice that the k = n triangle can actually be made up of 4 copies of the k = n-1 case (see image below).
Now the shaded part in the middle forms a hole that can be filled by 1 Type B, so we can first fill the 4 small n-1 cases and then fill the hole with Type B...
But then this construction faces a problem: Type B will exceed 40% of the area!
Consider k = 2: no matter how you fill the area, 2 Type B must be used. I do not have a rigorous proof, but some brute-force trial & error should convince you. Then for k = 3, we have 4 small triangles, meaning we have 2*4 = 8 Type B; 1 more to fill the hole gives us 9 Type B, each using 1.5 sq units, for a total of 13.5 sq units.
For k = 3, the total area is (2^3)^2 / 2 = 32 sq units.
13.5/32 = 0.42..., which violates the constraint!
So what to do? Here is the reason why we have to use a trick to handle the k = 2 case (I assume you have gone through this part, as you said you know how to do the k = 2 case).
First, we know that when using our constructive method to build a large triangle from 4 smaller triangles, only Type B will violate the 40% area constraint; you can verify this yourself. So we want to reduce the total number of Type B used. Yet each smaller triangle must use at least 2 Type B, so the only place we can save is the empty hole in the middle of the large triangle: can we use other types instead of Type B there? At the same time, we want the other parts of the small triangles to remain unchanged, so that the same argument carries through the induction (i.e., generally speaking, forming a 2^n triangle from four 2^(n-1) triangles using the same construction method).
The answer is YES, if we specially design the k = 2 case.
See my construction below. (There may be other constructions that work too, but I only need to exhibit one.)
The trick is that I intentionally move 1 Type B next to the empty (gray) triangle.
Let's stop right here for a bit, and do some verification:
To construct a k = 2 case (total area (2^2)^2 / 2 = 8 sq. units, so the 40% cap is 3.2 sq. units), we use
2 Type A = 2 sq.units < 40%
2 Type B = 3 sq.units < 40%
1 Type C = 1.5 sq.units < 40%
1 Type D = 1 sq.unit < 40%
In total we use 7.5 of the 8 sq. units (only the bottom-left corner cell stays uncovered); good.
Now imagine we use exactly the same method to construct those 4 triangles to make a large one. The middle still forms an empty hole with the shape of Type B, but now, instead of filling it with 1 Type B, we fill the hole TOGETHER WITH the 3 Type B right next to it (look back at the k = 2 case), using Type A & D.
(I use the same color scheme as above for easy understanding.) We do this for all 3 small triangles which border the hole in the middle.
Here is the last part (I know it's long...)
We have reduced the number of Type B used when constructing a large triangle from smaller ones, but at the same time we have increased the number of Type A & D used! So is this new construction method valid?
First notice that it does not change any part of the small triangles except the Type B next to the gray triangle; i.e., if the 40% constraint is fulfilled, this method extends inductively and recursively to fill a triangle of side 2^n.
Then let's count again the number of each Type we used.
For k = 3, the total area is 32 sq. units, and we use:
2*4+3 = 11 Type A = 11 sq.units < 40%
2*4-3 = 5 Type B = 7.5 sq.units < 40%
1*4 = 4 Type C = 6 sq.units < 40%
1*4+3 = 7 Type D = 7 sq.units < 40%
In total we cover 31.5 sq. units; good. Now let's prove that the 40% constraint is satisfied for k = n > 3.
Let FA(n-1) be the total area of Type A used to fill a 2^(n-1)-sided triangle using our new method; likewise FB(n-1), FC(n-1), FD(n-1), with similar definitions.
Assume F*(n-1) satisfies the constraint, i.e. does not exceed 40% of the total area; we prove that F*(n) does too.
We got
FA(n) = FA(n-1)*4 + 3*1
FB(n) = FB(n-1)*4 - 3*1.5
FC(n) = FC(n-1)*4
FD(n) = FD(n-1)*4 + 3*1
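Before the formal proof, a quick numerical sanity check may help. The following small Python sketch is my own addition (with an arbitrary cutoff of k = 12): it starts from the k = 3 areas counted above and asserts the 40% bound while stepping the recurrences.

def check_constraint(max_k=12):
    # Areas (in sq. units) used at k = 3, taken from the counts above.
    FA, FB, FC, FD = 11.0, 7.5, 6.0, 7.0
    for k in range(3, max_k + 1):
        total = (2 ** k) ** 2 / 2  # grid area for side length 2^k
        for name, area in (("A", FA), ("B", FB), ("C", FC), ("D", FD)):
            assert area / total < 0.4, (k, name, area / total)
        # Apply the recurrences to step from k to k + 1.
        FA, FB, FC, FD = 4 * FA + 3, 4 * FB - 4.5, 4 * FC, 4 * FD + 3

check_constraint()  # passes silently: every type stays below 40%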
We only show the proof for FD(n); the other three can be proved with a similar method (M.I.).
Using the method of substitution, FD(n) = 2*(4^(n-2)) - 1 for n >= 3 (you should at least try to derive this equation yourself).
We want to show FD(n)/(2^(2n)/2) < 0.4,
i.e. 2*FD(n)/4^n < 0.4.
Consider LHS,
LHS = 2*(2*(4^(n-2)) - 1)/4^n = (4^(n-1) - 2)/4^n
< 4^(n-1)/4^n = 1/4 < 0.4. Q.E.D.
That means that with this method, no Type A-D exceeds 40% of the total area for any 2^k-sided triangle with k >= 3. So we have shown, inductively, that there is a method satisfying all constraints to construct such a triangle.
TL;DR
The hard part is to satisfy the 40% area constraint.
Use a specially designed construction for the k = 2 case first, then use it to build the k = 3 case (then k = 4, k = 5... the idea of induction!).
When using the k = n-1 case to build the k = n case, write down the formula for the total area consumed by each type, and show that none exceeds 40% of the total area.
Combining points 2 & 3, this is an induction showing that for any k >= 2, there is a method (which we described) to fill the 2^k-sided triangle without breaking any constraints.
This grade 11 problem has been bothering me since 2010, and I still can't figure out or find a solution, even after university.
Problem Description
There is a very unusual street in your neighbourhood. This street
forms a perfect circle, and the circumference of the circle is
1,000,000. There are H (1 ≤ H ≤ 1000) houses on the street. The
address of each house is the clockwise arc-length from the
northern-most point of the circle. The address of the house at the
northern-most point of the circle is 0. You also have special firehoses
which follow the curve of the street. However, you wish to keep the
length of the longest hose you require to a minimum. Your task is to
place k (1 ≤ k ≤ 1000) fire hydrants on this street so that the maximum
length of hose required to connect a house to a fire hydrant is as
small as possible.
Input Specification
The first line of input will be an integer H, the number of houses. The
next H lines each contain one integer, which is the address of that
particular house, and each house address is at least 0 and less than
1,000,000. On the H + 2nd line is the number k, which is the number of
fire hydrants that can be placed around the circle. Note that a fire
hydrant can be placed at the same position as a house. You may assume
that no two houses are at the same address. Note: at least 40% of the
marks for this question have H ≤ 10.
Output Specification
On one line, output the length of hose required
so that every house can connect to its nearest fire hydrant with that
length of hose.
Sample Input
4
0
67000
68000
77000
2
Output for Sample Input
5000
Link to original question
I can't even come up with a brute-force algorithm, since the optimal placement might be fractional. For example, if the houses are located at 1 and 2, then the hydrant should be placed at 1.5 and the distance would be 0.5.
Here is a quick outline of an answer.
First write a function that figures out whether you can cover all of the houses with a given maximum coverage length per hydrant. (The maximum hose will be half that length.) It starts at a house, covers all of the houses it can, jumps to the next uncovered house, and so on, counting the hydrants it uses. If that count is too high, it tries starting at the next house instead, until it has gone around the circle. This is an O(n^2) function.
Second, create a sorted list of the pairwise distances between houses. (You have to consider going both ways around for a single hydrant; you only need to worry about the shorter way if you have 2+ hydrants.) The length covered by a hydrant will be one of those distances. This takes O(n^2 log(n)).
Now do a binary search to find the shortest length that can cover all of the houses. This will require O(log(n)) calls to the O(n^2) function that you wrote in the first step.
The end result is an O(n^2 log(n)) algorithm.
And here is working code for all but the parsing logic.
#! /usr/bin/env python

def _find_hoses_needed(circle_length, hose_span, houses):
    # We assume that houses is sorted.
    answers = []  # We can always get away with one hydrant per house.
    for start in range(len(houses)):
        needed = 1
        last_begin = start
        current_house = start + 1 if start + 1 < len(houses) else 0
        while current_house != start:
            pos_begin = houses[last_begin]
            pos_end = houses[current_house]
            # Clockwise distance from the first covered house to this one.
            length = pos_end - pos_begin if pos_begin <= pos_end else circle_length - pos_begin + pos_end
            if hose_span < length:
                # We need a new hose.
                needed = needed + 1
                last_begin = current_house
            current_house = current_house + 1
            if len(houses) <= current_house:
                # We looped around the circle.
                current_house = 0
        answers.append(needed)
    return min(answers)

def find_min_hose_coverage(circle_length, hydrant_count, houses):
    houses = sorted(houses)
    # First we find all of the possible answers.
    is_length = set()
    for i in range(len(houses)):
        for j in range(i, len(houses)):
            is_length.add(houses[j] - houses[i])
            is_length.add(houses[i] - houses[j] + circle_length)
    possible_answers = sorted(is_length)
    # Now we do a binary search.
    lower = 0
    upper = len(possible_answers) - 1
    while lower < upper:
        mid = (lower + upper) // 2  # Integer division, so mid is a valid index.
        if hydrant_count < _find_hoses_needed(circle_length, possible_answers[mid], houses):
            # We need a strictly longer coverage to make it.
            lower = mid + 1
        else:
            # Longer is not needed.
            upper = mid
    return possible_answers[lower]

print(find_min_hose_coverage(1000000, 2, [0, 67000, 68000, 77000]) / 2.0)
Recently in a coding competition I came across this question.
We have 1000 tiles, where each tile is a 3x3 matrix. Each cell in the
matrix has an integer value from 0 to 9 which signifies the elevation
of the cell. The problem was to find the maximum number of pairs of tiles
such that they fit together perfectly. The tiles may be rotated to fit.
Fitting means that for tile A and tile B,
A[i]+B[i]=const for i=0 to 8
The approach I thought of for this problem was to maintain a hash value corresponding to each tile. Then I would find the possible combinations of tiles that would be a possible fit and look them up in the hashtable.
Ex. for the tile below:
5 3 2                4 6 7                      5 7 8
4 8 9  matches with  5 1 0  for const = 9, and  6 2 1  for const = 10
1 4 5                8 5 4                      9 6 5
For this tile, 'const' would range from 9 (adding 0 to the maximum element) to 10 (adding 9 to the minimum element).
So I would get two possible combinations for tiles, which I would look up in the table.
But this method is greedy and does not give the desired answer, and I was also unable to think of a proper hash function that would account for all possible rotations.
So what would be a good approach for solving this problem?
I am sure there is a brute-force way to solve this problem, but I was actually wondering whether a viable solution exists along the lines of the "pairwise equal to k" problem.
For n=1000 I would stick with the O(n^2) brute force solution. However an O(n log n) algorithm is described below.
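For reference, the O(n^2) brute force is only a few lines. Here is a minimal Python sketch (the tile representation as a 3x3 tuple of tuples and all helper names are my assumptions). It counts every fitting pair; if the contest intends a maximum matching where each tile is used at most once, a matching step would be needed on top.

from itertools import combinations

def rotations(tile):
    # Yield the four rotations of a 3x3 tile (tuple of row tuples).
    for _ in range(4):
        yield tile
        tile = tuple(zip(*tile[::-1]))  # rotate 90 degrees clockwise

def fits(a, b):
    # a fits b if a[i][j] + r[i][j] is one constant for some rotation r of b.
    return any(
        len({a[i][j] + r[i][j] for i in range(3) for j in range(3)}) == 1
        for r in rotations(b)
    )

def count_fitting_pairs(tiles):
    return sum(1 for a, b in combinations(tiles, 2) if fits(a, b))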
The lexicographicalish ordering is defined by the following less-than operator:
Given two matrices M1, M2, define M1' as M1 if M1[1] is positive and -M1 if M1[1] is negative, and likewise for M2'. We say that M1 < M2 if M1'[1] < M2'[1], or if M1'[1] == M2'[1] and M1'[2] < M2'[2], or if M1'[1] == M2'[1] and M1'[2] == M2'[2] and M1'[3] < M2'[3], etc.
Subtract the middle element of each matrix from the rest of the elements of the matrix, i.e. A'[5] = A[5] and A'[i] = A[i] - A[5] for i != 5. Then A' fits with B' if A'[i] + B'[i] = 0 for i != 5, and the elevation is A'[5] + B'[5].
Create an array of matrices and a dictionary. Rotate each matrix so that the top left corner has minimal absolute value before adding it to the array. If there are multiple corners with the same absolute value then duplicate the matrix and store both rotations in the array.
If some rotation of a matrix fits with itself and i,j are indices of rotations of this matrix, add the key-value pairs (i,j) and (j, i) to the dictionary.
Create an array S of indices 1,2... and sort S using the lexicographicalish ordering.
Instead of needing O(n^2) operations to check all possible pairs of matrices, it is only necessary to check the pairs whose indices are adjacent in S, i.e. S_i and S_(i+1). If a pair of matrices fits, use the dictionary to check that the two matrices are not rotations of the same original matrix before calculating the elevation of the pair.
Not sure if this is the most efficient way of doing this, but it sure works.
What I would do is:
Go over all tiles and check the maximum and minimum value of each tile and save it in a different array.
Check all possible pairs.
If min(A) + max(B) == min(B) + max(A) then check if some rotation of B fits perfectly on A. If it does, add 1 to your count.
Else, it does not fit so you can skip the checking for this pair.
Note: the reason for saving both the maximum and minimum of each tile is that it can save us unnecessary calculations and rotation checks, since in O(1) we can tell when a pair cannot fit.
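A sketch of that check in Python, reusing the fits helper from the brute-force sketch earlier in this thread (the function name is mine); the min/max filter rejects most non-matching pairs in O(1) before any rotation work:

def count_pairs_with_pruning(tiles):
    # Precompute each tile's minimum and maximum cell value.
    extremes = [(min(map(min, t)), max(map(max, t))) for t in tiles]
    count = 0
    for i in range(len(tiles)):
        for j in range(i + 1, len(tiles)):
            (mn_i, mx_i), (mn_j, mx_j) = extremes[i], extremes[j]
            # Necessary condition: both extremes must map to the same const.
            if mn_i + mx_j == mn_j + mx_i and fits(tiles[i], tiles[j]):
                count += 1
    return count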
I'm trying to find a way to measure similarities between two arrays of different points. I drew circles around points that have similar patterns, and I would like to do some kind of automatic comparison in intervals of, let's say, 100 points, and tell what the coefficient of similarity is for each interval. As you can see, the arrays might not be perfectly aligned, so point-to-point comparison would not be a good solution (I suppose). Patterns that are slightly misaligned could still match the pattern, but obviously with a smaller coefficient.
What similarity could mean (1 coefficient is a perfect match, 0 or less - is not a match at all):
Points 640 to 660 - Very similar (coefficient is ~0.8)
Points 670 to 690 - Quite similar (coefficient is ~0.5-~0.6)
Points 720 to 780 - Let's say quite similar (coefficient is ~0.5-~0.6)
Points 790 to 810 - Perfectly similar (coefficient is 1)
The coefficient is just my idea of how the final result of the comparison function could look for the given data.
I read many posts on SO, but none seemed to solve my problem. I would appreciate your help a lot. Thank you.
P.S. The perfect answer would be one that provides pseudocode for a function which accepts two data arrays (intervals of data) as arguments and returns the coefficient of similarity.
I also think High Performance Mark has basically given you the answer (cross-correlation). In my opinion, most of the other answers are only giving you half of what you need (i.e., dot product plus compare against some threshold). However, this won't consider a signal to be similar to a shifted version of itself. You'll want to compute this dot product N + M - 1 times, where N, M are the sizes of the arrays. For each iteration, compute the dot product between array 1 and a shifted version of array 2. The amount you shift array 2 increases by one each iteration. You can think of array 2 as a window you are passing over array 1. You'll want to start the loop with the last element of array 2 only overlapping the first element in array 1.
This loop will generate numbers for different amounts of shift, and what you do with that number is up to you. Maybe you compare it (or the absolute value of it) against a threshold that you define to consider two signals "similar".
Lastly, in many contexts, a signal is considered similar to a scaled (in the amplitude sense, not time-scaling) version of itself, so there must be a normalization step prior to computing the cross-correlation. This is usually done by scaling the elements of the array so that the dot product with itself equals 1. Just be careful to ensure this makes sense for your application numerically, i.e., integers don't scale very well to values between 0 and 1 :-)
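To make the above concrete, here is a minimal normalized cross-correlation sketch in Python/NumPy (the example signals and the function name are mine; it assumes 1-D arrays):

import numpy as np

def normalized_xcorr(a, b):
    # Normalize so each signal's dot product with itself equals 1.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # 'full' mode evaluates the dot product at all N + M - 1 shifts.
    return np.correlate(a, b, mode="full")

# A shifted copy of a signal scores near 1 at the matching lag:
sig = np.sin(np.linspace(0, 6, 100))
print(normalized_xcorr(sig, np.roll(sig, 10)).max())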
I think High Performance Mark's suggestion is the standard way of doing the job.
A computationally lightweight alternative measure might be a dot product:
Split both arrays into the same predefined index intervals.
Consider the array elements in each interval as vector coordinates in a high-dimensional space.
Compute the dot product of both vectors.
The dot product will not be negative (assuming non-negative data values). If the two vectors are perpendicular in their vector space, the dot product will be 0 (in fact that's how 'perpendicular' is usually defined in higher dimensions), and it will attain its maximum for identical vectors of a given magnitude.
If you accept the geometric notion of perpendicularity as a (dis)similarity measure, here you go.
Caveat:
This is an ad hoc heuristic chosen for computational efficiency. I cannot tell you about the mathematical/statistical properties of the process or its separation properties; if you need rigorous analysis, you'll probably fare better with correlation theory anyway, and should perhaps forward your question to math.stackexchange.com.
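In Python, that per-interval dot product is essentially cosine similarity. A minimal sketch in the spirit of this answer (the interval bounds and names are my choices):

import numpy as np

def interval_similarity(a, b, start, stop):
    # Treat each interval as a high-dimensional vector and take the
    # normalized dot product: 1 = same direction, 0 = perpendicular.
    va = np.asarray(a[start:stop], dtype=float)
    vb = np.asarray(b[start:stop], dtype=float)
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))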
My attempt:

Total_sum = 0
1. For each index i in the range (m, n)
2.     sum = 0
3.     k = Array1[i] * Array2[i]; t1 = magnitude(Array1[i]); t2 = magnitude(Array2[i])
4.     k = k / (t1 * t2)
5.     sum = sum + k
6.     Total_sum = Total_sum + sum
Coefficient = Total_sum / (n - m)

If all values are equal, then sum returns 1 in each case, and Total_sum returns (n - m) * 1. Hence, when that is divided by (n - m), we get 1. If the graphs are exact opposites, we get -1, and for other variations a value between -1 and 1 is returned.
This is not so efficient when the y range or the x range is huge. But I just wanted to give you an idea.
Another option would be to perform an element-wise XNOR:

1. For each index i in the range (m, n)
2.     sum = 1
3.     k = Array1[i] xnor Array2[i]
4.     k = k / (pow(2, number_of_bits) - 1) // This scales k down to a value between 0 and 1
5.     sum = (sum + k) / 2
Coefficient = sum

Is this helpful?
You can define a distance metric for two vectors A and B of length N containing numbers in the interval [-1, 1], e.g. as

sum = 0
for i in 0 to N - 1:
    d = (A[i] - B[i])^2 // this is in range 0 .. 4
    sum = sum + d
sum = (sum / 4) / N // now in range 0 .. 1

This returns distance 1 for vectors that are completely opposite (one is all 1, the other all -1), and 0 for identical vectors.
You can translate this into your coefficient by
coeff = 1 - sum
However, this is a crude approach because it does not take into account the fact that there could be horizontal distortion or shift between the signals you want to compare, so let's look at some approaches for coping with that.
You can sort both your arrays (e.g. in ascending order) and then calculate the distance / coefficient. This returns more similarity than the original metric, and is agnostic towards permutations / shifts of the signal.
You can also calculate the differentials and calculate distance / coefficient for those, and then you can do that sorted also. Using differentials has the benefit that it eliminates vertical shifts. Sorted differentials eliminate horizontal shift but still recognize different shapes better than sorted original data points.
You can then, e.g., average the different coefficients. Here is more complete code. The routine below calculates the coefficient for arrays A and B of a given size, and takes d differentials (recursively) first. If sorted is true, the final (differentiated) arrays are sorted.
procedure calc(A, B, size, d, sorted):
    if (d > 0):
        A' = new array[size - 1]
        B' = new array[size - 1]
        for i in 0 to size - 2:
            A'[i] = (A[i + 1] - A[i]) / 2 // keep in range -1..1 by dividing by 2
            B'[i] = (B[i + 1] - B[i]) / 2
        return calc(A', B', size - 1, d - 1, sorted)
    else:
        if (sorted):
            A = sort(A)
            B = sort(B)
        sum = 0
        for i in 0 to size - 1:
            sum = sum + (A[i] - B[i]) * (A[i] - B[i])
        sum = (sum / 4) / size
        return 1 - sum // return the coefficient

procedure similarity(A, B, size):
    a = 0
    a = a + calc(A, B, size, 0, false)
    a = a + calc(A, B, size, 0, true)
    a = a + calc(A, B, size, 1, false)
    a = a + calc(A, B, size, 1, true)
    return a / 4 // take average
For something completely different, you could also run a Fourier transform using FFT and then take a distance metric on the resulting spectra.
I'm trying to solve the following DP problem:
You have 4 types of lego blocks, of sizes 1 * 1 * 1, 1 * 1 * 2, 1 * 1 * 3
and 1 * 1 * 4. Assume you have an infinite number of blocks of each type.
You want to make a wall of height H and width M out of these blocks.
The wall should not have any holes in it. The wall you build should be
one solid structure. A solid structure means that it should not be
possible to separate the wall along any vertical line without cutting
any lego block used to build the wall. The blocks can only be placed
horizontally. In how many ways can the wall be built?
Here is how I'm attempting it:
Representing the 1 * 1 * 1, 1 * 1 * 2, 1 * 1 * 3 and 1 * 1 * 4 blocks with a, b, c, d; invalid patterns are those which can be broken by a vertical line.
H=1 & W=3: #valid patterns = 1. The tilings are aaa, ab, ba, c, and only c cannot be broken by a vertical line.
H=2 & W=3: #valid patterns = 9
I'm trying to find a recurrence, either to extend this by height or by width, i.e. to find the values for H=3 & W=3 or H=2 & W=4.
Any inputs on how to formulate growth for this by height or width?
P.S. The wall is always H*W*1.
First, let's see how many W*H walls we can build if we neglect the need to keep them connected:
We can treat each row separately, and then multiply the counts, since the rows are independent.
There is only one way to tile a 0*1 or a 1*1 wall, and the number of ways to tile an n*1 wall is the sum of the numbers of ways to tile the {n-1}*1 ... {n-4}*1 walls, the reason being that those walls can be obtained by removing the last tile of an n*1 wall.
This gives rise to a tetranacci sequence, OEIS A000078.
The number of all W*H walls is A(W,H) = T(W)^H.
Now, to count the number of solid walls. MBo's answer already contains the basic premise:
Branch on the leftmost place where the wall is not connected. The number of all W*H walls is the number of solid X*H walls times the number of all {W-X}*H walls, summed across all possible values of X, plus the number of solid W*H walls:
A(W,H) = sum{X=1..W-1}( S(X,H) * A(W-X,H) ) + S(W,H)
As a last step, we separate the S(W,H) term, which is the value we want to calculate, and rearrange the previous formulas:
S(W,H) = A(W,H) - sum_x( S(X,H)*A(W-X,H) ) //implicitly, S(1,H)=1
A(W,H) = T(W)^H
T(X) = T(X-1) + T(X-2) + T(X-3) + T(X-4)  if X > 0
T(0) = 1
T(X) = 0                                  if X < 0
(proving MBo's formula correct).
This also provides an O(W^2) algorithm to compute S (assuming proper memoization and constant-time arithmetic operations)
It is not hard to find the number of 1xW stripes (call it N(1,W)).
Then you can find the number of all (including non-solid) HxW walls: A(H,W) = N(1,W)^H.
Any non-solid wall consists of a left H*L wall and a right H*(W-L) wall. So the number of solid walls is
S(H,W) = A(H,W) - Sum(S(H,L) * A(H,W-L)) [L=1..W-1]
S(H,L) * A(H,W-L) is the number of non-solid walls whose leftmost break is at vertical position L. The first factor is the number of solid walls, to avoid counting repeated variants.
My Python 3 implementation
def tetranacci(n):
    arr = [1, 2, 4, 8]
    if n <= 4:
        return arr[:n]
    else:
        for i in range(4, n):
            arr.append(sum(arr[i-4:i]) % (10**9 + 7))
        return arr

def legoBlocks(n, m):
    MOD = 10**9 + 7
    a, s = [(v**n) % MOD for v in tetranacci(m)], [1]
    for i in range(1, len(a)):
        sums = sum([x*y for x, y in zip(a[:i], s[::-1])])
        s.append((a[i] - sums) % MOD)
    return s[-1]
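As a quick sanity check (my addition), the H=2 & W=3 count from the question comes out right:

print(legoBlocks(2, 3))  # 9, matching the H=2 & W=3 example in the question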
In short: How to hash a free polyomino?
This could be generalized into: How to efficiently hash an arbitrary collection of 2D integer coordinates, where a set contains unique pairs of non-negative integers, and a set is considered unique if and only if no translation, rotation, or flip can map it identically to another set?
For impatient readers, please note I'm fully aware of a brute force approach. I'm looking for a better way -- or a very convincing proof that no other way can exist.
I'm working on some different algorithms to generate random polyominos. I want to test their output to determine how random they are -- i.e. are certain instances of a given order generated more frequently than others. Visually, it is very easy to identify different orientations of a free polyomino, for example the following Wikipedia illustration shows all 8 orientations of the "F" pentomino (Source):
How would one put a number on this polyomino, that is, hash a free polyomino? I don't want to depend on a prepopulated list of "named" polyominos. Broadly agreed-upon names only exist for orders 4 and 5, anyway.
This is not necessarily equivalent to enumerating all free (or one-sided, or fixed) polyominos of a given order. I only want to count the number of times a given configuration appears. If a generating algorithm never produces a certain polyomino, it will simply not be counted.
The basic logic of the counting is:
testcount = 10000 // Arbitrary
order = 6         // Create hexominos in this test
hashcounts = new hashtable
for i = 1 to testcount
    poly = GenerateRandomPolyomino(order)
    hash = PolyHash(poly)
    if hashcounts.contains(hash) then
        hashcounts[hash]++
    else
        hashcounts[hash] = 1
What I'm looking for is an efficient PolyHash algorithm. The input polyominos are simply defined as sets of coordinates. One orientation of the T tetromino could be, for example:
[[1,0], [0,1], [1,1], [2,1]]:
|012
-+---
0| X
1|XXX
You can assume that the input polyomino will already be normalized to be aligned against the X and Y axes and have only non-negative coordinates. Formally, each set:
Will have at least 1 coordinate where the x value is 0
Will have at least 1 coordinate where the y value is 0
Will not have any coordinates where x < 0 or y < 0
I'm really looking for novel algorithms that avoid the increasing number of integer operations required by the general brute-force approach described below.
Brute force
A brute force solution suggested here and here consists of hashing each set as an unsigned integer using each coordinate as a binary flag, and taking the minimum hash of all possible rotations (and in my case flips), where each rotation / flip must also be translated to the origin. This results in a total of 23 set operations for each input set to get the "free" hash:
Rotate (6x)
Flip (1x)
Translate (7x)
Hash (8x)
Find minimum of computed hashes (1x)
Where the sequence of operations to obtain each hash is:
Hash
Rotate, Translate, Hash
Rotate, Translate, Hash
Rotate, Translate, Hash
Flip, Translate, Hash
Rotate, Translate, Hash
Rotate, Translate, Hash
Rotate, Translate, Hash
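For concreteness, a compact Python sketch of this brute-force canonical hash (the coordinate representation and helper names are my own; rotation is (x, y) -> (-y, x) followed by re-translation to the origin, and the flip is a diagonal reflection):

def normalize(cells):
    # Translate so the minimum x and y are 0; sort for a canonical tuple.
    min_x = min(x for x, _ in cells)
    min_y = min(y for _, y in cells)
    return tuple(sorted((x - min_x, y - min_y) for x, y in cells))

def poly_hash(cells):
    forms = []
    for pts in (list(cells), [(y, x) for x, y in cells]):  # original + flip
        for _ in range(4):
            pts = [(-y, x) for x, y in pts]  # rotate 90 degrees
            forms.append(normalize(pts))
    return hash(min(forms))  # minimum over all 8 orientations

# The T tetromino from the question hashes the same in any orientation:
print(poly_hash([(1, 0), (0, 1), (1, 1), (2, 1)]))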
Well, I came up with a completely different approach. (Also thanks to corsiKa for some helpful insights!) Rather than hashing / encoding the squares, encode the path around them. The path consists of a sequence of 'turns' (including no turn) to perform before drawing each unit segment. I think an algorithm for getting the path from the coordinates of the squares is outside the scope of this question.
This does something very important: it destroys all location and orientation information, which we don't need. It is also very easy to get the path of the flipped object: you do so by simply reversing the order of the elements. Storage is compact because each element requires only 2 bits.
It does introduce one additional constraint: the polyomino must not have fully enclosed holes. (Formally, it must be simply connected.) Most discussions of polyominos consider a hole to exist even if it is sealed only by two touching corners, as this prevents tiling with any other non-trivial polyomino. Tracing the edges is not hindered by touching corners (as in the single heptomino with a hole), but it cannot leap from one outer loop to an inner one as in the complete ring-shaped octomino:
It also produces one additional challenge: finding the minimum ordering of the encoded path loop. This is because any rotation of the path (in the sense of string rotation) is a valid encoding. To always get the same encoding we have to find the minimal (or maximal) rotation of the path instructions. Thankfully this problem has already been solved: see for example http://en.wikipedia.org/wiki/Lexicographically_minimal_string_rotation.
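Booth's algorithm solves this in O(n); for clarity, here is a naive O(n^2) Python sketch that is enough for short paths (the function name is mine):

def minimal_rotation(path):
    # Lexicographically minimal rotation of a list of move codes.
    doubled = path + path
    n = len(path)
    return min(tuple(doubled[i:i + n]) for i in range(n))

print(minimal_rotation([2, 2, 3, 1, 2, 2, 3, 2, 2, 3, 2, 1]))
# (1, 2, 2, 3, 1, 2, 2, 3, 2, 2, 3, 2) -- matches the example below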
Example:
If we arbitrarily assign the following values to the move operations:
No turn: 1
Turn right: 2
Turn left: 3
Here is the F pentomino traced clockwise:
An arbitrary initial encoding for the F pentomino is (starting at the bottom right corner):
2,2,3,1,2,2,3,2,2,3,2,1
The resulting minimum rotation of the encoding is
1,2,2,3,1,2,2,3,2,2,3,2
With 12 elements, this loop can be packed into 24 bits if two bits are used per instruction or only 19 bits if instructions are encoded as powers of three. Even with the 2-bit element encoding can easily fit that in a single unsigned 32 bit integer 0x6B6BAE:
1- 2- 2- 3- 1- 2- 2- 3- 2- 2- 3- 2
= 01-10-10-11-01-10-10-11-10-10-11-10
= 00000000011010110110101110101110
= 0x006B6BAE
The base-3 encoding with the start of the loop in the most significant powers of 3 is 0x5795F:
1*3^11 + 2*3^10 + 2*3^9 + 3*3^8 + 1*3^7 + 2*3^6
+ 2*3^5 + 3*3^4 + 2*3^3 + 2*3^2 + 3*3^1 + 2*3^0
= 0x0005795F
The maximum number of vertexes in the path around a polyomino of order n is 2n + 2. For 2-bit encoding the number of bits is twice the number of moves, so the maximum bits needed is 4n + 4. For base-3 encoding it's ceil((2n + 2) * log2(3)) bits, where ceil (the "gallows" brackets in the formula) is the ceiling function. Accordingly, any polyomino up to order 9 can be encoded in a single 32-bit integer. Knowing this, you can choose your platform-specific data structure for the fastest hash comparison, given the maximum order of the polyominos you'll be hashing.
You can reduce it down to 8 hash operations without the need to flip, rotate, or re-translate.
Note that this algorithm assumes you are operating with coordinates normalized relative to the shape itself; that is to say, it's not in the wild.
Instead of applying operations that flip, rotate, and translate, simply change the order in which you hash.
For instance, let us take the F pent above. In the simple example, let us presume the hash operation was something like this:
int hashPolySingle(Poly p)
    int hash = 0
    for x = 0 to p.width
        for y = 0 to p.height
            hash = hash * 31 + p.contains(x,y) ? 1 : 0
    hashPolySingle = hash
int hashPoly(Poly p)
    int hash = hashPolySingle(p)
    p.rotateClockwise() // assume it translates inside
    hash = hash * 31 + hashPolySingle(p)
    // keep rotating for all 4 orientations
    p.flip()
    // hash those 4
Instead of applying the function to all 8 different orientations of the poly, I would apply 8 different hash functions to 1 poly.
int hashPolySingle(Poly p, bool flip, int corner)
    int hash = 0
    int xstart, xstop, ystart, ystop
    bool yfirst
    switch(corner)
        case 1: xstart = 0
                xstop = p.width
                ystart = 0
                ystop = p.height
                yfirst = false
                break
        case 2: xstart = p.width
                xstop = 0
                ystart = 0
                ystop = p.height
                yfirst = true
                break
        case 3: xstart = p.width
                xstop = 0
                ystart = p.height
                ystop = 0
                yfirst = false
                break
        case 4: xstart = 0
                xstop = p.width
                ystart = p.height
                ystop = 0
                yfirst = true
                break
        default: error()
    if(flip) swap(xstart, xstop)
    if(flip) swap(ystart, ystop)
    if(yfirst)
        for y = ystart to ystop
            for x = xstart to xstop
                hash = hash * 31 + p.contains(x,y) ? 1 : 0
    else
        for x = xstart to xstop
            for y = ystart to ystop
                hash = hash * 31 + p.contains(x,y) ? 1 : 0
    hashPolySingle = hash
This is then called in the 8 different ways. You could also wrap hashPolySingle in a for loop over the corners, and another over flipped or not; it's all the same.
int hashPoly(Poly p)
    // approach from each of the 4 corners; combine the 8 sub-hashes
    // order-independently (here, by summing), since a rotation of the
    // same poly yields the same 8 values in a different order
    int hash = 0
    hash = hash + hashPolySingle(p, false, 1)
    hash = hash + hashPolySingle(p, false, 2)
    hash = hash + hashPolySingle(p, false, 3)
    hash = hash + hashPolySingle(p, false, 4)
    // flip it
    hash = hash + hashPolySingle(p, true, 1)
    hash = hash + hashPolySingle(p, true, 2)
    hash = hash + hashPolySingle(p, true, 3)
    hash = hash + hashPolySingle(p, true, 4)
    hashPoly = hash
In this way, you're implicitly rotating the poly from each direction, but you're not actually performing the rotation and translation. It performs the 8 hashes, which seem to be entirely necessary in order to accurately hash all 8 orientations, but wastes no passes over the poly that are not actually doing hashes. This seems to me to be the most elegant solution.
Note that there may be a better hashPolySingle() algorithm to use. Mine uses a Cartesian exhaustion algorithm that is on the order of O(n^2). Its worst case scenario is an L shape, which would cause there to be an N/2 * (N-1)/2 sized square for only N elements, or an efficiency of 1:(N-1)/4, compared to an I shape which would be 1:1. It may also be that the inherent invariant imposed by the architecture would actually make it less efficient than the naive algorithm.
My suspicion is that the above concern can be alleviated by simulating the Cartesian exhaustion by converting the set of nodes into a bidirectional graph that can be traversed, causing the nodes to be hit in the same order as in my much more naive hashing algorithm, ignoring the empty spaces. This would bring the algorithm down to O(n), as the graph should be constructible in O(n) time. Because I haven't done this, I can't say for sure, which is why I say it's only a suspicion, but there should be a way to do it.
Here's my DFS (depth first search) explained:
Start with the top-most cell (left-most as a tiebreaker). Mark it as visited. Every time you visit a cell, check all four directions for unvisited neighbors. Always check the four directions in this order: up, left, down, right.
Example
In this example, up and left fail, but down succeeds. So far our output is 001, and we recursively search the "down" cell.
We mark our new current cell as visited (and we'll finish searching the original cell when we finish searching this cell). Here, up=0, left=1.
We search the left-most cell and there are no unvisited neighbors (up=0, left=0, down=0, right=0). Our total output so far is 001010000.
We continue our search of the second cell. down=0, right=1. We search the cell to the right.
up=0, left=0, down=1. Search the down cell: all 0s. Total output so far is 001010000010010000. Then, we return from the down cell...
right=0, return. return. (Now, we are at the starting cell.) right=0. Done!
So, the total output is 20 (N*4) bits: 00101000001001000000.
Encoding improvement
But, we can save some bits.
The last visited cell will always encode 0000 for its four directions. So, don't encode the last visited cell to save 4 bits.
Another improvement: if you reached a cell by moving left, don't check that cell's right side. So, we only need 3 bits per cell, except 4 bits for the first cell, and 0 for the last cell.
The first cell will never have an up, or left neighbor, so omit these bits. So the first cell takes 2 bits.
So, with these improvements, we use only N*3-4 bits (e.g. 5 cells -> 11 bits; 9 cells -> 23 bits).
If you really want, you can compact a little more by noting that exactly N-1 bits will be "1".
Caveat
Yes, you'll need to encode all 8 rotations/flips of the polyomino and choose the least to get a canonical encoding.
I suspect this will still be faster than the outline approach. Also, holes in the polyomino shouldn't be a problem.
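A rough Python sketch of the basic (unoptimized, 4-bits-per-cell) encoding described above; the cell-set representation and the convention that y grows downward are my assumptions:

def dfs_encode(cells):
    # cells: set of (x, y) squares; start at the top-most, left-most cell.
    cells = set(cells)
    start = min(cells, key=lambda c: (c[1], c[0]))
    visited = {start}
    bits = []
    def visit(x, y):
        # Check the four directions in fixed order: up, left, down, right.
        for dx, dy in ((0, -1), (-1, 0), (0, 1), (1, 0)):
            nxt = (x + dx, y + dy)
            if nxt in cells and nxt not in visited:
                bits.append(1)
                visited.add(nxt)
                visit(*nxt)
            else:
                bits.append(0)
    visit(*start)
    return bits  # N*4 bits; the savings above shrink this further

# T tetromino from earlier in the thread:
print("".join(map(str, dfs_encode({(1, 0), (0, 1), (1, 1), (2, 1)}))))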
I worked on the same problem recently. I solved it fairly simply:
(1) Generate a unique ID for a polyomino, such that each identical poly has the same UID. For example, find the bounding box, normalize the corner of the bounding box, and collect the set of non-empty cells.
(2) Generate all possible permutations by rotating (and flipping, if appropriate) a polyomino, and look for duplicates.
The advantage of this brute-force approach, other than its simplicity, is that it still works if the polys are distinguishable in some other way, for example if some of them are colored or numbered.
You can set up something like a trie to uniquely identify (and not just hash) your polyomino. Take your normalized polyomino and set up a binary search tree, where the root branches on whether (0,0) has a set pixel, the next level branches on whether (0,1) has a set pixel, and so on. When you look up a polyomino, simply normalize it and then walk the tree. If you find it in the trie, then you're done. If not, assign that polyomino a unique id (just increment a counter), generate all 8 possible rotations and flips, then add those 8 to the trie.
On a trie miss, you'll have to generate all the rotations and reflections. But on a trie hit it should cost less (O(k^2) for k-polyominos).
To make lookups even more efficient, you could use a couple bits at a time and use a wider tree instead of a binary tree.
A valid hash function, if you're really afraid of hash collisions, is to make a hash function x + order * y for coordinates and then loop through all the coordinates of a piece, adding (order ^ i) * hash(coord[i]) to the piece hash. That way, you can guarantee you won't get any hash collisions.