I have an algorithmic problem in which I have derived a transfer matrix between a lot of states. The next step is to exponentiate it, but it is very large, so I need to reduce it first. Specifically, it contains a lot of symmetry. Below are some examples of how many nodes can be eliminated by simple observations.
My question is whether there is an algorithm to efficiently eliminate symmetry in digraphs, similarly to the way I've done it manually below.
In all cases the initial vector has the same value for all nodes.
In the first example we see that b, c, d and e each receive values from a and from one of the others. Hence they will always contain an identical value, and we can merge them.
In this example we quickly spot that the graph looks identical from the point of view of a, b, c and d. Likewise, for their respective side nodes it doesn't matter which inner node each is attached to. Hence we can reduce the graph down to only two states.
Update: Some people were, reasonably enough, not quite sure what was meant by "state transfer matrix". The idea here is that you can split a combinatorial problem up into a number of state types for each n in your recurrence. The matrix then tells you how to get from n-1 to n.
Usually you are only interested in the value of one of your states, but you need to calculate the others as well, so you can always get to the next level. In some cases, however, multiple states are symmetrical, meaning they will always have the same value. Obviously it's quite a waste to calculate all of these, so we want to reduce the graph until all nodes are "unique".
Below is an example of the transfer matrix for the reduced graph in example 1.
[S_a(n)]   [1 1 1]   [S_a(n-1)]
[S_f(n)] = [1 0 0] * [S_f(n-1)]
[S_B(n)]   [4 0 1]   [S_B(n-1)]
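For concreteness, iterating such a reduced system is then cheap. Below is a minimal sketch of my own (assuming the all-ones initial vector mentioned above); for large n you would exponentiate T by repeated squaring instead of looping.

# Reduced transfer matrix of example 1 (states S_a, S_f, S_B).
T = [[1, 1, 1],
     [1, 0, 0],
     [4, 0, 1]]

def mat_vec(T, v):
    # One level of the recurrence: v(n) = T * v(n-1).
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

v = [1, 1, 1]        # initial vector: every state starts with the same value
for _ in range(10):  # advance the recurrence 10 levels
    v = mat_vec(T, v)
print(v)             # v[0] is S_a(10), and so on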
Any suggestions or references to papers are appreciated.
Brendan McKay's nauty (http://cs.anu.edu.au/~bdm/nauty/) is the best tool I know of for computing automorphisms of graphs. It may be too expensive to compute the whole automorphism group of your graph, but you might be able to reuse some of the algorithms described in McKay's paper "Practical Graph Isomorphism" (linked from the nauty page).
I'll just add an extra answer building on what userOVER9000 suggested, in case anybody else is interested.
Below is an example of using nauty on Example 2, through the dreadnaut tool.
$ ./dreadnaut
Dreadnaut version 2.4 (64 bits).
> n=8 d g -- Starting a new 8-node digraph
0 : 1 3 4; -- Entering edge data
1 : 0 2 5;
2 : 3 1 6;
3 : 0 2 7;
4 : 0;
5 : 1;
6 : 2;
7 : 3;
> cx -- Calling nauty
(1 3)(5 7)
level 2: 6 orbits; 5 fixed; index 2
(0 1)(2 3)(4 5)(6 7)
level 1: 2 orbits; 4 fixed; index 4
2 orbits; grpsize=8; 2 gens; 6 nodes; maxlev=3
tctotal=8; canupdates=1; cpu time = 0.00 seconds
> o -- Output "orbits"
0:3; 4:7;
Notice it suggests joining nodes 0:3 which are a:d in Example 2 and 4:7 which are e:h.
The nauty algorithm is not well documented, but the authors describe it as exponential in the worst case, n^2 on average.
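To connect this back to the original question: once nauty reports the orbits, the transfer matrix can be collapsed by keeping one representative row per orbit and summing the columns inside each orbit, since states in an orbit always carry equal values. A rough sketch of my own (not part of nauty; it assumes rows of T are the target states, so flip the summation if your matrix is oriented the other way):

def reduce_matrix(T, orbit):
    # T:     full transfer matrix, T[i][j] = coefficient of state j
    #        in the recurrence for state i
    # orbit: orbit[i] = orbit id of state i (e.g. from dreadnaut's 'o' output)
    reps = {}                          # orbit id -> representative state
    for i, o in enumerate(orbit):
        reps.setdefault(o, i)
    ids = sorted(reps)                 # fixed ordering of the orbits
    col = {o: k for k, o in enumerate(ids)}
    R = [[0] * len(ids) for _ in ids]
    for a, oa in enumerate(ids):
        r = reps[oa]                   # any orbit member works: values are equal
        for j, oj in enumerate(orbit):
            R[a][col[oj]] += T[r][j]   # sum the columns within each orbit
    return R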
Computing symmetries seems to be a bit of a second-order problem. Taking just a, b, c and d in your second graph, the symmetry would have to be expressed as
a(b,c,d) = b(a,d,c)
and all its permutations, or some such. Consider a second subgraph a', b', c', d' added to it. Again, we have the symmetries, but parameterised differently.
For computing people (rather than math people), could we express the problem like so?
Each graph node contains a set of letters. At each iteration, all of the letters in each node are copied to its neighbours by the arrows (some arrows take more than one iteration and can be treated as a pipe of anonymous nodes).
We are trying to find efficient ways of determining things such as:
* what letters each set/node contains after N iterations;
* for each node, the N after which its set no longer changes;
* what sets of nodes wind up containing the same sets of letters (equivalence classes)?
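Running that letter-copying process to a fixpoint answers the third question directly (equal final sets being a necessary condition for this kind of symmetry, not a proof of it). A small sketch of my own, with a made-up adjacency list shaped like example 1:

def letter_fixpoint(succ):
    # succ[v] = nodes that v's letters are copied to (the arrows).
    # Seed every node with its own unique letter, iterate until no set
    # changes, then group nodes whose final sets coincide.
    sets = {v: {v} for v in succ}
    changed = True
    while changed:
        changed = False
        new = {v: set(s) for v, s in sets.items()}
        for v, targets in succ.items():
            for t in targets:
                if not sets[v] <= new[t]:
                    new[t] |= sets[v]
                    changed = True
        sets = new
    groups = {}
    for v, s in sets.items():
        groups.setdefault(frozenset(s), []).append(v)
    return list(groups.values())

# a feeds b..e, which also feed each other in a cycle:
print(letter_fixpoint({'a': ['b', 'c', 'd', 'e'],
                       'b': ['c'], 'c': ['d'], 'd': ['e'], 'e': ['b']}))
# -> [['a'], ['b', 'c', 'd', 'e']]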
Related problem:
Consider a patient suffering from a skin infection, with germs spreading rapidly. Assume the skin surface is modelled as a rectangular grid of size MxN whose cells are marked 0 or 1, where 0 represents a non-affected region of skin and 1 an affected region. Germs can move from one cell of the grid to another in 4 possible directions (right, left, up, down), but can move only one cell at a time in one direction, and they affect that cell in 1 sec. The doctor treating the patient sees the current status and wants to know how much time is left to save the patient before the germs spread over the entire skin and the patient dies. Can you help estimate the minimum time taken for the germs to completely occupy the skin surface?
Input: Current status of the skin (a matrix of size MxN with 1's and 0's representing affected and non-affected areas).
Output: Minimum time in seconds to cover the entire grid.
Example:
Input:
[1 1 0 0 1]
[0 1 1 0 0]
[0 0 0 0 1]
[0 1 0 0 0]
Output: 2 seconds
Explanation:
After 1 sec, the matrix could look as below
[1 1 1 0 1]
[1 1 1 0 1]
[0 1 1 0 1]
[0 1 1 0 1]
In the next sec, the matrix is completely filled with 1's.
I will not present a detailed solution here, but some thoughts that hopefully may help you to write your own program.
The first step is to determine the kind of algorithm to implement. The optimal way would be to find a simple and fast ad hoc solution for this problem. In the absence of such a solution, the classical candidates for this kind of problem are DFS, BFS, A*, ...
As the goal is to find the shortest solution, it seems natural to consider BFS first: once BFS finds a solution, we know that it is the shortest one and we can stop the search. However, we then have to avoid an inflation of nodes, as it would lead not only to a huge calculation time, but also to huge memory consumption.
A first idea to avoid node inflation is to notice that some 1-cells can only expand into one other cell. In the posted diagram, for example, cell (0, 0) (top left) can only expand to cell (1, 0). Then, after this expansion, cell (1, 1) can only move to cell (2, 1). Hence we know it would be suboptimal to move cell (1, 1) to cell (1, 0). Therefore: move such forced cells first.
In a similar way, once an infected cell is surrounded by other infected cells only, it is no longer necessary to consider it for next moves.
At the end, it would be convenient to have a list of infected cells, together with the number of non-infected cells that each such cell can move to.
Another idea to limit the number of nodes is to detect duplicates, as it is likely that many of them will arise here. For that, we have to define a kind of hashing. The hash function used does not need to be 100% collision-free, but it needs to be fast to compute, and if possible updatable incrementally. If we obtain diagram B from diagram A by adding a 1-cell at position (i, j), then I propose something like
H(B) = H(A) ^ f(i, j)
f(i, j) = a*(1024*i + j) % b
Here, I used the fact that N and M are less than 1000.
Each time a new diagram is considered, we have to calculate the corresponding H value and check whether it already exists in the set of past diagrams.
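To illustrate the duplicate check, here is a sketch of my own. It swaps the arithmetic f(i, j) for a table of random per-cell words (Zobrist hashing), which mixes better under XOR, but the author's f would slot in the same way:

import random

grid = [[1, 1, 0, 0, 1],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 0, 1],
        [0, 1, 0, 0, 0]]
M, N = len(grid), len(grid[0])

random.seed(42)
zobrist = [[random.getrandbits(64) for _ in range(N)] for _ in range(M)]

# Hash of the initial diagram: XOR together the words of all 1-cells.
h0 = 0
for i in range(M):
    for j in range(N):
        if grid[i][j]:
            h0 ^= zobrist[i][j]

def child_hash(h_parent, i, j):
    # O(1) hash of the diagram obtained by infecting cell (i, j).
    return h_parent ^ zobrist[i][j]

seen = {h0}
h1 = child_hash(h0, 2, 1)   # e.g. expanding into cell (2, 1)
if h1 not in seen:          # only enqueue diagrams we haven't met before
    seen.add(h1)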
I'm not sure how far I would get with this in an interview situation. After some thought, rather than considering solutions that store more than one full board state, I would consider a greedy priority queue, since a strong heuristic for the next zero-cell candidates to fill seems to be:
(1) healthy cells that have the least neighbouring infected cells (but at least one, of course),
e.g., choose A over B
1 1 B 0 1
0 1 1 0 0
0 0 A 0 1
0 1 0 0 0
and (2) break ties by choosing first the healthy cells that when infected will block the least infected cells.
e.g., choose A over B
1 1 1 0 1
1 B 1 0 A
0 0 0 0 1
0 1 0 0 0
An interesting observation is that any healthy cell destination can technically be reached in time Manhattan-distance from the nearest infected cell, where the cell leading such a "crawl" continually chooses the single move that brings us closer to the destination. We know that at the same time, though, this same infected-cell "snake" produces new "crawlers" that could reach any equally far or closer neighbours. This makes me wonder if there may be a more efficient way to determine the lower bound, based on counts of the farthest healthy cells.
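That per-cell bound is exactly what a multi-source BFS from all infected cells computes, and its maximum is the lower bound in question. A sketch of my own:

from collections import deque

def spread_lower_bound(grid):
    # Max over healthy cells of the distance to the nearest infected cell.
    # This is the exact minimum time when every germ may expand in all four
    # directions each second, hence a lower bound for stricter models.
    M, N = len(grid), len(grid[0])
    dist = [[-1] * N for _ in range(M)]
    q = deque()
    for i in range(M):
        for j in range(N):
            if grid[i][j]:
                dist[i][j] = 0
                q.append((i, j))
    best = 0
    while q:
        i, j = q.popleft()
        for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ni, nj = i + di, j + dj
            if 0 <= ni < M and 0 <= nj < N and dist[ni][nj] < 0:
                dist[ni][nj] = dist[i][j] + 1
                best = max(best, dist[ni][nj])
                q.append((ni, nj))
    return best

print(spread_lower_bound([[1, 1, 0, 0, 1],
                          [0, 1, 1, 0, 0],
                          [0, 0, 0, 0, 1],
                          [0, 1, 0, 0, 0]]))   # -> 2, matching the example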
This is a variant of the multi-agent pathfinding problem (MAPF). There is a ton of recent work on this topic, but the earlier work is a good starting point for finding optimal solutions to this problem, for instance the operator decomposition approach.
To do this you would order the agents (germs) 1..k. Then you would start a search where you generate all possible first moves for germ 1, followed by all possible first moves for germ 2, and so on, where the moves for an agent are to stay in place or to spread to an adjacent unoccupied location. With up to five possible actions for each germ (stay, or spread in one of four directions), there are up to 5^k possible joint actions between complete states. (Partial states occur when you haven't yet assigned actions to all k agents.) The number of actions is exponential, meaning you may run up against resource constraints (time or space) fairly quickly. But there are only 2^(MxN) states possible. (Since agents don't go away, it's actually 2^(MxN-i), where i is the number of initial germs.)
Every time all (k) germs have considered a possible action, you have a new complete state. (And k then increases for the next iteration.) The minimum time left comes from the shallowest complete state which has the grid filled. A bit of brute-force computation will find the shortest solution. (Quite a bit in the case of large grids.)
You could use a BFS to find the first state that is completely filled. But, A* might do much better. As a heuristic, you could consider that all adjacent locations of all cells were filled in each step, and then compute the number of steps required to fill the grid under that model. That gives a lower bound on the time required to fill the full grid.
But there are many more optimizations. The reason to do operator decomposition is that you can order the moves to take the best moves first and avoid considering the weaker possibilities (e.g. that no germ spreads). You could also use a partial-expansion approach (EPEA*) to avoid generating a lot of clearly suboptimal policies for the germs.
If I were asking this as an interview question, I might be looking to see someone formulate the problem (what are the actions, what are the states), come up with the lower bound on the solution (every germ expands to every adjacent cell), come up with an algorithm, and perhaps analyze how hard the problem is, in order of increasing difficulty.
Can anyone please suggest an algorithm for this?
You are given starting and the ending points of N segments over the x-axis.
How many of these segments can be touched, even on their edges, by exactly two lines perpendicular to them?
Sample Input :
3
5
2 3
1 3
1 5
3 4
4 5
5
1 2
1 3
2 3
1 4
1 5
3
1 2
3 4
5 6
Sample Output :
Case 1: 5
Case 2: 5
Case 3: 2
Explanation :
Case 1: We will draw two lines (parallel to the Y-axis) crossing the X-axis at points 2 and 4. These two lines will touch all five segments.
Case 2: We can touch all the segments even with one line crossing the X-axis at 2.
Case 3: It is not possible to touch more than two segments in this case.
Constraints:
1 ≤ N ≤ 10^5
0 ≤ a < b ≤ 10^9
Let's assume that we have a data structure that supports the following operations efficiently:
Add a segment.
Delete a segment.
Return the maximum number of segments that cover one point (that is, the "best" point).
If we have such a structure, we can solve the initial problem efficiently in the following manner:
Let's create an array of events (one event for the start of each segment and one for the end) and sort it by x-coordinate.
Add all segments to the magical data structure.
Iterate over the events and do the following: when a segment starts, add one to the number of currently covered segments and remove the segment from the magical data structure. When a segment ends, subtract one from the number of currently covered segments and add the segment back to the magical data structure. After each event, update the answer with the number of currently covered segments (which shows how many segments are covered by the point corresponding to the current event) plus the maximum returned by the data structure described above (which shows the best possible way to choose another point).
If this data structure can perform all given operations in O(log n), then we have an O(n log n) solution(we sort the events and make one pass over the sorted array making a constant number of queries to this data structure for each event).
So how can we implement this data structure? Well, a segment tree works fine here. Adding a segment is adding one to a specific range. Removing a segment is subtracting one from all elements in a specific range. Getting the maximum is just a standard maximum operation on a segment tree. So we need a segment tree that supports two operations: add a constant to a range and get the maximum for the entire tree. It can be done in O(log n) time per query.
One more note: a standard segment tree requires coordinates to be small. We may assume that they never exceed 2*n (if that is not the case, we can compress them).
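A sketch of the whole plan in Python (my own; it assumes endpoints are already compressed to 0..m-1, and the tree supports only the two operations needed, range add and global max):

class MaxAddTree:
    # Segment tree over [0, m): add a constant on a range, query global max.
    def __init__(self, m):
        self.m = m
        self.mx = [0] * (4 * m)    # max of subtree, pending adds included
        self.add = [0] * (4 * m)   # pending add for the whole subtree

    def update(self, l, r, val, node=1, nl=0, nr=None):
        if nr is None:
            nr = self.m - 1
        if r < nl or nr < l:
            return
        if l <= nl and nr <= r:
            self.mx[node] += val
            self.add[node] += val
            return
        mid = (nl + nr) // 2
        self.update(l, r, val, 2 * node, nl, mid)
        self.update(l, r, val, 2 * node + 1, mid + 1, nr)
        self.mx[node] = self.add[node] + max(self.mx[2 * node],
                                             self.mx[2 * node + 1])

    def global_max(self):
        return self.mx[1]

def best_two_lines(segments, m):
    # segments: (a, b) pairs with 0 <= a <= b < m (compressed coordinates).
    tree = MaxAddTree(m)
    for a, b in segments:
        tree.update(a, b, 1)          # initially no segment is covered
    events = []                       # starts sort before ends at equal x
    for a, b in segments:
        events.append((a, 0, a, b))
        events.append((b, 1, a, b))
    events.sort()
    covered = best = 0
    for x, kind, a, b in events:
        if kind == 0:                 # segment starts: first line covers it
            covered += 1
            tree.update(a, b, -1)
        else:                         # segment ends: candidate for the 2nd line
            covered -= 1
            tree.update(a, b, 1)
        best = max(best, covered + tree.global_max())
    return best

print(best_two_lines([(1, 2), (0, 2), (0, 4), (2, 3), (3, 4)], 5))  # -> 5

The printed call is Case 1 of the sample after compression; it reports 5, matching the expected output.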
An O(N*max(log N, M)) solution, where M is the mean segment size, implemented in Common Lisp: touching-segments.lisp.
The idea is to first calculate, from left to right at every interesting point, the number of segments that would be touched by a line there (open-left-to-right in the Lisp code). Cost: O(N log N).
Then, from right to left, it calculates, again at every interesting point P, the best location for a line considering only segments fully to the right of P (open-right-to-left in the Lisp code). Cost: O(N*max(log N, M)).
Then it is just a matter of looking for the point where the sum of both values peaks. Cost: O(N).
The code is barely tested and may contain bugs. Also, I have not bothered to handle edge cases as when the number of segments is zero.
The problem can be solved in O(N log N) time per test case.
Observe that there is an optimal placement of the two vertical lines in which each goes through some segment endpoint.
Compress the segments' coordinates (a small sketch follows these steps). More info at What is coordinate compression?
Build a sorted set of segment endpoints X
Sort segments [a_i,b_i] by a_i
Let Q be a priority queue which stores right endpoints of segments processed so far
Let T be a max interval tree built over the x-coordinates. Some useful reading at What are some sources (books, etc.) from where I can learn about Interval, Segment, Range trees?
For each segment make an [a_i,b_i]-range increment-by-1 query to T. This allows finding the maximum number of segments covering some x in [a,b].
Iterate over the elements x of X. For each x, process the segments (not already processed) with a_i <= x. The processing includes pushing b_i to Q and making an [a_i,b_i]-range increment-by-(-1) query to T. After removing from Q all elements < x, A = Q.size is equal to the number of segments covering x. B = T.rmq(x + 1, M) returns the maximum number of segments that do not cover x but do cover some fixed y > x. A + B is a candidate for the answer.
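For the compression step, a tiny sketch of my own, mapping endpoints (up to 10^9) to dense indices:

def compress(segments):
    xs = sorted({x for seg in segments for x in seg})
    idx = {x: k for k, x in enumerate(xs)}         # value -> dense index
    return [(idx[a], idx[b]) for a, b in segments], xs

segs, xs = compress([(2, 3), (1, 3), (1, 5), (3, 4), (4, 5)])
print(segs)   # [(1, 2), (0, 2), (0, 4), (2, 3), (3, 4)]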
Source:
http://www.quora.com/What-are-the-intended-solutions-for-the-Touching-segments-and-the-Smallest-String-and-Regex-problems-from-the-Cisco-Software-Challenge-held-on-Hackerrank
I have encountered an algorithm question:
Fully Connection
Given n cities spread along a line, let Xi be the position of city i and Pi its population.
Now we begin to lay cables between every two of the cities based on their distance and population. Given two cities i and j, the cost to lay cable between them is |Xi-Xj|*max(Pi,Pj). How much does it cost to lay all the cables?
For example, given:
i Xi Pi
- -- --
1 1 4
2 2 5
3 3 6
Then the total cost can be calculated as:
i j |Xi-Xj| max(Pi, Pj) Segment Cost
- - ------ ----------- ------------
1 2 1 5 5
2 3 1 6 6
1 3 2 6 12
So that the total cost is 5+6+12 = 23.
While this can clearly be done in O(n^2) time, can it be done in asymptotically less time?
I can think of a faster solution; if I am not wrong, it runs in O(n log n). First sort all the cities according to Pi; this is O(n log n). Then process the cities in increasing order of Pi, the reason being that you then always know that max(Pi, Pj) = Pi. We only add the segments that involve Pi as the maximum; those connecting to larger populations will be counted when those cities are processed.
The idea is to use a couple of binary index trees to reduce the complexity of the algorithm. The first index tree counts the nodes and can answer queries of the kind "how many processed points are to the right of Xi?" in logarithmic time; let's call this number NR. The second index tree can answer queries of the kind "what is the sum of distances from all processed points to the right of a given x?", where the distances are measured towards a fixed point that is guaranteed to be to the right of the rightmost point; call its position XR and the queried sum SUMD. Then the sum of the distances from our point to all processed points to its right is NR * dist(Xi, XR) - SUMD, and these pairs contribute (NR * dist(Xi, XR) - SUMD) * Pi to the result. Do the same for the points to the left and you get the answer. After you process the i-th point, add it to the index trees and go on.
Edit: Here is an article about binary indexed trees: http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees
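Here is a sketch of my own along those lines: two binary indexed trees over compressed x-coordinates, processing cities in increasing order of population. I keep prefix counts and coordinate sums, which is equivalent to the fixed reference point XR described above.

class Fenwick:
    def __init__(self, n):
        self.t = [0] * (n + 1)

    def add(self, i, v):          # point update, 1-based index
        while i < len(self.t):
            self.t[i] += v
            i += i & -i

    def prefix(self, i):          # sum of elements 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def total_cable_cost(cities):
    # cities: list of (x, p); returns sum over pairs of |xi-xj| * max(pi, pj).
    xs = sorted({x for x, _ in cities})
    rank = {x: k + 1 for k, x in enumerate(xs)}       # 1-based ranks
    cnt, sm = Fenwick(len(xs)), Fenwick(len(xs))
    total = 0
    for x, p in sorted(cities, key=lambda c: c[1]):   # increasing population
        r = rank[x]
        nl, sl = cnt.prefix(r), sm.prefix(r)          # processed points left of x
        nr = cnt.prefix(len(xs)) - nl                 # ... and right of x
        sr = sm.prefix(len(xs)) - sl
        # every processed city has population <= p, so each pair costs p * dist
        total += p * ((nl * x - sl) + (sr - nr * x))
        cnt.add(r, 1)
        sm.add(r, x)
    return total

print(total_cable_cost([(1, 4), (2, 5), (3, 6)]))     # -> 23, as in the example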
This is the direct connections problem from codesprint 2.
They will be posting worked solutions to all problems within a week on their website.
(They have said "Now that the contest is over, we're totally cool with everyone discussing their solutions to the problems.")
I have a graph-theoretic (which is also related to combinatorics) problem that is illustrated below, and wonder what is the best approach to design an algorithm to solve it.
Given 4 different graphs of 6 nodes (by different, I mean different structures, e.g. STAR, LINE, COMPLETE, etc), and 24 unique objects, design an algorithm to assign these objects to these 4 graphs 4 times, so that the number of repeating neighbors on the graphs over the 4 assignments is minimized. For example, if object A and B are neighbors on 1 of the 4 graphs in one assignment, then in the best case, A and B will not be neighbors again in the other 3 assignments.
Obviously, the degree to which such minimization can go is dependent on the specific graph structures given. But I am more interested in a general solution here so that given any 4 graph structures, such minimization is guaranteed as the result of the algorithm.
Any suggestion/idea of solving this problem is welcome, and some pseudo-code may well be sufficient to illustrate the design. Thank you.
Representation:
You have 24 elements; I will name these elements A to X (the first 24 letters).
Each of these elements will have a place in one of the 4 graphs. I will assign a number to the 24 nodes of the 4 graphs from 1 to 24.
I will identify the position of A by a 24-tuple A = (xa1, xa2, ..., xa24); if I want to assign A to node number 8, for example, I will write A = (0,0,0,0,0,0,0,1,0,...,0), where the 1 is in position 8.
e1...e24 are the unit vectors (1,0...0) to (0,0...1)
A note about the '.' (dot product) operator:
A.e1 = xa1
...
X.e24 = xx24
With these notations there are some constraints on A, ..., X:
xai is in {0,1} (and likewise for the other elements)
and
Sum(xai) = 1, ..., Sum(xxi) = 1
Sum(xa1, xb1, ..., xx1) = 1, ..., Sum(xa24, xb24, ..., xx24) = 1
since each element can be assigned to only one node and each node holds only one element.
I will define a graph by defining the neighbor relation of each node; let's say node 8 has neighbors node 7 and node 10.
To check that A and B are neighbors at node 8, for example, I need A.e8 = 1 and (B.e7 = 1 or B.e10 = 1), so I just need to test A.e8*(B.e7 + B.e10) == 1.
In the function isNeighborInGraphs(A,B) I test that for every node and get one or zero depending on the neighborhood.
Notations:
4 graphs of 6 nodes, the position of each element is defined by an integer from 1 to 24.
(1 to 6 for first graph, etc...)
e1... e24 are the unit vectors (1,0,0...0) to (0,0...1)
Let A, B, ..., X be the 24 elements.
A=(0,0...,1,...,0)=(xa1,xa2...xa24)
B=...
...
X=(0,0...,1,...,0)
Graph descriptions:
IsNeighborInGraphs(A,B) = A.e1*B.e2 + ...
// with one term for each pair of nodes that are
// neighbors in one of the graphs, e.g. nodes 1 and 2
State of the system:
L(A) = [B, B, C, E, G, ...] // list of neighbors of A so far (can repeat)
actualise(L(A)):
    for element in [B..X]
        if isNeighborInGraphs(A, element)
            L(A).append(element)
        endIf
    endFor
Objective functions
N(A) = len(L(A)) + Sum(isNeighborInGraphs(A, i), i in L(A))
...
N(X)= ...
Description of the algorithm:
1. Start with an initial position: A = e1, ..., X = e24.
2. Update L(A), L(B), ..., L(X).
3. Solve the following (with a solver; AMPL for example should work, since it is a nonlinear optimization problem), which gives you the best next assignment:
Objective function:
min(Sum(N(Z), Z = A to X))
Constraints:
Sum(xai) = 1, ..., Sum(xxi) = 1
Sum(xa1, xb1, ..., xx1) = 1, ..., Sum(xa24, xb24, ..., xx24) = 1
4. Repeat steps 2 and 3 three more times.
If all four graphs are K_6, then the best you can do is choose 4 set partitions of your 24 objects into 4 sets each of cardinality 6 so that the pairwise intersection of any two sets has cardinality at most 2. You can do this by choosing set partitions that are maximally far apart in the Hasse diagram of set partitions with partial order given by refinement. The general case is much harder, but perhaps you can still begin with this crude approximation of a solution and then be clever with which vertex is assigned which object in the four assignments.
Assuming you don't want to cycle through all combinations and compute the sum every time, choosing the lowest, you can formulate it as a minimization problem (solved, depending on your constraints, either with a linear programming solver, i.e. simplex-algorithm engines, or a nonlinear solver, which is much harder in terms of time) with constraints on your 24 variables depending on the shape of your graphs. You can also use free software like LINGO/LINDO to rapidly create a decision theory model and test its correctness (you need decision theory notions, though).
If this has anything to do with the real world, then it's unlikely that you absolutely must have a solution that is the true minimum. Close to the minimum should be good enough, right? If so, you could repeatedly randomly make the 4 assignments and check the results until you either run out of time or have a good-enough solution or appear to have stopped improving your best solution.
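That randomized approach is only a few lines. A sketch of my own, with graphs given as an edge list over the 24 node slots (six slots per graph); the slot and scoring conventions are mine, not the question's:

import random

def repeated_pairs(assignments, edges):
    # assignments: 4 permutations of the 24 objects; position k holds the
    # object placed on node slot k. edges: (u, v) slot pairs of all 4 graphs.
    seen, repeats = set(), 0
    for perm in assignments:
        for u, v in edges:
            pair = frozenset((perm[u], perm[v]))
            if pair in seen:
                repeats += 1
            seen.add(pair)
    return repeats

def random_search(edges, tries=10000):
    objects = list(range(24))
    best, best_score = None, float('inf')
    for _ in range(tries):
        assignments = []
        for _ in range(4):
            random.shuffle(objects)
            assignments.append(objects[:])
        score = repeated_pairs(assignments, edges)
        if score < best_score:
            best, best_score = assignments, score
    return best, best_score

# e.g. four LINE graphs on slots 0-5, 6-11, 12-17, 18-23:
edges = [(g * 6 + i, g * 6 + i + 1) for g in range(4) for i in range(5)]
best, score = random_search(edges, tries=2000)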
I've encountered an interesting problem while programming a random level generator for a tile-based game. I've implemented a brute-force solver for it but it is exponentially slow and definitely unfit for my use case. I'm not necessarily looking for a perfect solution, I'll be satisfied with a “good enough” solution that performs well.
Problem Statement:
Say you have all or a subset of the following tiles available (this is the combination of all possible 4-bit patterns mapped to the right, up, left and down directions):
(image: the base tile set, http://img189.imageshack.us/img189/3713/basetileset.png)
You are provided a grid where some cells are marked (true) and others not (false). This could be generated by a Perlin noise algorithm, for example. The goal is to fill this space with tiles so that there are as many complex tiles as possible. Ideally, all tiles should be connected. There might be no solution for some input values (available tiles + pattern). There is always at least one solution if the top-left, unconnected tile is available (that is, all pattern cells can be filled with that tile).
Example:
Images left to right: tile availability (green tiles can be used, red cannot), pattern to fill and a solution
(images: http://img806.imageshack.us/img806/2391/sampletileset.png + http://img841.imageshack.us/img841/7/samplepattern.png = http://img690.imageshack.us/img690/2585/samplesolution.png)
What I tried:
My brute-force implementation attempts every possible tile everywhere and keeps track of the solutions that were found. Finally, it chooses the solution that maximizes the total number of connections outgoing from each of the tiles. The time it takes is exponential in the number of tiles in the pattern; a pattern of 12 tiles takes a few seconds to solve.
Notes:
As I said, performance is more important than perfection. However, the final solution must be properly connected (no tile pointing to a tile which doesn't point to the original tile). To give an idea of scope, I'd like to handle a pattern of 100 tiles under about 2 seconds.
For 100-tile instances, I believe that a dynamic program based on a carving decomposition of the input graph could fit the bill.
Carving decomposition
In graph theory, a carving decomposition of a graph is a recursive binary partition of its vertices. For example, here's a graph
1--2--3
| |
| |
4--5
and one of its carving decompositions
{1,2,3,4,5}
/ \
{1,4} {2,3,5}
/ \ / \
{1} {4} {2,5} {3}
/ \
{2} {5}.
The width of a carving decomposition is the maximum number of edges leaving one of its partitions. In this case, {2,5} has outgoing edges 2--1, 2--3, and 5--4, so the width is 3. The width of a kd-tree-style partition of a 10 x 10 grid is 13.
The carving-width of a graph is the minimum width of a carving decomposition. It is known that planar graphs (in particular, subgraphs of grid graphs) with n vertices have carving-width O(√n), and the big-O constant is relatively small.
Dynamic program
Given an n-vertex input graph and a carving decomposition of width w, there is an O(2^w * n)-time algorithm to compute the optimal tile choice. This running time grows rapidly in w, so you should try decomposing some sample inputs by hand to get an idea of what kind of performance to expect.
The algorithm works on the decomposition tree from the bottom up. Let X be a partition, and let F be the set of edges that leave X. We make a table mapping each of the 2^|F| possibilities for the presence or absence of edges in F to the optimal sum on X under the specified constraints (-Infinity if there is no solution). For example, with the partition {1,4}, we have entries
{} -> ??
{1--2} -> ??
{4--5} -> ??
{1--2,4--5} -> ??
For the leaf partitions with only one vertex, the subset of F completely determines the tile, so it's easy to fill in the number of connections (if the tile is valid) or -Infinity otherwise. For the other partitions, when computing an entry of the table, try all different connectivity patterns for the edges that go between the two children.
For example, suppose we have pieces
                       |
.    .-    .-    -.    .
           |
The table for {1} is
{} -> 0
{1--2} -> 1
{1--4} -> -Infinity
{1--2,1--4} -> 2
The table for {4} is
{} -> 0
{1--4} -> 1
{4--5} -> 1
{1--4,4--5} -> -Infinity
Now let's compute the table for {1,4}. For {}, without the edge 1--4 we have score 0 for {1} (entry {}) plus score 0 for {4} (entry {}). With edge 1--4 we have score -Infinity + 1 = -Infinity (entries {1--4}).
{} -> 0
For {1--2}, the scores are 1 + 0 = 1 without 1--4 and 2 + 1 = 3 with.
{1--2} -> 3
Continuing.
{4--5} -> 0 + 1 = 1 (> -Infinity = -Infinity + (-Infinity))
{1--2,4--5} -> 1 + 1 = 2 (> -Infinity = 2 + (-Infinity))
At the end we can use the tables to determine an optimal solution.
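The merge step of that dynamic program can be phrased generically. A sketch of my own, where tables are dicts from frozensets of boundary-edge labels to scores and subsets of the middle edges are enumerated explicitly:

from itertools import combinations

def subsets(edges):
    for k in range(len(edges) + 1):
        for c in combinations(edges, k):
            yield frozenset(c)

def merge(table_left, F_left, table_right, F_right):
    # F_left / F_right: edges leaving each child. Edges in both sets run
    # between the children ("middle" edges); the rest leave the parent.
    middle = F_left & F_right
    F_parent = (F_left | F_right) - middle
    table = {}
    for outer in subsets(F_parent):
        best = float('-inf')
        for inner in subsets(middle):
            l = table_left.get((outer | inner) & F_left, float('-inf'))
            r = table_right.get((outer | inner) & F_right, float('-inf'))
            best = max(best, l + r)
        table[outer] = best
    return table, F_parent

# The worked example above: children {1} and {4} with middle edge 1--4.
t1 = {frozenset(): 0, frozenset({'1--2'}): 1,
      frozenset({'1--4'}): float('-inf'),
      frozenset({'1--2', '1--4'}): 2}
t4 = {frozenset(): 0, frozenset({'1--4'}): 1,
      frozenset({'4--5'}): 1,
      frozenset({'1--4', '4--5'}): float('-inf')}
t, F = merge(t1, {'1--2', '1--4'}, t4, {'1--4', '4--5'})
print(t[frozenset({'1--2'})])   # -> 3, matching the table for {1,4}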
Finding a carving decomposition
There are sophisticated algorithms for finding good carving decompositions, but you might not need them. Try a simple binary space partitioning scheme.
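For instance, a kd-tree-style split like the one mentioned earlier is only a few lines; a sketch of my own over (x, y) grid cells, returning the decomposition as nested tuples:

def bsp(cells, vertical=True):
    # Recursively halve the cell set, alternating the split axis.
    cells = list(cells)
    if len(cells) <= 1:
        return cells[0] if cells else None
    axis = 0 if vertical else 1
    cells.sort(key=lambda c: c[axis])
    mid = len(cells) // 2
    return (bsp(cells[:mid], not vertical),
            bsp(cells[mid:], not vertical))

tree = bsp([(x, y) for x in range(4) for y in range(4)])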
As a base, take a look at an earlier answer I gave on searching. Hill-climbing search programs are a tool every programmer should have in their arsenal as they work much better than plain brute-force solvers.
Here, even a relatively bad search algorithm has as an advantage the fact that it won't generate illegal boards, greatly reducing the expected run time.
I think I may have a better idea. I didn't test it, but I'm pretty sure it will be faster than a purely brute-force solution for large zones.
First, create an empty set (a "set" being a collection that only contains unique objects) of nodes. This collection will be used to identify which tiles have broken connections that need to be fixed.
Fill the data structures representing the board with the pieces that are available, using the ones you consider the best fit based on your own criteria, with no regard for the correctness of the solution. This will almost certainly lead you to an invalid state, but that's okay for now. Iterate through the board and find all tiles that have connections leading to nowhere. Add them to the set of broken tiles.
Now, iterate through the set. Change the tiles it refers to by reducing their number of connections (otherwise you could get into an infinite loop) so they have no broken connection, respecting the currently available pieces. Check their neighbors again, and if you broke connections to other tiles, add these to the set of broken ones too.
Once the set of broken connections is empty, you should have a fine-looking pattern. Note however that it has an important caveat: it may tend to oversimplify patterns, since the "fixing" phase always attempts to reduce the number of connections. You may need some luck to get interesting patterns, since the result can be greatly affected by the first piece you put on each tile.
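A sketch of that repair loop (my own; tiles are 4-bit masks over the directions (E, N, W, S), and I simplify by assuming the available tile set contains the blank tile, so a fallback always exists):

DIRS = [(1, 0), (0, -1), (-1, 0), (0, 1)]   # E, N, W, S
OPP = [2, 3, 0, 1]                          # reciprocal direction of each

def repair(board, available):
    # board[y][x] is a 4-bit tile mask; 'available' is the set of usable
    # masks and must contain 0 (the blank tile).
    H, W = len(board), len(board[0])

    def dangling(x, y):
        # Bits of (x, y)'s tile whose neighbour does not point back.
        bad = 0
        for d, (dx, dy) in enumerate(DIRS):
            if board[y][x] >> d & 1:
                nx, ny = x + dx, y + dy
                if not (0 <= nx < W and 0 <= ny < H
                        and board[ny][nx] >> OPP[d] & 1):
                    bad |= 1 << d
        return bad

    broken = {(x, y) for y in range(H) for x in range(W) if dangling(x, y)}
    while broken:
        x, y = broken.pop()
        bad = dangling(x, y)
        if not bad:
            continue
        keep = board[y][x] & ~bad
        # Richest available tile that uses only still-valid connections.
        new = max((t for t in available if t & ~keep == 0),
                  key=lambda t: bin(t).count('1'))
        dropped = board[y][x] & ~new
        board[y][x] = new
        for d, (dx, dy) in enumerate(DIRS):   # re-check every side we dropped
            if dropped >> d & 1:
                nx, ny = x + dx, y + dy
                if 0 <= nx < W and 0 <= ny < H:
                    broken.add((nx, ny))
    return board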