Greedy Algorithm for solving Horn formulas

This is an assignment question I've been trying to understand, and ultimately solve, for a couple of days. So far I've had no success, so any guidance or help in understanding or solving the problem is appreciated.
You are given a set of m constraints over n Boolean variables
{x1, x2, ..., xn}.
The constraints are of two types:
equality constraints: xi = xj, for some i != j
inequality constraints: xi != xj, for some i != j
Design an efficient greedy algorithm that, given the set of equality and inequality constraints, determines whether it is possible to satisfy all the constraints simultaneously. If it is possible to satisfy all the constraints, your algorithm should output an assignment to the variables that satisfies all the constraints.
Choose a representation for the input to this problem and state the problem formally using the notation Input: ..., Output: ....
Describe your greedy algorithm in plain English. In what sense is your algorithm "greedy"?
Describe your greedy algorithm in pseudocode.
Briefly justify the correctness of your algorithm.
State and justify the running time of your algorithm. The more efficient the algorithm, the better.
What I've figured out so far is that this problem is related to the Boolean satisfiability (SAT) problem. I've tried setting all the variables to false first and then, using counterexamples, showing that this cannot satisfy all the constraints at once.
I am getting confused between constraint satisfaction problems (CSP) and Horn SAT. I read certain articles on these to get a solution and this led me to confusion. My logic was to create a tree and apply DFS to check if constraints are satisfied, whereas Horn SAT solutions are leading me to mathematical proofs.
Any help is appreciated as this is my learning stage and I cannot master it all at once. :)

(informal) Classification:
So firstly, it's not the general boolean SAT problem, because that's NP-complete. Your teacher has implied that this problem isn't NP-complete by asking for an efficient (i.e. at most polynomial-time) way to always solve it.
Modelling (thinking about) the problem:
One way to think of this problem is as a graph, where inequalities represent one type of edge and equalities represent another type.
Thinking of this problem graphically helped me realise that it's a bit like a graph-colouring problem: we could set all nodes to ? (unset), then choose any node to set to true, then do a breadth-first search from that node to set all connecting nodes (setting them to either true or false), checking for any contradiction. If we complete this for a connected component of the graph, without finding contradictions, then we can ignore all nodes in that part and randomly set the value of another node, etc. If we do this until no connected components are left, and we still have no contradictions, then we've set the graph in a way that represents a legitimate solution.
Solution:
Because there's exactly n elements, we can make an associated "bucket" array of the equalities and another for the inequalities (each "bucket" could contain an array of what it equates to, but we could get even more efficient than this if we wanted [the complexity would remain the same]).
Your array of arrays for equalities could be imagined like this (each constraint recorded under both endpoints):
equalities = [[1], [0, 2], [1], [4], [3]]
which would represent that:
0 == 1
1 == 2
3 == 4
Note that this is an irregular matrix, and requires 2*m space. We do the same thing for an inequality matrix. Moreover, setting up both of these arrays (of arrays) uses O(m + n) space and time complexity.
Now, if there exists a solution, {x0, x1, x2, x3}, then {!x0, !x1, !x2, !x3} is also a solution. Proof:
(xi == xj) iff (!xi == !xj)
So it won't affect our solution if we set one of the elements arbitrarily. Let's set xi to true, and set the others to ? [numerically we'll be dealing with three values: 0 (false), 1 (true), and 2 (unset)].
We'll call this array solution (even though it's not finished yet).
Now we can use recursion to consider all the consequences of setting our value:
(The code below is pseudo-code, as the questioner didn't specify a language. I've made it somewhat C++-style, but just to keep it generic and to use the pretty formatting colours.)
bool Set (int i, bool val) // i is the index
{
    if (solution[i] != '?')
        return (solution[i] == val);
    solution[i] = val;
    for (int j = 0; j < equalities[i].size(); j += 1)
    {
        bool success = Set(equalities[i][j], val);
        if (!success)
            return false; // Contradiction found
    }
    for (int j = 0; j < inequalities[i].size(); j += 1)
    {
        bool success = Set(inequalities[i][j], !val);
        if (!success)
            return false; // Contradiction found
    }
    return true; // No contradiction found
}
void Solve ()
{
    for (int i = 0; i < solution.size(); i += 1)
        solution[i] = '?';
    for (int i = 0; i < solution.size(); i += 1)
    {
        if (solution[i] != '?')
            continue; // value has already been set/checked
        bool success = Set(i, true);
        if (!success)
        {
            print "No solution";
            return;
        }
    }
    print "At least one solution exists. Here is a solution:";
    print solution;
}
Because of the first if condition in the Set function, the body of Set (beyond the if statement) can be executed at most n times, once per node value. Each time the body runs, the work done is proportional to the number of edges associated with the corresponding node, and the Solve function calls Set at most n times. Hence the total work done during the solving process is O(m + n).
A trick here is to recognise that the Solve function only needs to call the Set function C times, where C is the number of connected components of the graph. Since the connected components are independent of one another, the same rule applies to each: we can legitimately choose a value for one of its elements and then consider the consequences.
The fastest solution would still need to read all of the constraints, O(m) and would need to output a solution when it's possible, O(n); therefore it's not possible to get a solution with better time complexity than O(m+n). The above is a greedy algorithm with O(m+n) time and space complexity.
It's probably possible to get better space complexity (while maintaining the O(m+n) time complexity), maybe even O(1), but I'm not sure.
As for Horn formulas, I'm embarrassed to admit that I know nothing about them, but this answer directly responds to everything that was asked of you in the assignment.
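For concreteness, the bucket-array solution above can be sketched in Python. This is a rough, illustrative translation (the names are mine, not from the assignment); it uses an explicit stack instead of recursion so that long chains of constraints don't hit the recursion limit:

```python
def solve(n, equalities, inequalities):
    """Return a satisfying True/False assignment for n variables, or None.

    equalities[i] and inequalities[i] list the indices j such that
    x_i == x_j (respectively x_i != x_j); each constraint is assumed
    to be stored symmetrically, as in the bucket arrays above.
    """
    solution = [None] * n  # None plays the role of '?' (unset)

    def assign(start, value):
        # Propagate the consequences of setting x_start = value.
        stack = [(start, value)]
        while stack:
            i, val = stack.pop()
            if solution[i] is not None:
                if solution[i] != val:
                    return False  # contradiction found
                continue
            solution[i] = val
            for j in equalities[i]:
                stack.append((j, val))       # x_j must equal x_i
            for j in inequalities[i]:
                stack.append((j, not val))   # x_j must differ from x_i
        return True

    for i in range(n):
        # Greedily set one representative of each connected component.
        if solution[i] is None and not assign(i, True):
            return None
    return solution
```

For example, with x0 = x1 and x1 != x2 stored symmetrically, solve(3, [[1], [0], []], [[], [2], [1]]) returns [True, True, False].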

Let's take an example: with the constraints x1 = x2 and x2 != x3, the assignment 110 works.
Remember that since we are only given the constraints, the algorithm could just as well generate 001 as output, as it satisfies the constraints too.
One way to solve it would be:
Have two lists, one for each constraint type; each list holds (i, j) index pairs.
Sort the lists on the i index.
Now, for each pair in the equality list, check that no pair in the inequality list conflicts with it.
If one does, you can exit right away.
Otherwise, check whether more pairs in the equality list share one of the indices.
You can then assign one or zero to that variable, and eventually you will be able to generate the complete output.

Related

Subset-Sum in Linear Time

This was a question on our Algorithms final exam. It's verbatim because the prof let us take a copy of the exam home.
(20 points) Let I = {r1,r2,...,rn} be a set of n arbitrary positive integers and the values in I are distinct. I is not given in any sorted order. Suppose we want to find a subset I' of I such that the total sum of all elements in I' is exactly 100*ceil(n^.5) (each element of I can appear at most once in I'). Present an O(n) time algorithm for solving this problem.
As far as I can tell, it's basically a special case of the knapsack problem, otherwise known as the subset-sum problem ... both of which are in NP and in theory impossible to solve in linear time?
So ... was this a trick question?
This SO post basically explains that a pseudo-polynomial (linear) time approximation can be done if the weights are bounded, but in the exam problem the weights aren't bounded and either way given the overall difficulty of the exam I'd be shocked if the prof expected us to know/come up with an obscure dynamic optimization algorithm.
There are two things that make this problem possible:
The input can be truncated to size O(sqrt(n)). There are no negative inputs, so you can discard any numbers greater than 100*sqrt(n), and all inputs are distinct so we know there are at most 100*sqrt(n) inputs that matter.
The playing field has size O(sqrt(n)). Although there are O(2^sqrt(n)) ways to combine the O(sqrt(n)) inputs that matter, you don't have to care about combinations that either leave the 100*sqrt(n) range or redundantly hit a target you can already reach.
Basically, this problem screams dynamic programming with each input being checked against each part of the 'reached number' space somehow.
The solution ends up being a matter of ensuring numbers don't reach off of themselves (by scanning in the right direction), of only looking at each number once, and of giving ourselves enough information to reconstruct the solution afterwards.
Here's some C# code that should solve the problem in the given time:
int[] FindSubsetToImpliedTarget(int[] inputs) {
    var target = 100*(int)Math.Ceiling(Math.Sqrt(inputs.Length));
    // build up how-X-was-reached table
    var reached = new int?[target+1];
    reached[0] = 0; // the empty set reaches 0
    foreach (var e in inputs) {
        if (e > target) continue; // inputs above the target can never help
        // we go backwards to avoid reaching off of ourselves
        for (var i = target; i >= e; i--) {
            if (reached[i-e].HasValue) {
                reached[i] = e;
            }
        }
    }
    // was target even reached?
    if (!reached[target].HasValue) return null;
    // build result by back-tracking via the logged reached values
    var result = new List<int>();
    for (var i = target; reached[i] != 0; i -= reached[i].Value) {
        result.Add(reached[i].Value);
    }
    return result.ToArray();
}
I haven't actually tested the above code, so beware typos and off-by-ones.
The typical DP algorithm for the subset-sum problem yields an O(n) algorithm here. We use dp[i][k] (boolean) to indicate whether the first i items have a subset with sum k; the transition equation is:
dp[i][k] = (dp[i-1][k-v[i]] || dp[i-1][k]),
which is O(NM) where N is the number of items considered and M is the target sum. Since the elements are distinct and the sum must equal 100*ceil(n^.5), we only need to consider items with value at most 100*ceil(n^.5), of which there are at most 100*ceil(n^.5); so N <= 100*ceil(n^.5) and M = 100*ceil(n^.5).
The DP algorithm is O(N*M) = O(100*ceil(n^.5) * 100*ceil(n^.5)) = O(n).
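The same truncate-then-DP idea can be sketched in Python (illustrative only; the helper name and structure are mine):

```python
import math

def subset_to_target(values):
    """Find a subset of distinct positive ints summing to 100*ceil(sqrt(n)).

    Returns the subset as a list, or None if the target is unreachable.
    Runs in O(n) overall by the argument above: only values <= target
    matter, and there are at most `target` of them.
    """
    n = len(values)
    target = 100 * math.ceil(math.sqrt(n))
    # reached[s] holds the value last added to reach sum s (None = unreachable).
    reached = [None] * (target + 1)
    reached[0] = 0  # the empty set reaches 0
    for v in values:
        if v > target:
            continue  # too big to ever be part of the subset
        for s in range(target, v - 1, -1):  # backwards: use each v at most once
            if reached[s] is None and reached[s - v] is not None:
                reached[s] = v
    if reached[target] is None:
        return None
    # Walk back from the target, peeling off the logged values.
    subset, s = [], target
    while s:
        subset.append(reached[s])
        s -= reached[s]
    return subset
```

With four inputs the target is 100*ceil(sqrt(4)) = 200, so sorted(subset_to_target([50, 150, 999, 1000])) gives [50, 150].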
OK, the following is a simple solution in O(n) time.
Since the required sum S is of the order O(n^0.5), if we formulate an algorithm of complexity S^2 then we are fine, since our algorithm will have effective complexity O(n).
Iterate once over all the elements and check whether each value is at most S. If it is, push it into a new array. This array will contain at most S elements (O(n^.5)).
Sort this array in descending order in O(sqrt(n)*logn) time ( < O(n)). This is so because logn <= sqrt(n) for all natural numbers. https://math.stackexchange.com/questions/65793/how-to-prove-log-n-leq-sqrt-n-over-natural-numbers
Now this problem is a 1D knapsack problem with W = S and number of elements = S (upper bound).
Maximize the total weight of items and see if it equals S.
It can be solved using dynamic programming in linear time (linear wrt W ~ S).

Coin change algorithm and pseudocode: Need clarification

I'm trying to understand the coin change problem solution, but am having some difficulty.
At the Algorithmist, there is a pseudocode solution for the dynamic programming solution, shown below:
n = goal number
S = [S1, S2, S3 ... Sm]

function sequence(n, m)
    //initialize base cases
    for i = 0 to n
        for j = 0 to m
            table[i][j] = table[i-S[j]][j] + table[i][j-1]
This is a pretty standard O(n*m) dynamic-programming algorithm that avoids recalculating the same answer multiple times by using a 2-D array.
My issue is two-fold:
How to define the base cases and incorporate them in table[][] as initial values
How to extract out the different sequences from the table
Regarding issue 1, there are three base cases with this algorithm:
if n==0, return 1
if n < 0, return 0
if n >= 1 && m <= 0, return 0
How to incorporate them into table[][], I am not sure. Finally, I have no idea how to extract out the solution set from the array.
We can implement a dynamic programming algorithm in at least two different approaches. One is the top-down approach using memoization, the other is the bottom-up iterative approach.
For a beginner to dynamic programming, I would always recommend using the top-down approach first since this will help them understand the recurrence relationships in dynamic programming.
So in order to solve the coin changing problem, you've already understood what the recurrence relationship says:
table[i][j] = table[i-S[j]][j] + table[i][j-1]
Such a recurrence relationship is good but is not that well-defined since it doesn't have any boundary conditions. Therefore, we need to define boundary conditions in order to ensure the recurrence relationship could successfully terminate without going into an infinite loop.
So what will happen when we try to go down the recursive tree?
If we need to calculate table[i][j], which means the number of approaches to change i using coins from type 0 to j, there are several corner cases we need to handle:
1) What if j == 0?
If j == 0 we will try to solve the sub-problem table(i,j-1), which is not a valid sub-problem. Therefore, one boundary condition is:
if (j == 0) {
    if (i == 0) table[i][j] = 1;
    else table[i][j] = 0;
}
2) What if i - S[j] < 0?
We also need to handle this boundary case and we know in such a condition we should either not try to solve this sub-problem or initialize table(i-S[j],j) = 0 for all of these cases.
So in all, if we are going to implement this dynamic programming from a top-down memoization approach, we can do something like this:
int f(int i, int j) {
    if (calc[i][j]) return table[i][j];
    calc[i][j] = true;
    if (j == 0) {
        if (i == 0) return table[i][j] = 1;
        else return table[i][j] = 0;
    }
    if (i >= S[j])
        return table[i][j] = table[i-S[j]][j] + table[i][j-1];
    else
        return table[i][j] = table[i][j-1];
}
In practice, it's also possible that we use the value of table arrays to help track whether this sub-problem has been calculated before (e.g. we can initialize a value of -1 means this sub-problem hasn't been calculated).
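For reference, the same top-down memoization can be sketched in Python, where functools.lru_cache plays the role of the calc/table arrays (here coins are 0-indexed, and j < 0 is the "no coin types left" base case):

```python
from functools import lru_cache

def count_ways(n, coins):
    """Count the ways to make amount n from the given coin denominations."""
    @lru_cache(maxsize=None)
    def ways(amount, j):
        if amount == 0:
            return 1   # exactly reached the goal: one way (use nothing more)
        if amount < 0 or j < 0:
            return 0   # overshot the goal, or ran out of coin types
        # Ways that use coin j at least once, plus ways that skip coin j.
        return ways(amount - coins[j], j) + ways(amount, j - 1)

    return ways(n, len(coins) - 1)
```

For example, count_ways(4, [1, 2, 3]) returns 4, for 1+1+1+1, 1+1+2, 2+2 and 1+3.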
Hope the answer is clear. :)

Can not understand knapsack solutions

In wikipedia the algorithm for Knapsack is as follows:
for i from 1 to n do
    for j from 0 to W do
        if j >= w[i] then
            T[i, j] := max(T[i-1, j], T[i-1, j-w[i]] + v[i])
        else
            T[i, j] := T[i-1, j]
        end if
    end for
end for
And it has the same structure in all the examples I found online.
What I cannot understand is how this code takes into account the possibility that the max value comes from a smaller knapsack. E.g. if the knapsack capacity is 8, perhaps the max value comes from capacity 7 (8 - 1).
I could not find the logic for considering that the max value might come from a smaller knapsack anywhere. Is this idea wrong?
The Dynamic Programming solution of knapsack is basically recursive:
T(i,j) = max{ T(i-1,j) , T(i-1,j-w[i]) + v[i] }
//             ^           ^
//     ignore the element  add the element: your value increases by v[i]
//                         and the additional weight you can carry
//                         decreases by w[i]
(The else condition is redundant in the recursive form if you set T(i,j) = -infinity for each j < 0).
The idea is exhaustive search: you start from one element and have two possibilities, add it or don't.
You check both options and choose the better of the two.
Since this is done recursively, you are effectively checking ALL the possible ways of assigning elements to the knapsack.
Note that the solution on Wikipedia is basically a bottom-up version of the same recursive formula.
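To make the recurrence concrete, here is a minimal bottom-up sketch in Python (0-indexed, so the formula's w[i] and v[i] become weights[i-1] and values[i-1]). Note that T[i-1][j - weights[i-1]] is exactly where a "smaller knapsack" is consulted: every capacity below W has its own column, so the best value for capacity 7 is already available when filling capacity 8:

```python
def knapsack(weights, values, W):
    """Return the maximum value achievable with capacity W (0-1 knapsack)."""
    n = len(weights)
    # T[i][j] = best value using the first i items with capacity j.
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if j >= weights[i - 1]:
                # Either skip item i, or take it and fall back on the
                # optimal answer for the smaller capacity j - w[i].
                T[i][j] = max(T[i - 1][j],
                              T[i - 1][j - weights[i - 1]] + values[i - 1])
            else:
                T[i][j] = T[i - 1][j]  # item i does not fit
    return T[n][W]
```

For example, knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7) returns 9 (take the items of weight 3 and 4).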
As I see it, you have misunderstood the concept of the knapsack problem, which I will describe here in detail until we reach the code part.
First, there are two versions of the problem:
0-1 knapsack problem: here, the items are indivisible; you either take an item or not. It can be solved with dynamic programming. // and this one is the one you are facing problems with
Fractional knapsack problem: don't care about this one now.
For the first problem you can understand it as the following:
Given a knapsack with maximum capacity W, and a set S consisting of n items
Each item i has some weight wi and benefit value bi (all wi and W are integer values).
So, how do we pack the knapsack to achieve the maximum total value of packed items?
In mathematical terms: maximize Σ bi·xi subject to Σ wi·xi ≤ W, with each xi ∈ {0, 1}.
and to solve this problem using dynamic programming we set up a table V[0..k, 0..W] with one row for each available item and one column for each weight from 0 to W.
We need to carefully identify the sub-problems:
The sub-problem then will be to compute V[k,w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, ..., k} in a knapsack of size w (the maximum value achievable given capacity w and items 1, ..., k).
So we arrive at the following recurrence to solve our problem: V[k,w] = V[k-1,w] if wk > w, and otherwise V[k,w] = max(V[k-1,w], V[k-1,w-wk] + bk).
This algorithm only finds the max possible value that can be carried in the knapsack, i.e., the value in V[n,W].
Recovering the items that achieve this maximum value is another topic.
I really hope that this answer helps you. I have a PowerPoint presentation that walks you through filling in the table and shows you the algorithm step by step, but I don't know how to upload it to Stack Overflow. Let me know if any help is needed.

How is this solution an example of dynamic programming?

A lecturer gave this question in class:
[question]
A sequence of n integers is stored in an array A[1..n]. An integer a in A is called the majority if it appears more than n/2 times in A.
An O(n) algorithm can be devised to find the majority based on the following observation: if two different elements in the original sequence are removed, then the majority in the original sequence remains the majority in the new sequence. Using this observation, or otherwise, write programming code to find the majority, if one exists, in O(n) time.
for which this solution was accepted
[solution]
int findCandidate(int[] a)
{
    int maj_index = 0;
    int count = 1;
    for (int i = 1; i < a.length; i++)
    {
        if (a[maj_index] == a[i])
            count++;
        else
            count--;
        if (count == 0)
        {
            maj_index = i;
            count++;
        }
    }
    return a[maj_index];
}

int findMajority(int[] a)
{
    int c = findCandidate(a);
    int count = 0;
    for (int i = 0; i < a.length; i++)
        if (a[i] == c) count++;
    if (count > a.length / 2) return c;
    return -1; // just a marker - no majority found
}
I can't see how the solution provided is a dynamic solution. And I can't see how based on the wording, he pulled that code out.
The origin of the term dynamic programming is an attempt to describe a really awesome way of optimizing certain kinds of solutions ("dynamic" was chosen because it sounded punchier). In other words, when you see "dynamic programming", you can translate it into "awesome optimization".
"Dynamic programming" has nothing to do with dynamic allocation of memory or the like; it's just an old term. In fact, it also has little to do with the modern meaning of "programming".
It is a method of solving of specific class of problems - when an optimal solution of subproblem is guaranteed to be part of optimal solution of bigger problem. For instance, if you want to pay $567 with a smallest amount of bills, the solution will contain at least one of solutions for $1..$566 and one more bill.
The code is just an application of the algorithm.
This is dynamic programming because the findCandidate function breaks the provided array down into smaller, more manageable parts. In this case, it starts with the first element as a candidate for the majority. By increasing the count when the candidate is encountered and decreasing it when it is not, it determines whether the candidate is plausible. When the count equals zero, we know that the first i elements do not have a majority. By continually calculating the local majority we don't need to iterate through the array more than once in the candidate-identification phase. We then check whether that candidate is actually the majority by going through the array a second time, giving us O(n). It actually runs in 2n time, since we iterate twice, but the constant doesn't matter.
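A compact Python rendering of the same two-pass idea (whether or not one calls it dynamic programming, it is usually known as the Boyer-Moore majority vote):

```python
def find_majority(a):
    """Return the majority element of a (appearing > len(a)//2 times), or None."""
    # Pass 1: find a candidate by pairing off unequal elements.
    candidate, count = a[0], 1
    for x in a[1:]:
        if x == candidate:
            count += 1
        else:
            count -= 1
            if count == 0:
                candidate, count = x, 1
    # Pass 2: verify that the candidate really is a majority.
    if sum(1 for x in a if x == candidate) > len(a) // 2:
        return candidate
    return None
```

For example, find_majority([1, 2, 1, 2, 2]) returns 2, while find_majority([1, 2, 3]) returns None.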

What can be the efficient approach to solve the 8 puzzle problem?

The 8-puzzle is a square board with 9 positions, filled by 8 numbered tiles and one gap. At any point, a tile adjacent to the gap can be moved into the gap, creating a new gap position. In other words the gap can be swapped with an adjacent (horizontally and vertically) tile. The objective in the game is to begin with an arbitrary configuration of tiles, and move them so as to get the numbered tiles arranged in ascending order either running around the perimeter of the board or ordered from left to right, with 1 in the top left-hand position.
I was wondering what approach will be efficient to solve this problem?
I will just attempt to rewrite the previous answer with more details on why it is optimal.
The A* algorithm taken directly from wikipedia is
function A*(start, goal)
    closedset := the empty set                 // The set of nodes already evaluated.
    openset := set containing the initial node // The set of tentative nodes to be evaluated.
    came_from := the empty map                 // The map of navigated nodes.
    g_score[start] := 0                        // Distance from start along optimal path.
    h_score[start] := heuristic_estimate_of_distance(start, goal)
    f_score[start] := h_score[start]           // Estimated total distance from start to goal.
    while openset is not empty
        x := the node in openset having the lowest f_score[] value
        if x = goal
            return reconstruct_path(came_from, came_from[goal])
        remove x from openset
        add x to closedset
        foreach y in neighbor_nodes(x)
            if y in closedset
                continue
            tentative_g_score := g_score[x] + dist_between(x, y)
            if y not in openset
                add y to openset
                tentative_is_better := true
            elseif tentative_g_score < g_score[y]
                tentative_is_better := true
            else
                tentative_is_better := false
            if tentative_is_better = true
                came_from[y] := x
                g_score[y] := tentative_g_score
                h_score[y] := heuristic_estimate_of_distance(y, goal)
                f_score[y] := g_score[y] + h_score[y]
    return failure

function reconstruct_path(came_from, current_node)
    if came_from[current_node] is set
        p = reconstruct_path(came_from, came_from[current_node])
        return (p + current_node)
    else
        return current_node
So let me fill in all the details here.
heuristic_estimate_of_distance is the function Σ d(xi) where d(.) is the Manhattan distance of each square xi from its goal state.
So the setup
1 2 3
4 7 6
8 5 _
would have a heuristic_estimate_of_distance of 1+2+1=4, since 8 and 5 are each one away from their goal positions with d(.)=1, and 7 is 2 away from its goal position with d(7)=2.
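A small Python sketch of this heuristic, under the assumption that boards are encoded as 9-character strings read row by row, with '0' for the gap and the left-to-right goal state 123456780:

```python
def manhattan(state, goal="123456780"):
    """Sum of Manhattan distances of every tile (the gap excluded)
    from its position in the goal state."""
    h = 0
    for idx, tile in enumerate(state):
        if tile == "0":
            continue  # the gap does not count towards the heuristic
        g = goal.index(tile)
        h += abs(idx // 3 - g // 3) + abs(idx % 3 - g % 3)
    return h
```

For the board above, encoded as "123476850", manhattan returns 1 + 2 + 1 = 4, matching the hand calculation.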
The set of nodes that the A* searches over is defined to be the starting position followed by all possible legal positions. That is, let's say the starting position x is as above:
x =
1 2 3
4 7 6
8 5 _
then the function neighbor_nodes(x) produces the 2 possible legal moves:
1 2 3
4 7 _
8 5 6
or
1 2 3
4 7 6
8 _ 5
The function dist_between(x,y) is defined as the number of square moves that took place to transition from state x to y. For the purposes of your algorithm this will always be equal to 1.
closedset and openset are both specific to the A* algorithm and can be implemented using standard data structures (a priority queue for the open set, I believe). came_from is a data structure used to reconstruct the solution found, using the function reconstruct_path, whose details can be found on Wikipedia. If you do not wish to remember the solution, you do not need to implement this.
Last, I will address the issue of optimality. Consider the excerpt from the A* wikipedia article:
"If the heuristic function h is admissible, meaning that it never overestimates the actual minimal cost of reaching the goal, then A* is itself admissible (or optimal) if we do not use a closed set. If a closed set is used, then h must also be monotonic (or consistent) for A* to be optimal. This means that for any pair of adjacent nodes x and y, where d(x,y) denotes the length of the edge between them, we must have:
h(x) <= d(x,y) +h(y)"
So it suffices to show that our heuristic is admissible and monotonic. For the former (admissibility), note that for any configuration our heuristic (the sum of all distances) behaves as if each square were unconstrained by the legal moves and could float freely towards its goal position. That is clearly an optimistic estimate, hence our heuristic is admissible: it never over-estimates, since reaching the goal will always take at least as many moves as the heuristic claims.
The monotonicity requirement stated in words is:
"The heuristic cost (estimated distance to goal state) of any node must be less than or equal to the cost of transitioning to any adjacent node plus the heuristic cost of that node."
It is mainly to prevent the possibility of negative cycles, where transitioning to an unrelated node may decrease the distance to the goal node more than the cost of actually making the transition, suggesting a poor heuristic.
Showing monotonicity is pretty simple in our case. Any adjacent nodes x, y have d(x,y) = 1 by our definition of d. Thus we need to show
h(x) <= h(y) + 1
which is equivalent to
h(x) - h(y) <= 1
which is equivalent to
Σ d(xi) - Σ d(yi) <= 1
which is equivalent to
Σ d(xi) - d(yi) <= 1
We know by our definition of neighbor_nodes(x) that two neighbour nodes x,y can have at most the position of one square differing, meaning that in our sums the term
d(xi) - d(yi) = 0
for all but one value of i. Let's say without loss of generality this is true for i = k. Furthermore, we know that for i = k the tile has moved at most one place, so its distance to a goal state can be at most one more than in the previous state; thus:
Σ d(xi) - d(yi) = d(xk) - d(yk) <= 1
showing monotonicity. This shows what needed to be showed, thus proving this algorithm will be optimal (in a big-O notation or asymptotic kind of way.)
Note that I have shown optimality in terms of big-O notation, but there is still lots of room to play in terms of tweaking the heuristic. You can add additional twists so that it is a closer estimate of the actual distance to the goal state, but you have to make sure that the heuristic is always an underestimate, otherwise you lose optimality!
EDIT MANY MOONS LATER
Reading this over again (much) later, I realized the way I wrote it sort of confounds the meaning of optimality of this algorithm.
There are two distinct meanings of optimality I was trying to get at here:
1) The algorithm produces an optimal solution, that is the best possible solution given the objective criteria.
2) The algorithm expands the least number of state nodes of all possible algorithms using the same heuristic.
The simplest way to understand why you need admissibility and monotonicity of the heuristic to obtain 1) is to view A* as an application of Dijkstra's shortest path algorithm on a graph where the edge weights are given by the node distance traveled thus far plus the heuristic distance. Without these two properties, we would have negative edges in the graph, thereby negative cycles would be possible and Dijkstra's shortest path algorithm would no longer return the correct answer! (Construct a simple example of this to convince yourself.)
2) is actually quite confusing to understand. To fully understand the meaning of this, there are a lot of quantifiers on this statement, such as when talking about other algorithms, one refers to similar algorithms as A* that expand nodes and search without a-priori information (other than the heuristic.) Obviously, one can construct a trivial counter-example otherwise, such as an oracle or genie that tells you the answer at every step of the way. To understand this statement in depth I highly suggest reading the last paragraph in the History section on Wikipedia as well as looking into all the citations and footnotes in that carefully stated sentence.
I hope this clears up any remaining confusion among would-be readers.
You can use a heuristic based on the positions of the numbers: the higher the overall sum of the distances of each tile from its goal state, the higher the heuristic value. Then you can implement A* search, which can be proved to be the optimal search in terms of time and space complexity (provided the heuristic is monotonic and admissible): http://en.wikipedia.org/wiki/A*_search_algorithm
Since the OP cannot post a picture, this is what he's talking about: [image of an 8-puzzle board]
As far as solving this puzzle goes, take a look at the iterative deepening depth-first search (IDDFS) algorithm, as made relevant to the 8-puzzle problem by this page.
Donut's got it! IDDFS will do the trick, considering the relatively limited search space of this puzzle. It would be efficient, hence answering the OP's question. It would find the optimal solution, but not necessarily in optimal complexity.
Implementing IDDFS would be the more complicated part of this problem; I just want to suggest a simple approach to managing the board, the game's rules, etc. This in particular addresses a way to obtain initial states for the puzzle which are solvable. As hinted in the notes of the question, not all random assignments of 9 tiles (considering the empty slot a special tile) will yield a solvable puzzle; it is a matter of mathematical parity... So, here's a suggestion to model the game:
Make a list of all 3x3 permutation matrices which represent valid "moves" of the game.
Such a list is a subset of the 3x3 matrices with all zeros and two ones. Each matrix gets an ID, which will be quite convenient for keeping track of moves in the IDDFS search tree. An alternative to matrices is to use two-tuples of the tile position numbers to swap; this may lead to a faster implementation.
Such matrices can be used to create the initial puzzle state, starting with the "win" state and applying an arbitrary number of randomly selected permutations. In addition to ensuring that the initial state is solvable, this approach also provides an indicative number of moves with which a given puzzle can be solved.
Now let's just implement the IDDFS algo and [joke]return the assignment for an A+[/joke]...
This is an example of the classical shortest path algorithm. You can read more about shortest path here and here.
In short, think of all possible states of the puzzle as vertices in some graph. With each move you change state, so each valid move represents an edge of the graph. Since moves don't have any cost, you may think of the cost of each move as being 1. The following C++-like pseudo-code will work for this problem:
{
    int[][] field = new int[3][3];
    // fill the input here
    map<string, int> path;
    queue<string> q;
    put(field, 0); // we can get to the starting position in 0 turns
    while (!q.empty()) {
        string v = q.poll();
        int[][] take = decode(v);
        int time = path.get(v);
        if (isFinalPosition(take)) {
            return time;
        }
        for each valid move from take to int[][] newPosition {
            put(newPosition, time + 1);
        }
    }
    // no path
    return -1;
}

bool isFinalPosition(int[][] f) {
    return encode(f) == "123456780"; // 0 represents the empty space
}

void put(int[][] position, int time) {
    string s = encode(position);
    if (!path.contains(s)) {
        path.put(s, time);
        q.add(s); // also enqueue the state so it actually gets processed
    }
}

string encode(int[][] field) {
    string s = "";
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            s += field[i][j];
    return s;
}

int[][] decode(string s) {
    int[][] ans = new int[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            ans[i][j] = s[i * 3 + j] - '0';
    return ans;
}
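If it helps, here is a runnable Python version of the same BFS, with states encoded as 9-character strings ('0' marks the gap). This is a sketch assuming the left-to-right goal ordering, not the perimeter variant:

```python
from collections import deque

def solve_puzzle(start):
    """Return the minimum number of moves from `start` to the goal, or -1."""
    goal = "123456780"
    dist = {start: 0}          # plays the role of `path` above
    q = deque([start])
    while q:
        s = q.popleft()
        if s == goal:
            return dist[s]
        z = s.index("0")       # position of the gap
        r, c = divmod(z, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                # Swap the gap with the adjacent tile.
                nz = nr * 3 + nc
                t = list(s)
                t[z], t[nz] = t[nz], t[z]
                nxt = "".join(t)
                if nxt not in dist:
                    dist[nxt] = dist[s] + 1
                    q.append(nxt)
    return -1  # no path: the start position is unsolvable
```

For example, solve_puzzle("123456708") returns 1: one move slides the 8 left into the gap.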
See this link for my parallel iterative deepening search for a solution to the 15-puzzle, which is the 4x4 big-brother of the 8-puzzle.
