How to find the complexity of decision trees in Tic Tac Toe by hand? - complexity-theory

I know the upper bound for the size of the game tree in 3x3 Tic Tac Toe is 9! = 362,880. After deducting the invalid cases and accounting for rotations and reflections, only 26,830 possible games are left. So the decision tree complexity of 3x3 Tic Tac Toe is 5, which is the number of digits of the leaf-node count (26,830). Is my conclusion right?
If so, how can I calculate the decision tree complexity of 4x4 Tic Tac Toe without drawing out a full decision tree?
Sorry for my dumb question.

You probably want to use some kind of model checker that can count the solutions, e.g. a #SAT solver https://en.wikipedia.org/wiki/Sharp-SAT. I don't believe any trick is possible beyond exploring the state space, apart from using symmetries (but that only alleviates the exploration).
This is like the N-queens problem https://en.wikipedia.org/wiki/Eight_queens_puzzle: there is no known analytical formula for the number of solutions when you scale the board up.
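The brute-force exploration mentioned above is easy to sketch. The following toy Python enumerator (an illustration, not a #SAT encoding) plays out every legal 3x3 game, stopping a line of play as soon as someone wins or the board fills; it counts 255,168 raw move sequences, before the rotation/reflection reduction that brings the figure down toward the 26,830 in the question.

```python
# Enumerate every legal 3x3 Tic Tac Toe game by playing all moves,
# stopping a line of play as soon as someone wins or the board is full.
# This is exactly the "explore the state space" approach; symmetry
# reduction is deliberately left out.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board=None, player='X'):
    if board is None:
        board = [None] * 9
    if winner(board) or all(board):
        return 1  # the game ended here: one complete game
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = None
    return total

print(count_games())  # 255168 distinct move sequences
```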

Related

How to apply A* algorithm to a Tic Tac Toe game?

My teacher asked me to write a Tic Tac Toe game using the A* algorithm. I don't know how I can use A* in my game. Is it possible to check the winner using A*? Or can it be used for something else?
You should extend your game: the grid should not be 3x3 but maybe 20x20.
Then, given an initial situation, you should quickly work out the fewest steps in which you can win the game. It's like a DFS problem, so you can use A*, because A* is for optimization problems.

Creating "untransparent" squares

I'm writing a program that prints the circumference of squares to the screen according to coordinates and length of a side given by the user for each square.
The squares should be on top of each other if they overlap so that the bottom one is being hidden by the top one.
The order of the squares is set according to the order they were entered to the program (First is bottom).
For example:
&&&&
&  &
&  &$$$$
&&&&   $
   $   $
   $   $
   $$$$$
The best algorithm I came up with has a time complexity of O(n^2) for each square.
Any suggestions on how to make the squares "untransparent"?
The O(n^2) algorithm you mention is probably the classic "painter's algorithm", in which you simply render ("rasterize") the squares one after another from the bottom ones to the top ones. This is a perfectly good algorithm, widely used in computer graphics. However, any "raster" algorithm will have the same time complexity of O(n^2) per square.
But if you want an asymptotically faster algorithm, you have to look for a "vector" algorithm, i.e. the algorithm that works with the edges of the squares, but does not waste time processing their interiors. One way to build such an algorithm is to pre-calculate the final visible edge layout in vector form and then draw only the visible edges on the screen.
To implement something like that, each square has to be initially represented by a set of four edges. Then a single pass of a sweep-line algorithm will eliminate the invisible edges. And then you can render the remaining visible edges on the screen. This algorithm will be a lot more complex than the "painter's algorithm", since you will have to implement the sweeping and edge-elimination logic. But for this particular problem (especially considering that it deals with orthogonal geometry) it is not at all that difficult.
P.S. One critical point here is that the latter approach can only work if all the squares are known in advance, i.e. it is only applicable to an off-line problem. If you are dealing with an on-line problem, i.e. you have to draw the squares immediately as they are received from the input, not knowing all of them in advance, then in the general case there's no reason to attempt to improve anything here. Just use the painter's algorithm.
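For reference, the painter's algorithm described above can be sketched in a few lines of Python. The grid size and the (top, left, side, char) square format are assumptions for illustration; interiors are rasterized as blanks so that an upper square hides whatever lies beneath it, which is what makes the cost O(n^2) per square.

```python
# Painter's algorithm sketch: rasterize the circumference of each square
# into a character grid, bottom square first, so later squares overwrite
# (hide) earlier ones. Each square is (top_row, left_col, side, char),
# an assumed format for illustration.

def draw_squares(rows, cols, squares):
    grid = [[' '] * cols for _ in range(rows)]
    for top, left, side, ch in squares:          # input order = bottom to top
        for i in range(side):
            for j in range(side):
                r, c = top + i, left + j
                if not (0 <= r < rows and 0 <= c < cols):
                    continue
                on_border = i in (0, side - 1) or j in (0, side - 1)
                # interior cells are blanked so lower squares are hidden
                grid[r][c] = ch if on_border else ' '
    return '\n'.join(''.join(row) for row in grid)

# Example: a $ square entered first (bottom), then an & square on top.
print(draw_squares(8, 9, [(2, 3, 5, '$'), (0, 0, 4, '&')]))
```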

Minimax and tic tac toe - is my algorithm correct?

I implemented a minimax algorithm for TTT. When I make the AI player make the first move, it evaluates all minimax values of the possible moves as 0. That means it can choose any square on the grid as the first move. However, any Tic Tac Toe guide will tell you that choosing a corner or center square when making the first move is the better choice since there is a higher chance of winning.
Why is my algorithm not reflecting this?
EDIT: To clarify, what I'm trying to ask is: is this a limitation of the minimax algorithm or is my implementation incorrect?
Your algorithm does not have to reflect this: if you try out all starting positions, you would find that the corners and the center give you more paths to win than the other cells.
With tic-tac-toe's lack of complexity, the minimax can look ahead all the way through the end of the game, starting with the very first move. The number of available moves goes down quickly as you progress through the game, so the full search finishes pretty fast.
With more complex games (othello, checkers, chess), the so-called "opening books" become more important. The number of available moves in the beginning of the game is enormous, so a traditional approach is to pick a move from the "book" of openings, and stay with pre-calculated "book moves" for the first three to six plays. Moves outside the book are not considered, saving a lot of CPU on moves that remain essentially the same.
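To see the point about full search concretely, here is a minimal negamax sketch (an illustration, not the asker's code): it scores every opening move on an empty board, and with perfect play on both sides each one comes out 0, a draw, so plain minimax has no basis for preferring the center or a corner.

```python
# Negamax over the full 3x3 game tree, memoized on the board string.
from functools import lru_cache

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, player):
    # score from the point of view of the player to move:
    # +1 win, 0 draw, -1 loss
    if winner(board):
        return -1            # the previous player just completed a line
    if '.' not in board:
        return 0             # draw
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i+1:]
            best = max(best, -negamax(child, 'O' if player == 'X' else 'X'))
    return best

empty = '.' * 9
first_move_values = [-negamax(empty[:i] + 'X' + empty[i+1:], 'O')
                     for i in range(9)]
print(first_move_values)  # [0, 0, 0, 0, 0, 0, 0, 0, 0]
```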

Algorithm for solving tiling/jigsaw puzzle

I've been thinking about an algorithm for solving small puzzles. I found different algorithms on the internet and on Stack Overflow, but they do not meet my needs on some points:
My puzzle pieces are in one color, there is no image/pattern/... on them
Every edge of a part can be one of 8 options, similar to those in the picture (you can describe the parts as ABCD, cdab, cBBb, ADcb, for example); there are no more complicated structures or anything like that
The puzzles I want to solve are not too big; none are bigger than 8x8
The corner/edge pieces have no specific edges; the result will just not be a clean rectangle
Not all my puzzles are solvable
The parts can be rotated but not turned
Every puzzle part is unique
Example puzzle parts
So my starting point would be just brute force - lay piece 0 down in the (0,0) position, then start trying any of the remaining pieces in (0,1) until one fits, then move on to (0,2), etc. At any step if there are no pieces that fit in that space, take out the previously fit piece and try to find a new fit for that square.
I can't prove it, but I suspect that filling in pieces such that you are more likely to be evaluating a piece with 2 constraints (that is, instead of doing larger squares, 2x2, 3x3, 4x4, moving out) will terminate faster than just doing rows.
It reminds me of those 3x3 puzzles where you have square pieces with heads and tails of animals. One optimization there is to count up the mismatch between pairs: if you have a lot more A than you have a, then you know that A will tend to be located at the edges of the puzzle. But an 8x8 puzzle has a much lower edge-to-interior ratio, so that difference isn't as likely to be useful, nor do I have a good idea for integrating it into an algorithm.
(Edit) Thinking about it more, I think the first thing that counting would get you is an early out if no solution exists. An NxN grid has 2*N*(N-1) interior matches that must be satisfied. If min(A,a) + min(B,b) + min(C,c) + min(D,d) < 2*N*(N-1) you know no solution exists.
(Edit 2) I had abs() where I meant to have min(). Oops.
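The brute-force strategy sketched above can be written as a short backtracking solver. The following Python sketch is illustrative only: it assumes a piece is a (top, right, bottom, left) tuple of labels from 'AaBbCcDd', that an uppercase edge mates with its lowercase partner ('A' with 'a'), that pieces rotate but never flip, and that the grid is filled row by row so only the left and top neighbours need checking.

```python
# Backtracking jigsaw solver sketch under the assumptions stated above.

def rotations(piece):
    t, r, b, l = piece
    # rotating 90 degrees clockwise: the old left edge becomes the new top
    return [(t, r, b, l), (l, t, r, b), (b, l, t, r), (r, b, l, t)]

def mates(x, y):
    return x.swapcase() == y     # 'A' mates with 'a', etc.

def solve(pieces, n):
    grid = [None] * (n * n)      # each cell holds an oriented piece or None
    used = [False] * len(pieces)

    def fits(cell, p):
        row, col = divmod(cell, n)
        if col > 0 and not mates(grid[cell - 1][1], p[3]):   # left neighbour
            return False
        if row > 0 and not mates(grid[cell - n][2], p[0]):   # top neighbour
            return False
        return True

    def backtrack(cell):
        if cell == n * n:
            return True
        for i, piece in enumerate(pieces):
            if used[i]:
                continue
            for p in rotations(piece):
                if fits(cell, p):
                    grid[cell], used[i] = p, True
                    if backtrack(cell + 1):
                        return True
                    grid[cell], used[i] = None, False   # undo, try next
        return False

    return grid if backtrack(0) else None

# A hand-made solvable 2x2 instance (the outer-rim edges are arbitrary,
# since the puzzle has no special border pieces):
pieces = [('A','B','C','D'), ('A','A','C','b'),
          ('c','B','A','D'), ('c','A','A','b')]
print(solve(pieces, 2) is not None)  # True
```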

Genetic Algorithm to Draw a Graph? Position assignment problem

I have an assignment problem at hand and am wondering how suitable it would be to apply local search techniques to reach a desirable solution (the search space is quite large).
I have a directed graph (a flow-chart) that I would like to visualize on a 2-D plane in a way that is very clear, understandable, and easy to read by the human eye. Therefore I will be assigning (x,y) positions to each vertex. I'm thinking of solving this problem using simulated annealing, genetic algorithms, or any such method you can suggest.
Input: A graph G = (V,E)
Output: A set of assignments, {(xi, yi) for each vi in V}. In other words, each vertex will be assigned a position (x, y) where the coordinates are all integers and >= 0.
These are the criteria that I will use to judge a solution (I welcome any suggestions):
Number of intersecting edges should be minimal,
All edges flow in one direction (i.e from left to right),
High angular resolution (the smallest angle formed by two edges incident on the same vertex),
Small area - least important.
Furthermore; I have an initial configuration (assignment of positions to vertices), made by hand. It is very messy and that's why I'm trying to automate the process.
My questions are:
How wise would it be to go with local search techniques, and how likely would they produce a desired outcome?
And what should I start with? Simulated annealing, genetic algorithms, or something else?
Should I seed randomly at the beginning or start from the initial configuration?
Or, if you already know of a similar implementation/pseudo-code/thing, please point me to it.
Any help will be greatly appreciated. Thanks.
EDIT: It doesn't need to be fast - not in real-time. Furthermore; |V|=~200 and each vertex has about 1.5 outgoing edges on average. The graph has no disconnected components. It does involve cycles.
I would suggest looking at http://www.graphviz.org/Theory.php since graphviz is one of the leading open source graph visualizers.
Depending on what the assignment is, maybe it would make sense to use graphviz for the visualization altogether.
This paper is a pretty good overview of the various approaches. Roberto Tamassia's book is also a good bet.
http://oreilly.com/catalog/9780596529321 - In this book you may find an implementation of a genetic algorithm for fine visualization of a 2D graph.
In similar situations I prefer using a genetic algorithm. You might start with a randomly initialized population; in my experience, after a few iterations you'll find a quite good (though not the best) solution.
Also, using Java you may parallelize this algorithm (isolated-islands strategy); it is a rather efficient improvement.
I'd also like to recommend the differential evolution algorithm. In my experience it finds a solution much more quickly than genetic optimization.
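For the curious, differential evolution is only a few lines. This sketch (DE/rand/1/bin on a toy sphere objective, with all parameter values being illustrative assumptions) shows the mutation, crossover, and greedy selection steps; for the layout problem, the vector would hold all vertex coordinates and the objective would score crossings, edge direction, and so on.

```python
# Differential evolution (DE/rand/1/bin) sketch on a toy objective.
import random

def differential_evolution(objective, dim, pop_size=30, F=0.8, CR=0.9,
                           generations=200, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: pick three distinct other members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)      # guarantee one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]    # binomial crossover
            s = objective(trial)
            if s <= scores[i]:               # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# toy objective: sphere function, minimum 0 at the origin
x, fx = differential_evolution(lambda v: sum(t * t for t in v), dim=4)
```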
function String generateGenetic()
    String genetic = "";
    for each vertex in your graph
        generate random x and y;
        String xy = transform x and y into a fixed-length bit string;
        genetic += xy;
    endfor
    return genetic;
Write a function double evaluate(String genetic) which will give you a level of satisfaction (probably based on how many edges intersect and on the edge directions).
Your program:
    int population = 1000;
    int max_iterations = 1000;
    int count = 0;
    double satisfaction = 0;
    String[] genetics = new String[population]; // this is your population
    while ((satisfaction < 0.8) && (count < max_iterations)) {
        for (int i = 0; i < population; i++) {
            if (evaluate(genetics[i]) > satisfaction)
                satisfaction = evaluate(genetics[i]);
            else
                manipulate(genetics[i]);
        }
        count++;
    }
The function manipulate can flip a single bit of the string, or multiple bits, or a portion that encodes the x and y of one vertex; or it can generate a completely new genetic string, or try to fix a problem inside it (e.g., redirect an edge).
To answer your first question, I must say it depends. It depends on a number of different factors such as:
How fast it needs to be (does it need to be done in real-time?)
How many vertices there are
How many edges there are compared to the number of vertices (i.e. is it a dense or sparse graph?)
If it needs to be done in real-time, then local search techniques would not be best, as they can take a while to run before producing a good result. They would only be fast enough if the graph were small, and if it's that small, you shouldn't need local search at all.
There are already algorithms out there for rendering graphs as you describe. The question is, at which point does the problem grow too big for them to be effective? I don't know the answer to that question, but I'm sure you could do some research to find out.
Now going on to your questions about implementation of a local search.
From my personal experience, simulated annealing is easier to implement than a genetic algorithm. However I think this problem translates nicely into both settings. I would start with SA though.
For simulated annealing, you would start out with a random configuration. Then you can randomly perturb the configuration by moving one or more vertices some random distance. I'm sure you can complete the details of the algorithm.
For a genetic algorithm approach, you can also start out with a random population (each graph has random coordinates for the vertices). A mutation can be like the perturbation in SA algorithm I described. Recombination can simply be taking random vertices from the parents and using them in the child graph. Again, I'm sure you can fill in the blanks.
To sum up: use local search only if your graph is big enough to warrant it and if you don't need it done super quickly (say, in less than a few seconds). Otherwise use a different algorithm.
EDIT: In light of your graph parameters, I think you can just use whichever algorithm is easiest to code. With |V| = 200, even an O(V^3) algorithm would be sufficient. Personally I feel simulated annealing would be the easiest and best route.
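The simulated-annealing loop described above might look like this in Python. The cost function is a stand-in assumption, penalizing right-to-left edges plus squared edge length; a real one would also count edge crossings and angular resolution, per the question's criteria.

```python
# Simulated annealing sketch for graph layout: random integer coordinates,
# perturb one vertex per step, accept worse moves with Metropolis
# probability, and keep the best layout seen.
import math
import random

def layout_cost(pos, edges):
    # toy cost (an assumption): short edges, flowing left to right
    cost = 0.0
    for u, v in edges:
        (x1, y1), (x2, y2) = pos[u], pos[v]
        cost += (x2 - x1) ** 2 + (y2 - y1) ** 2   # keep edges short
        if x2 <= x1:
            cost += 100.0                          # penalize right-to-left edges
    return cost

def anneal(vertices, edges, size=20, steps=5000, t0=10.0, seed=0):
    rng = random.Random(seed)
    pos = {v: (rng.randrange(size), rng.randrange(size)) for v in vertices}
    cost = layout_cost(pos, edges)
    best, best_cost = dict(pos), cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9         # linear cooling schedule
        v = rng.choice(vertices)                   # perturb one random vertex
        old = pos[v]
        pos[v] = (rng.randrange(size), rng.randrange(size))
        new_cost = layout_cost(pos, edges)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = dict(pos), cost
        else:
            pos[v] = old                           # reject the move
    return best, best_cost

verts = list(range(6))
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
layout, c = anneal(verts, edges)
```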
