One-dimensional automata - computation-theory

For one-dimensional cellular automata, CA-Predecessor is in P. To prove this I am using the following hint:
Hint: think about using recursion on a one-bit-shorter prefix of the input configuration.
Suppose that the CA follows the rule that the configuration shrinks by 2 cells on each update, since
the leftmost cell has no left neighbor and the rightmost cell has no right neighbor. Therefore, on
input a configuration of length n, we are asking whether there is a predecessor configuration of
length n + 2.
I know that the neighborhood size is 3, but I'm not sure how to prove that the problem is in P. Can someone help me with this?
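Following the hint, one way to sketch the idea is a linear-time scan that tracks which two-cell suffixes of a predecessor prefix are still feasible. This is only a sketch under the assumptions stated in the hint (configuration shrinks by 2 per step, neighborhood size 3); the rule encoding as a Wolfram-style rule number and the name `has_predecessor` are my own choices for illustration:

```python
def rule_fn(rule_number):
    # Wolfram-style rule number -> local update function on a 3-cell neighborhood
    return lambda a, b, c: (rule_number >> (a * 4 + b * 2 + c)) & 1

def has_predecessor(target, rule_number):
    # A predecessor of a length-n target has length n + 2, and must satisfy
    # f(p[i], p[i+1], p[i+2]) == target[i] for every i. We scan the target
    # once, keeping the set of feasible (p[i], p[i+1]) pairs.
    f = rule_fn(rule_number)
    states = {(a, b) for a in (0, 1) for b in (0, 1)}
    for t in target:
        states = {(b, c) for (a, b) in states for c in (0, 1) if f(a, b, c) == t}
        if not states:
            return False  # no prefix can be extended: no predecessor exists
    return True
```

Since each step only intersects a constant-size state set with the rule's constraints, the whole check runs in O(n), which is one way to argue membership in P.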

Related

Knight's Travails and Binary Search Tree

Here is some context if you're unfamiliar with Knight's Travails.
Your task is to build a function knight_moves that shows the simplest possible way to get from one square to another by outputting all squares the knight will stop on along the way.
I know where to find complete solutions to this exercise, but I'm trying to mostly work through it on my own.
Where I'm stuck is how to set up a binary tree, or, specifically, what is the relationship between all the next possible moves a knight can make from its current location?
As I understand a BST, the defining property is that a node (and any node in its subtrees) stores keys larger than its own key in its right subtree and smaller keys in its left subtree.
How do I represent the value of the knight's current location and its possible (valid) moves?
It would be more valuable if the answer provided is more of a guiding principle (philosophy?) for thinking about BST keys/values and defining parent-child relationships.
Thank you.
This is not a BST problem. Attack this as a graph search problem with backtracking. Start by learning and understanding Dijkstra's algorithm, treating the squares of the board as nodes in a graph. For instance, a1 connects to b3 and c2; all moves have a cost of 1 (which simplifies the algorithm a little).
You'll need to build your graph either before you start, connecting all possible moves; or on the fly, generating all possibilities from the current square. Most students find that doing it on the fly is easier for them.
Also, changing from algebraic chess notation to (row, col) notation helps. If the knight is currently at (row, col), then the legal next moves are
(row ± 2, col ± 1) and
(row ± 1, col ± 2)
... where both coordinates of the destination are in the range 0-7 (throw out moves that take the knight past the board's edge).
Maintain a global list of squares you've already visited, and another of squares you need to visit (one move from those you've visited). You'll also need an array of least cost (number of moves) to get to each square. Initialize that to 100, and let the algorithm reduce it as it walks through the search.
Is that enough to get you moving?
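Since all moves cost 1, Dijkstra's algorithm degenerates into a plain breadth-first search, so the on-the-fly approach above can be sketched roughly like this (the function name `knight_moves` comes from the exercise; the parent-map reconstruction is one possible design, not the only one):

```python
from collections import deque

def knight_moves(start, goal):
    # BFS over an 8x8 board; squares are (row, col) pairs with 0-7 coordinates.
    deltas = [(2, 1), (2, -1), (-2, 1), (-2, -1),
              (1, 2), (1, -2), (-1, 2), (-1, -2)]
    parent = {start: None}          # doubles as the "already visited" set
    queue = deque([start])          # squares still to visit
    while queue:
        square = queue.popleft()
        if square == goal:
            # walk the parent links back to the start to recover the path
            path = []
            while square is not None:
                path.append(square)
                square = parent[square]
            return path[::-1]
        r, c = square
        for dr, dc in deltas:
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < 8 and 0 <= nxt[1] < 8 and nxt not in parent:
                parent[nxt] = square
                queue.append(nxt)
    return None  # unreachable on a connected board, kept for safety
```

Because BFS explores squares in order of distance from the start, the first time the goal is dequeued the reconstructed path is one of the shortest.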

The overlapping subproblems in bitmask dynamic programming

I am trying to learn bit masking with dynamic programming but I'm failing to understand the overlapping sub problems for a case. Can someone please explain how the sub problems overlap based on any example they feel fit for explaining easily?
Let's take the example of the shortest Hamiltonian walk. In this problem we need to find the Hamiltonian walk of minimum total weight, where each edge has a certain weight associated with it.
A Hamiltonian walk is one in which we visit each and every node in the graph exactly once.
This problem can be solved using DP with bitmasks for a small number of nodes. What we do is keep a bitmask to track which nodes we have visited in the current state; then, iterating over all the nodes not yet visited according to the mask, we can move to different states.
Now suppose a subproblem on, say, k nodes has been computed. This solution for k nodes is built up from smaller subproblems that together form the larger solution: the initial solution had only 2 nodes, then 3, and so on until we reached the kth node.
Now suppose another subproblem, on, say, m nodes, also exists.
Now there is an edge from a node in the first subproblem to a node in the second subproblem, and we want to join these two subproblems. In this case all the smaller subproblems of the k nodes are also smaller subproblems of the whole combined solution; they are referred to as overlapping because each is present both in the first subproblem and in the larger combined subproblem.
In order to avoid redundant calculation of these overlapping subproblems we use memoisation: once we have the answer to an overlapping subproblem, we store it for later use.
Also note that no vertex should be present in both of the two smaller subproblems above, which we can check using the corresponding bitmasks.
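The scheme described above is essentially the Held-Karp recurrence; a minimal sketch might look like this (the function name and the `dist` matrix interface are my own assumptions, and this bottom-up table plays the role of the memoised recursion):

```python
def shortest_hamiltonian_walk(dist):
    # dist[i][j]: weight of the edge between nodes i and j; n must be small
    # enough that 2^n states fit in memory.
    n = len(dist)
    INF = float("inf")
    # dp[mask][v] = length of the shortest walk that visits exactly the nodes
    # in `mask` and currently ends at node v.
    dp = [[INF] * n for _ in range(1 << n)]
    for v in range(n):
        dp[1 << v][v] = 0  # a walk can start at any single node
    for mask in range(1 << n):
        for v in range(n):
            if dp[mask][v] == INF or not (mask >> v) & 1:
                continue
            # extend the walk to every node not yet in the mask
            for u in range(n):
                if (mask >> u) & 1:
                    continue
                nxt = mask | (1 << u)
                cand = dp[mask][v] + dist[v][u]
                if cand < dp[nxt][u]:
                    dp[nxt][u] = cand
    full = (1 << n) - 1
    return min(dp[full])  # best walk that has visited every node
```

The overlap is visible in the table: many different extension orders reach the same `(mask, v)` state, but its value is computed only once.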
I am not entirely sure if this is what you are asking, but an example which is sadly not in the domain of bitmasking problems would be the de facto beginner's example of DP: the Fibonacci sequence.
As you probably know, Fibonacci sequence is defined roughly as follows.
F(n) = F(n-1) + F(n-2)
F(0) = F(1) = 1
Now, say you wanted to find F(8). Then you're actually looking for F(7) + F(6). To find F(7), you need F(6) and F(5). And to find F(6), you need F(5) and F(4).
As you see, both F(6) and F(7) require solving F(5), which means they are overlapping. Incidentally, F(7) requires solving the entire problem F(6) as well, but that need not always be the case for each DP problem. In essence, sometimes your subproblems A and B may both depend on a lower-level subproblem C, in which case they are considered overlapping.
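With memoisation, each overlapping subproblem such as F(5) is computed once and then looked up; one compact way to write this (using the F(0) = F(1) = 1 convention from above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # F(0) = F(1) = 1; cached results make repeated subcalls O(1)
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)
```

Without the cache this recursion is exponential, precisely because F(6) and F(7) both re-solve F(5), F(4), and so on.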

Finding minimum cost in a binary matrix

Consider an n * n binary matrix. Each cell of this matrix has at most 4 neighbors (where they exist). We call two cells of this matrix incompatible if they are neighbors and their values are not equal. We pay $b for each incompatible pair. Also, we can change the value of a cell by paying $a.
The question is to find the minimum total cost for this matrix.
I already used backtracking and found an algorithm of O(2 ^ (n * n)). Can someone help me find a more efficient algorithm?
This idea is due to Greig, Porteous, and Seheult. Treat the matrix as a capacitated directed graph with vertices corresponding to matrix entries and arcs from each vertex to its four neighbors, each with capacity b. Introduce two more vertices, a source and a sink, and arcs of capacity a: from the source to each vertex with a corresponding 0 entry, and to the sink from each vertex with a corresponding 1 entry. Find a minimum cut; the entries with value 0 after changes are the vertices on the source side of the cut, and the entries with value 1 after changes are the vertices on the sink side of the cut.
The cost of this cut is exactly your objective. Of the capacity-a from-source arcs, the ones crossing the cut correspond to changes from 0 to 1. Of the capacity-a to-sink arcs, the ones crossing the cut correspond to changes from 1 to 0. Of the capacity-b arcs, the ones crossing the cut correspond to those instances where there is an arc from a 0 to a 1.
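A minimal sketch of this construction, using a simple Edmonds-Karp max-flow to find the min cut (max-flow value = min-cut value); the function name `min_cost` and the dictionary-based residual graph are my own illustrative choices, not a production implementation:

```python
from collections import deque

def min_cost(grid, a, b):
    # Greig-Porteous-Seheult: one vertex per cell, plus source S and sink T.
    n = len(grid)
    S, T = n * n, n * n + 1
    cap = {}  # (u, v) -> residual capacity

    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)

    for i in range(n):
        for j in range(n):
            v = i * n + j
            # capacity-a arc: cut it <=> pay $a to change this cell
            if grid[i][j] == 0:
                add(S, v, a)
            else:
                add(v, T, a)
            # capacity-b arcs between neighbors, both directions
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < n and nj < n:
                    u = ni * n + nj
                    add(v, u, b)
                    add(u, v, b)

    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)

    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            return flow  # no augmenting path: flow equals the min cut
        # push the bottleneck capacity along the path found
        bottleneck, v = float("inf"), T
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[(parent[v], v)])
            v = parent[v]
        v = T
        while parent[v] is not None:
            cap[(parent[v], v)] -= bottleneck
            cap[(v, parent[v])] += bottleneck
            v = parent[v]
        flow += bottleneck
```

For a 2x2 grid [[0,1],[1,1]] with a = 1, b = 1, flipping the single 0 costs 1 while keeping it costs 2 incompatible pairs, so the cut value is 1.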
I would suggest you reformulate your backtracking to use dynamic programming to trim the backtracking tree. Here is the logic I propose for designing it:
When deciding whether or not to change a cell, it doesn't really matter how you assigned the previous cells; only the accumulated cost matters. So you can keep track, for each cell and each possible accumulated cost, of the minimum result already found. This way, whenever you find the same configuration again, you will have the answer saved.
So your dp matrix would be something like:
dp[top_bound][n][n];
And before doing backtracking you should do:
if(dp[acc_cost][this_i][this_j] != -1)
return dp[acc_cost][this_i][this_j];
// Perform your calculations
// ...
dp[acc_cost][this_i][this_j] = result;
return result;
Here I'm assuming -1 is an invalid value for the result; if not, just choose any invalid value and initialize your matrix to it. Since this matrix is of size n*n*top_bound, a correctly implemented DP should solve the problem in O(n*n*top_bound).

All paths of given length between two given nodes in a graph

I came across this problem:
http://www.iarcs.org.in/inoi/contests/oct2005/Advanced-2.php
The problem basically is about graphs. You are given a graph with up to 70 nodes, and an adjacency matrix which tells how many edges exist between two nodes. Each edge is bidirectional.
Now the question asks you to find the number of distinct paths OF A FIXED LENGTH N between two given nodes N1 and N2. The path can have repetitions, i.e., the path can go through an already included node.
The most naive algorithm is to run a breadth-first search and count how many times N2 appears in the Nth layer of the BFS tree rooted at N1. But this won't work, since the number of such paths can grow exponentially.
How to go about it?
The solution to this problem is simple - raise the adjacency matrix to the N-th power and the answer will sit in the cell (N1, N2) for each pair N1 and N2 - basic graph theory.
You can also make use of binary exponentiation of the matrix to compute the answer faster.
To understand why the above algorithm works, try writing down the first few steps of the exponentiation. You will notice that after the k-th multiplication the matrix holds the numbers of paths of fixed length k, for k going from 1 to N. If you write down how a cell is computed when performing matrix multiplication, you should also see why this happens.
NOTE: there is also a really cool hack for computing all paths of length up to a fixed length - simply add a "loop" at the start vertex, thus making it accessible from itself.
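The answer above can be sketched directly; this is a plain-Python version with binary exponentiation of the matrix (the function name `count_paths` is my own, and 0-based node indices are assumed):

```python
def count_paths(adj, n1, n2, length):
    # adj[i][j] = number of edges between nodes i and j.
    # Entry (n1, n2) of adj^length counts walks of exactly `length` edges.
    size = len(adj)

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(size))
                 for j in range(size)] for i in range(size)]

    # binary exponentiation: O(size^3 * log(length)) multiplications
    result = [[int(i == j) for j in range(size)] for i in range(size)]
    power = adj
    while length:
        if length & 1:
            result = mat_mul(result, power)
        power = mat_mul(power, power)
        length >>= 1
    return result[n1][n2]
```

On the path graph 0-1-2, for example, there is exactly one walk of length 2 from node 0 to node 2, and none of length 3.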

Generic definition of back-tracking design technique

I am reading about the backtracking algorithm design technique in Introduction to the Design and Analysis of Algorithms by Anany Levitin.
I am having a tough time understanding the generic definition of a backtracking algorithm and mapping it to the 4-queens problem.
Looking at the backtracking design technique from a more general
perspective, most backtracking algorithms fit the following
description.
An output of a backtracking algorithm can be thought of as an n-tuple
(x1, x2, x3, ..., xn) where each coordinate xi is an element of some
finite linearly ordered set Si. For example, for the n-queens
problem, each Si is the set of integers 1 through n. The tuple may
need to satisfy some additional constraints (e.g., the nonattacking
requirements in the n-queens problem).
For example, for the 4-queens problem each Si is the set {1, 2, 3, 4}.
A backtracking algorithm generates, explicitly or implicitly, a
state-space tree; its nodes represent partially constructed tuples
with the first i coordinates defined by the earlier actions of the
algorithm. If such a tuple (x1, x2, x3, ..., xi) is not a solution, the
algorithm finds the next element in Si+1 that is consistent with the
values of (x1, x2, x3, ..., xi) and the problem's constraints and adds it
to the tuple as its (i+1)st coordinate. If such an element does not
exist, the algorithm backtracks to consider the next value of xi, and
so on.
My questions on above text are
What does the author mean by "A backtracking algorithm generates, explicitly or implicitly, a state-space tree; its nodes represent partially constructed tuples with the first i coordinates defined by the earlier actions of the algorithm. If such a tuple (x1, x2, x3, ..., xi) is not a solution, the algorithm finds the next element in Si+1 that is consistent with the values of (x1, x2, x3, ..., xi) and the problem's constraints and adds it to the tuple as its (i+1)st coordinate."?
To be specific, what does the author mean by "the algorithm finds the next element in Si+1"?
Please explain the above statement with the 4-queens problem.
"If such an element does not exist, the algorithm backtracks to consider the next value of xi" - please explain this statement with the 4-queens problem as well.
Thanks!
This explanation of backtracking is indeed hard to follow, as it uses a lot of formal notation without putting it to use for a specific purpose or proof. Let me try to explain it less formally with the 4-queens problem, which is an excellent example:
At the start of the backtracking process, the board is empty. This is the root of the state-space tree. This root has child nodes, one for each possibility where we could put the first queen. Rather than constructing each child node before we go to the next level (that would result in a BFS of the state-space-tree), we choose one possible position for the first queen, thereby constructing one child node. From this child node we have again multiple options to place the second queen, so we choose one, and so on.
If we arrive at a node where there is no possibility to place a remaining queen, we backtrack - we go up one level to that node's parent and check whether there are possibilities left that we did not explore yet. If yes, we explore them by creating the respective child node; if no, we backtrack again, going up another level, until we either find a solution to the problem (all queens are placed) or we have expanded all children of the root node and arrive back at the root during a backtrack operation - which means there is no solution.
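The process just described can be sketched as a depth-first search over the state-space tree, where the partial tuple (x1, ..., xi) is a list of column choices, one per row (a generic n-queens sketch; 0-based columns are my own convention, so for the 4-queens example the sets Si become {0, 1, 2, 3}):

```python
def solve_n_queens(n):
    # Each solution is a tuple (x1, ..., xn): xi is the column of the queen
    # placed in row i. The recursion walks the state-space tree depth-first.
    solutions = []

    def safe(cols, col):
        # the new queen may share no column or diagonal with earlier queens
        row = len(cols)
        return all(c != col and abs(c - col) != abs(r - row)
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(tuple(cols))   # a complete, consistent tuple
            return
        for col in range(n):                # "next element of S_{i+1}"
            if safe(cols, col):
                cols.append(col)            # extend the partial tuple
                place(cols)                 # descend into the child node
                cols.pop()                  # backtrack: undo the choice

    place([])
    return solutions
```

For n = 4, the search backtracks out of every placement starting in columns 0 and 3 of the first row and finds exactly the two well-known solutions.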

Resources