You are given an n x n chessboard with k knights (of the same color) on it. Someone has spilled superglue on k of the squares, and if a knight ever finishes his move on one of these glue squares, it becomes stuck forever. Additionally (and this is why we can't have nice things), someone has cut out some of the squares, so the chessboard has holes in it. You are given an initial position of the knights. The knights move as they do in regular chess, but unlike regular chess, on each turn all the knights move at once (except, of course, the stuck ones). At the end of each move, a square cannot be occupied by more than 1 knight. Hole squares can't be occupied by knights either (but they do count as squares that a knight can jump over). Give an O(t x poly(n))-time algorithm to determine whether you can use at most t moves to move all the knights from their initial positions to new positions where they are each stuck at a glue square.
My initial thought is to formulate this as a maximum-flow problem and use the Ford-Fulkerson algorithm to solve it. But I am not sure what my nodes and edges should be. Any ideas? Thanks!
The described problem can be modeled as a layered (time-expanded) network problem as follows. The node set of the network consists of an artificial source node s and an artificial sink node. The intermediate node set consists of t + 1 copies of the n * n chessboard, one for each time step 0, 1, ..., t, which means that there are
2 + (t + 1) * n * n
nodes in total. Imagine s at the top, followed by the t + 1 layers of chessboard copies. The sink node would be at the bottom.
Connect s to the initial knight positions in the first (time-0) chessboard, and connect every glue square in the last (time-t) chessboard to the sink.
For every i in {0, ..., t-1}, connect each square in the i-th chessboard to every square in the (i+1)-th chessboard that can be reached from it by a legal knight's move. Also connect each glue square in the i-th chessboard to the same square in the (i+1)-th chessboard, so that a knight stuck there can stay put for the remaining turns. Finally, delete all other edges which leave a superglued square (a knight that lands on glue never moves again), and delete all edges which lead to or from a hole. Furthermore, every edge is constrained to permit a flow of at least 0 and at most 1. Since each square has at most 8 outgoing knight-move edges per layer, plus at most one stay-put edge, the network has in total at most
2 * k + 9 * t * n * n = O(t * n * n)
edges. To take into account that every square is to be occupied by at most one knight, the flow through every intermediate node must also be constrained to 1. This can be done by splitting each intermediate node into two nodes connected by an additional edge whose flow is constrained to 1, which causes the sets of nodes and edges to grow by a factor of at most 2.
The k knights can be moved from their initial positions to positions where all of them are stuck on glue squares within t moves if and only if the network admits a flow of value k from s to the sink; integral flows and legal sequences of knight moves correspond to one another. Since the network has O(t * n * n) nodes and edges and the flow value is at most k <= n * n, Ford-Fulkerson yields the required O(t x poly(n)) running time.
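Here is a runnable sketch of this construction. The function and variable names (`can_glue_all`, `max_flow`) are my own, and I use a plain Edmonds-Karp max flow, which is enough since every capacity is 1:

```python
from collections import defaultdict, deque

KNIGHT_MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def max_flow(cap, source, sink):
    # Edmonds-Karp: augment along shortest residual paths; every capacity
    # here is 1, so each augmentation pushes exactly one unit of flow.
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        v = sink
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def can_glue_all(n, t_moves, knights, glue, holes):
    # Layered network: one board copy per time step; each square is split
    # into an 'in' and an 'out' node so at most one knight occupies it.
    cap = defaultdict(lambda: defaultdict(int))
    squares = [(r, c) for r in range(n) for c in range(n)
               if (r, c) not in holes]
    for layer in range(t_moves + 1):
        for sq in squares:
            cap[('in', layer, sq)][('out', layer, sq)] = 1  # vertex capacity
    for layer in range(t_moves):
        for (r, c) in squares:
            if (r, c) in glue:
                # A knight stuck on glue stays put for the rest of the game.
                cap[('out', layer, (r, c))][('in', layer + 1, (r, c))] = 1
            else:
                for dr, dc in KNIGHT_MOVES:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in holes:
                        cap[('out', layer, (r, c))][('in', layer + 1, (nr, nc))] = 1
    for p in knights:
        cap['s'][('in', 0, p)] = 1
    for g in glue:
        if g not in holes:
            cap[('out', t_moves, g)]['t'] = 1
    return max_flow(cap, 's', 't') == len(knights)
```

Each augmentation touches the O(t * n * n) edges at most once and there are at most k augmentations, which stays within the O(t x poly(n)) budget.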
There is a grid consisting of h * w (h, w <= 200) pixels; every pixel is represented by a value, and we want to find the largest connected region.
A connected region is defined in this way:
Given a point P(x, y), the connected region must include this point.
There exists a reference point R(x, y) of value v, and any point in the connected region must be connected to this point. Also, there is a value g_critical (g_critical <= 100000). Let the value of a point in the connected region be u; then the difference
of u and v must be smaller than or equal to g_critical.
The question is to find the size of the largest connected region.
For example, take the grid below with h = 5, w = 5, g_critical = 3, P(x, y) = (2, 4):
1 3 7 9 2
2 5 6 6 8
3 5 9 3 6
2 7 3 2 9
In this case, the bold region is the largest connected region. Notice that R(x, y) is chosen at (2, 3) or (2, 2) in this case. The size of the region is 14.
I have rephrased the question a bit so it is shorter. So if there is any ambiguity, please point it out in the comment. This question is also in our private judge so I am unable to share the problem source here.
My attempt
I have tried to loop through every cell, consider it as the R point, and use BFS to find the connected region attached to it. Then, I check if P is contained in the region.
The complexity is O(h * h * w * w), which is too large. So is there any way to optimize it?
I am guessing that maybe starting from P will help, but I am not sure how I should do it. Maybe there is some kind of flood fill algorithm that allows me to do it?
Thanks in advance.
There's an O(h w √(g_critical) α(h w))-time algorithm (where α is the inverse Ackermann function, constant for practical purposes) that uses a disjoint set data structure with an "undo" operation and a variant of Mo's trick. The idea is, decompose the interval [v − g_critical, v] into about √g_critical subintervals of length about √g_critical. For each subinterval [a, b], prepare a disjoint set data structure representing the components of the matrix where the allowed values are [b, a + 2 g_critical]. Then for each c in [a, b], extend the disjoint set with points whose values lie in [c, b) and (a + 2 g_critical, c + 2 g_critical] and report the number of nodes in the component of P(x,y), then undo these operations (keep a stack of the writes made to the structure, with original values; then pop each one, writing the original values).
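For concreteness, here is a minimal sketch of the undoable disjoint-set structure this relies on. It uses union by size and no path compression, so every union writes O(1) entries that can be popped off a stack; the class name `RollbackDSU` is my own:

```python
class RollbackDSU:
    """Union-find with union by size, no path compression, and an undo stack."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.ops = []  # stack of (old_root, new_root) writes, or None for no-ops

    def find(self, u):
        # No path compression: finds stay O(log n) but make no writes.
        while self.parent[u] != u:
            u = self.parent[u]
        return u

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            self.ops.append(None)  # record a no-op so undo stays in sync
            return
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.ops.append((rv, ru))  # rv was a root, now hangs under ru
        self.parent[rv] = ru
        self.size[ru] += self.size[rv]

    def undo(self):
        # Pop the last union and restore the two written entries.
        op = self.ops.pop()
        if op is None:
            return
        rv, ru = op
        self.parent[rv] = rv
        self.size[ru] -= self.size[rv]

    def component_size(self, u):
        return self.size[self.find(u)]
```

The "report the number of nodes in the component of P" step is exactly `component_size`, and the "pop each write" step is a sequence of `undo` calls.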
There's also an O(h w log(h w))-time algorithm that you're not going to like because it uses dynamic trees. (Sleator–Tarjan's 1985 construction based on splay trees is the simplest and works OK here.) Posting it mainly in case it inspires a more practical approach.
The high-level idea is a kinetic algorithm that "slides" the interval of allowed values over the at most g_critical + 1 possibilities, repeatedly reporting and taking the max over the size of the connected component containing P.
To do this, we need to maintain the maximum spanning forest on a derived graph. Given a node-weighted undirected graph, construct a new, edge-weighted graph by subdividing each edge and setting the weight of each new edge to the weight of the old node to which it is incident. Deleting the lightest nodes in the graph is straightforward – all paths avoid these nodes if possible, so the new maximum spanning forest won't have any more edges. To add the lightest nodes not in the graph, try to link their incident edges one at a time. If a cycle would form, evert one endpoint, query the path minimum from the other endpoint, and unlink that minimum.
To report the size of the component containing P we need another decoration that captures the size of the concrete subtree (as opposed to the represented subtree) of each node. The details get a bit gnarly.
Here are some heuristics which might help:
First some pre-processing in O(h*w*log(h*w)):
Store matrix values in an array, sort it
Now every value in the array is a possible candidate for the value of point R
Also, the maximum component for a given R will consist of values in the range [R - g_critical, R + g_critical]
So we can estimate the size of the component (an upper bound, i.e. the best case) with a simple binary search
Now some heuristics:
Sort the array again, this time by estimated component size in descending order
Now try BFS in this order, but only if the estimated size is bigger than the currently found best size
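The steps above can be sketched as follows. The function name and the exact pruning rule are my own choices; the per-candidate BFS is unchanged from the original attempt, just skipped whenever the binary-search bound cannot beat the best region found so far:

```python
import bisect
from collections import deque

def largest_region_heuristic(grid, P, g_critical):
    h, w = len(grid), len(grid[0])
    values = sorted(v for row in grid for v in row)
    px, py = P

    def bound(v):
        # Upper bound on the component size for reference value v:
        # count every pixel whose value lies in [v - g, v + g].
        lo = bisect.bisect_left(values, v - g_critical)
        hi = bisect.bisect_right(values, v + g_critical)
        return hi - lo

    def bfs(v):
        # Size of the connected region containing P whose values lie in
        # [v - g, v + g], or 0 if P's own value falls outside the range.
        if not (v - g_critical <= grid[px][py] <= v + g_critical):
            return 0
        seen = {(px, py)}
        q = deque([(px, py)])
        while q:
            x, y = q.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < h and 0 <= ny < w and (nx, ny) not in seen \
                        and v - g_critical <= grid[nx][ny] <= v + g_critical:
                    seen.add((nx, ny))
                    q.append((nx, ny))
        return len(seen)

    best = 0
    # Try candidate reference values with the largest upper bound first;
    # once no remaining bound can beat the best found size, stop.
    for v in sorted(set(values), key=bound, reverse=True):
        if bound(v) <= best:
            break
        best = max(best, bfs(v))
    return best
```

Note this is a heuristic with the same worst case as the naive loop; it only prunes candidates whose bound proves they cannot win.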
Bob and Alice have teamed up on a game show. After winning the first round, they now have access to a maze with hidden gold. If Bob can collect all the gold coins and deliver them to Alice's position, they can split the gold. Bob can move horizontally or vertically as long as he stays in the maze and the cell is not blocked.
The maze is represented by an n x m array. Each cell has a value, where 0 is open, 1 is blocked, and 2 is open with a gold coin. Bob starts at the top-left cell, (row, column) = (0, 0). Alice's position is given by (x, y). Determine the shortest path Bob can follow to collect all gold coins and deliver them to Alice. If Bob can't collect and deliver all the gold coins, return -1.
Constraints :-
1<=n,m<=100
0<=no of gold coins <=10
1 <= x < n
1 <=y < m
Can anyone help me come up with an algorithm?
A semi-brute-force algorithm will be good here, for the simple reason that this problem is NP-hard.
To show that, we need to do some simplification. Transform the maze into a graph where the coins are the nodes and the shortest path from each coin to another is a weighted edge.
Now the problem is to find the shortest path from Bob through all the nodes in some order and then back to Alice; according to this question on Stack Overflow, that is NP-hard.
This is why the given constraints are so low: the problem is meant to be brute-forced.
An idea for a brute-force algorithm I had:
Find the shortest paths between all pairs of coins (plus Bob's start and Alice's position) - O(#coins * (E + V log V)) using Dijkstra's algorithm multiple times (since the maze is unweighted, plain BFS in O(#coins * (E + V)) also works).
Create all permutations of the coins - O(#coins!), where ! marks the factorial sign.
For each permutation:
For each two consecutive coins in the permutation:
Look up the shortest path between the two coins (using stage 1).
(Passing over a coin earlier than its place in the permutation is harmless - it simply gets collected early.)
(Outside of the loops) Return the permutation for which the sum of paths in stage 3 was the shortest.
The time complexity is O(#coins! * #coins + #coins * (E + V log V)).
Which is a lot, but given your constraints it is feasible!
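A sketch of this brute force in Python (the names are mine, and I use plain BFS for the pairwise distances since the maze is unweighted):

```python
from collections import deque
from itertools import permutations

def shortest_collect_all(maze, alice):
    n, m = len(maze), len(maze[0])
    coins = [(r, c) for r in range(n) for c in range(m) if maze[r][c] == 2]
    # Points of interest: Bob's start, every coin, and Alice's position.
    points = [(0, 0)] + coins + [alice]

    def bfs(src):
        # Grid BFS: shortest path lengths from src to every reachable open cell.
        dist = {src: 0}
        q = deque([src])
        while q:
            r, c = q.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < n and 0 <= nc < m and maze[nr][nc] != 1 \
                        and (nr, nc) not in dist:
                    dist[(nr, nc)] = dist[(r, c)] + 1
                    q.append((nr, nc))
        return dist

    dist_from = {p: bfs(p) for p in points}
    best = None
    # Try every visiting order of the coins, ending at Alice.
    for order in permutations(coins):
        total, cur, ok = 0, (0, 0), True
        for nxt in order + (alice,):
            if nxt not in dist_from[cur]:
                ok = False  # some leg is unreachable
                break
            total += dist_from[cur][nxt]
            cur = nxt
        if ok and (best is None or total < best):
            best = total
    return -1 if best is None else best
```

With at most 10 coins this is at most 10! ≈ 3.6 million permutations, each checked in O(#coins) time, which is fine for the given constraints.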
My problem is why the probability that the second, third, ..., (k-1)-th smallest element is in the right subheap is 1/2 - 1/N, while in the left subheap it is 1/2 + 1/N.
The previous answer is right, but I thought pictures might help.
The sample space (or probability space) is the set of all possible outcomes. And the individual probabilities of all these exclusive outcomes will add up to 1. For example, if there are only two mutually exclusive outcomes A and B in a particular sample space, and the chance of A is 1/4, then the chance of B has to be 3/4.
In your case, there are two outcomes: either the (k-1)-th smallest node is in the left subheap of the root, or the (k-1)-th smallest is in the right subheap.
Before thinking about where we put the kth smallest node, the sample space can be pictured as two squares of equal size, one for each outcome (1/2 + 1/2 = 1).
But then the proposition said something like "let's assume the kth smallest node is in one of the subheaps, and let's choose the left because we feel like it". There are N - 1 nodes in total to choose from (N - 1 because the root is not part of this selection). Placing the kth smallest node in the left subheap changes the overall number of nodes from which to choose the (k-1)-th smallest (and that fact will alter the shapes in our sample space).
So now, the chance that the (k-1)-th smallest node is in the left subheap has been reduced by 1/(N - 1).
This can be pictured as removing a triangle from the black square (it could have been any shape, but I chose a triangle). That triangle has to go somewhere, because everything has to add up to 1; the number of nodes is not relevant. We reduced the probability of one outcome by the size of that triangle, and consequently increased the probability of the other outcome by the same amount.
A probability tree is a useful way to think about it.
First, we need 1/(N-1) because we are considering the left and right subheaps of the root. As the size of the tree is N, there are N-1 nodes to be considered. Also, as mentioned, "without loss of generality, we may assume that the k-th smallest element is in the left subheap", so the potential nodes in the left subheap are one less than the nodes in the right subheap. Therefore, the probability for the left subheap is 1/2 - 1/(N-1), and accordingly, the probability for the right subheap is 1 - the probability for the left subheap = 1 - (1/2 - 1/(N-1)) = 1/2 + 1/(N-1).
Given a path P described by a list of positions in the xy-plane that are each connected by edges, compute the least number of edges that have to be removed from P such that P does not close off any regions in the xy-plane (i.e., it should be possible to go from any point to any other point). Every position will have integer coordinates, and each position will be one unit left, right, up, or down from the previous one.
For example, if P = {[0,0], [0,1], [1,1], [1,0], [0,0]}, then the path is a square starting and ending at (0,0). Any 1 of the 4 edges of the square could be removed, so the answer is 1.
Note that the same edge can be drawn twice. That is, if P = {[0,0], [0,1], [1,1], [1,0], [0,0], [0,1], [1,1], [1,0], [0,0]}, the answer would be 2, because now each side of the square has 2 edges, so at least 2 edges would have to be removed to "free" the square.
I've tried a naive approach where if any position is visited twice, there could be an enclosed region (not always, but my program relies on this assumption), so I add 1 to the minimum number of edges removed. In general, if a vertex is visited N times, I add N-1 to the number of edges removed. However, if, for example, P = {[0,0], [0,1], [0,0]}, there is no enclosed region, whereas my program would think there is. Another case where it breaks down: if P = {[0,0], [0,1], [1,1], [1,0], [0,0], [1,0]}, my program would output 2 (since (0,0) and (1,0) are each visited twice), whereas the correct answer is 1, since we can just remove any of the other three sides of the square.
It seems that there are two primary subtasks to solve this problem: first, given the path, figure out which positions are enclosed (i.e., figure out the regions that the path splits the graph into); second, use knowledge of the regions to identify which edges must be removed to prevent enclosures.
Any hints, pseudocode, or code would be appreciated.
Source: Princeton's advanced undergraduate class on algorithms.
Here are a few ideas that might help. I'm going to assume that you have n points.
You could first insert all of the edges in a set S so that duplicate edges are removed:
for(int i = 0; i < n-1; i++)
    S.insert( {min(p[i], p[i+1]), max(p[i], p[i+1])} );
Now iterate over the edges again and build a graph. Then find the longest simple path in this graph.
The resulting graph is bipartite (if a cycle exists it must have even length). This piece of information might help as well.
You could use a flood-fill algorithm to find the contiguous regions of the plane created by the path. One of these regions is infinite but it's easy to compute the perimeter with a scanline sweep, and that will limit the total size to be no worse than quadratic in the length of the path. If the path length is less than 1,000 then quadratic is acceptable. (Edit: I later realized that since it is only necessary to identify the regions adjacent to edges of the line, you can do this computation by sorting the segments and then applying a scanline sweep, resulting in O(n log n) time complexity.)
Every edge in the path is between two regions (or is irrelevant because the squares on either side are the same region). For the relevant edges, you can count repetitions and then find the minimum cost boundary between any pair of adjacent regions. All that is linear once you've identified the region id of each square.
Now you have a weighted graph. Construct a minimum spanning tree. That should be precisely the minimum collection of edges which need to be removed.
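A sketch of this pipeline, assuming the input format from the question (a list of lattice points); the function name and the bounded flood-fill frame are my own choices, and for simplicity it uses plain BFS flood fill rather than the scanline variant:

```python
from collections import Counter, deque

def min_edges_to_remove(path):
    # Count how many times each unit wall is drawn.  A wall is stored as the
    # pair of unit cells it separates; cell (i, j) is the unit square whose
    # lower-left corner is the lattice point (i, j).
    walls = Counter()
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        if x1 == x2:  # vertical edge: separates cells left/right of it
            walls[((x1 - 1, min(y1, y2)), (x1, min(y1, y2)))] += 1
        else:         # horizontal edge: separates cells below/above it
            walls[((min(x1, x2), y1 - 1), (min(x1, x2), y1))] += 1
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    lo_x, hi_x = min(xs) - 1, max(xs) + 1
    lo_y, hi_y = min(ys) - 1, max(ys) + 1
    # Flood-fill the bounded cell grid (one ring of cells around the path
    # stands in for the infinite outer region); adjacent cells connect
    # unless a wall lies between them.
    region, n_regions = {}, 0
    for i in range(lo_x, hi_x + 1):
        for j in range(lo_y, hi_y + 1):
            if (i, j) in region:
                continue
            region[(i, j)] = n_regions
            q = deque([(i, j)])
            while q:
                ci, cj = q.popleft()
                for ni, nj in ((ci+1, cj), (ci-1, cj), (ci, cj+1), (ci, cj-1)):
                    if not (lo_x <= ni <= hi_x and lo_y <= nj <= hi_y):
                        continue
                    if (ni, nj) in region:
                        continue
                    a, b = sorted([(ci, cj), (ni, nj)])
                    if (a, b) in walls:
                        continue
                    region[(ni, nj)] = n_regions
                    q.append((ni, nj))
            n_regions += 1
    # Cheapest passage between each pair of adjacent regions: remove all
    # repetitions of a single shared wall segment.
    best = {}
    for (a, b), mult in walls.items():
        ra, rb = region[a], region[b]
        if ra != rb:
            key = (min(ra, rb), max(ra, rb))
            best[key] = min(best.get(key, mult), mult)
    # Kruskal's MST over the region graph.
    parent = list(range(n_regions))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    total = 0
    for (ra, rb), w in sorted(best.items(), key=lambda kv: kv[1]):
        fa, fb = find(ra), find(rb)
        if fa != fb:
            parent[fa] = fb
            total += w
    return total
```

On the examples from the question this returns 1 for the single square, 2 for the doubled square, and 0 for the degenerate up-and-back path.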
There may well be a cleverer solution. The flood-fill strikes me as brute-force and naive, but it's the best I can do in ten minutes.
Good luck.
Suppose you have an NxN maze with a Knight, Princess and Exit.
There is also an evil Witch that is planning to block M squares (set them on fire). She will set all these blocks on fire before the Knight makes his first move (they do not alternate turns).
Given the map of the maze and M, can you decide in O(N^2) whether the Knight will be able to reach the Princess, and then the Exit, for any choice of blocks by the Witch (meaning: can the Witch make choices that would prevent the Knight & Princess from escaping)?
This problem seems to be equivalent to determining whether there exist M + 1 vertex-disjoint paths from the knight to the princess, and M + 1 vertex-disjoint paths from the princess to the exit. If there are only M disjoint paths from the knight to the princess (or from the princess to the exit), the witch can just burn one square from each path, blocking the rescue (and, alas, any chance of a happily-ever-after romance between them).
For example, the maze in your question has two disjoint paths from the knight to the princess, and two disjoint paths from the princess to the exit. Thus, the witch can burn min(2, 2) = 2 squares to prevent escape.
The number of disjoint paths between two points can be found by using a maximum network flow algorithm. Each cell in the grid is a node in the network; two nodes have an edge (of capacity 1) connecting them if they are adjacent and both white. The maximum network flow from one point to another then equals the number of edge-disjoint paths between them.
The Ford-Fulkerson algorithm will solve the network flow problem in O(E * f) time, where E is the number of edges in the network (at most a constant times N^2) and f is the value of the maximum flow. Because the maximum flow is at most 4 (the knight has only four possible directions for his first move), the total complexity of the two flow computations becomes O(2 * 4 * E) = O(N^2), as requested.
Avoiding using a node more than once
As others have pointed out, the above solution only prevents edges into and out of nodes from being used more than once, not the nodes themselves.
We can modify the flow graph to avoid nodes being used more than once by giving each cell four input edges, a single guard edge, and four output edges (each having a weight of 1) as follows:
The output edge of one cell corresponds to the input of another. Each cell can now only be used for one path, as the guard edge can only have a flow of 1. Sink and source cells remain unchanged. We still have a constant number of edges per cell, leaving the complexity of the algorithm unchanged.
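A sketch of the node-splitting construction for counting vertex-disjoint paths between two open cells (the names are mine; a DFS-based Ford-Fulkerson is enough since the flow is at most 4):

```python
from collections import defaultdict

def count_disjoint_paths(grid, src, dst):
    # Vertex-disjoint paths between two cells of a 0/1 grid (0 = open),
    # via node splitting: each cell becomes cell_in --guard edge--> cell_out.
    n, m = len(grid), len(grid[0])
    cap = defaultdict(lambda: defaultdict(int))
    for r in range(n):
        for c in range(m):
            if grid[r][c]:  # blocked cell: no nodes at all
                continue
            cap[('in', r, c)][('out', r, c)] = 1  # guard edge, capacity 1
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < n and 0 <= nc < m and not grid[nr][nc]:
                    cap[('out', r, c)][('in', nr, nc)] = 1
    # Start at src's out-node and stop at dst's in-node, so the endpoints'
    # guard edges do not limit the number of paths through them.
    s, t = ('out', *src), ('in', *dst)

    def augment(u, visited):
        # DFS for one augmenting path in the residual graph.
        if u == t:
            return True
        visited.add(u)
        for v in list(cap[u]):
            if cap[u][v] > 0 and v not in visited and augment(v, visited):
                cap[u][v] -= 1
                cap[v][u] += 1
                return True
        return False

    flow = 0
    while augment(s, set()):
        flow += 1
    return flow
```

Running this once from the knight to the princess and once from the princess to the exit, and comparing each result against M, gives the decision procedure described above.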