How to solve Twisty Movement from Codeforces?

I've read the editorial, but it's very short and makes a claim I don't understand: why is the problem equivalent to finding the longest subsequence of the form 1*2*1*2*? Can someone explain the solution step by step and justify the claims at every step? http://codeforces.com/contest/934/problem/C
Here is the 'solution' from the editorial, but as I said, it's very short. I hope someone can guide me through it, justifying the claims along the way. Thanks.
Since 1 ≤ ai ≤ 2, it's equivalent to finding a longest subsequence of the form 1*2*1*2*. By an easy dynamic programming we can find it in O(n) or O(n^2) time. You can see the O(n^2) solution in the model solution below. Here we introduce an O(n) approach: since the subsequence can be split into 4 parts (11...22...11...22...), we can set dp[i][j] (i = 1...n, j = 0...3) to be the longest such subsequence of a[1...i] using only the first parts up to part j.

I also think that the cited explanation is not super clear. Here is another take.
You can collapse an original array
1 1 2 2 2 1 1 2 2 1
into a weighted array
2 3 2 2 1
^ ^ ^ ^ ^
1 2 1 2 1
where numbers at the top represent lengths of contiguous strips of repeated values in the original array.
We can convince ourselves that
The optimal flip does not "break up" any contiguous sequences.
The optimal flip starts and ends with different values (i.e. starts with 1 and ends with 2, or starts with 2 and ends with 1).
Hence, the weighted array contains enough information to solve the problem. We want to flip a contiguous slice of the weighted array s.t. the sum of weights associated with some contiguous monotonic sequence is maximized.
Specifically, we want to perform the flip in such a way that some contiguous monotonic sequence 112, 122, 211 or 221 has maximum weight.
One way to do this with dynamic programming is by creating 4 auxiliary arrays.
A[i] : maximal weight of any 1 to the right of i.
B[i] : maximal weight of any 1 to the left of i.
C[i] : maximal weight of any 2 to the right of i.
D[i] : maximal weight of any 2 to the left of i.
Let's assume that if any of A,B,C,D is accessed out of bounds, the returned value is 0.
We initialize x = 0 and do one pass through the array Arr = [1, 2, 1, 2, 1] with weights W = [2, 3, 2, 2, 1]. At each index i, we have 2 cases:
Arr[i:i+2] == [1, 2]. In this case we set
x = max(x, W[i] + W[i+1] + C[i+1], W[i] + W[i+1] + B[i-1]).
Arr[i:i+2] == [2, 1]. In this case we set
x = max(x, W[i] + W[i+1] + A[i+1], W[i] + W[i+1] + D[i-1]).
The resulting x is our answer. This is an O(N) solution.
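For completeness, the editorial's dp[i][j] formulation can also be sketched directly over the original (uncollapsed) array. This is only a rough sketch, and longest_1212 is my own name for it: for each prefix, keep the best length of a subsequence fitting the pattern 1...1 2...2 1...1 2...2 using only the first few blocks.

```python
def longest_1212(a):
    # dp[j] = best length of a subsequence of the current prefix that fits
    # the pattern 1...1 2...2 1...1 2...2 using only blocks 0..j
    dp = [0, 0, 0, 0]
    for x in a:
        ndp = list(dp)
        ndp[0] = dp[0] + (x == 1)              # still in the first 1-block
        ndp[1] = max(dp[0], dp[1]) + (x == 2)  # first 2-block
        ndp[2] = max(dp[1], dp[2]) + (x == 1)  # second 1-block
        ndp[3] = max(dp[2], dp[3]) + (x == 2)  # second 2-block
        dp = ndp
    return max(dp)
```

On the example array 1 1 2 2 2 1 1 2 2 1 used below this yields 9: after reversing the middle 2*1* portion, 9 of the 10 elements form a non-decreasing sequence.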

Related

Maximize number of zigzag sequence in an array

I want to find the longest zigzag sequence in an array (without reordering).
I have a main array containing a random sequence of integers, and I want a sub-array of indexes of the main array that has a zigzag pattern.
A sequence of integers is called a zigzag sequence if each of its elements is either strictly less than or strictly greater than its neighbors.
Example : The sequence 4 2 3 1 5 2 forms a zigzag, but 7 3 5 5 2 and 3 8 6 4 5
and 4 2 3 1 5 3 don't.
For a given array of integers we need to find (contiguous) sub-array of indexes that forms a zigzag sequence.
Can this be done in O(N) ?
Yes, this would seem to be solvable in O(n) time. I'll describe the algorithm as a dynamic program.
Setup
Let the array containing potential zig-zags be called Z.
Let U be an array such that len(U) == len(Z), and U[i] is an integer representing the largest contiguous left-to-right subsequence starting at i that is a zig-zag such that Z[i] < Z[i+1] (it zigs up).
Let D be similar to U, except that D[i] is an integer representing the largest contiguous left-to-right subsequence starting at i that is a zig-zag such that Z[i] > Z[i+1] (it zags down).
Subproblem
The subproblem is to find both U[i] and D[i] at each i. This can be done as follows:
U[i] = {
    1 + D[i+1]   if Z[i] < Z[i+1]
    0            otherwise
}
D[i] = {
    1 + U[i+1]   if Z[i] > Z[i+1]
    0            otherwise
}
The top version says that if we're looking for the largest sequence beginning with an up-zig, we see if the next element is larger (goes up), and then add a single zig to the size of the next down-zag sequence. The next one is the reverse.
Base Cases
If i == len(Z) - 1 (it is the last element), U[i] = D[i] = 0. The last element cannot have a left-to-right sequence after it because there is nothing after it.
Solution
To get the solution, first we find max(U[i]) and max(D[i]) over every i. Then take the maximum of those two values, store the corresponding i, and store the length of this largest zig-zag (in a variable called length). The sequence begins at index i and ends at index i + length.
Runtime
There are n indexes, so there are 2n subproblems between U and D. Each subproblem takes O(1) time to solve, given that solutions to previously solved subproblems are memoized. Finally, iterating through U and D to get the final answer takes O(2n) time.
We thus have O(2n) + O(2n) time, which is O(n).
This may be an overly complex solution, but it demonstrates that it can be done in O(n).
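The recurrence above can be sketched in a few lines (a sketch only; longest_zigzag_run is an assumed name, and it returns the element count of the longest zigzag sub-array rather than the step count):

```python
def longest_zigzag_run(Z):
    # U[i]: steps in the longest zigzag starting at i with Z[i] < Z[i+1]
    # D[i]: steps in the longest zigzag starting at i with Z[i] > Z[i+1]
    n = len(Z)
    U = [0] * n
    D = [0] * n
    for i in range(n - 2, -1, -1):  # base case: U[n-1] = D[n-1] = 0
        if Z[i] < Z[i + 1]:
            U[i] = 1 + D[i + 1]
        elif Z[i] > Z[i + 1]:
            D[i] = 1 + U[i + 1]
    # best step count, converted to a number of elements
    return max(max(U), max(D)) + 1
```

On the question's examples this reports 6 for 4 2 3 1 5 2 (the whole array) and 3 for 7 3 5 5 2 (the run 7 3 5).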

Find minimum no of swaps required to move all 1's together in a binary array

Eg: Array : [0,1,0,1,1,0,0]
Final Array: [0,0,1,1,1,0,0] , So swaps required = 1
I need an O(n) or O(n log n) solution.
You can do it in O(n):
In one pass through the data, determine the number of 1s. Call this k (it is just the sum of the elements in the list).
In a second pass through the data, use a sliding window of width k to find the number, m which is the maximum number of 1s in any window of size k. Since this is homework, I'll leave the details to you, but it can be done in O(n).
Then: the minimal number of swaps is k-m.
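Since the window details are left as homework, here is only a hedged sketch of how that pass might look (min_swaps_arbitrary is an assumed name; each swap exchanges a 0 inside the chosen window with a 1 outside it):

```python
def min_swaps_arbitrary(a):
    k = sum(a)            # total number of 1s
    window = sum(a[:k])   # 1s in the first window of width k
    best = window
    for i in range(k, len(a)):
        window += a[i] - a[i - k]  # slide the window one step right
        best = max(best, window)   # m = max 1s in any width-k window
    return k - best
```

On the example [0,1,0,1,1,0,0] this gives k = 3, m = 2 and hence 1 swap.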
EDIT This answer assumes that only two neighboring cells can be swapped. If the distance between the two swapped elements is arbitrary, see @JohnColeman's answer.
This can be done easily in linear time.
Suppose that the array is called a and its size is n.
Allocate integer array b of size n. Walk from left to right, save in b[i] the number of ones seen so far in a[0], ..., a[i].
Allocate integer array c of size n. Walk from right to left, save in c[i] the number of ones seen so far in a[i], ..., a[n - 1].
Initialize an integer res = 0. Walk through a one last time. For each i with a[i] == 0, add min(b[i], c[i]) to res.
Output res
Why does this work? Each zero must somehow bubble out of the block of ones: every zero must either "bubble up" past all ones to the right of it, or "bubble down" past all ones to the left of it. Swapping zeros with zeros is a waste of time, so the process of evicting zeros from the homogeneous block of ones should start with the zeros closest to the first 1 or the last 1. This means every zero makes exactly min(b[i], c[i]) swaps with 1s to exit the homogeneous block of ones.
Example:
a = [0,1,0,1,1,0,1,0,1,0,1,0]
b = [0,1,1,2,3,3,4,4,5,5,6,6]
c = [6,6,5,5,4,3,3,2,2,1,1,0]
now, min(b,c) would be (no need to compute it explicitly):
m = [0,1,1,2,3,3,3,2,2,1,1,0]
     ^   ^     ^   ^   ^   ^
The interesting values of min(b[i], c[i]), which correspond to 0s, are marked with ^. Summing them up yields: 0 + 1 + 3 + 2 + 1 + 0 = 7.
Indeed:
[0,1,0,1,1,0,1,0,1,0,1,0]
[0,0,1,1,1,0,1,0,1,0,1,0] 1
[0,0,1,1,1,0,1,0,1,1,0,0] 2 = 1 + 1
[0,0,1,1,1,0,1,1,0,1,0,0] 3
[0,0,1,1,1,0,1,1,1,0,0,0] 4 = 1 + 1 + 2
[0,0,1,1,0,1,1,1,1,0,0,0] 5
[0,0,1,0,1,1,1,1,1,0,0,0] 6
[0,0,0,1,1,1,1,1,1,0,0,0] 7 = 1 + 1 + 2 + 3
done: block of ones homogeneous.
Runtime for computation of the number res of swaps is obviously O(n). (Note: it does NOT say that the number of swaps is itself O(n)).
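The three passes described above can be sketched as follows (min_adjacent_swaps is an assumed name; b and c are exactly the arrays from the walkthrough):

```python
def min_adjacent_swaps(a):
    n = len(a)
    b = [0] * n  # b[i] = number of 1s in a[0..i]
    c = [0] * n  # c[i] = number of 1s in a[i..n-1]
    ones = 0
    for i in range(n):
        ones += a[i]
        b[i] = ones
    ones = 0
    for i in range(n - 1, -1, -1):
        ones += a[i]
        c[i] = ones
    # each 0 bubbles out past the nearer side's 1s
    return sum(min(b[i], c[i]) for i in range(n) if a[i] == 0)
```

On the example above it returns 7, matching the step-by-step trace.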
Let's consider each 1 as a potential static point. The cost for the left side of the static point is the number of 1s to its left minus the number of 1s already in the section it would naturally extend to (whose length equals the number of 1s on the left). Similarly for the right side.
Now find a way to do it efficiently for each potential static 1 :) Hint: think about how we could update those values as we iterate across the array.
1 0 1 0 1 1 0 0 1 0 1 1
          x
    <----
            ---->
x: potential static point; the arrows mark the sections it would extend to.
left cost at x: 3 - 2 = 1
right cost at x: 3 - 1 = 2

Generate a random integer from 0 to N-1 which is not in the list

You are given N and an int K[].
The task at hand is to generate a uniformly distributed random number between 0 and N-1 which doesn't exist in K.
N is an integer >= 0.
And K.length is < N-1. And 0 <= K[i] <= N-1. Also assume K is sorted and each element of K is unique.
You are given a function uniformRand(int M) which generates uniform random number in the range 0 to M-1 And assume this functions's complexity is O(1).
Example:
N = 7
K = {0, 1, 5}
the function should return any random number { 2, 3, 4, 6 } with equal
probability.
I could get an O(N) solution for this: first generate a random number between 0 and N - K.length - 1, then map the generated number to a number not in K. The second step takes the complexity to O(N). Can it be done better, in maybe O(log N)?
You can use the fact that all the numbers in K[] are between 0 and N-1 and they are distinct.
For your example case, you generate a random number from 0 to 3. Say you get a random number r. Now you conduct a binary search on the array K[].
Initialize i = K.length/2.
Compute K[i] - i. This gives the count of numbers missing from the array in the range 0 to K[i].
For example, K[2] = 5, so 3 values below it are missing (2, 3, 4).
Since you know r, you can decide whether to continue the search in the first part of K or the latter part.
This search gives a complexity of O(log K.length).
EDIT: For example,
N = 7
K = {0, 1, 4} // modified the array to clarify the algorithm steps.
the function should return any random number { 2, 3, 5, 6 } with equal probability.
A random number is generated in the range {0, ..., N - K.length - 1} = {0, ..., 3}. Say we get 3; hence we require the 4th missing number in array K.
Conduct binary search on array K[].
Initial i = K.length/2 = 1.
Now we see K[1] - 1 = 0. Hence no number is missing up to i = 1, so we search the latter part of the array.
Now i = 2. K[2] - 2 = 4 - 2 = 2. Hence there are 2 missing numbers up to index i = 2. But we need the 4th missing element. So we again have to search in the latter part of the array.
Now we reach an empty array. What should we do now? If we reach an empty array between say K[j] & K[j+1] then it simply means that all elements between K[j] and K[j+1] are missing from the array K.
Hence all elements above K[2] are missing from the array, namely 5 and 6. We need the 4th element out of which we have already discarded 2 elements. Hence we will choose the second element which is 6.
Binary search.
The basic algorithm:
(not quite the same as the other answer - the number is only generated at the end)
Start in the middle of K.
By looking at the current value and its index, we can determine the number of pickable numbers (numbers not in K) to the left.
Similarly, by including N, we can determine the number of pickable numbers to the right.
Now randomly go either left or right, weighted based on the count of pickable numbers on each side.
Repeat in the chosen subarray until the subarray is empty.
Then generate a random number in the range consisting of the numbers before and after the subarray in the array.
The running time would be O(log |K|), and, since |K| < N-1, O(log N).
The exact mathematics for number counts and weights can be derived from the example below.
Extension with K containing a bigger range:
Now let's say (for enrichment purposes) K can also contain values N or larger.
Then, instead of starting with the entire K, we start with a subarray up to position min(N, |K|), and start in the middle of that.
It's easy to see that the N-th position in K (if one exists) will be >= N, so this chosen range includes any possible number we can generate.
From here, we do a binary search for N, which gives us a point where all values to the left are < N (even if N itself is not found); the algorithm above doesn't deal with K containing values greater than N.
Then we just run the algorithm as above with the subarray ending at the last value < N.
The running time would be O(log N), or, more specifically, O(log min(N, |K|)).
Example:
N = 10
K = {0, 1, 4, 5, 8}
So we start in the middle - 4.
Given that we're at index 2, we know there are 2 elements to the left, and the value is 4, so there are 4 - 2 = 2 pickable values to the left.
Similarly, there are 10 - (4+1) - 2 = 3 pickable values to the right.
So now we go left with probability 2/(2+3) and right with probability 3/(2+3).
Let's say we went right, and our next middle value is 5.
We are at the first position in this subarray, and the previous value is 4, so we have 5 - (4+1) = 0 pickable values to the left.
And there are 10 - (5+1) - 1 = 3 pickable values to the right.
We can't go left (0 probability). If we go right, our next middle value would be 8.
There would be 2 pickable values to the left, and 1 to the right.
If we go left, we'd have an empty subarray.
So then we'd generate a number between 5 and 8, which would be 6 or 7 with equal probability.
This can be solved by basically solving this:
Find the rth smallest number not in the given array, K, subject to
conditions in the question.
For that consider the implicit array D, defined by
D[i] = K[i] - i for 0 <= i < L, where L is length of K
We also set D[-1] = 0 and D[L] = N
We also define K[-1] = 0.
Note, we don't actually need to construct D. Also note that D is sorted (and all elements non-negative), as the numbers in K[] are unique and increasing.
Now we make the following claim:
CLAIM: To find the rth smallest number not in K[], we need to find the rightmost occurrence in D of the largest value r' that is < r; say it occurs at position j. Such an r' exists because D[-1] = 0. Once we find such an r' (and j), the number we are looking for is K[j] + (r - r').
Proof: The definition of r' and j tells us that exactly r' numbers are missing from 0 to K[j], and at least r numbers are missing from 0 to K[j+1]. Thus all the numbers from K[j]+1 to K[j+1]-1 are missing (and there are at least r - r' of them), and the number we seek is among them, given by K[j] + (r - r').
Algorithm:
In order to find (r',j) all we need to do is a (modified) binary search for r in D, where we keep moving to the left even if we find r in the array.
This is an O(log |K|) algorithm.
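The whole answer can be sketched as follows, using a 0-based rank r instead of the 1-based "rth" number (kth_missing and rand_not_in are assumed names):

```python
import random

def kth_missing(K, r):
    # smallest number not in sorted K with 0-based rank r;
    # K[i] - i is the count of missing numbers below K[i]
    lo, hi = 0, len(K)
    while lo < hi:
        mid = (lo + hi) // 2
        if K[mid] - mid <= r:
            lo = mid + 1
        else:
            hi = mid
    return r + lo  # lo elements of K lie below the answer

def rand_not_in(N, K):
    # draw a uniform rank among the N - len(K) missing values
    return kth_missing(K, random.randrange(N - len(K)))
```

For N = 7 and K = [0, 1, 5], the ranks 0..3 map to 2, 3, 4 and 6, each returned with equal probability.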
If you are running this many times, it probably pays to speed up your generation operation: O(log N) time just isn't acceptable.
Make an empty array G. Starting at zero, count upwards while progressing through the values of K. If a value isn't in K add it to G. If it is in K don't add it and progress your K pointer. (This relies on K being sorted.)
Now you have an array G which has only acceptable numbers.
Use your random number generator to choose a value from G.
This requires O(N) preparatory work and each generation happens in O(1) time. After N look-ups the amortized time of all operations is O(1).
A Python mock-up:
import random

class PRNG:
    def __init__(self, K, N):
        self.G = []
        kptr = 0
        for i in range(N):
            if kptr < len(K) and K[kptr] == i:
                kptr += 1
            else:
                self.G.append(i)

    def getRand(self):
        rn = random.randint(0, len(self.G) - 1)
        return self.G[rn]

prng = PRNG([0, 1, 5], 7)
for i in range(20):
    print(prng.getRand())

Algorithm to maximize the smallest diagonal element of a matrix

Suppose we are given a square matrix A. Our goal is to maximize the smallest diagonal element by row permutations. In other words, the given matrix A has n diagonal elements d_i, and thus a minimum min{d_i}. Our purpose is to reach, by row permutations, the matrix with the largest possible minimum diagonal element.
This is max min{d_i} over all row permutations.
For example, suppose A = [4 3 2 1; 1 4 3 2; 2 1 4 3; 2.5 3.5 4.5 1.5]. The diagonal is [4, 4, 4, 1.5], whose minimum is 1.5. We can swap rows 3 and 4 to get a new matrix A' = [4 3 2 1; 1 4 3 2; 2.5 3.5 4.5 1.5; 2 1 4 3]. The new diagonal is [4, 4, 4.5, 3], with a new minimum of 3. And in theory this is the best result I can obtain, because there seems to be no better option: 3 seems to be the max min{d_i}.
In my problem, n is much larger, like 1000. I know there are n! row permutations, so I cannot go through each permutation. I know a greedy algorithm will help: we start from the first row; if a_11 is not the largest element in the first column, we swap a_11 with the largest element in the first column by a row permutation. Then we look at the second row, comparing a_22 with all remaining elements in the second column (excluding the first row), and swap so that a_22 is the largest remaining. We keep doing this until the last row.
Is there any better algorithm to do it?
This is similar to Minimum Euclidean Matching but they are not the same.
Suppose you wanted to know whether there was a better solution to your problem than 3.
Change your matrix to have a 1 for every element that is strictly greater than 3:
4   3   2   1         1 0 0 0
1   4   3   2         0 1 0 0
2.5 3.5 4.5 1.5  ->   0 1 1 0
2   1   4   3         0 0 1 0
Your problem can be interpreted as trying to find a perfect matching in the bipartite graph which has this binary matrix as its biadjacency matrix.
In this case, it is easy to see that there is no way of improving your result because there is no way of reordering rows to make the diagonal entry in the last column greater than 3.
For a larger matrix, there are efficient algorithms to determine maximal matchings in bipartite graphs.
This suggests an algorithm:
Use bisection to find the largest value for which the generated graph has a perfect matching
The assignment corresponding to the perfect matching with the largest value will be equal to the best permutation of rows
EDIT
This Python code illustrates how to use the networkx library to determine whether the graph has a perfect matching for a particular cutoff value.
import networkx as nx

A = [[4, 3, 2, 1],
     [1, 4, 3, 2],
     [2, 1, 4, 3],
     [2.5, 3.5, 4.5, 1.5]]
cutoff = 3

G = nx.DiGraph()
for i, row in enumerate(A):
    G.add_edge('start', 'row' + str(i), capacity=1.0)
    G.add_edge('col' + str(i), 'end', capacity=1.0)
    for j, e in enumerate(row):
        if e > cutoff:
            G.add_edge('row' + str(i), 'col' + str(j), capacity=1.0)

if nx.maximum_flow_value(G, 'start', 'end') < len(A):
    print('No perfect matching')
else:
    print('Has a perfect matching')
For a random matrix of size 1000*1000 it takes about 1 second on my computer.
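The same bisection idea can also be sketched without networkx, pairing it with a simple augmenting-path (Kuhn) bipartite matching. This is a sketch only (best_min_diagonal is an assumed name), and for n near 1000 a Hopcroft-Karp matching would be a better fit than this O(V·E) routine:

```python
def best_min_diagonal(A):
    # largest v such that the bipartite graph with an edge (row i, col j)
    # whenever A[i][j] >= v has a perfect matching
    n = len(A)

    def has_perfect_matching(v):
        match = [-1] * n  # match[j] = row currently assigned to column j

        def try_row(i, seen):
            for j in range(n):
                if A[i][j] >= v and not seen[j]:
                    seen[j] = True
                    if match[j] == -1 or try_row(match[j], seen):
                        match[j] = i
                        return True
            return False

        return all(try_row(i, [False] * n) for i in range(n))

    values = sorted({e for row in A for e in row})
    lo, hi = 0, len(values) - 1  # the smallest value is always feasible
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if has_perfect_matching(values[mid]):
            lo = mid
        else:
            hi = mid - 1
    return values[lo]
```

On the question's 4x4 matrix this returns 3, matching the analysis of the cutoff example.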
Let x_{ij} be 1 if row i is moved to row j, and zero otherwise.
You're interested in the following integer program:
max z
subject to
    sum_{i=1}^n x_{ij} = 1               for all j
    sum_{j=1}^n x_{ij} = 1               for all i
    sum_{i=1}^n A[i][j] x_{ij} >= z      for all j
    x_{ij} in {0, 1}
Then plug this into GLPK, Gurobi, or CPLEX. Alternatively, solve the IP with your own branch-and-bound solver.

Algorithm for "pick the number up game"

I'm struggling to find a solution and have no idea where to start.
RobotA and RobotB play on a permutation of the N numbers. RobotA picks first, and they pick alternately. On each turn, a robot may pick any one remaining number from the permutation. When the remaining numbers form an increasing sequence, the game finishes. The robot who picked last (after which the sequence became increasing) wins the game.
Assuming both play optimally, who wins?
Example 1:
The original sequence is 1 7 3.
RobotA wins by picking 7, after which the sequence is increasing 1 3.
Example 2:
The original sequence is 8 5 3 1 2.
RobotB wins by selecting the 2, preventing any increasing sequence.
Is there any known algorithm to solve this? Any tips or ideas of where to look would be really appreciated!
Given a sequence w of distinct numbers, let N(w) be the length of w and let L(w) be the length of the longest increasing subsequence in w. For example, if
w = 3 5 8 1 4
then N(w) = 5 and L(w) = 3.
The game ends when L(w) = N(w), or, equivalently, N(w) - L(w) = 0.
Working the game backwards, if on RobotX's turn N(w) - L(w) = 1, then the optimal play is to remove the unique letter not in a longest increasing subsequence, thereby winning the game.
For example, if w = 1 7 3, then N(w) = 3 and L(w) = 2 with a longest increasing subsequence being 1 3. Removing the 7 results in an increasing sequence, ensuring that the player who removed the 7 wins.
Going back to the previous example, w = 3 5 8 1 4, if either 1 or 4 is removed, then for the resulting permutation u we have N(u) - L(u) = 1, so the player who removed the 1 or 4 will certainly lose to a competent opponent. However, any other play results in a victory since it forces the next player to move to a losing position. Here, the optimal play is to remove any of 3, 5, or 8, after which N(u) - L(u) = 2, but after the next move N(v) - L(v) = 1.
Further analysis along these lines should lead to an optimal strategy for either player.
The nearest mathematical game that I do know is the Monotone Sequence Game. In a monotonic sequence game, two players alternately choose elements of a sequence from some fixed ordered set (e.g. 1,2,...,N). The game ends when the resulting sequence contains either an ascending subsequence of length A or a descending one of length D. This game has its origins with a theorem of Erdos and Szekeres, and a nice exposition can be found in MONOTONIC SEQUENCE GAMES, and this slide presentation by Bruce Sagan is also a good reference.
If you want to know more about game theory in general, or these sorts of games in particular, then I strongly recommend Winning Ways for Your Mathematical Plays by Berlekamp, Conway and Guy. Volume 3, I believe, addresses these sorts of games.
Looks like a Minimax problem.
I suspect there is a faster solution for this task; I will think about it. But I can give you an idea of a solution with O(N! * N^2) complexity.
At first, note that picking a number from an N-permutation is equivalent to the following:
Pick a number from the N-permutation; say it was number X.
Reassign the remaining numbers using the rule:
1 -> 1
2 -> 2
...
X-1 -> X-1
X -> nothing, it's gone
X+1 -> X
...
N -> N-1
And you get a permutation of N-1 numbers.
Example:
1 5 6 4 2 3
Pick 2
1 5 6 4 3
Reassign
1 4 5 3 2
Let's use this as the move, instead of just picking. It's easy to see that the games are equivalent: player A wins in this game for some permutation if and only if he wins in the original.
Let's assign a code to every permutation of N numbers, N-1 numbers, ..., 2 numbers.
Define F(x) -> {0, 1} (where x is a permutation code) as the function which is 1 when the current player wins and 0 when the current player loses. Clearly F(1 2 .. K-1 K) = 0.
F(x) = 1 if there is at least one move transforming x into some y with F(y) = 0.
F(x) = 0 if every move transforming x into some y has F(y) = 1.
So you can use recursion with memoization to compute F:
Boolean F(X)
{
    Let K be the length of the permutation with code X.
    if F has already been computed for argument X, return the stored result;
    if X == code of (1 2 .. K), return 0;
    Boolean result = 0;
    for i = 1 to K do
    {
        Y = code of the permutation obtained from X by picking the number at position i;
        if (F(Y) == 0)
        {
            result = 1;
            break;
        }
    }
    Store result as F(X);
    return result;
}
For each argument we compute this function only once. There are 1! permutations of length 1, 2! permutations of length 2, ..., N! permutations of length N. For a permutation of length K we need O(K) operations, so the total complexity is O(1·1! + 2·2! + ... + N·N!) <= O(N! * N^2).
Here is Python code for Wisdom's Wind's algorithm. It prints out wins for RobotA.
import itertools

def moves(p):
    if tuple(sorted(p)) == p:
        return
    for i in p:
        yield tuple(j - (j > i) for j in p if j != i)

winning = set()
for n in range(6):
    for p in itertools.permutations(range(n)):
        if not winning.issuperset(moves(p)):
            winning.add(p)

for p in sorted(winning, key=lambda q: (len(q), q)):
    print(p)
