Bipartite Matching to form an Array - algorithm

I am given numbers from 1 to N, and there are M relationships given in the form a b, meaning we can connect numbers a and b.
We have to form a valid array. An array A is said to be valid if, for every two consecutive indexes, the pair A[i], A[i+1] is one of the M relationships.
We have to construct a valid array of size N; it is guaranteed that such an array exists.
Solution: Build a bipartite graph from the relationships and compute a matching, but there is a loophole in this:
let N=6
M=6
1 2
2 3
1 3
4 5
5 6
3 4
So Bipartite Matching gives this:
Match[1]=2
Match[2]=3
Match[3]=1 // here it forms a loop
Match[4]=5
Match[5]=6
So how do I print a valid array of size N, given that N can be very large and many loops can be formed? Is there any other solution?
Another Example:
let N=6
M=6
1 3
3 5
2 5
5 1
4 2
6 4
It will form a loop 1->3->5->1, but a valid array still exists:
1 3 5 2 4 6
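
For concreteness, here is a minimal Python checker for the validity condition defined above (the helper name is_valid is my own, and it assumes the relationships are unordered, per "we can connect a and b"); it accepts 1 3 5 2 4 6 for the second example:

def is_valid(array, relationships):
    # treat each relationship a b as an unordered pair
    allowed = {frozenset(p) for p in relationships}
    return all(frozenset((array[i], array[i + 1])) in allowed
               for i in range(len(array) - 1))

relations = [(1, 3), (3, 5), (2, 5), (5, 1), (4, 2), (6, 4)]
print(is_valid([1, 3, 5, 2, 4, 6], relations))   # True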

Related

count the number of split points

I just got a question about counting the split points in an integer array: the array should be split so that every distinct integer appears on both sides of the split.
ex:
1 1 4 2 4 2 4 1
we can either split it into:
1 1 4 2 | 4 2 4 1
or
1 1 4 2 4 | 2 4 1
so that at least one '1', '2', and '4' appears on each side.
The integers can range from 1 to 100,000.
The required complexity is O(n). How do I solve this question?
Make one pass over the array and build count[i] = how many times the value i appears in the array. The problem is only solvable if count[i] >= 2 for every value that actually appears. You can use this array to tell how many distinct values you have in your array.
Next, make another pass and, using another array count2[i] (or by reusing the first one), keep track of when you have visited each value at least once. Then use that position as your split point.
Example:
1 1 4 2 4 2 4 1
count = [3, 2, 0, 3] => 3 distinct values
1 1 4 2 4 2 4 1
^               => 1 distinct value so far
  ^             => 1 distinct value so far
    ^           => 2 distinct values so far
      ^         => 3 distinct values so far => this is your split point
There might be cases for which there is no solution, for example if the last 1 was at the beginning as well. To detect this, you can just make another pass over the rest of the array after you have decided on the split point and see if you still have all the values on that side.
You can avoid this last pass by using the count and count2 arrays to detect when you can no longer have a split point. This is left as an exercise.
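
A minimal Python sketch of the approach described in this answer (the function name and the use of Counter are my own); it finds the first split point and then verifies it with the extra pass mentioned above:

from collections import Counter

def first_split_point(a):
    count = Counter(a)                  # first pass: occurrences of each value
    distinct = len(count)
    seen = set()
    split = None
    for i, x in enumerate(a):           # second pass: where have we seen every value?
        seen.add(x)
        if len(seen) == distinct:
            split = i + 1               # length of the left part
            break
    # verification pass: the right part must also contain every value
    if split is not None and len(Counter(a[split:])) == distinct:
        return split
    return None

print(first_split_point([1, 1, 4, 2, 4, 2, 4, 1]))   # 4, i.e. 1 1 4 2 | 4 2 4 1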

Move square inside large matrix, find minimum number in overlapping

I have a square matrix and a smaller square which moves inside the matrix over all possible positions (it does not go out of the matrix). I need to find the smallest number covered in each such overlapping.
The problem is that the sizes of both can go upto thousands. Any fast way to do that?
I know one way - if there's an array instead of a matrix and a window instead of a square, we can do that in linear time using a deque.
Thanks in advance.
EDIT: Examples
Matrix:
1 3 6 2 5
8 2 3 4 5
3 8 6 1 5
7 4 8 2 1
8 0 9 0 5
For a square of size 3, a total of 9 overlappings are possible. The minimum number for each overlapping, in matrix form, is:
1 1 1
2 1 1
0 0 0
It is possible in O(k * n^2) with your deque idea:
If your smaller square is k x k, first precompute, for every column of the matrix, the minimum of the elements in rows 1 to k, rows 2 to k + 1, and so on (this precomputation will take O(k * n^2)). Then take the band of rows 1 to k and treat it as an array. This is your first band:
*********
1 3 6 2 5
8 2 3 4 5
3 8 6 1 5
*********
7 4 8 2 1
8 0 9 0 5
The precomputation I mentioned will give you the minimum in each of its columns, so you will have reduced the problem to your 1d array problem.
Then continue with the band of rows 2 to k + 1:
1 3 6 2 5
*********
8 2 3 4 5
3 8 6 1 5
7 4 8 2 1
*********
8 0 9 0 5
There will be O(n) such bands and you will be able to solve each one in O(n), because our precomputation reduces each band to a basic 1-D array.
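
A Python sketch of this two-stage idea (function names are my own), where the deque-based 1-D window minimum does both the column precomputation and the per-band scan; on the 5 x 5 example with k = 3 it reproduces the 3 x 3 matrix of minima shown in the question:

from collections import deque

def sliding_min(a, k):
    # classic deque-based sliding-window minimum of a 1-D sequence
    out, dq = [], deque()               # dq keeps indices of increasing values
    for i, x in enumerate(a):
        while dq and a[dq[-1]] >= x:
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()
        if i >= k - 1:
            out.append(a[dq[0]])
    return out

def window_minima(matrix, k):
    n = len(matrix)
    # precomputation: minimum of each k-tall strip in every column
    col_min = [sliding_min([matrix[i][j] for i in range(n)], k) for j in range(n)]
    # each band of k rows is now a plain 1-D array; slide the window across it
    return [sliding_min([col_min[j][i] for j in range(n)], k)
            for i in range(n - k + 1)]

Using the deque for the column precomputation as well brings the total down to O(n^2); a direct implementation of the precomputation as described gives the O(k * n^2) bound stated above.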

Minimize maximum absolute difference in pairs of numbers

The problem statement:
Given n variables and k pairs. Each variable is assigned a distinct value from 1 to n. Each pair p contains 2 variables; let abs(p) be the absolute difference between the values of the 2 variables in p. Define the upper bound of the differences as U = max(abs(p) | every p).
Find an assignment that minimizes U.
Limits:
n <= 100
k <= 1000
Each variable appears at least 2 times in the list of pairs.
A problem instance:
Input
n=9, k=12
1 2 (meaning pair x1 x2)
1 3
1 4
1 5
2 3
2 6
3 5
3 7
3 8
3 9
6 9
8 9
Output:
1 2 5 4 3 6 7 8 9
(meaning x1=1,x2=2,x3=5,...)
Explanation: The assignment x1=1, x2=2, x3=3, ... results in U=6 (the pair 3 9 has the greatest abs value). The output assignment achieves U=4, the minimum value (changed pairs: 3 7 => 5 7, 3 8 => 5 8, etc., while 3 5 is unchanged; in this case, abs(p) <= 4 for every pair).
There is an important point: to achieve the best assignment, the variables in the pairs that have the greatest abs must be changed.
Based on this, I have thought of a greedy algorithm:
1) Assign every x the default assignment (x(i) = i).
2) Locate the pairs that have the largest abs and the x(i)'s contained in them.
3) For every i, j: calculate U, swap the values of x(i) and x(j), and calculate U'. If U' < U, keep the swap and repeat step 3. If U' >= U for every i, j, stop and output the assignment (a sketch follows below).
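
A literal Python sketch of these steps (names are my own; step 2's restriction to the largest-abs pairs is omitted for brevity, so every swap is tried):

import itertools

def max_abs(assign, pairs):
    return max(abs(assign[a] - assign[b]) for a, b in pairs)

def greedy_swap_search(n, pairs):
    assign = {v: v for v in range(1, n + 1)}       # 1) default assignment x(i) = i
    while True:
        U = max_abs(assign, pairs)
        for i, j in itertools.combinations(range(1, n + 1), 2):   # 3) try swaps
            assign[i], assign[j] = assign[j], assign[i]
            if max_abs(assign, pairs) < U:
                break                              # keep the improving swap, rescan
            assign[i], assign[j] = assign[j], assign[i]            # undo
        else:
            return assign                          # no single swap improves U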
However, this method has a major pitfall. If we need a cyclic reassignment like
x(a)<<x(b), x(b)<<x(c), x(c)<<x(a),
we have to realize it in 2 swaps, e.g. x(a)<=>x(b) and then x(b)<=>x(c). There is then a possibility that the intermediate x(b)<<x(a) from the first step makes some abs larger than U, so that swap is rejected and the search gets stuck.
Is there any efficient algorithm to solve this problem?
This looks like http://en.wikipedia.org/wiki/Graph_bandwidth (NP-complete, even for special cases). It looks like people run http://en.wikipedia.org/wiki/Cuthill-McKee_algorithm when they need to do this, to try to turn a sparse matrix into a banded diagonal matrix.
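
As an illustration of the Cuthill-McKee idea linked above, here is a rough Python sketch (function name mine): label the variables in BFS order, visiting neighbours by increasing degree, and use the labels as the assignment. It is only a heuristic, since graph bandwidth is NP-complete in general.

from collections import deque

def cuthill_mckee_labels(n, pairs):
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in pairs:
        adj[a].add(b)
        adj[b].add(a)
    order, seen = [], set()
    # start each component from an unvisited vertex of minimum degree
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in seen:
            continue
        seen.add(start)
        q = deque([start])
        while q:
            v = q.popleft()
            order.append(v)
            for w in sorted(adj[v] - seen, key=lambda u: len(adj[u])):
                seen.add(w)
                q.append(w)
    return {v: i + 1 for i, v in enumerate(order)}   # label = position in BFS order

# U for a labelling: max(abs(label[a] - label[b]) for a, b in pairs)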

Permuting rows in an array to eliminate increasing subsequences

The following problem is taken from Problems on Algorithms (Problem 653):
You are given an n x 2 matrix of numbers. Find an O(n log n) algorithm that permutes the rows of the array such that neither column of the array contains an increasing subsequence (which need not consist of contiguous array elements) longer than ⌈√n⌉.
I'm not sure how to solve this. I think that it might use some sort of divide-and-conquer recurrence, but I can't seem to find one.
Does anyone have any ideas how to solve this?
Here's my solution.
1) Sort rows according to the first element from greatest to lowest.
1 6        5 1
3 3   -\   3 3
2 4   -/   2 4
5 1        1 6
2) Divide it into groups of ⌈√n⌉ rows, plus whatever is left over (no more than ⌈√n⌉ groups)
5 1        5 1
3 3   -\   3 3
2 4   -/
1 6        2 4
           1 6
3) Sort rows in each group according to the second element from greatest to lowest
5 1        3 3
3 3        5 1
      ->
2 4        1 6
1 6        2 4
Proof of correctness:
An increasing subsequence in column 1 can occur only within a single group (whose size is <= ⌈√n⌉).
No 2 elements of an increasing subsequence in column 2 can lie in the same group, and there are no more than ⌈√n⌉ groups.
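
A Python sketch of the three steps (function name mine); on the example above it returns (3,3), (5,1), (1,6), (2,4):

import math

def permute_rows(rows):
    n = len(rows)
    g = math.isqrt(n)
    if g * g < n:
        g += 1                                     # group size ceil(sqrt(n))
    # 1) sort by the first element, greatest to lowest
    rows = sorted(rows, key=lambda r: r[0], reverse=True)
    out = []
    # 2) cut into groups of g rows, 3) sort each group by the second element
    for start in range(0, n, g):
        group = sorted(rows[start:start + g], key=lambda r: r[1], reverse=True)
        out.extend(group)
    return out

print(permute_rows([(1, 6), (3, 3), (2, 4), (5, 1)]))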

Matrix, algorithm interview question

This was one of my interview questions.
We have a matrix containing integers (no range provided). The matrix is randomly populated with integers. We need to devise an algorithm which finds those rows which match exactly with a column (or columns). We need to return the row number and the column number for each match. The order of the matching elements is the same: for example, if the i'th row matches the j'th column and the i'th row contains the elements [1,4,5,6,3], then the j'th column also contains the elements [1,4,5,6,3]. The size is n x n.
My solution:
RCEQUAL(A, i1..i2, j1..j2)   // A is an n*n matrix
    if (i2-i1 == 2 && j2-j1 == 2 && b[n*i1+1..n*i2] has [j1..j2])
        use brute force to check if the rows and columns are the same
        if (any rows and columns are the same)
            store the row and column numbers in b[1..n^2]
            // b[1], b[n+2], b[2n+3], ... store the row numbers;
            // b[2..n+1] stores the columns that match row 1,
            // b[n+3..2n+2] those that match row 2, etc.
    else
        RCEQUAL(A, 1..n/2, 1..n/2)
        RCEQUAL(A, n/2..n, 1..n/2)
        RCEQUAL(A, 1..n/2, n/2..n)
        RCEQUAL(A, n/2..n, n/2..n)
Takes O(n^2). Is this correct? If correct, is there a faster algorithm?
You could build a trie from the data in the rows. Then you can compare the columns with the trie.
This would allow you to exit as soon as the beginning of a column does not match any row; it also lets you check a column against all rows in one pass.
Of course the trie is most interesting when n is big (setting up a trie for a small n is not worth it) and when there are many rows and columns which are nearly the same. But even in the worst case, where all the integers in the matrix are different, the structure allows for a clear algorithm...
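
A possible Python sketch of the trie idea (nested dicts, names mine): insert every row, then walk each column through the trie and abandon it at the first mismatch:

def build_row_trie(matrix):
    root = {}
    for r, row in enumerate(matrix):
        node = root
        for x in row:
            node = node.setdefault(x, {})
        node.setdefault(None, []).append(r)        # rows that end at this node
    return root

def rows_matching_columns(matrix):
    n, root, pairs = len(matrix), build_row_trie(matrix), []
    for c in range(n):
        node = root
        for r in range(n):
            node = node.get(matrix[r][c])
            if node is None:
                break                              # column prefix matches no row
        else:
            for r in node.get(None, []):
                pairs.append((r, c))               # row r equals column c
    return pairs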
You could speed up the average case by calculating the sum of each row and column and narrowing your brute-force comparison (which you have to do eventually) to only those rows whose sums match a column's sum.
This doesn't improve the worst case (all rows and columns having the same sum), but if your input is truly random that "won't happen" :-)
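
A short sketch of that filter (names mine): compute the row and column sums once, and only do the element-by-element comparison when the sums agree:

def matches_by_sum(matrix):
    n = len(matrix)
    row_sums = [sum(row) for row in matrix]
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    found = []
    for r in range(n):
        for c in range(n):
            if (row_sums[r] == col_sums[c]
                    and all(matrix[r][k] == matrix[k][c] for k in range(n))):
                found.append((r, c))               # row r equals column c
    return found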
This might only work on non-singular matrices (not sure), but...
Let A be a square (and possibly non-singular) NxN matrix. Let A' be the transpose of A. If we create the matrix B as the horizontal concatenation of A and A' (in other words [A A']) and put it into RREF form, we will get a diagonal of all ones in the left half and some square matrix in the right half.
Example:
A = 1 2
    3 4
A'= 1 3
    2 4
B = 1 2 1 3
    3 4 2 4
rref(B) = 1  0  0    -2
          0  1  0.5  2.5
On the other hand, if a column of A were equal to a row of A, then that column of A would be equal to a column of A'. Then we would get another single 1 in one of the columns of the right half of rref(B).
Example
A=
1 2 3 4 5
2 6 -3 4 6
3 8 -7 6 9
4 1 7 -5 3
5 2 4 -1 -1
A'=
1 2 3 4 5
2 6 8 1 2
3 -3 -7 7 4
4 4 6 -5 -1
5 6 9 3 -1
B =
1 2 3 4 5 1 2 3 4 5
2 6 -3 4 6 2 6 8 1 2
3 8 -7 6 9 3 -3 -7 7 4
4 1 7 -5 3 4 4 6 -5 -1
5 2 4 -1 -1 5 6 9 3 -1
rref(B)=
1 0 0 0 0 1.000 -3.689 -5.921 3.080 0.495
0 1 0 0 0 0 6.054 9.394 -3.097 -1.024
0 0 1 0 0 0 2.378 3.842 -0.961 0.009
0 0 0 1 0 0 -0.565 -0.842 1.823 0.802
0 0 0 0 1 0 -2.258 -3.605 0.540 0.662
The 1.000 in the top row of the right half means that the first column of A matches one of its rows. The fact that the 1.000 is in the left-most column of the right half means that it is the first row.
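
If you want to try this numerically, here is a rough sketch using SymPy's exact rref (the wrapper is mine; as noted at the start of this answer, it is only expected to behave when A is non-singular). A unit column with the 1 at row i, appearing at position j of the right half, flags row j of A as equal to column i of A:

from sympy import Matrix

def rref_row_col_matches(a):
    A = Matrix(a)
    n = A.rows
    R, _ = A.row_join(A.T).rref()                  # rref of B = [A | A']
    matches = []
    for j in range(n):                             # column j of the right half
        col = [R[r, n + j] for r in range(n)]
        ones = [r for r, v in enumerate(col) if v == 1]
        if len(ones) == 1 and all(v == 0 for r, v in enumerate(col) if r != ones[0]):
            matches.append((j, ones[0]))           # row j of A equals column ones[0] of A
    return matches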
Without looking at your algorithm or any of the approaches in the previous answers: since the matrix has n^2 elements to begin with, I do not think there is a method which does better than that :)
IFF the matrix is truly random...
You could create a list of pointers to the columns sorted by the first element. Then create a similar list of the rows sorted by their first element. This takes O(n*logn).
Next create an index into each sorted list initialized to 0. If the first elements match, you must compare the whole row. If they do not match, increment the index of the one with the lowest starting element (either move to the next row or to the next column). Since each index cycles from 0 to n-1 only once, you have at most 2*n comparisons unless all the rows and columns start with the same number, but we said a matrix of random numbers.
The time for a row/column comparison is n in the worst case, but is expected to be O(1) on average with random data.
So 2 sorts of O(nlogn), and a scan of 2*n*1 gives you an expected run time of O(nlogn). This is of course assuming random data. Worst case is still going to be n**3 for a large matrix with most elements the same value.
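
A rough Python sketch of this scan (names mine). It assumes ties on the first element are rare; with many duplicated first elements it can miss pairs, which is the caveat above about everything starting with the same number:

def find_matches_sorted_scan(matrix):
    n = len(matrix)
    rows = sorted(range(n), key=lambda r: matrix[r][0])    # row indices by first element
    cols = sorted(range(n), key=lambda c: matrix[0][c])    # column indices by first element
    i = j = 0
    matches = []
    while i < n and j < n:
        r, c = rows[i], cols[j]
        if matrix[r][0] < matrix[0][c]:
            i += 1
        elif matrix[r][0] > matrix[0][c]:
            j += 1
        else:
            # first elements agree: compare the whole row against the whole column
            if all(matrix[r][k] == matrix[k][c] for k in range(n)):
                matches.append((r, c))
            i += 1                                         # naive tie handling
    return matches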

Resources