Intersection of trinary matrices [closed] - algorithm

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 6 years ago.
Improve this question
Consider 3 matrices over four variables b0,b1,b2,b3. All these matrices have the same number of columns but can have different numbers of rows. Each element of a matrix can take one of three values, 1, 0 or 2, where 2 represents don't-care. I have to find the binary strings that are present in all three matrices. For example, consider the following 3 matrices:
matrix1:
1 0 2 2
2 2 0 0
1 2 1 1
matrix2:
2 2 0 2
1 0 1 2
matrix3:
2 2 1 2
1 2 2 1
2 2 2 1
So, for this example, the string b0=1,b1=0,b2=1,b3=1 is present in all matrices. Because, in matrix1, the row b0=1,b1=2,b2=1,b3=1 matches 1011. In matrix2, b0=1,b1=0,b2=1,b3=2 matches 1011, and in matrix3, b0=2,b1=2,b2=2,b3=1 matches 1011.
How do I find all binary strings that exist in all 3 matrices?

The idea is to "expand" each row to its set of possibilities, so for example 1022 gets expanded to:
1000
1001
1010
1011
Then, it's convenient to convert each string to an integer (a single-byte integer, since the "strings" are 4 bits long) and place it in a sorted array, or even a set.
The next step is to sort the groups by size, from the smallest to the largest, then iterate over the values of the smallest group and check that each exists in all other groups. This is very fast because of the preparation work done in the "parsing" step.
Every value that passes for all groups is a match.
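A minimal Python sketch of this expand-and-intersect approach (the function names are illustrative, not from the question):

```python
from itertools import product

def expand(row):
    """Yield every concrete bit string covered by a row, packed into a small integer."""
    choices = [(0, 1) if cell == 2 else (cell,) for cell in row]
    for bits in product(*choices):
        value = 0
        for b in bits:
            value = (value << 1) | b   # pack the 4 bits into one integer
        yield value

def common_strings(matrices):
    """Expand each matrix to a set of integers, then intersect the sets."""
    groups = [{v for row in m for v in expand(row)} for m in matrices]
    groups.sort(key=len)               # probe from the smallest group outward
    return groups[0].intersection(*groups[1:])

m1 = [[1, 0, 2, 2], [2, 2, 0, 0], [1, 2, 1, 1]]
m2 = [[2, 2, 0, 2], [1, 0, 1, 2]]
m3 = [[2, 2, 1, 2], [1, 2, 2, 1], [2, 2, 2, 1]]
print(sorted(common_strings([m1, m2, m3])))  # [9, 10, 11] -> 1001, 1010, 1011
```

Note that on the question's example this finds 1001 and 1010 in addition to the 1011 the asker mentioned, since those are also covered by some row of every matrix.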

I suppose the simplest and reasonably efficient algorithm is to brute-force check all possible combinations. Start with 0000, then 0001, then 0010, etc. For each candidate, iterate over each matrix and compare rows. On the first matching row, go to the next matrix; if no row matches, immediately reject the candidate.
You will have to iterate over each matrix at most 16 times, which is still O(N) in the size of the matrices.
If you want to optimize the actual comparison, you can precompute lookup masks for each matrix row. Build bitmasks for the 0-allowed and 1-allowed positions and AND their complements with the has-0 and has-1 bitmasks of the query string. If either of the two results is non-zero (you can OR them together and check once), the row won't match.
In any case, it should be very fast with any comparison implementation, as you will be doing only 16*(1000+1000+1000) operations rather than the (1000*1000*1000) you were probably considering.
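A short sketch of the 16-candidate scan (without the bitmask optimization; names are illustrative):

```python
def row_matches(bits, row):
    """A row matches when every cell is a don't-care (2) or equals the query bit."""
    return all(cell == 2 or cell == bit for cell, bit in zip(row, bits))

def brute_force_common(matrices, width=4):
    """Try all 2**width candidates; keep those matched by some row of every matrix."""
    result = []
    for value in range(1 << width):
        bits = [(value >> (width - 1 - i)) & 1 for i in range(width)]
        if all(any(row_matches(bits, row) for row in m) for m in matrices):
            result.append(value)
    return result

m1 = [[1, 0, 2, 2], [2, 2, 0, 0], [1, 2, 1, 1]]
m2 = [[2, 2, 0, 2], [1, 0, 1, 2]]
m3 = [[2, 2, 1, 2], [1, 2, 2, 1], [2, 2, 2, 1]]
print(brute_force_common([m1, m2, m3]))  # [9, 10, 11] -> 1001, 1010, 1011
```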

What is the logic or algorithm for the following from HackerEarth? [closed]

Closed 5 years ago.
Hackers love bits, so does Alex. Alex has just started his career as a hacker and found a special binary array A (containing 0s and 1s).
In one operation, he can choose any two positions in the array and swap their values. He has to perform exactly one operation and maximize the length of the subarray containing only 1s.
As Alex is a newbie in his field, help him, and output the required length of the subarray containing only 1s.
Input Format:
First line consists of one integer N, denoting the number of elements in the array.
Second line consists of N space-separated integers, denoting the elements of the array.
Output Format:
Print the required length of the subarray containing only 1s.
Input Constraints:
1 ≤ N ≤ 100
0 ≤ A[i] ≤ 1
Input:
5
1 1 1 0 1
Output:
4
General algorithm:
Have a variable, inSub, that keeps track of the substring you are looking at. Its value will be -1 at first, indicating that you are not looking at a particular substring at the moment.
Iterate through the string until you find a 1. The index of this 1 will be the new value of inSub.
Also have a variable, hitZero (initialized to False), which keeps track of whether the current substring has encountered a zero yet. This matters because one zero can be replaced by a one, assuming there is another one elsewhere in the list. If a zero is hit and hitZero is False, it turns True. If hitZero is already True, store the substring's length in a list SubList, reset inSub to -1, and set hitZero back to False.
At the end of this you will have a list of substring lengths. If there are multiple substrings in the list, the answer will be the longest substring (because you can take a one from one of the other substrings). If there is only one substring, the answer will be the longest contiguous string of 1's in the substring. (Edited)
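The bookkeeping above can be condensed into a sliding-window sketch (an equivalent formulation, not the literal inSub/SubList variables): grow a window that tolerates at most one zero, and cap the answer by the total number of ones, since the swapped-in one has to come from somewhere.

```python
def longest_ones_after_one_swap(arr):
    """Longest run of 1s achievable with one swap: the best window containing
    at most one 0, capped by the total count of 1s (the swap can only fill the
    window's zero if a spare 1 exists outside it)."""
    total_ones = sum(arr)
    best = left = zeros = 0
    for right, value in enumerate(arr):
        zeros += value == 0
        while zeros > 1:               # shrink until at most one 0 remains
            zeros -= arr[left] == 0
            left += 1
        best = max(best, right - left + 1)
    return min(best, total_ones)

print(longest_ones_after_one_swap([1, 1, 1, 0, 1]))  # 4
```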
Let me explain the thought process you can use to solve this problem.
Whenever you see a problem, try to come up with some test cases in your mind.
For this particular question, any of the following test cases would be a good starting point.
Example 1:-
Input:
1 1 1 1 0 0 0
Output
4
In the above example, you can clearly see that no swap can improve the result, and therefore the output is 4.
Example 2:-
Input
1 1 1 1 1 1
Output
6
In the above example too, no swap helps, as there is no run of ones separated by a 0, so your answer is simply the length of the whole string.
Example 3:-
Input
1 1 1 0 0 0 1 1 1 1
Output
5
In the above example, you can see that swapping a one from the first contiguous run of ones with the zero adjacent to the second run gives you your answer.
So now you have three good examples covering most of the corner cases.
Look at the comments below and the algorithm above to get a good view of how to solve this problem.
Thanks to Carl for reviewing my algorithm.
Hope this helps!

Dynamic Programming for sub sum elements in a matrix [closed]

Closed 4 years ago.
Given a chess board with 4 rows and n columns, with an integer on every cell. You are given 2n discs; you can place some or all of them on distinct cells of the board so that the total sum of the covered cells is maximal. The only limitation is that 2 discs can't be next to each other vertically or horizontally. How to place the best combination of discs on the board in O(n) using DP?
1st, we cannot possibly use more than 2n discs, as any column can contain at most 2 discs.
Let's say d[i][mask] is the maximum amount obtained after filling columns 1 to i with discs so that the last column is filled as mask (mask can be 0000, 0001, 0010, 0100, 0101, 1000, 1001 or 1010, where 1 means a disc was placed and 0 means it wasn't).
So d[i][mask1] = max over mask2 of (d[i-1][mask2] + value obtained by applying mask1 to the i-th column), where mask2 is any mask that doesn't conflict with mask1, i.e. no disc in the same row (mask1 AND mask2 = 0).
Edit 1: You do not need to change anything. When you are on the i-th step with a certain mask, it only depends on the answers for the masks of step i-1, and you already have all of them. You just try to update the current d[i][mask] from those which are valid. As I already said, d[i][mask] stores the maximum value obtainable by filling columns 1 to i optimally so that the last column has discs in the form of this mask (it doesn't matter how the masks before the i-th column were filled, because they won't affect the next column).
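The recurrence can be sketched as follows (a minimal version; only the previous column's d is kept, and the function name is illustrative):

```python
def max_disc_sum(board):
    """board: 4 rows x n columns of integers.  d[mask] = best total so far,
    with the most recent column occupied per `mask` (bit r set = disc on row r)."""
    n = len(board[0])
    # Masks with no two vertically adjacent discs: the 8 masks from the text.
    valid = [m for m in range(16) if m & (m << 1) == 0]
    d = {m: 0 for m in valid}          # before the first column
    for col in range(n):
        column = [board[r][col] for r in range(4)]
        nd = {}
        for m1 in valid:
            gain = sum(column[r] for r in range(4) if (m1 >> r) & 1)
            # m2 must not clash horizontally: no shared row, i.e. m1 & m2 == 0.
            nd[m1] = gain + max(d[m2] for m2 in valid if m1 & m2 == 0)
        d = nd
    return max(d.values())
```

Since the empty mask 0000 is always valid, cells with negative values are simply left uncovered.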

Counting the number of group in a matrix [closed]

Closed 8 years ago.
I have found an interesting problem.
An n*m matrix is given, of the following form:
11111111
11111001
11111001
10111111
10111111
11100111
11111111
The goal of the problem is to find the number of '0' blocks. In the example above, there are 3 '0' blocks.
I don't understand how to solve this problem. I'm not asking for any code; I would just like some hints about how to solve it.
You can use depth-first search to find connected components in a graph whose vertices are the cells containing 0, with an edge between two vertices whenever the two cells are adjacent.
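The DFS over zero-cells can be sketched like this (iterative, 4-connected; the function name is illustrative):

```python
def count_zero_blocks(grid):
    """Count 4-connected components of '0' cells via iterative depth-first search."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    blocks = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '0' and (r, c) not in seen:
                blocks += 1                    # new component found
                stack = [(r, c)]
                seen.add((r, c))
                while stack:                   # flood-fill the whole component
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == '0' and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
    return blocks

grid = ["11111111", "11111001", "11111001", "10111111",
        "10111111", "11100111", "11111111"]
print(count_zero_blocks(grid))  # 3
```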
Given your definition of block:
For every row, check whether there are two (or more) contiguous zeros; if so, increase the 0-block count by 1 for each such occurrence.
You repeat the same procedure for the columns of the matrix.
I am not sure from your description of the problem how you should count bigger blocks like:
1 1 1 1
1 0 0 1
1 0 0 1
1 1 1 1
Is this a single block?

Sort a deck of cards with minimum number of moves [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
We have n cards with each card numbered from 1 to n.
All cards are randomly shuffled.
We are allowed only operation MoveCard(n) which moves the card with value n to the top of the pile.
We need to sort the pile of cards with minimum number of MoveCard operations.
The naive approach I can think of is to start with MoveCard(n), MoveCard(n-1), MoveCard(n-2), ..., MoveCard(1).
This approach solves the problem in n MoveCard operations.
But can we optimize it?
For instance, If the input is like: 3 1 4 2
As per my approach:
4 3 1 2
3 4 1 2
2 3 4 1
1 2 3 4
MoveCard operations is 4.
But we can solve this problem with minimum number of moves:
Optimized solution is:
3 1 4 2
2 3 1 4
1 2 3 4
MoveCard operations is 2.
From the optimized solution above, I am feeling the following approach will solve the problem.
At each step, we pick the element to move that leaves sorted elements at the top and bottom, with the condition that the maximum element of the sorted subarray at the start must be less than the minimum element of the sorted subarray at the bottom.
In this case:
3 1 4 2
Moving 2 we are getting 2 3 1 4 { 2,3 sorted from the start and 4 sorted from the bottom}
Now we are choosing 1 which gives the full sorted array. 1 2 3 4.
A simple way to do this is to look at the numbers in reverse order. If the top two aren't in order, move the lower one. If they are in order, look for the next one down. Once you find one that is out of order, move it, and then each other card below it in descending order.
Basically, find n. If n-1 comes after n in the array, move n-1 to the front. n--, and repeat.
For example:
2 4 3 1 // 3 comes after 4, so move 3
3 2 4 1 // move 2
2 3 4 1 // move 1
1 2 3 4 // done after 3 moves
3 1 4 2 // 3 comes -before- 4, so leave it alone. 2 comes after 3, move it
2 3 1 4 // move 1
1 2 3 4 // done after 2 moves
It ends up being the same as the naive approach, but starting only with the "optimum" start. You don't always have to move the top cards.
Worst case time complexity is O(n^2), simply because you have to do an unordered search for each number. I can't prove this is the best complexity possible, but it's surely the simplest and clearest way to do it.
Worst case number of moves is n-1, since you can always just leave the n card alone.
Now, if you just want to know how many moves you need, instead of actually sorting, you can stop at the first move. For example, if you have to move 3 because it comes after 4, then you'll need 3 moves. You can see this because if 3 is at the front, you'll always have to move every card below it to the front.
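The move-count shortcut from the last paragraph can be sketched as follows (the function name is illustrative): walk down from n while each next-lower card already lies before its successor; the first card that breaks this chain, and every card below it, must be moved once.

```python
def min_moves_to_sort(cards):
    """cards is a permutation of 1..n.  Card k stays put only while it already
    lies before card k+1; every smaller card must be moved to the top once."""
    n = len(cards)
    pos = {value: i for i, value in enumerate(cards)}
    k = n
    while k > 1 and pos[k - 1] < pos[k]:   # k-1 already before k: leave it alone
        k -= 1
    return k - 1                            # cards 1..k-1 each need one move

print(min_moves_to_sort([3, 1, 4, 2]))  # 2
print(min_moves_to_sort([2, 4, 3, 1]))  # 3
```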

number of possible sudoku puzzles [closed]

Closed 5 years ago.
Wiki http://en.wikipedia.org/wiki/Mathematics_of_Sudoku says Sudoku has 6,670,903,752,021,072,936,960 possible permutations. I tried to work it out, but it seems difficult. Can someone tell me how this number is calculated?
You can find all about it in this Wiki: http://en.wikipedia.org/wiki/Mathematics_of_Sudoku.
"the number of valid Sudoku solution grids for the standard 9×9 grid was calculated by Bertram Felgenhauer and Frazer Jarvis in 2005 to be 6,670,903,752,021,072,936,960. This number is equal to 9! × 72^2 × 2^7 × 27,704,267,971, the last factor of which is prime. The result was derived through logic and brute force computation."
You can read the most recent rewrite of the original publication by Bertram Felgenhauer and Frazer Jarvis: Mathematics of Sudoku; it details the computation over 7 pages. The calculation actually isn't trivial (the idea being to enumerate distinct and valid Sudoku grids, rather than all possible arrangements of digits over a 9x9 grid).
Interestingly there was an estimation of the number of possible sudokus posted in an internet forum before the actual value was calculated and published by Felgenhauer & Jarvis.
The author of the post points out that there are some unproven assumptions in his guess. But the estimated value differs by 0.2% from the actual value published later.
In this Wiki you can find some estimation of other types of sudoku based on similar guesses.
Here is the full post from The New Sudoku Players' Forum:
by Guest » Fri Apr 22, 2005 1:27 pm
Let's try this from a whole different direction:
Step A:
Pretend that the only 'rule' was the 'block' rule, and that the row and column rules did not exist. Then each block could be arranged 9! ways, or 9!^9 ways to populate the puzzle (1.0911*10^50 'solutions').
Step B1:
If we then say 'let us add a rule about unique values in a row', then the top three blocks can be filled as follows:
Block 1: 9! ways
Block 2: 56 ways to select which values go in each 3-cell row, and 3! ways to arrange them (remember that we haven't invented a column rule yet).
Block 3: with 1 and 2 filled, the values that go in each row is now defined, but each row can be arranged 3! ways.
Therefore, we have 9! * 56 * 3!^6 ways to fill the top three blocks, and this value cubed to fill all nine blocks. (or 8.5227*10^35 solutions). Note that this represents a 'reduction ratio' (denoted as R) of 1.2802*10^14, by adding this one new rule.
Step B2: But we could have just as easily added a 'unique in columns' rule, and achieved the same results downward instead of across, with the same value of R.
Step C: (and here is where my solution is not rigorous) What if we assume that each of these rules would constrain the number of valid solutions by exactly the same ratio? Then there would be a combined reduction ratio of R^2. So the initial value of 1.0911*10^50 solutions would reduce by a factor of R^2, or 1.639*10^28, leaving 6.6571*10^21 valid solutions.
This post and the account are attributed to Kevin Kinfoil (Felgenhauer & Jarvis).
Additional notes
Assume the Block 1 is
1 2 3
4 5 6
7 8 9
Then we have the following possibilities for Block2, if we ignore the order of the rows
1 2 3 4 5 6
4 5 6 7 8 9
7 8 9 1 2 3
this is 1 possibility
1 2 3 7 8 9
4 5 6 1 2 3
7 8 9 4 5 6
this is 1 possibility
1 2 3 two of 4,5,6, one of 7,8,9 3*3
4 5 6 the two remaining of 7,8,9, one of 1,2,3 3
7 8 9 the two remaining of 1,2,3, the remaining of (two of 4,5,6) 1
these are (3*3)*3*1=27 possibilities
1 2 3 two of 7,8,9, one of 4,5,6 3*3
4 5 6 two of 1,2,3, the remaining of 7,8,9 3
7 8 9 the two remaining of 4,5,6, the remaining of two of 1,2,3 1
these are (3*3)*3*1=27
So all in all these are 1+1+27+27=56 possibilities.
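The 56 can be confirmed by a small brute-force enumeration of the unordered row contents of Block 2 (an illustrative check, not part of the original post): fix Block 1 as {1,2,3}/{4,5,6}/{7,8,9} and count the ways to split the nine values into three row-sets that avoid repeating a value in their row.

```python
from itertools import combinations

def count_block2_row_contents():
    """Ways to choose which 3 values fill each row of Block 2, ignoring the
    order within a row, given Block 1 holds {1,2,3}/{4,5,6}/{7,8,9}."""
    values = set(range(1, 10))
    block1_rows = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
    count = 0
    for row1 in combinations(sorted(values - block1_rows[0]), 3):
        rest = values - set(row1)
        for row2 in combinations(sorted(rest - block1_rows[1]), 3):
            row3 = rest - set(row2)
            if not row3 & block1_rows[2]:   # row 3 must avoid 7, 8, 9
                count += 1
    return count

print(count_block2_row_contents())  # 56
```

Multiplying by the 3! arrangements within each of the six free rows recovers the 9! * 56 * 3!^6 figure from Step B1.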