Basically, you have something like this:
0 9 5 3'
4 1 5' 4'
5 7' 6' 9
2 8' 5 10
In this case, the longest snake would be 3 -> 4 -> 5 -> 6 -> 7 -> 8. I put ' behind those numbers to help show it visually.
You can go both horizontally and vertically. The matrix can be n x m, so there isn't really a limit to the number of rows and columns.
What is the most efficient way to figure this out?
I've thought about starting at position (n/2, m/2) and recursively doing a breadth-first search from there, keeping track of the longest run I can find, but I'm not sure how best to tackle it.
You could create a graph where the nodes are matrix positions and the edges point from a cell containing a number N to a neighbouring cell containing N+1.
Once the graph is built, your problem amounts to finding one of the longest paths in this graph.
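Here is a minimal sketch of that idea (assuming the grid is an int[][] and a step may only go to a horizontally or vertically adjacent cell whose value is exactly one greater). Because every edge goes from N to N+1, the graph is acyclic, so a memoized DFS finds the longest path:

```java
public class LongestSnake {
    static int rows, cols;
    static int[][] grid;
    static int[][] memo; // memo[r][c] = length of the longest snake starting at (r, c)

    static int longest(int r, int c) {
        if (memo[r][c] != 0) return memo[r][c];
        int best = 1; // the cell itself
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : dirs) {
            int nr = r + d[0], nc = c + d[1];
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && grid[nr][nc] == grid[r][c] + 1) {
                best = Math.max(best, 1 + longest(nr, nc));
            }
        }
        return memo[r][c] = best;
    }

    public static void main(String[] args) {
        grid = new int[][]{{0, 9, 5, 3}, {4, 1, 5, 4}, {5, 7, 6, 9}, {2, 8, 5, 10}};
        rows = grid.length;
        cols = grid[0].length;
        memo = new int[rows][cols];
        int answer = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                answer = Math.max(answer, longest(r, c));
        System.out.println(answer); // 6 for the example grid (3 -> 4 -> 5 -> 6 -> 7 -> 8)
    }
}
```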
We have n cards with each card numbered from 1 to n.
All cards are randomly shuffled.
We are allowed only one operation, MoveCard(n), which moves the card with value n to the top of the pile.
We need to sort the pile of cards with minimum number of MoveCard operations.
The naive approach which I can think of is MoveCard(n), MoveCard(n-1), MoveCard(n-2), ..., MoveCard(1).
This approach solves the problem in n MoveCard operations.
But can we optimize it?
For instance, if the input is: 3 1 4 2
As per my approach:
4 3 1 2
3 4 1 2
2 3 4 1
1 2 3 4
That is 4 MoveCard operations.
But we can solve this instance in fewer moves.
The optimized solution is:
3 1 4 2
2 3 1 4
1 2 3 4
That is only 2 MoveCard operations.
From the optimized solution above, I feel the following approach will solve the problem.
At each step, pick the element whose move leaves sorted runs at both the top and the bottom of the pile, with the condition that the maximum element of the sorted prefix is less than the minimum element of the sorted suffix.
In this case:
3 1 4 2
Moving 2 gives 2 3 1 4 (2, 3 sorted from the start and 4 sorted from the bottom).
Now we choose 1, which gives the fully sorted array: 1 2 3 4.
A simple way to do this is to look at the numbers in reverse order. If the two largest aren't in relative order, move the smaller one; if they are, look at the next one down. Once you find the first number that is out of order, move it, and then move each smaller card in descending order.
Basically: find n. If n-1 comes after n in the array, move n-1 to the front. n--, and repeat.
For example:
2 4 3 1 // 3 comes after 4, so move 3
3 2 4 1 // move 2
2 3 4 1 // move 1
1 2 3 4 // done after 3 moves
3 1 4 2 // 3 comes -before- 4, so leave it alone. 2 comes after 3, move it
2 3 1 4 // move 1
1 2 3 4 // done after 2 moves
It ends up being the same as the naive approach, but starting only from the 'optimal' point: you don't always have to move the highest cards.
Worst case time complexity is O(n^2), simply because you have to do an unordered search for each number. I can't prove this is the best complexity possible, but it's surely the simplest and clearest way to do it.
Worst case number of moves is n-1, since you can always just leave the n card alone.
Now, if you just want to know how many moves you need, instead of actually sorting, you can stop at the first move. For example, if you have to move 3 because it comes after 4, then you'll need exactly 3 moves: once 3 is at the front, you'll always have to move every smaller card to the front as well.
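If it helps, here is a minimal sketch of that move-counting version (assuming the pile is an int[] holding a permutation of 1..n, with index 0 as the top; it uses an auxiliary position array, so the scan is linear rather than the unordered search described above):

```java
public class CardSort {
    // Minimum number of MoveCard operations: walk down from n while each value
    // already lies before its successor; every value below the first mismatch must move.
    static int minMoves(int[] pile) {
        int n = pile.length;
        int[] pos = new int[n + 1];                 // pos[v] = index of card v in the pile
        for (int i = 0; i < n; i++) pos[pile[i]] = i;
        int v = n;
        while (v > 1 && pos[v - 1] < pos[v]) v--;   // v-1 already comes before v, leave it alone
        return v - 1;                               // cards v-1, v-2, ..., 1 must be moved
    }

    public static void main(String[] args) {
        System.out.println(minMoves(new int[]{3, 1, 4, 2})); // 2
        System.out.println(minMoves(new int[]{2, 4, 3, 1})); // 3
        System.out.println(minMoves(new int[]{1, 2, 3, 4})); // 0
    }
}
```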
I'm creating an algorithm that can build an adjacency list from a list of edges.
For example, if the data input was:
1 2
1 8
2 8
3 5
3 1
4 5
4 6
5 2
5 9
6 4
6 8
7 4
7 10
8 4
8 6
9 4
9 5
10 7
10 3
The output would be:
1: 8 4 6
2: 4 6
3: 9 2 8
4: 2 9 8
5: 8 4
6: 5 4
7: 5 6 3
8: 5 6 4
9: 5 6 2
10: 4 5 1
The algorithm is obviously bounded by the number of vertices and edges, so originally I was thinking it would be O(v + e). But I could only get the program to work by nesting for loops over 2D arrays, which I believe makes the complexity O(N^2).
Can anyone help me better understand?
It depends a fair bit on what sort of data structure you use to store the map from vertices to lists of adjacent vertices. Iterating through the list of edges is of course going to have time complexity O(e). Any larger time complexity is going to come from the time required to find a vertex in the map and the time required to insert a new item into that vertex's list of adjacent vertices. If you were using flat arrays you could end up with O(v*e) complexity (for each edge, loop through the vertex list to find the desired vertex), but this can be improved quite a lot by using a hash table or tree data structure that gives you better lookup performance.
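For what it's worth, here is a minimal sketch of that hash-table approach (assuming integer vertex labels and directed edges given as pairs, as in the question); the build is a single pass over the edges, so it is O(v + e) with expected O(1) map operations:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AdjacencyList {
    public static void main(String[] args) {
        // A few of the edges from the question, as (from, to) pairs.
        int[][] edges = {{1, 2}, {1, 8}, {2, 8}, {3, 5}, {3, 1}};
        Map<Integer, List<Integer>> adj = new HashMap<>();
        for (int[] e : edges) {
            // Expected O(1) lookup/insert per edge, instead of scanning a vertex array.
            adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
        }
        adj.forEach((v, neighbours) -> System.out.println(v + ": " + neighbours));
    }
}
```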
My coworker gave me a challenging question that I believe is NP but he won't take that as an answer.
Given a matrix, determine how many non-repeating number/letter combinations there are by picking only one number per column. It isn't acceptable to brute-force this (try all possible combinations). He wants a formula to solve the problem.
For example he gave me this matrix
1 2 2 3
2 3 3 4
3 4 4 5
4 5 5 6
Some example results would be
1) 1 2 3 4
2) 1 2 3 5
3) 1 2 3 6
4) 1 3 2 4
5) 1 3 2 5
6) etc...
I wrote a Java program which basically consisted of 4 nested for loops to go through all possible combinations (4x4x4x4 = 256 combos); I believe the answer was 36 possible combos. But to him this is unacceptable. And the solution can't be specific to one matrix alone; it has to work for all n x n matrices.
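For reference, here is a sketch of that brute force generalised from four nested loops to a recursion over the columns, so it works for any n x n matrix (assuming "non-repeating" means no value appears twice in a chosen combination):

```java
import java.util.HashSet;
import java.util.Set;

public class ColumnCombos {
    // Count combinations by picking one value per column, never reusing a value.
    static int count(int[][] m, int col, Set<Integer> used) {
        if (col == m[0].length) return 1;      // one value chosen for every column
        int total = 0;
        for (int row = 0; row < m.length; row++) {
            int v = m[row][col];
            if (used.add(v)) {                 // skip values already picked
                total += count(m, col + 1, used);
                used.remove(v);                // backtrack
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2, 2, 3}, {2, 3, 3, 4}, {3, 4, 4, 5}, {4, 5, 5, 6}};
        System.out.println(count(m, 0, new HashSet<>()));
    }
}
```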
I've been racking my brain on this, and I believe the problem is NP (hard/complete): an individual answer can be checked in polynomial time, but there seems to be no general algorithm; you have to brute-force it.
Any help/pointers/places of reference would be greatly appreciated...
The Wikipedia article http://en.wikipedia.org/wiki/Mathematics_of_Sudoku says Sudoku has 6,670,903,752,021,072,936,960 possible permutations. I tried to work it out, but it seems difficult. Can someone tell me how this number is calculated?
You can find all about it in this Wiki: http://en.wikipedia.org/wiki/Mathematics_of_Sudoku.
"the number of valid Sudoku solution grids for the standard 9×9 grid was calculated by Bertram Felgenhauer and Frazer Jarvis in 2005 to be 6,670,903,752,021,072,936,960 . This number is equal to 9! × 722 × 27 × 27,704,267,971, the last factor of which is prime. The result was derived through logic and brute force computation."
You can read the most recent rewrite of the original publication by Bertram Felgenhauer and Frazer Jarvis, Mathematics of Sudoku; it details the computation over 7 pages. The calculation actually isn't trivial (the idea being to enumerate distinct and valid Sudoku grids, rather than all possible arrangements of digits over a 9×9 grid).
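As a quick sanity check of that factorisation, here is a minimal sketch using exact integer arithmetic:

```java
import java.math.BigInteger;

public class SudokuCount {
    public static void main(String[] args) {
        BigInteger fact9 = BigInteger.ONE;
        for (int i = 2; i <= 9; i++) fact9 = fact9.multiply(BigInteger.valueOf(i)); // 9!
        BigInteger n = fact9
                .multiply(BigInteger.valueOf(72).pow(2))      // 72^2
                .multiply(BigInteger.valueOf(2).pow(7))       // 2^7
                .multiply(new BigInteger("27704267971"));     // the prime factor
        System.out.println(n); // 6670903752021072936960
    }
}
```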
Interestingly there was an estimation of the number of possible sudokus posted in an internet forum before the actual value was calculated and published by Felgenhauer & Jarvis.
The author of the post points out that there are some unproven assumptions in his guess. But the estimated value differs by 0.2% from the actual value published later.
In the same Wiki article you can find estimates for other types of Sudoku based on similar guesses.
Here is the full post from The New Sudoku Players' Forum:
by Guest » Fri Apr 22, 2005 1:27 pm
Let's try this from a whole different direction:
Step A:
Pretend that the only 'rule' was the 'block' rule, and that the row and column rules did not exist. Then each block could be arranged 9! ways, or 9!^9 ways to populate the puzzle (1.0911*10^50 'solutions').
Step B1:
If we then say 'let us add a rule about unique values in a row', then the top three blocks can be filled as follows:
Block 1: 9! ways
Block 2: 56 ways to select which values go in each 3-cell row, and (3!)^3 ways to arrange them within the rows (remember that we haven't invented a column rule yet).
Block 3: with Blocks 1 and 2 filled, the values that go in each row are now determined, but each row can still be arranged 3! ways ((3!)^3 in total).
Therefore, we have 9! * 56 * 3!^6 ways to fill the top three blocks, and this value cubed to fill all nine blocks. (or 8.5227*10^35 solutions). Note that this represents a 'reduction ratio' (denoted as R) of 1.2802*10^14, by adding this one new rule.
Step B2: But we could have just as easily added a 'unique in columns' rule, and achieved the same results downward instead of across, with the same value of R.
Step C: (and here is where my solution is not rigorous) What if we assume that each of these rules would constrain the number of valid solutions by exactly the same ratio? Then there would be a combined reduction ratio of R^2. So the initial value of 1.0911*10^50 solutions would reduce by a factor of R^2, or 1.639*10^28, leaving 6.6571*10^21 valid solutions.
This post and the account are attributed to Kevin Kinfoil (Felgenhauer & Jarvis).
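The arithmetic in the post is easy to reproduce; here is a rough numeric check (approximate doubles, following steps A, B1 and C exactly as written above):

```java
public class SudokuEstimate {
    public static void main(String[] args) {
        double fact9 = 362880.0;                        // 9!
        double blocksOnly = Math.pow(fact9, 9);         // Step A: ~1.0911e50
        double topBand = fact9 * 56 * Math.pow(6, 6);   // 9! * 56 * (3!)^6
        double rowRule = Math.pow(topBand, 3);          // Step B1: ~8.5227e35
        double r = blocksOnly / rowRule;                // reduction ratio R: ~1.2802e14
        double estimate = blocksOnly / (r * r);         // Step C: ~6.657e21
        System.out.printf("Step A:   %.4e%n", blocksOnly);
        System.out.printf("Step B1:  %.4e%n", rowRule);
        System.out.printf("R:        %.4e%n", r);
        System.out.printf("Estimate: %.4e%n", estimate);
    }
}
```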
Additional notes
Assume Block 1 is
1 2 3
4 5 6
7 8 9
Then we have the following possibilities for Block 2, if we ignore the order of the values within each row.
1 2 3 4 5 6
4 5 6 7 8 9
7 8 9 1 2 3
this is 1 possibility
1 2 3 7 8 9
4 5 6 1 2 3
7 8 9 4 5 6
this is 1 possibility
1 2 3   two of 4,5,6 and one of 7,8,9: 3*3 ways
4 5 6   the two remaining of 7,8,9 and one of 1,2,3: 3 ways
7 8 9   the two remaining of 1,2,3 and the remaining one of 4,5,6: 1 way
these are (3*3)*3*1 = 27 possibilities
1 2 3   two of 7,8,9 and one of 4,5,6: 3*3 ways
4 5 6   two of 1,2,3 and the remaining one of 7,8,9: 3 ways
7 8 9   the two remaining of 4,5,6 and the remaining one of 1,2,3: 1 way
these are (3*3)*3*1 = 27 possibilities
So all in all these are 1+1+27+27=56 possibilities.
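A small brute-force check of that count (assuming Block 1 is fixed as 1 2 3 / 4 5 6 / 7 8 9, and counting only which set of three digits goes into each row of Block 2, ignoring the order inside a row and any column rule):

```java
public class Block2Count {
    public static void main(String[] args) {
        final int ALL = 0b111111111;                  // bitmask of digits 1..9 (bit d-1 = digit d)
        final int ROW1 = 0b000000111, ROW2 = 0b000111000, ROW3 = 0b111000000; // digits 1-3, 4-6, 7-9
        int count = 0;
        for (int r1 = 0; r1 <= ALL; r1++) {
            if (Integer.bitCount(r1) != 3 || (r1 & ROW1) != 0) continue;   // 3 digits, none of 1,2,3
            for (int r2 = 0; r2 <= ALL; r2++) {
                if (Integer.bitCount(r2) != 3 || (r2 & r1) != 0) continue; // 3 fresh digits
                if ((r2 & ROW2) != 0) continue;                            // none of 4,5,6
                int r3 = ALL & ~(r1 | r2);                                 // the remaining 3 digits
                if ((r3 & ROW3) != 0) continue;                            // none of 7,8,9
                count++;
            }
        }
        System.out.println(count); // 56
    }
}
```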