*There is just one peak.
The textbook I'm studying says the time complexity of the greedy ascent algorithm is O(nm), which becomes O(n^2) when m = n. So it means that in the worst case, I have to visit all elements of the 2D array.
But I think that case arises only when the number of rows or columns is 1 and the elements are sorted, so that if I start at the minimum element I have to visit every element to reach the peak.
Given that, I can't understand why the time complexity is O(n^2) when n = m.
If n = m = 1, it would take O(1), and calling that O(n^2) is kind of meaningless.
In the other case, where n = m and n > 1, is there actually an input that takes O(n^2)?
If there is no such input, then isn't O(n^2) the wrong complexity?
I think O(n+m) might be the right complexity, because the worst case is when the starting point is (0,0) and the peak is at (n,m); to reach the peak, I have to move vertically n times and horizontally m times.
Where is my understanding wrong?
*The point of my question is: I think deriving the complexity for the case n = m as O(n^2) from O(nm) is wrong, and that the right complexity is O(n+m) = O(2n) = O(n).
You can easily construct a case where you need to visit at least n*m/2 elements: put the maximum in the bottom-right corner, then follow a zigzagging path, placing on each traversed space an element slightly smaller than the previous one. The path goes all the way to the left, then up two, then all the way to the right, and so on, until it reaches the top-left or top-right corner. Put a minimal element in all other spaces.
For n=m=4, it looks like this:
0 0 0 1
5 4 3 2
6 0 0 0
7 8 9 10
If you happen to pick the 1, you need to go through 10 > n*m/2 = 8 elements.
For n=4 and m=5:
1 2 3 4
0 0 0 5
9 8 7 6
10 0 0 0
11 12 13 14
Here you need to go through 14 > n*m/2 = 10 elements.
So in the worst case you must visit at least n*m/2 elements, which means the O(nm) bound is tight.
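For concreteness, here is a minimal sketch of the greedy-ascent procedure under discussion (my own illustration, not from any particular textbook): start at some cell and repeatedly step to any strictly larger neighbor until none exists. Starting at the 1 in the matrices above forces it to walk the entire zigzag path.

#include <utility>
#include <vector>

// Greedy ascent sketch: from (r, c), keep moving to any strictly larger
// 4-neighbor; stop when the current cell is a peak. On the zigzag input
// above it visits on the order of nm cells.
std::pair<int, int> greedyAscent(const std::vector<std::vector<int>>& a, int r, int c)
{
    const int n = a.size(), m = a[0].size();
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    bool moved = true;
    while (moved)
    {
        moved = false;
        for (int k = 0; k < 4; k++)
        {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < n && nc >= 0 && nc < m && a[nr][nc] > a[r][c])
            {
                r = nr; c = nc;  // climb to the larger neighbor
                moved = true;
                break;
            }
        }
    }
    return {r, c};  // coordinates of a peak
}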
I believe that selection sort has the following behavior:
Best case: no swaps required, as all elements are already properly arranged.
Worst case: n-1 swaps required, i.e. one swap per pass, where there are n-1 passes and n is the number of elements in the array.
Average case: I'm not able to work this out. What is the procedure for finding it?
Is the above information correct?
This source says the time complexity of swaps in the best case is O(n):
http://ocw.utm.my/file.php/31/Module/ocwChp5SelectionSort.pdf
Each iteration of selection sort consists of scanning across the array, finding the minimum element that hasn't been placed yet, and then swapping it to the appropriate position. In a naive implementation of selection sort, this means that there will always be n - 1 swaps made, regardless of the distribution of elements in the input array.
If you want to minimize the number of swaps, though, you can implement selection sort so that it doesn't perform a swap in the case where the element to be moved is already in the right place. If you add in this restriction, then you're correct that zero swaps would be made in the best case. (I'm not sure whether it's worthwhile to modify selection sort this way, since swaps are pretty fast in most cases).
Really, it depends on the implementation. You could potentially have a weird implementation of selection sort that constantly swaps the candidate minimum element to its tentative final spot on each iteration, which would dramatically increase the number of swaps in the worst case. I'm not sure why you'd do this, though. It's little details like this that account for why your explanation seems at odds with what you've found online: depending on how the code is put together, the number of swaps can be different.
The best-case and worst-case running times of selection sort are both on the order of n^2. This is because regardless of how the elements are initially arranged, on the i-th iteration of the main for loop, the algorithm always inspects each of the remaining n-i elements to find the smallest one remaining.
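Here is a sketch of selection sort with the swap-skipping check described above (the swap counter is my addition, so you can check swap counts on concrete inputs):

#include <algorithm>
#include <vector>

// Selection sort that skips the swap when the minimum of the unsorted tail
// is already in position. Returns the number of swaps actually performed:
// 0 on sorted input, at most n-1 in general.
int selectionSortCountingSwaps(std::vector<int>& a)
{
    int swaps = 0;
    for (std::size_t i = 0; i + 1 < a.size(); i++)
    {
        std::size_t minIdx = i;
        for (std::size_t j = i + 1; j < a.size(); j++)  // scan the unsorted tail
        {
            if (a[j] < a[minIdx])
                minIdx = j;
        }
        if (minIdx != i)  // skip self-swaps
        {
            std::swap(a[i], a[minIdx]);
            swaps++;
        }
    }
    return swaps;
}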
Selection sort is the algorithm that takes the minimum number of swaps, and in the best case it takes ZERO (0) swaps, when the input is an already sorted array like 1,2,3,4. But the more pertinent question is: what is the worst-case number of swaps in selection sort, and for which input does it occur?
Answer: the worst-case number of swaps is n-1. But it does not occur for the reverse-ordered input: a reverse-ordered input like 6,5,3,2,1 takes only about n/2 swaps. If you analyse a bit more, you'll see that the worst case occurs for a "sine wave" kind of input, one that alternately increases and decreases, like crests and troughs.
7 6 8 5 9 4 10 3 - input of eight (8) elements will therefore require 7 swaps
3 6 8 5 9 4 10 7 (1)
3 4 8 5 9 6 10 7 (2)
3 4 5 8 9 6 10 7 (3)
3 4 5 6 9 8 10 7 (4)
3 4 5 6 7 8 10 9 (5)
3 4 5 6 7 8 10 9 (6) (the minimum, 8, is already in place, so this pass swaps it with itself)
3 4 5 6 7 8 9 10 (7)
Hence the worst case for the number of swaps in selection sort is n-1, the best case is 0, and the average is (n-1)/2 swaps. (Note that pass (6) above is a self-swap; if self-swaps are skipped, as in the optimized variant above, this particular input needs only 6 real swaps.)
For a given N×N matrix containing only 0s and 1s, find the count of rows and the count of columns in which at least one 1 occurs.
E.g.
0 0 0 0
1 0 0 1
1 0 0 1
1 1 0 1
Rows having a 1 at least once: 3
Columns having a 1 at least once: 3
My mind is frozen; I can't think of any way better than the normal double for loop, which gives me O(n^2).
Looking forward to some help.
You cannot produce the answer without reading the matrix, and reading it already takes O(N^2), so there is no way to solve this question in a better order than O(n^2); at best you can improve the constant in O(2*(n^2)).
You need to know about every cell in your array. Suppose you build a graph in which every vertex corresponds to a cell of your matrix; to find out the value of a cell you have to visit its vertex, which you can do with DFS in minimal order.
The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time O(|E|), linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce.
And you have N^2 vertices in your graph/array, so the traversal takes at least O(V) = O(N^2) (since O(V+E) >= O(V)). So you cannot do better than O(n^2), no matter which data structure you use (this bound does not depend on the edge structure).
int rowcount=0;
for(int i=0;i<n;i++)
{
    int sumrow=0; /* number of 1s seen in row i */
    for(int j=0;j<n;j++)
    {
        if (a[i][j]==1)
        {
            sumrow=sumrow+1;
        }
    }
    if (sumrow>0) /* row i contains at least one 1 */
    {
        rowcount=rowcount+1;
    }
}
Repeat this with the roles of the loops swapped to count the columns. This is a very simple solution and it uses minimal space; you cannot improve it asymptotically with a cleverer algorithm. You should pay attention to what algorithmic complexity really means: you can compute both counts in one pass, but that only increases the complexity of your code.
Idea: store the sum of numbers for each row and column in the matrix.
Additional storage: O(n * log(n)) bits, assuming O(log(n)) bits to store one sum.
Time required to count the nonzero rows and columns: O(n).
This is a time-optimized algorithm, not a "space and time optimized algorithm" - it requires more space but less time.
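A sketch of that idea (names are mine): keep the per-row and per-column sums up to date as cells are written, so the final counting is a single O(n) pass over the 2n sums.

#include <vector>

// An n x n 0/1 matrix that maintains row and column sums on every write,
// so counting the rows/columns containing at least one 1 takes O(n) time.
struct CountingMatrix
{
    int n;
    std::vector<std::vector<int>> a;
    std::vector<int> rowSum, colSum;

    explicit CountingMatrix(int n) : n(n), a(n, std::vector<int>(n, 0)), rowSum(n, 0), colSum(n, 0) {}

    void set(int i, int j, int v)  // v is 0 or 1
    {
        rowSum[i] += v - a[i][j];  // keep the sums consistent with the cell
        colSum[j] += v - a[i][j];
        a[i][j] = v;
    }

    int rowsWithOne() const
    {
        int count = 0;
        for (int s : rowSum) if (s > 0) count++;
        return count;
    }

    int colsWithOne() const
    {
        int count = 0;
        for (int s : colSum) if (s > 0) count++;
        return count;
    }
};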
We have N numbers in a stack and we want to sort them with the minimum number of operations. The only available operation is reversing the last K numbers at the top of the stack (K can be between 2 and N).
For example, to sort the sequence "2 3 4 5 1", we need 2 steps:
2 3 4 5 1 ---> 1 5 4 3 2 ---> 1 2 3 4 5
Is there any polynomial algorithm to find the minimum number of steps needed?
I think you are talking about the famous Pancake sorting algorithm.
Quoting from Wikipedia: "The maximum number of flips required to sort any stack of n pancakes has been shown to lie between (15/14)n and (18/11)n, but the exact value is not known. The simplest pancake sorting algorithm requires at most 2n−3 flips. In this algorithm, a variation of selection sort, we bring the largest pancake not yet sorted to the top with one flip, and then take it down to its final position with one more, then repeat this for the remaining pancakes. Note that we do not count the time needed to find the largest pancake, only the number of flips; if we wished to create a real machine to execute this algorithm in linear time, it would have to both perform prefix reversal (flips) and be able to find the maximum of a range of consecutive numbers in constant time"
It can be done in 2N-3 steps (worst case):
Find the position of '1'
Shuffle it to the end (one step)
Shuffle it to the beginning (reverse all N)
Find the position of 2
Shuffle to the end
Shuffle to the beginning (reverse last N-1)
Repeat...
When you get to consider element N-1, it is either already in the right place, or at the end. Worst case you need one more reversal to finish. This gives you 2N-3.
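Here is a sketch of that procedure (my own code; I model the top of the stack as the back of a vector, so "reverse the last K" reverses the vector's tail). Each value costs at most two flips and the final one at most one, giving the 2N-3 worst case:

#include <algorithm>
#include <vector>

// Flip the last k elements (the top of the stack is the back of the vector).
void flip(std::vector<int>& a, int k)
{
    std::reverse(a.end() - k, a.end());
}

// Pancake-style sort following the steps above: for each value in increasing
// order, flip it to the top, then flip it down into place. Returns the number
// of flips used, which is at most 2N - 3.
int pancakeSort(std::vector<int>& a)
{
    const int n = a.size();
    int flips = 0;
    for (int i = 0; i < n - 1; i++)
    {
        // Position of the smallest element not yet placed.
        int p = std::min_element(a.begin() + i, a.end()) - a.begin();
        if (p == i)
            continue;       // already in place: no flips needed
        if (p != n - 1)
        {
            flip(a, n - p); // shuffle it to the end (top)...
            flips++;
        }
        flip(a, n - i);     // ...then reverse it down into position i
        flips++;
    }
    return flips;
}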
It is possible that you can do better for a given sequence when you take advantage of some intrinsic order. I have a hunch that an initial step that maximizes the "order" of the elements might be good: that is, do an initial step such that the number of elements that have all smaller elements to their left is greatest. For example, starting with 43215, an initial complete reversal gives 51234 (order number = 3), after which my algorithm gets the correct order in just two steps. I'm not sure whether this generalizes.
This is an interview question I saw online, and I am not sure I have the correct idea for it.
The problem is here:
Design an algorithm to find the two largest elements in a sequence of n numbers.
The number of comparisons needs to be n + O(log n).
I think I might use quicksort and stop when the two largest elements are found?
But I'm not 100% sure about it. If anyone has an idea about it, please share.
Recursively split the array, find the largest element in each half, then find the largest element that the overall largest element was ever compared against. The first part requires n - 1 compares; the last part requires O(log n). Here is an example:
1 2 5 4 9 7 8 7 5 4 1 0 1 4 2 3
2 5 9 8 5 1 4 3
5 9 5 4
9 5
9
At each step I'm merging adjacent numbers and taking the larger of the two. It takes n - 1 compares to get down to the largest number, 9. Then, if we look at every number that 9 was compared against (5, 5, 8, 7), we see that the largest one was 8, which must be the second largest in the array. Since there are O(log n) levels, this takes O(log n) further compares.
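Here is a sketch of this tournament scheme (all names are mine; it assumes the array has at least two elements):

#include <algorithm>
#include <utility>
#include <vector>

// Tournament method: pair elements off, keep the winners, and remember every
// value the eventual champion beat. Finding the largest uses n - 1 compares;
// the second largest is the maximum of the champion's O(log n) opponents.
std::pair<int, int> twoLargest(const std::vector<int>& values)
{
    // Each entrant carries the list of opponents it has beaten so far.
    std::vector<std::pair<int, std::vector<int>>> round;
    for (int v : values)
        round.push_back({v, {}});
    while (round.size() > 1)
    {
        std::vector<std::pair<int, std::vector<int>>> next;
        for (std::size_t i = 0; i + 1 < round.size(); i += 2)
        {
            std::size_t w = (round[i].first >= round[i + 1].first) ? i : i + 1;
            std::size_t l = (w == i) ? i + 1 : i;
            round[w].second.push_back(round[l].first);  // record the loser
            next.push_back(std::move(round[w]));
        }
        if (round.size() % 2 == 1)
            next.push_back(std::move(round.back()));    // odd one out gets a bye
        round = std::move(next);
    }
    int largest = round[0].first;
    int second = *std::max_element(round[0].second.begin(), round[0].second.end());
    return {largest, second};
}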
For only the 2 largest elements, a normal selection scan may be good enough; it's basically O(2*n).
For the more general "select the k largest elements from an array of size n" question, quicksort is good thinking, but you don't have to actually sort the whole array.
Try this:
you pick a pivot and split the array into N[m] and N[n-m].
if k < m, forget the N[n-m] part; do step 1 in N[m].
if k > m, forget the N[m] part; do step 1 in N[n-m]. This time, you try to find the first k-m elements in N[n-m].
if k = m, you've got it.
It's basically like binary-searching for a position in an array: you need about log(N) iterations, and you touch about N/2^i elements in iteration i, so it's an N + log(N) algorithm (which meets your requirement) and has very good practical performance (faster than a plain quicksort, since it avoids any full sorting, so the output is not ordered).
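Here is a sketch of that partition-and-recurse idea (this is essentially quickselect; names are mine). After the call, the k largest elements occupy a[0..k-1] in no particular order:

#include <utility>
#include <vector>

// Lomuto-style partition, descending: elements greater than the pivot come
// first, then the pivot itself at index m. Operates on a[lo..hi] inclusive.
int partitionDesc(std::vector<int>& a, int lo, int hi)
{
    std::swap(a[lo + (hi - lo) / 2], a[hi]);  // move a middle pivot to the end
    int pivot = a[hi];
    int m = lo;
    for (int i = lo; i < hi; i++)
    {
        if (a[i] > pivot)
            std::swap(a[i], a[m++]);
    }
    std::swap(a[m], a[hi]);
    return m;
}

// Recurse only into the side that still contains the k-th boundary.
// Usage: selectKLargest(a, 0, a.size() - 1, k), with 1 <= k <= a.size().
void selectKLargest(std::vector<int>& a, int lo, int hi, int k)
{
    if (lo >= hi)
        return;
    int m = partitionDesc(a, lo, hi);
    int left = m - lo;  // number of elements strictly larger than the pivot
    if (k <= left)
        selectKLargest(a, lo, m - 1, k);             // forget the right part
    else if (k > left + 1)
        selectKLargest(a, m + 1, hi, k - left - 1);  // find the rest on the right
    // k == left + 1: the pivot is exactly the k-th largest; done
}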
I'm trying to find the optimal solution for a little puzzle game called Twiddle (an applet with the game can be found here). The game has a 3x3 matrix with the numbers from 1 to 9. The goal is to bring the numbers into the correct order using the minimum number of moves. In each move you can rotate a 2x2 square either clockwise or counterclockwise.
I.e. if you have this state
6 3 9
8 7 5
1 2 4
and you rotate the upper left 2x2 square clockwise you get
8 6 9
7 3 5
1 2 4
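For clarity, here is what one move does in code (a sketch; the board representation is mine):

#include <array>

using Board = std::array<std::array<int, 3>, 3>;

// Rotate the 2x2 square whose top-left corner is (r, c); r and c are 0 or 1.
// Clockwise if cw is true, counterclockwise otherwise.
void rotate(Board& b, int r, int c, bool cw)
{
    int tl = b[r][c], tr = b[r][c + 1], br = b[r + 1][c + 1], bl = b[r + 1][c];
    if (cw)
    {
        b[r][c] = bl; b[r][c + 1] = tl;          // each tile moves one corner
        b[r + 1][c + 1] = tr; b[r + 1][c] = br;  // over, clockwise
    }
    else
    {
        b[r][c] = tr; b[r][c + 1] = br;
        b[r + 1][c + 1] = bl; b[r + 1][c] = tl;
    }
}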
I'm using an A* search to find the optimal solution. My f() is simply the number of rotations needed. My heuristic function already leads to the optimal solution (if I modify it; see the notice at the end), but I don't think it's the best one you can find. My current heuristic takes each corner, looks at the number in that corner, and calculates the Manhattan distance to the position this number will have in the solved state (which gives me the number of rotations needed to bring the number to that position), and sums all these values. E.g., take the above example:
6 3 9
8 7 5
1 2 4
and this end state
1 2 3
4 5 6
7 8 9
then the heuristic does the following
6 is currently at index 0 and should be at index 5: 3 rotations needed
9 is currently at index 2 and should be at index 8: 2 rotations needed
1 is currently at index 6 and should be at index 0: 2 rotations needed
4 is currently at index 8 and should be at index 3: 3 rotations needed
h = 3 + 2 + 2 + 3 = 10
Additionally, if h is 0 but the state is not completely ordered, then h = 1.
But there is a problem: you rotate 4 elements at once, so there are rare cases where you can do two (or more) of these estimated rotations in one move. This means the heuristic can overestimate the distance to the solution.
My current workaround is to simply exclude one of the corners from the calculation, which solves this problem at least for my test cases. I've done no research into whether that really solves the problem or whether this heuristic still overestimates in some edge cases.
So my question is: What is the best heuristic you can come up with?
(Disclaimer: This is for a university project, so it is a bit of homework. But I'm free to use any resource I can come up with, so it's okay to ask you guys. Also I will credit Stackoverflow for helping me ;) )
Simplicity is often most effective. Consider the nine digits (in the rows-first order) as forming a single integer. The solution is represented by the smallest possible integer i(g) = 123456789. Hence I suggest the following heuristic h(s) = i(s) - i(g). For your example, h(s) = 639875124 - 123456789.
You can get an admissible (i.e., never overestimating) heuristic from your approach by taking all numbers into account, dividing by 4, and rounding up to the next integer.
To improve the heuristic, you could look at pairs of numbers. If e.g. in the top left the numbers 1 and 2 are swapped, you need at least 3 rotations to fix them both up, which is a better value than 1+1 from considering them separately. In the end, you still need to divide by 4. You can pair up numbers arbitrarily, or even try all pairs and find the best division into pairs.
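To make the admissible version concrete, here is a sketch (code and names are mine): a single 2x2 rotation moves exactly four tiles, each by one grid step, so the summed per-tile Manhattan distances, divided by 4 and rounded up, never overestimate the number of rotations.

#include <array>
#include <cstdlib>

// Admissible heuristic for 3x3 Twiddle: sum of the Manhattan distances of
// all nine tiles to their goal cells, divided by 4 and rounded up.
// state[r][c] holds the tile (1..9) at row r, column c.
int heuristic(const std::array<std::array<int, 3>, 3>& state)
{
    int sum = 0;
    for (int r = 0; r < 3; r++)
    {
        for (int c = 0; c < 3; c++)
        {
            int v = state[r][c] - 1;            // tile v+1 belongs at...
            int goalR = v / 3, goalC = v % 3;   // ...row v/3, column v%3
            sum += std::abs(r - goalR) + std::abs(c - goalC);
        }
    }
    return (sum + 3) / 4;  // divide by 4, rounding up
}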
All elements should be taken into account when calculating the distance, not just the corner elements. Imagine that all corner elements 1, 3, 7, 9 are in their home positions but all the others are not.
It could be argued that elements that are neighbors in the final state should tend to move closer together with each step, so neighbor distance could also be part of the heuristic, though probably with weaker influence than the distance of each element to its final position.