I've made a program that counts the cost of the mergesort algorithm for different values of n. I keep a cost variable and increment it every time a loop iteration or condition check occurs. Once I get the sorted array, I feed it back into merge sort as input, and in a third case I reverse the sorted array so that it should be the worst case. But for all three cases I get the same cost, so what are the best and worst cases for mergesort?
The cost of mergesort, implemented classically either as a top-down recursive function or as a bottom-up iteration with a small local array of pointers, is the same: O(N log N). The number of comparisons will vary depending on the actual contents of the array, but by at most a factor of 2.
You can improve this algorithm at linear cost by adding an initial comparison between the last element of the left slice and the first element of the right slice in the merge phase. If that comparison yields <=, you can skip the merge phase for this pair of slices.
With this modification, a fully sorted array will sort much faster, with linear complexity, making it the best case, and a partially sorted array will behave better as well.
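Here is a minimal sketch of that check in the merge phase (the function name and slice bounds are mine, not from any particular implementation):

```python
def merge(arr, lo, mid, hi):
    """Merge the adjacent sorted slices arr[lo:mid] and arr[mid:hi]."""
    # The extra comparison: if the last element of the left slice is <=
    # the first element of the right slice, the range is already sorted.
    if arr[mid - 1] <= arr[mid]:
        return
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if arr[i] <= arr[j]:
            merged.append(arr[i]); i += 1
        else:
            merged.append(arr[j]); j += 1
    merged.extend(arr[i:mid])
    merged.extend(arr[j:hi])
    arr[lo:hi] = merged
```

On a fully sorted array every merge is skipped, so each merge call contributes only one comparison and the whole sort runs in linear time.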
So I guess it's because it just compares A[k] and A[k-1] and does it in one sweep, but it's still not clear. Can someone explain better?
Thanks
This link shows a graphical representation of sorting algorithms running on different types of data sets.
As you can see, when the data is already sorted the complexity is reduced to N, which is equivalent to the number of input elements.
The link provided gives a clear picture of how it's more efficient.
You answered your own question: for a nearly sorted array, insertion sort will only need a handful of O(n) passes to complete. Contrast that with a divide-and-conquer sorting algorithm like merge sort, which takes O(n log n). For any non-trivial value of n, a divide-and-conquer algorithm will need many O(n) passes, even if the array is almost completely sorted, whereas insertion sort might only require a few.
Insertion sort is a faster and more improved sorting algorithm than selection sort. In selection sort the algorithm iterates through all of the data on every pass, whether it is already sorted or not. Insertion sort works differently: instead of iterating through all of the data after every pass, it only traverses as far as it needs to until the segment being sorted is sorted.

Again, there are two loops required by insertion sort and therefore two main variables, which in this case are named 'i' and 'j'. Variables 'i' and 'j' begin on the same index after every pass of the first loop; the second loop only executes if 'j' is greater than index 0 AND arr[j] < arr[j - 1]. In other words, it runs only while 'j' hasn't reached the beginning of the data AND the value at index 'j' is smaller than the value at the index to the left of 'j', and 'j' is decremented on each step. As long as these two conditions are met the second loop keeps executing, and this is what sets insertion sort apart from selection sort: only the data that needs to be sorted is sorted.
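A sketch of what that description corresponds to in code (using the same variable names 'i' and 'j'):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        j = i
        # The second loop: runs only while j hasn't reached the beginning
        # and the element at j is smaller than its left neighbor.
        while j > 0 and arr[j] < arr[j - 1]:
            arr[j], arr[j - 1] = arr[j - 1], arr[j]
            j -= 1
    return arr
```

On already-sorted data the inner loop never executes, so each pass does constant work and the whole sort is linear.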
The general goal of a sorting algorithm is to minimize the number of comparisons. Sorting algorithms have a lower bound and an upper bound on the number of comparisons (n log n worst case for merge and heap sorts, n log n average case for quicksort). In the most general case, you'd go with an algorithm that happens to have the best average or best worst-case number of comparisons. However, when you know something about the data (e.g., the array is already sorted, or almost sorted), you can exploit the fact that insertion sort's lower bound is far lower than that of the n log n sorts.
For example, if you have the array [1,2,3,4,5,6,7,9] and you need to insert 8 into it, you can either insert it at the end and sort the array using a vanilla n log n sort (which will do roughly 28 comparisons to sort the data to [1,2,3,4,5,6,7,8,9]), or let insertion sort place the 8 at the right position in only about 8 comparisons.
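A small illustration of that count (this sketch scans from the front; scanning from the back would find the spot even faster, so treat the exact number as approximate):

```python
def insert_with_count(sorted_list, x):
    """Insert x into a sorted list, counting comparisons (front-to-back scan)."""
    comparisons = 0
    i = 0
    while i < len(sorted_list):
        comparisons += 1
        if sorted_list[i] >= x:
            break
        i += 1
    sorted_list.insert(i, x)
    return comparisons

print(insert_with_count([1, 2, 3, 4, 5, 6, 7, 9], 8))  # prints 8
```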
I think I have a solution but I am not completely sure.
My solution was to convert the arrays to linked lists, then merge and sort the linked lists recursively.
I've read that this will take O(1) extra space in memory, but I'm not sure the runtime would be faster than linear time.
Any suggestions, please?
There is a special case where you can merge 2 arrays in constant time:
The arrays are adjacent, that is they are slices of the same array and the last element of the first is just before the first element of the second.
The last element of the first array is less or equal to the first element of the second array.
The case can be checked with a single test.
This may seem ludicrous, but it is a very common case for mergesort and testing for this special case first increases mergesort performance significantly for arrays that are already fully or partially sorted. A similar test can be used to handle arrays that are sorted in reverse order, and carefully crafted code can achieve O(N) sorting times for both sorted and reverse sorted arrays while keeping the same number of element comparisons for the general case.
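In code, that single test is one comparison on the slice boundary (a sketch; the analogous reverse-order test mentioned above is shown alongside it, and the names here are illustrative, not from a library):

```python
def classify_adjacent_runs(arr, lo, mid, hi):
    """Classify the adjacent sorted slices arr[lo:mid] and arr[mid:hi]
    before merging them."""
    if arr[mid - 1] <= arr[mid]:
        return "already merged"    # the merge can be skipped entirely
    if arr[hi - 1] < arr[lo]:
        return "reverse order"     # every left element belongs after the right slice
    return "general case"          # fall through to a normal merge
```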
My solution was to convert the arrays to linked lists.
That takes O(N) time and memory.
This is a practice exam question I'm working on. I have a general idea of what the answer is but would like some clarification.
The following is a sorting algorithm for n integers in an array. In step 1, you iterate through the array, and compare each pair of adjacent integers and swap each pair if they are in the wrong order. In step 2, you repeat step 1 as many times as necessary until there is an iteration where no swaps are made (in which case the list is sorted and you can stop). What is the worst case complexity of this algorithm?
What is the best case complexity of this algorithm?
Basically the algorithm presented here is a bubble sort.
The worst case complexity here is O(n^2).
The best case complexity is O(n).
Here is the explanation:
The best case situation here would be an already-sorted array: all you need is n comparisons (to be precise, n - 1), so the complexity is O(n).
The worst case situation is a reverse-ordered array.
To better understand why it's O(n^2), consider the smallest element of a reverse-ordered array, which starts at the last index; to make the array sorted you need to move it to the first index. A pass of the algorithm described in the question can carry a large element far to the right, but it moves a small element left by at most one position, so each pass takes the smallest element only one index toward its actual position (the first index here), at a cost of O(n) comparisons per pass. Hence n - 1 passes and O(n^2) comparisons are needed to move it into place.
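For concreteness, here is a sketch of the described algorithm (bubble sort with the early-exit check), instrumented to count comparisons:

```python
def bubble_sort_comparisons(arr):
    """Sort arr with the algorithm from the question; return the comparison count."""
    comparisons = 0
    while True:
        swapped = False
        for i in range(len(arr) - 1):        # step 1: one pass over adjacent pairs
            comparisons += 1
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:                      # step 2: stop on a swap-free pass
            return comparisons

print(bubble_sort_comparisons(list(range(10))))         # sorted input: 9, i.e. O(n)
print(bubble_sort_comparisons(list(range(10, 0, -1))))  # reverse input: 90, i.e. O(n^2)
```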
In the best case, no swapping will be required and a single pass of the array would suffice. So the complexity is O(n).
In the worst case, the elements of the array could be in the reverse order. So the first iteration requires (n-1) swaps, the next one (n-2), and so on...
So it would lead to O(n^2) complexity.
As others have said, this is bubble sort. But if you are measuring complexity in terms of comparisons, you can easily be more precise than big-O.
In the best case, you need only compare n-1 pairs to verify they're all in the right order.
In the worst case, the last element is the one that should be in the first position, so n-1 passes will be needed, each advancing that element one more position toward the front of the list, plus one final pass to confirm that no swaps remain. Each pass requires n-1 comparisons. In all, then, n(n-1) comparisons are needed.
If anyone can give some input on my logic, I would very much appreciate it.
Which method runs faster for an array with all keys identical, selection sort or insertion sort?
I think that this would be similar to when the array is already sorted, so that insertion sort will be linear, and the selection sort quadratic.
Which method runs faster for an array in reverse order, selection sort or insertion sort?
I think that they would run similarly, since the values at every position will have to be changed. The worst case scenario for insertion sort is reverse order, so that would mean it is quadratic, and then the selection sort would already be quadratic as well.
Suppose that we use insertion sort on a randomly ordered array where elements have only one of three values. Is the running time linear, quadratic, or something in between?
Since it is randomly ordered, I think that would mean that the insertion sort would have to perform many times more operations than the number of values. If that's the case, then it's not linear. So it would likely be quadratic, or perhaps a little below quadratic.
What is the maximum number of times during the execution of Quick.sort() that the largest item can be exchanged, for an array of length N?
The largest item cannot be exchanged more times than there are positions available, since it should always be approaching its correct position. So, going from the first position to the last, it would be exchanged N times.
About how many compares will quick.sort() make when sorting an array of N items that are all equal?
When drawing out the quicksort, a triangle can be drawn around the compared objects at every phase that is N tall and N wide; the area of this triangle would equal the number of compares performed, which would be (N^2)/2.
Here are my comments on your comments:
Which method runs faster for an array with all keys identical, selection sort or insertion sort?
I think that this would be similar to when the array is already sorted, so that insertion sort will be linear, and the selection sort quadratic.
Yes, that's correct. Insertion sort will do O(1) work per element and visit O(n) elements for a total runtime of O(n). Selection sort always runs in time Θ(n^2) regardless of the input structure, so its runtime will be quadratic.
Which method runs faster for an array in reverse order, selection sort or insertion sort?
I think that they would run similarly, since the values at every position will have to be changed. The worst case scenario for insertion sort is reverse order, so that would mean it is quadratic, and then the selection sort would already be quadratic as well.
You're right that both algorithms have quadratic runtime. The algorithms should actually have relatively comparable performance, since they'll make the same total number of comparisons.
Suppose that we use insertion sort on a randomly ordered array where elements have only one of three values. Is the running time linear, quadratic, or something in between?
Since it is randomly ordered, I think that would mean that the insertion sort would have to perform many times more operations than the number of values. If that's the case, then it's not linear. So it would likely be quadratic, or perhaps a little below quadratic.
This should take quadratic time (time Θ(n^2)). Take just the elements in the back third of the array. About a third of these elements (roughly n/9 of them) will be 1's, and in order to insert each into the sorted sequence it has to be moved more than a third of the length of the array, i.e. at least n/3 positions. Therefore, the work done would be at least (n/9)(n/3) = n^2/27, which is quadratic.
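A quick empirical check of that bound (a sketch that counts element moves in a plain insertion sort on random three-valued arrays; the counts roughly quadruple when n doubles, the signature of quadratic growth):

```python
import random

def insertion_sort_moves(arr):
    """Count the one-position shifts insertion sort performs."""
    moves = 0
    for i in range(1, len(arr)):
        j = i
        while j > 0 and arr[j] < arr[j - 1]:
            arr[j], arr[j - 1] = arr[j - 1], arr[j]
            j -= 1
            moves += 1
    return moves

for n in (1000, 2000, 4000):
    data = [random.choice((1, 2, 3)) for _ in range(n)]
    print(n, insertion_sort_moves(data))   # roughly n^2 / 6 moves on average
```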
What is the maximum number of times during the execution of Quick.sort() that the largest item can be exchanged, for an array of length N?
The largest item cannot be exchanged more times than there are positions available, since it should always be approaching its correct position. So, going from the first position to the last, it would be exchanged N times.
There's an off-by-one error here. When the array has size 1, the largest element can't be moved any more, so the maximum number of moves would be N - 1.
About how many compares will quick.sort() make when sorting an array of N items that are all equal?
When drawing out the quicksort, a triangle can be drawn around the compared objects at every phase that is N tall and N wide; the area of this triangle would equal the number of compares performed, which would be (N^2)/2.
This really depends on the implementation of Quick.sort(). Quicksort with ternary partitioning would only do O(n) total work because all values equal to the pivot are excluded in the recursive calls. If this isn't done, then your analysis would be correct.
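For reference, here is a sketch of quicksort with ternary partitioning (the classic Dijkstra three-way scheme; this is one common way to implement it, not necessarily what Quick.sort() does):

```python
def quicksort_3way(arr, lo=0, hi=None):
    """Quicksort with three-way partitioning around arr[lo] as the pivot."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[lo]
    lt, i, gt = lo, lo + 1, hi
    # Invariant: arr[lo:lt] < pivot, arr[lt:i] == pivot, arr[gt+1:hi+1] > pivot.
    while i <= gt:
        if arr[i] < pivot:
            arr[lt], arr[i] = arr[i], arr[lt]
            lt += 1
            i += 1
        elif arr[i] > pivot:
            arr[i], arr[gt] = arr[gt], arr[i]
            gt -= 1
        else:
            i += 1
    quicksort_3way(arr, lo, lt - 1)   # keys equal to the pivot are never revisited
    quicksort_3way(arr, gt + 1, hi)
```

On an array of all-equal items the first partition absorbs every element into the middle band, both recursive calls get empty ranges, and only O(n) comparisons are made.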
Hope this helps!
Suppose we have an array of size n with all the elements identical. What will the running time of mergesort be? Will it be linear?
This depends on how the algorithm is implemented.
With a standard "vanilla" implementation of mergesort, the time required to sort an array will always be Θ(n log n) because the merges required at each step each take linear time.
However, with the appropriate optimizations, it's possible to get this to run in time O(n). In many mergesort implementations, the input array is continuously modified so that larger and larger ranges are sorted, and when a merge step occurs, the algorithm uses an external buffer to merge two adjacent sorted ranges. In that case, there's a nifty optimization you can do: before doing the merge, check if the last element of the first range is less than or equal to the first element of the second range. If so, the two ranges taken together are already sorted, so no merging needs to be done.
Suppose you perform this optimization and try sorting an array where all elements are already sorted. What happens? Well, each call to mergesort will fire off two more recursive calls. After those return, it can check the endpoints of the sorted ranges and will notice that they're already in sorted order, so there's no more work left to be done. Overall, this does O(1) work per call, so we have this recurrence relation for the time complexity of the algorithm:
T(n) = 2T(n/2) + O(1)
This solves to O(n), so only linear work is done.
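A quick way to see why: the recursion tree has n leaves and n - 1 internal calls, each doing O(1) work when every merge is skipped. A sketch that just counts those calls on an already-sorted input of size n:

```python
def count_calls(n):
    """Count the recursive calls the optimized mergesort makes on a
    sorted array of size n; each call does O(1) work outside recursion."""
    calls = 0
    def rec(lo, hi):
        nonlocal calls
        calls += 1
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        rec(lo, mid)
        rec(mid, hi)
        # For sorted input arr[mid - 1] <= arr[mid] always holds,
        # so the merge body is skipped at O(1) cost per call.
    rec(0, n)
    return calls

for n in (1024, 2048, 4096):
    print(n, count_calls(n))   # 2n - 1 calls: linear growth
```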