If I use basic collision handling by relocating the input value to the next empty slot (linear probing), wouldn't I need n*(n+1)/2 probes in total?
Example:
Input: 0,0,0;
Allocated size=3;
Thus it would require 6 probes in total to insert all three values.
I've read that the worst-case complexity is O(n), but shouldn't it be O(n^2) then?
Each insertion is O(1) on average (with a good hash function and resizing strategy), but O(N) in the worst case. So N insertions take O(N^2) in the worst case.
It's O(N) on average.
It's indeed O(N^2) in the worst case.
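To see where the n*(n+1)/2 figure comes from, here is a minimal C++ sketch (my own illustration, not from the question) of linear probing in which every key hashes to slot 0. Insertion number i probes i slots, so the counter comes out to n(n+1)/2, i.e. 6 for n = 3:

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 3;                  // table size equals number of keys
    std::vector<bool> occupied(n, false);
    long long probes = 0;

    for (int i = 0; i < n; ++i) {
        int slot = 0;                 // every key hashes to the same slot
        ++probes;                     // first probe at the home slot
        while (occupied[slot]) {      // linear probing: walk to next slot
            slot = (slot + 1) % n;
            ++probes;
        }
        occupied[slot] = true;
    }
    // Insertion i probes i slots, so the total is 1 + 2 + ... + n = n(n+1)/2.
    std::printf("total probes: %lld\n", probes);  // prints 6 for n = 3
    return 0;
}
```

This is exactly the O(N^2) worst case for N insertions; with a decent hash function spreading the keys, the expected total would instead be O(N).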
Related
How could I generate the worst-case array for Heap Sort, for a large array of size n?
Heap Sort has O(n log(n)) time complexity in all cases (best, average, and worst).
Reference: https://cs.stackexchange.com/questions/18391/best-and-worse-case-inputs-for-heap-sort-and-quick-sort
Problem:
I have to analyze the time complexity of sorting, using Quick sort, a list of integer values which are almost sorted.
What I have done:
I have read SO Q1, SO Q2, SO Q3 and this one.
However, I have not found anything that explicitly mentions the time complexity of sorting a k-sorted array using Quick sort.
Since the time complexity of Quick sort depends on the pivot-selection strategy, and almost sorted data carries a real risk of hitting the worst case, I have used the median of three values (first, middle, last) as the pivot to avoid the worst case, as referred here.
What I think:
Since in the average case the time complexity of Quick sort is O(n log(n)) and, as mentioned here, "For any non trivial value of n, a divide and conquer algorithm will need many O(n) passes, even if the array be almost completely sorted",
I think the time complexity of sorting a k-sorted array using Quick sort is O(n log(n)), provided the worst case does not occur.
My Question:
Am I right that the time complexity of sorting a k-sorted array using Quick sort is O(n log(n)), if I try to avoid the worst case by selecting a proper pivot and the worst case indeed does not occur?
When you say the time complexity of Quick Sort, it is O(n^2), because the worst case is assumed by default. If you use another pivot-selection strategy, such as Randomized Quick Sort, the time complexity is still O(n^2) by default, but the expected time complexity is O(n log(n)), since the worst case is highly unlikely to occur. So if you can somehow prove that the worst case is guaranteed not to happen, then you can state a bound smaller than O(n^2); otherwise, by default, the worst case is assumed, no matter how unlikely it is.
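To make the pivot strategy from the question concrete, here is a sketch of Quick Sort with median-of-three pivot selection (a standard formulation; the function names are my own, and the Lomuto partition is just one possible choice). On an almost sorted input the median of first, middle, and last lands near the true median, which is what keeps the recursion balanced, though it does not rule out O(n^2) on adversarial inputs:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Put the median of a[lo], a[mid], a[hi] at a[hi] to serve as the pivot.
void median_of_three(std::vector<int>& a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) std::swap(a[mid], a[lo]);
    if (a[hi]  < a[lo]) std::swap(a[hi],  a[lo]);
    if (a[hi]  < a[mid]) std::swap(a[hi], a[mid]);
    std::swap(a[mid], a[hi]);         // median is now the pivot at a[hi]
}

int partition(std::vector<int>& a, int lo, int hi) {
    median_of_three(a, lo, hi);
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; ++j)     // Lomuto partition around the pivot
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    return i;
}

void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quicksort(a, lo, p - 1);
    quicksort(a, p + 1, hi);
}

int main() {
    std::vector<int> a = {1, 2, 4, 3, 5, 6, 8, 7};  // an almost sorted input
    quicksort(a, 0, (int)a.size() - 1);
    for (int x : a) std::printf("%d ", x);
    std::printf("\n");
    return 0;
}
```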
I am trying to prove that the amortized complexity of the insert operation on a sorted LinkedList is O(1).
I know that the worst-case time is O(n), but I am finding it hard to come up with an appropriate potential function.
I'll be glad if someone could help.
Thanks.
O(1) amortized would mean that a sequence of n insertions costs O(n) time in the worst case. That is not true here, because inserting the elements in an adversarial order (each new element belonging at the far end of the list from where the search starts) takes O(n^2) time in total.
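A minimal sketch of such a bad sequence, assuming the list is kept in ascending order and the search for the insertion point starts at the head (both assumptions are mine): inserting the values 1..n in increasing order forces insert number i to walk past i-1 nodes, for a total of n(n-1)/2 = Θ(n^2) pointer steps, so no potential function can push the amortized cost down to O(1):

```cpp
#include <cstdio>

struct Node { int val; Node* next; };

int main() {
    const int n = 1000;
    Node* head = nullptr;
    long long steps = 0;

    for (int v = 1; v <= n; ++v) {
        Node** link = &head;
        while (*link && (*link)->val < v) {  // walk past all smaller nodes
            link = &(*link)->next;
            ++steps;                         // one pointer step per node
        }
        *link = new Node{v, *link};          // splice in the new node
    }
    // Insert number i walks past i-1 nodes: total n(n-1)/2 steps.
    std::printf("pointer steps: %lld\n", steps);  // 499500 for n = 1000
    return 0;                                // nodes leaked; fine for a demo
}
```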
If an operation has an amortized time of O(1), can it ever, worst-case, take O(N^2) time?
Yes, it can. Amortized complexity takes into account how frequently the worst case occurs. So as long as the worst case appears at most about once in every N^2 operations, the amortized complexity will be constant.
Let's take a simple example: the dynamically expanding array (I will call it a vector, as it is called in C++) in most languages has amortized constant time for pushing an element to its back. Most of the time pushing an element is a simple assignment of a value, but once in a while all the allocated slots are filled and we need to re-allocate the vector. This is the worst case of a push_back operation, and when it happens the operation has linear complexity. Still, the way the vector grows ensures that re-allocation is infrequent enough: each time the vector is re-allocated it doubles its size, so before another re-allocation happens we will have n simple push_back operations (assuming n was the capacity of the vector before re-allocation). As a result, the worst case of linear complexity appears at most once in a linear number of operations.
Analogously to the case above, imagine a data structure that re-allocates in O(n^2) time but ensures that such a re-allocation happens at most once per n^2 constant-time operations. That would be an example of an operation with amortized complexity O(1) and worst-case complexity O(N^2).
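Here is a minimal sketch of the doubling vector described above, counting element writes. The n ordinary pushes contribute n writes and all re-allocation copies together contribute fewer than 2n, so the total stays under 3n, which is the amortized O(1) in action:

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
    const int n = 1000000;
    int capacity = 1, size = 0;
    int* data = (int*)std::malloc(capacity * sizeof(int));
    long long writes = 0;

    for (int i = 0; i < n; ++i) {
        if (size == capacity) {        // worst-case push: re-allocate
            capacity *= 2;             // doubling keeps re-allocation rare
            int* bigger = (int*)std::malloc(capacity * sizeof(int));
            for (int j = 0; j < size; ++j) { bigger[j] = data[j]; ++writes; }
            std::free(data);
            data = bigger;
        }
        data[size++] = i;              // ordinary push: one write
        ++writes;
    }
    // Copies sum to 1 + 2 + 4 + ... < 2n, so writes/n stays below 3.
    std::printf("writes per push: %.2f\n", (double)writes / n);
    std::free(data);
    return 0;
}
```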
I am fairly clear on how to program it, but I am not sure about the definition, i.e. how to write it down in mathematical terms.
A normal heapsort is done on N elements; in O notation that would be O(log(n))?
I just started with heapsort, so I might be a little bit off here.
But how can I, for example, look for a random element when there are N elements?
And then pick that random element and delete it?
I was thinking that in a worst-case situation it has to go through the whole tree (because the element could be either in the first place or in the last place, i.e. the highest or the lowest).
But how can I write that down in mathematical terms?
Heapsort's worst case performance is O(n log n), and to quote alestanis:
Max in max-heap: O(1). Min in min-heap: O(1). Opposite cases in O(n).
Here's an SO-answer explaining how to do the opposite cases in O(1) if you create the heap yourself.
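For illustration, a small sketch of the quoted facts using the standard library's heap helpers: the maximum of a max-heap is its root and costs O(1) to read, while the minimum is one of the leaves and has to be searched for in O(n):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> h = {5, 1, 9, 3, 7, 2, 8};
    std::make_heap(h.begin(), h.end());      // arrange as a max-heap, O(n)

    int max = h.front();                     // O(1): the root holds the max
    // O(n): the min is among the leaves (the back half of the array), but
    // scanning the whole array gives the same simple linear bound.
    int min = *std::min_element(h.begin(), h.end());

    std::printf("max = %d, min = %d\n", max, min);
    return 0;
}
```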
Building a max-heap from an array is O(n) in the worst case, and a single max-heapify is O(log n) in the worst case, so Heap Sort's worst case is O(n log n).
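A sketch of where those two bounds come from (function names are my own): building the heap calls sift_down on the n/2 internal nodes, which sums to O(n), and the sorting phase performs n-1 sift_downs of O(log n) each, giving the O(n log n) worst case:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Restore the max-heap property below index i; O(log n) per call.
void sift_down(std::vector<int>& a, int i, int n) {
    while (2 * i + 1 < n) {
        int child = 2 * i + 1;
        if (child + 1 < n && a[child + 1] > a[child]) ++child;
        if (a[i] >= a[child]) break;
        std::swap(a[i], a[child]);
        i = child;
    }
}

void heapsort(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = n / 2 - 1; i >= 0; --i)  // build max-heap: O(n) total
        sift_down(a, i, n);
    for (int end = n - 1; end > 0; --end) {
        std::swap(a[0], a[end]);          // move current max to the back
        sift_down(a, 0, end);             // restore heap: O(log n)
    }                                     // n-1 iterations: O(n log n)
}

int main() {
    std::vector<int> a = {5, 1, 9, 3, 7, 2, 8};
    heapsort(a);
    for (int x : a) std::printf("%d ", x);
    std::printf("\n");
    return 0;
}
```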