Best running time - algorithm

What is the best running time using theta notation for:
Find an element in a sorted array
Find an element in a sorted linked list
Inserting an element in a sorted array, once the position is found
Inserting an element in a sorted linked list, once the position is found
So far I have 2) as Θ(n) and 4) as Θ(1), but only because I remember my prof saying the answers in class. Is there an explanation of how to get these?

First of all, reading one of your answers it seems like you might be asking for complexity in big O. Theta notation is used when the complexity is bounded asymptotically both above and below; big O notation is used when the complexity is bounded asymptotically only from above.
1. Find an element in a sorted array:
Using binary search it is O(log n), but in the best case Ω(1), since the first probe might land exactly on the element.
2. Find an element in a sorted linked list
You can't use binary search here. There is no way to jump to a particular position without traversing the nodes before (or after) it, so in the worst case you traverse all n nodes of the list to find a particular number: O(n).
It is Ω(1) because in the best case you find the element right at the beginning.
3. Inserting an element in a sorted array, once the position is found
O(n), since you have to shift all the numbers to the right of the new insertion position.
Ω(1), because in the best case you just add it at the end and nothing needs to shift.
4. Inserting an element in a sorted linked list, once the position is found
Θ(1), i.e. both O(1) and Ω(1), because adding a new element at a known position (once you have a pointer to that position) is a constant number of pointer updates; see the sketch below.
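To make cases 1 and 4 concrete, here is a minimal sketch (the `Node` class and method names are illustrative, not from the thread):

```java
class Node {
    int value;
    Node next;
    Node(int value, Node next) { this.value = value; this.next = next; }
}

class SortedOps {
    // Case 1: binary search in a sorted array. O(log n) worst case,
    // Ω(1) best case (the first probe can hit the target).
    static int find(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // midpoint without overflow
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                       // not present
    }

    // Case 4: insertion into a sorted linked list once the predecessor
    // is known. Θ(1): two pointer updates, independent of list length.
    static void insertAfter(Node predecessor, int value) {
        predecessor.next = new Node(value, predecessor.next);
    }
}
```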

Related

Insertion sort time complexity

The usual Θ(n²) implementation of insertion sort to sort an array uses linear search to identify the position where an element is to be inserted into the already-sorted part of the array. If, instead, we use binary search to identify the position, the worst-case running time will then
A) remain Θ(n²)
B) become Θ(n(log n)²)
C) become Θ(n log n)
D) become Θ(n)
This is my first question on Stack Overflow, please forgive any mistakes.
First of all, the question is about insertion sort, not quicksort as you display above.
The correct answer is A) remain Θ(n²). Even if you can binary-search the position of the element in the already-sorted part of the array, you still have to move every element greater than it one position to the right. That causes Θ(k) moves when the original array's element ordering is from greatest to least, where k is the initial index of the element being added to the sorted part, and summing those moves over all elements gives a total running time of Θ(n²).
Question answer aside: the average-case time complexity of randomized quicksort is O(n log n), and it can be proved if you have a mathematical background in expected values (probability). You can find more about it in the quicksort section of the book Introduction to Algorithms (Cormen).
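To see where the time goes, here is a minimal binary-insertion-sort sketch (class and method names are mine, not from the question):

```java
import java.util.Arrays;

class BinaryInsertionSort {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            // O(log n): Arrays.binarySearch on the sorted prefix a[0..i)
            // returns -(insertionPoint) - 1 when the key is absent.
            int pos = Arrays.binarySearch(a, 0, i, key);
            if (pos < 0) pos = -pos - 1;
            // Θ(k): the shift still moves up to i elements, so the
            // worst case remains Θ(n²) despite the faster search.
            System.arraycopy(a, pos, a, pos + 1, i - pos);
            a[pos] = key;
        }
    }
}
```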

Incorrect Worst-case time complexity of Insertion Sort

Hi, I am new to algorithms and quite fascinated by them.
I am trying to figure out the worst-case time complexity of insertion sort, and it is mentioned everywhere as O(n²). Instead, I think we can get the time complexity down to O(n log n).
Here is my explanation,
Insertion sort looks at the 1st element and assumes it is sorted. Next it looks at the 2nd element, compares it with the sorted sublist of one element that precedes it, and inserts it according to those comparisons. The process repeats for each subsequent element.
Everywhere it is mentioned that inserting an element into the predecessor sorted sublist (basically a linear search) takes O(n) time, and since we do this operation for n elements, the total is O(n²).
However, if we use binary insertion to place the element into the predecessor sublist, it should take O(log n) time, where n is the length of the sublist: compare the new element with the middle element of the sorted sublist, and if it is greater than the middle element then it lies between the middle element and the last element of the sublist, and so on.
As we repeat the operation for n items, it should take us O(n log n). We can use the binary search approach because we know the predecessor sublist is sorted.
So shouldn't the worst-case time complexity be O(n log n) instead of O(n²)?
Yes, you can find the insertion point in O(log n), but then you have to make space to insert the item, and that takes O(n) time.
Consider this partially-sorted array:
[1,2,3,5,6,7,9,4]
You get to the last item, 4, and do a binary search to locate the position where it needs to be inserted. But now you have to make space, which means moving items 9, 7, 6, and 5 down one place in the array. That shifting is what makes insertion sort O(n²).
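A small demo of that last step (hypothetical class name; `Arrays.binarySearch` returns `-(insertionPoint) - 1` when the key is absent):

```java
import java.util.Arrays;

class ShiftDemo {
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 5, 6, 7, 9, 4};
        int key = a[7];                               // the trailing 4
        int pos = Arrays.binarySearch(a, 0, 7, key);  // O(log n) search
        if (pos < 0) pos = -pos - 1;                  // pos == 3
        // Moving 5, 6, 7 and 9 one place right is the O(n) part.
        System.arraycopy(a, pos, a, pos + 1, 7 - pos);
        a[pos] = key;
        System.out.println(Arrays.toString(a));
        // prints [1, 2, 3, 4, 5, 6, 7, 9]
    }
}
```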

Is there a name for this sorting algorithm?

I thought of a sorting algorithm but I am not sure if this already exists.
Say we have a container with n items:
We choose the 3rd element and do a binary search on the first 2, putting it in the correct position. The first 3 items in the container are sorted.
We choose the 4th element and do a binary search on the first 3 and put it in the correct position. Now the first 4 items are sorted.
We choose the 5th element and do a binary search on the first 4 items and put it in the correct position. Now 5 items are sorted.
.
.
.
We choose the nth element and do a binary search on the other n-1 elements putting it in the correct position. All the items are sorted.
Binary search takes log k for k elements, and let's say that the insertion takes constant time. Shouldn't this take:
log 2 to put the 3rd element in the correct spot.
log 3 to put the 4th element in the correct spot.
log 4 to put the 5th element in the correct spot.
.
.
.
log(n-1) to put the nth element in the correct spot.
log 2 + log 3 + log 4 + ... + log(n-1) = log((n-1)!)?
I may be talking nonsense but this looked interesting.
EDIT:
I did not take the insertion time into consideration. What if the sorting were done in an array with gaps between the elements? This would allow fast insertion without having to shift many elements. After a number of inserts, we could redistribute the elements. Considering that the array is not sorted (we could use a shuffle to ensure this), I think the results could be quite fast.
It sounds like insertion sort modified to use binary search. It's fairly well known, but not particularly widely used (as far as I know), possibly because it doesn't improve the O(n²) worst case but makes the O(n) best case take O(n log n) instead, and because insertion sort isn't commonly used on anything but really small arrays or arrays that are already sorted or nearly sorted.
The problem is that you can't really insert in O(1). Random-access insertion into an array takes O(n), which is of course what the well-known O(n²) complexity of insertion sort assumes.
One could consider a data structure like a binary search tree, which has O(log n) insert - it's not O(1), but we still end up with an O(n log n) algorithm.
Oh, and O(log(n!)) = O(n log n), in case you were wondering about that.
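A quick squeeze argument (not spelled out in the thread) shows why:

```latex
\log(n!) = \sum_{k=1}^{n} \log k \;\le\; n \log n,
\qquad
\log(n!) \;\ge\; \sum_{k=\lceil n/2 \rceil}^{n} \log k \;\ge\; \frac{n}{2} \log \frac{n}{2},
```

so log(n!) is squeezed between (n/2) log(n/2) and n log n, i.e. log(n!) = Θ(n log n).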
Tree sort (generic binary search tree) and splaysort (splay tree) both use binary search trees to sort. Adding elements to a balanced binary search tree is equivalent to doing a binary search to find where to add the elements then some tree operations to keep the tree balanced. Without a tree of some type, this becomes insertion sort as others have mentioned.
In the worst case the tree can become highly unbalanced, resulting in O(n²) for tree sort. Using a self-balancing binary search tree yields O(n log n) (worst case for red-black or AVL trees, amortized for splay trees). Splaysort is an adaptive sort, making it rather efficient when the input is already nearly sorted.
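As a rough illustration (using java.util.TreeMap, whose backing red-black tree gives O(log n) inserts; counts are stored so duplicates survive the round trip):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

class TreeSortSketch {
    // Tree sort: n inserts into a red-black tree at O(log n) each gives
    // O(n log n) overall; the in-order traversal then emits the values
    // in sorted order.
    static List<Integer> sort(List<Integer> input) {
        TreeMap<Integer, Integer> counts = new TreeMap<>();
        for (int x : input) counts.merge(x, 1, Integer::sum);
        List<Integer> out = new ArrayList<>(input.size());
        counts.forEach((value, count) -> {
            for (int i = 0; i < count; i++) out.add(value);
        });
        return out;
    }
}
```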
I think that by "binary search" the poster meant that each insertion is placed at an index found by searching the already-sorted prefix, in which case it would be called insertion sort. Either way, the searches alone still cost O(n log n).

Why is insertion into a sorted array O(n)?

Skiena, in The Algorithm Design Manual, states that insertion into a sorted array is O(n). Yet searching for an item in a sorted array is O(log n), because you can do a binary search.
Couldn't insertion also be O(log n), if I did binary search comparisons to figure out where in the array it should go?
Finding the position is only half the battle. Show me how to place a 2 into its place in [1,3,4,5,6,7] using fewer than five move operations.
You can do an O(log n) search on a sorted array, but when you insert an item you need to shift data, and that shift is O(n).
You can use binary search to figure out where the element should go.
However, inserting the element means that you have to make space for it. This is done by moving all elements after the new element one position to the right, which takes O(n) time.
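A tiny demo of the challenge above (hypothetical class name; the trailing slot stands in for the array's free capacity):

```java
class MoveCounter {
    public static void main(String[] args) {
        int[] a = {1, 3, 4, 5, 6, 7, 0};  // one free slot at the end
        int key = 2, moves = 0;
        int i = 5;                         // last occupied index
        // The shift loop of insertion sort: every element greater
        // than the key must move one slot to the right.
        while (i >= 0 && a[i] > key) {
            a[i + 1] = a[i];
            moves++;
            i--;
        }
        a[i + 1] = key;
        System.out.println(moves);         // 5
        System.out.println(java.util.Arrays.toString(a));
        // prints [1, 2, 3, 4, 5, 6, 7]
    }
}
```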

Find k-th smallest element data structure

I have a problem here that requires designing a data structure that supports the following three operations in O(lg n) worst case:
a) Insertion: insert the key into the data structure only if it is not already there.
b) Deletion: delete the key if it is there!
c) Find k-th smallest: find the k-th smallest key in the data structure.
I am wondering if I should use a heap, but I still don't have a clear idea about it.
I can easily get the first two parts in O(lg n), even faster, but I am not sure how to deal with part c).
If anyone has any idea, please share.
Two solutions come to mind:
Use a balanced binary search tree (red-black, AVL, splay... any would do). You're already familiar with operations (1) and (2). For operation (3), just store an extra value at each node: the total number of nodes in that subtree. You can easily use this value to find the k-th smallest element in O(log n); see the sketch after this list.
For example, say your tree looks like this: the root A has 10 nodes in its subtree, its left child B has 3, and its right child C has 6 (3 + 6 + 1 = 10). To find the 8th smallest element, you know to go to the right side: B's subtree and A itself account only for the 4 smallest keys, so you look for the (8 - 4) = 4th smallest key in C's subtree.
Use a skip list. It also supports all of your operations (1), (2), (3) in O(log n) on average, but it may take a bit longer to implement.
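A sketch of the select operation for the first option (assuming each node already maintains its subtree size; the insert/delete bookkeeping that keeps sizes and balance correct is omitted):

```java
class OrderStatNode {
    int key;
    int size = 1;                  // nodes in the subtree rooted here
    OrderStatNode left, right;
    OrderStatNode(int key) { this.key = key; }
}

class OrderStatistics {
    static int size(OrderStatNode n) { return n == null ? 0 : n.size; }

    // Returns the k-th smallest key (1-indexed, 1 <= k <= root.size).
    // On a balanced tree this walks one root-to-leaf path: O(log n).
    static int select(OrderStatNode root, int k) {
        int leftSize = size(root.left);
        if (k <= leftSize) return select(root.left, k);
        if (k == leftSize + 1) return root.key;
        return select(root.right, k - leftSize - 1);
    }
}
```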
Well, if your data structure keeps the elements sorted, then it's easy to find the kth lowest element.
The worst-case cost of a binary search tree for search and insertion is O(N), while the average-case cost is O(lg N).
Thus, I would recommend using a red-black binary search tree, which guarantees a worst-case complexity of O(lg N) for both search and insertion.
You can read more about red-black trees here and see an implementation of a Red-Black BST in Java here.
So in terms of finding the k-th smallest element using the above Red-Black BST implementation, you just need to call the select method, passing in the value of k. The select method also guarantees worst-case O(lgN).
One solution could be to use the partitioning strategy of quicksort.
Step 1: Pick the first element as the pivot and move it to its correct place (at most n checks).
When the pivot reaches its correct location, do a check:
Step 2.1: if location > k, your element resides in the first sublist, so you are not interested in the second sublist.
Step 2.2: if location < k, your element resides in the second sublist, so you are not interested in the first sublist.
Step 2.3: if location == k, you have got the element; break the loop/recursion.
Step 3: Repeat steps 1 to 2.3 on the appropriate sublist.
Because the sublists shrink geometrically in expectation, this quickselect-style approach runs in O(n) expected time, though the worst case is O(n²).
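A minimal quickselect sketch along those lines (class and method names are mine; the first element is used as the pivot, Lomuto-style):

```java
class QuickSelect {
    // Returns the k-th smallest element (1-indexed, 1 <= k <= a.length).
    // The array is reordered in place. Expected O(n), worst case O(n²).
    static int select(int[] a, int k) {
        int lo = 0, hi = a.length - 1;
        while (true) {
            int p = partition(a, lo, hi);  // pivot lands at index p
            if (p == k - 1) return a[p];   // step 2.3: found it
            if (p > k - 1) hi = p - 1;     // step 2.1: search left part
            else lo = p + 1;               // step 2.2: search right part
        }
    }

    // Lomuto partition of a[lo..hi] around its first element.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo], i = lo;
        for (int j = lo + 1; j <= hi; j++) {
            if (a[j] < pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[lo]; a[lo] = a[i]; a[i] = t;
        return i;
    }
}
```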
A heap is not the right structure for finding the k-th smallest element of an array, simply because you would have to remove k-1 elements from the heap in order to get to the k-th element.
There is a much better approach to finding the k-th smallest element, which relies on the median-of-medians algorithm. Basically any partitioning scheme is good enough on average, but median-of-medians comes with a proof of worst-case O(N) time for finding the median. In general, this algorithm can be used to find any specific order statistic, not only the median.
Here is the analysis and implementation of this algorithm in C#: Finding Kth Smallest Element in an Unsorted Array
P.S. On a related note, there are many, many things you can do in place with arrays. An array is a wonderful data structure, and if you know how to organize its elements for a particular situation, you might get results extremely fast and without additional memory use.
The heap structure is a very good example, and so is the quicksort algorithm. And here is one really fun example of using arrays efficiently (this problem comes from a programming olympiad): Finding a Majority Element in an Array
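That linked article isn't reproduced here, but the classic in-place trick for the majority problem is the Boyer-Moore majority vote; a minimal sketch:

```java
class MajorityVote {
    // Boyer-Moore majority vote: one pass, O(1) extra space. Returns
    // the majority element if one occupies more than half the array;
    // if a majority is not guaranteed, verify the candidate with a
    // second counting pass.
    static int candidate(int[] a) {
        int candidate = a[0], count = 1;
        for (int i = 1; i < a.length; i++) {
            if (count == 0) { candidate = a[i]; count = 1; }
            else if (a[i] == candidate) count++;
            else count--;
        }
        return candidate;
    }
}
```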
