Binary Search - Best and worst case

In an exam I see a question:
Which one of the following is true?
1. For a binary search, the best case occurs when the target item is at the beginning of the search list.
2. For a binary search, the best case occurs when the target is at the end of the search list.
3. For a binary search, the worst case is when the target item is not in the search list.
4. For a binary search, the worst case is when the target is found in the middle of the search list.
Well, from my point of view both 1. and 3. are correct, but it's only asking for one option. What am I missing?

3. is indeed correct: you have to run the algorithm all the way to the "worst" stop clause, where the remaining list is empty, which takes about log(n) iterations.
1. is not correct. The best case is NOT when the first element is the target but when the middle element is the target: each iteration compares the middle element to the target, not the first element, so if the middle element is the target, the algorithm finishes in one iteration.
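For concreteness, here is a minimal iterative binary search sketch in Python (the names are my own, not from the question): the very first comparison is against the middle element, and an unsuccessful search only stops once the range is empty, after roughly log2(n) halvings.

    def binary_search(arr, target):
        """Return the index of target in the sorted list arr, or -1 if absent."""
        lo, hi = 0, len(arr) - 1
        while lo <= hi:                  # an empty range means the target is absent (worst case)
            mid = (lo + hi) // 2
            if arr[mid] == target:       # best case: a hit on the very first middle element
                return mid
            elif arr[mid] < target:
                lo = mid + 1             # discard the left half
            else:
                hi = mid - 1             # discard the right half
        return -1                        # reached only after ~log2(n) halvings

For example, binary_search([1, 3, 5, 7, 9], 5) returns 2 after a single comparison, while searching for 4 shrinks the range to empty and returns -1.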

I think 1. is not correct. In each iteration we compare the middle item of the current search range with the target, so if the target item is at the beginning of the search list we need the maximum search time, not the minimum.

If we assume that the search list is sorted, then option 3 is correct: each time we divide the list into two parts and continue with only one of them, which gives about log n levels. If the element is not found, it still takes O(log n) comparisons to discover that, which is the worst case of binary search.
If the list is not sorted and we want to apply binary search, then we first have to sort the list. We could also search for the element while doing an insertion sort; in that case 1 and 3 would both be correct (see the sketch below).
Correctness of option 1:
When we take the first element from the list and check whether it matches the given element, a match at that point is the best case, O(1).
Correctness of option 3:
If we go through all the elements in the list, sorting and searching, and still haven't found the element, that is the worst case, O(n²).
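As a rough illustration of the argument above (my own sketch, not code from the original answer): searching while insertion-sorting an unsorted list costs O(1) if the very first element already matches, and O(n²) if we sort everything and still find no match.

    def search_while_insertion_sorting(items, target):
        """Insertion-sort items in place, checking each element as it is processed."""
        for i in range(len(items)):
            if items[i] == target:       # best case: the first element matches -> O(1)
                return True
            key = items[i]
            j = i - 1
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]  # shift larger elements one slot to the right
                j -= 1
            items[j + 1] = key
        return False                     # worst case: full sort, no match -> O(n²)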

Related

What would the runtime be if I used binary search and then linear search on a sorted array?

So, for example, I have a sorted array of x (key, value) pairs (tuples). I have a method called range which takes two arguments: the first key to find and the last key to find. This function returns all of the keys between the two. range uses binary search to find the first element, and once it does, it uses linear search until the last key has been found. I thought this entire thing would have a runtime of O(log(n)), but now I am second-guessing myself, as it takes longer to execute than I'd like. So my question is: what is the runtime of this function, since I seem to be mistaken?
Worst-case complexity: O(log(n)) + O(n) = O(n).
Just think of the worst-case scenario where the first key is at index 1 and the second one is at index n.
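A sketch of such a range function, assuming pairs is a Python list of (key, value) tuples sorted by key (the names are hypothetical): the binary search for the first key costs O(log n), but the linear walk towards the last key can cover almost the whole array, which is what makes the overall bound O(n).

    def range_query(pairs, first_key, last_key):
        """Return all keys in [first_key, last_key] from a key-sorted list of (key, value) tuples."""
        # binary search for the first index whose key >= first_key: O(log n)
        lo, hi = 0, len(pairs)
        while lo < hi:
            mid = (lo + hi) // 2
            if pairs[mid][0] < first_key:
                lo = mid + 1
            else:
                hi = mid
        # linear scan forward until the last key is passed: up to O(n)
        result = []
        i = lo
        while i < len(pairs) and pairs[i][0] <= last_key:
            result.append(pairs[i][0])
            i += 1
        return result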

Time complexity of while loop until list is empty?

My pseudo code is currently constructed as:

    WHILE list is not empty
        FOR item in list
            do stuff to find an item that matches certain criteria
        REMOVE matching item from list

I am having a hard time wrapping my mind around what the time complexity will be, due to the fact that after each iteration the list gets smaller. Any thoughts? I thought o(n) since there is only 1 for loop per while iteration.
Be careful. o(n) and O(n) are very different.
“Only one for loop per iteration” - that one loop is not constant time, so your argument is wrong.
You seem to assume there is always a “matching item” that can be removed. If that isn’t true then your algorithm is fatally flawed.
Unless the criteria to be matched change when an item is removed, you can make one pass through the list without restarting the search from the beginning, so this can be done in O(n), where n is the size of the initial list.
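To make that last point concrete, a rough sketch (my own, assuming the matching criterion never changes): the literal pseudo code rescans the list on every WHILE pass, so it does O(n) work on each of up to n passes, i.e. O(n²); a single forward pass that drops matches as it goes is O(n).

    def remove_all_matches(items, matches):
        """One pass over the list, keeping only the items that do not match: O(n)."""
        kept = []
        for item in items:          # each item is examined exactly once
            if not matches(item):
                kept.append(item)
        return kept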

Time Complexity of searching

There is a sorted array which is of very large size. Every element is repeated more than once, except one element. How much time will it take to find that element?
Options are:
1. O(1)
2. O(n)
3. O(logn)
4. O(nlogn)
The answer to the question is O(n) and here's why.
Let's first summarize the knowledge we're given:
A large array containing elements
The array is sorted
Every item except for one occurs more than once
The question is: what is the time growth of searching for the one item that occurs only once?
The array is sorted; can we use this to speed up the search for the item? Yes and no.
First of all, since the array isn't sorted by the property we must use to look for the item (only one occurrence), we cannot use the sorted property in this regard. This means that optimized search algorithms, such as binary search, are out.
However, we know that if the array is sorted, then all items that have the same value will be grouped together. This means that when we look at an item we see for the first time we only have to compare it to the following item. If it's different, we've found the item we're looking for.
"see for the first time" is important, otherwise we would pick the first value since there will be a boundary between two groups of items where the two items are different.
So we have to move from one end of the array to the other, and compare each item to the following item, and this is an O(n) operation.
Basically, since the array isn't sorted by the property we're looking at, we're back to a linear search.
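A sketch of that linear scan (the function name is mine): walk the array once, skip over each run of equal values, and the run of length 1 is the answer.

    def find_unique(arr):
        """arr is sorted and every value repeats except one; return that value in O(n)."""
        i = 0
        while i < len(arr):
            j = i
            while j + 1 < len(arr) and arr[j + 1] == arr[i]:
                j += 1               # skip over the run of equal values
            if j == i:               # a run of length 1 is the unique element
                return arr[i]
            i = j + 1                # move to the first element of the next run
        return None

For example, find_unique([2, 2, 2, 5, 7, 7]) returns 5 after a single left-to-right pass.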
Must be O(n).
The fact that it's sorted doesn't help. Suppose you tried a binary method, jumping into the middle somewhere. You see that the value there has a neighbour that is the same. Now which half do you go to?
How would you write a program to find the value? You'd start at one end and check for an element whose neighbour is not the same. You'd have to walk the whole array until you found the value. So O(n).

Understanding these questions about binary search on linear data structures?

The answers are (1) and (5), but I am not sure why. Could someone please explain this to me and why the other answers are incorrect? How can I understand how things like binary/linear search behave on different data structures?
Thank you
I am hoping you already know about binary search.
(1) True
Explanation
To perform binary search, we have to get to the middle of the sorted list. In a linked list, to get to the middle we have to traverse half of the list starting from the head, while in an array we can jump directly to the middle index if we know the length of the list. So a linked list takes O(n/2) time for a step that an array does in O(1). Therefore a linked list is not an efficient way to implement binary search.
(2) False
Same explanation as above
(3) False
Explanation
As explained in point (1), a linked list cannot be used to perform binary search efficiently, but an array can.
(4) False
Explanation
Binary search's worst-case time is O(logn), since we don't need to traverse the whole list. In the first iteration, if the key is less than the middle value, we discard the second half of the list; then we operate on the remaining half in the same way. With every iteration we discard the part of the list that we don't have to traverse, so clearly it takes less than O(n).
(5) True
Explanation
If the element is found in O(1) time, that means only one iteration was run. And in the first iteration we always compare against the middle element of the list, so the search takes O(1) time only if the middle element is the key value.
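To put point (1) into code (a minimal sketch, the names are mine): the step that binary search repeats, jumping to the middle of the remaining range, is a single index computation on an array but a pointer walk of about n/2 nodes on a singly linked list.

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def middle_of_linked_list(head, length):
        """Reaching the middle of a singly linked list costs ~n/2 pointer hops: O(n)."""
        node = head
        for _ in range(length // 2):
            node = node.next
        return node

    # on an array the same step is one index computation, arr[len(arr) // 2], which is O(1);
    # this is why binary search is efficient on arrays but not on linked lists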
In short, binary search is an elimination based searching technique that can be applied when the elements are sorted. The idea is to eliminate half the keys from consideration by keeping the keys in sorted order. If the search key is not equal to the middle element, one of the two sets of keys to the left and to the right of the middle element can be eliminated from further consideration.
Now coming to your specific question,
True
The basic binary search requires that the mid-point can be found in O(1) time, which isn't possible in a linked list and can be far more expensive if the size of the list is unknown.
True.
False
False
For binary search, the mid-point calculation should be done in O(1) time, which is only possible in arrays, since array indices are known. Secondly, binary search can only be applied to arrays that are in sorted order.
False
The answer by Vaibhav Khandelwal explained it nicely, but I wanted to add some variations of the array to which binary search can still be applied. If the given array is sorted but rotated by some amount and contains duplicates, for example,
3 5 6 7 1 2 3 3 3
then binary search can still be applied to it, but in the worst case we need to go linearly through the list to find the required element, which is O(n).
True
If the element is found on the first attempt, i.e. it is situated at the mid-point, then it is found in O(1) time.
    MidPointOfArray = (LeftSideOfArray + RightSideOfArray) / 2
The best way to understand binary search is to think of exam papers sorted according to last names. In order to find a particular student's paper, the teacher searches in that student's name category and rules out the ones that are not alphabetically close to the student's name.
For example, if the name is Alex Bob, the teacher starts her search directly at "B", takes out all the copies whose surname starts with "B", then repeats the process, skipping the copies up to the letter "o", and so on until she either finds it or doesn't.
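Regarding the sorted-but-rotated-with-duplicates variation mentioned above, a rough sketch of the usual approach (my own code, in the style of the classic "search in a rotated sorted array with duplicates" problem): when arr[lo] == arr[mid] == arr[hi] we cannot tell which half is sorted, so the range can only shrink by one element, which is what degrades the worst case to O(n).

    def search_rotated_with_duplicates(arr, target):
        """Return True if target is in the rotated sorted array arr (duplicates allowed)."""
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if arr[mid] == target:
                return True
            if arr[lo] == arr[mid] == arr[hi]:
                lo += 1                      # cannot decide which half is sorted:
                hi -= 1                      # shrink by one element -> O(n) worst case
            elif arr[lo] <= arr[mid]:        # left half is sorted
                if arr[lo] <= target < arr[mid]:
                    hi = mid - 1
                else:
                    lo = mid + 1
            else:                            # right half is sorted
                if arr[mid] < target <= arr[hi]:
                    lo = mid + 1
                else:
                    hi = mid - 1
        return False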

How can this be done in O(nlogn) time complexity

I had a question on my exams for which I had to come up with an efficient algorithm. The problem was like this:
We have some objects which have two properties:
H = <1,1000000>
R = <1,1000000>
We can insert one object into another if H1 > H2 and R1 > R2. The input contains pairs of H and R, one pair per line. If the current object can be inserted into any of the previous objects, we choose the one with the least H and then destroy both of them. Print the number of objects left at the end.
I wonder how this problem can be solved in O(n log n) time using binary search trees, a segment tree, or a Fenwick tree.
Thanks in advance.
A solution with a Fenwick tree, as follows:
Let's first sort the whole array by R (for now we don't care about H) and assign each item a token equal to its position in the sorted array.
Now let's get back to the original array and run a sweep over it. Say we have a Fenwick tree which, instead of a cumulative sum, stores the maximum H from the beginning up to each position.
Say we couldn't fit an item into any other item. Then we insert it into the tree, at the position equal to its token.
So at any moment we have a Fenwick tree that contains only the items we've dealt with so far; all other values are 0. The items in the tree are positioned in R-sorted order.
Now, how do we find out whether the current item fits into another object? We can run a binary search (upper bound) on the Fenwick tree for the current item's H. And since the items are already sorted in R order, we only need to search the effective range instead of the whole tree.
Binary search in a Fenwick tree can be done in O(log(n)). Check out the "Find index with given cumulative frequency" part of this article.
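As a building block, here is a minimal sketch (my own illustration, not the full solution) of a Fenwick tree that stores prefix maximums instead of cumulative sums, which is the structure the answer describes for H; the sweep, the token assignment and the binary search over the tree are omitted.

    class MaxFenwick:
        """Fenwick tree (BIT) keeping prefix maximums; positions are 1-based."""
        def __init__(self, n):
            self.n = n
            self.tree = [0] * (n + 1)

        def update(self, i, value):
            """Raise position i to at least value: O(log n)."""
            while i <= self.n:
                if self.tree[i] < value:
                    self.tree[i] = value
                i += i & (-i)

        def query(self, i):
            """Maximum over positions 1..i: O(log n)."""
            best = 0
            while i > 0:
                best = max(best, self.tree[i])
                i -= i & (-i)
            return best

Because updates only ever raise stored values, the prefix-maximum query stays consistent, which is what makes this variant of the Fenwick tree valid for the sweep described above.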
