Big-O notation: linear and binary search [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
In terms of Big-O notation, the linear search is x^n, but what is the binary search? I am not 100% sure that the one for linear search is correct.

Linear search means that you will have to iterate through the list of elements until you find the element that you were looking for.
For instance, if you have a list with elements [1, 3, 5, 7, 9, 11] and you are looking for 11 you will start by the first element, then the second element, and so on, which in this case will take 6 iterations.
Generally, we could say that in the worst case you will have to traverse the whole list; so it will take n iterations, where n is the number of elements on the list.
So we say that the linear search algorithm is O(n).
In the case of binary search, you start on the middle element of the list:
Case 1: the number we are searching for is the same as the middle element: we are done!
Case 2: the number we are searching for is smaller: we will only search the elements that precede the middle element.
Case 3: the number we are searching for is bigger: we will only search the subsequent elements.
In our example, the number we are searching for is 11 and the middle element is 5; since 11 > 5, we will only search the sublist of elements bigger than 5, namely [7, 9, 11].
Now we keep doing the same until we find the element we are searching for; in this case it takes only three iterations to get to the last element.
In general this approach takes log(n) iterations; therefore, the algorithm is O(log(n)).
Note that the latter only works for sorted lists.
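A minimal Python sketch of the two searches described above, using the example list (function names are my own):

```python
def linear_search(items, target):
    """O(n): examine every element in order until the target is found."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):
    """O(log n): halve the search interval each step; items must be sorted."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1      # target can only be in the right half
        else:
            hi = mid - 1      # target can only be in the left half
    return -1

data = [1, 3, 5, 7, 9, 11]
print(linear_search(data, 11))  # 5: found at the last index, after 6 comparisons
print(binary_search(data, 11))  # 5: found after only 3 iterations
```

Searching for 11 illustrates the worst case of both: linear search touches all six elements, binary search needs only three probes.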

Related

Why binary search algorithm runs in O(log n) time for a sorted array with n elements? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
Here is my thought:
You have N elements to begin with.
But you aren't going through each one though?
Say I have
{1, 2, 3, 4, 5, 6, 7, 8, 9} and I want to find 4.
The first step is looking at the middle element, 5; we see 4 < 5. Then we have
{1, 2, 3, 4, 5}; the middle is 3, and we see 4 > 3, thus we are left with {3, 4, 5}.
Now we will get 4.
But that was only 3 steps! I am not understanding why the first search counts all N elements, when we are only looking at the (N-1)/2-th element, which is one step?
EDIT!!!
Here is what I am taught:
search 1: n elements in search space
search 2: n/2 elements in search space
search 3: n/4 elements in search space
...
search i: 1 element in search space.
Search i has n/2^(i-1) elements in its search space, so you set n/2^(i-1) = 1 and solve for i, which gives
i = log2(n) + 1.
What I don't understand:
You have n elements, I agree, but you aren't searching through all of them; you only examine one element per step, so why do you count all n?
The main reason why binary search (which requires sorted data in a data structure with O(1) random access reads) is O(log N) is that for any given data set, we start by looking at the middle-most element.
If that is larger than the element we're looking for, we know that we can ignore anything from that point to the end. If it is smaller, we can ignore anything from the beginning up to that element. This means that at every step we effectively cut the size of the remaining search space in half.
By cutting the problem set in half at every step, we can (relatively) easily see that to get from N elements to a single element is O(log N) steps.
The reason why we're not terminating earlier in your example is that even though it seems as if we're scanning the first elements, the only things we actually do are "get length of array" and "access the middle-most element" (so at that point we still don't know whether the element we're looking for is contained in the array at all).
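To see concretely that only one element is probed per step while the search space shrinks from n to 1, here is a small sketch that just counts the halvings (with integer halving the count comes out as floor(log2 n)):

```python
def binary_search_steps(n):
    """Count the halvings needed to shrink a search space of n elements to 1.

    Each step does O(1) work (read one middle element); we never touch all
    n elements, but the analysis starts from n because that is the size of
    the initial search space."""
    steps = 0
    while n > 1:
        n //= 2          # discard half of the remaining candidates
        steps += 1
    return steps

for n in (8, 1024, 10**6):
    print(n, binary_search_steps(n))   # 3, 10 and 19 steps: floor(log2(n))
```

So the n in O(log n) measures how many candidates remain, not how many elements are read.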

Heap implementation. Worst case extract complexity [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Hi, I am learning algorithms using Python. I was going through some sample questions in the book, where it asks to show why the worst-case time of the extract operation on a heap implemented as an array is O(log n). I have no clue where to start on this and I am getting close to the exam. Could anyone please help me prove this?
Thanks
Let's say we have a max heap. I will illustrate for n = 7, but the logic is the same for heaps of bigger size.
Worst case for extract happens when the root node has been changed to contain the smallest value of all the nodes (we extract the root in O(1) and put the last element in the array to be a root).
Now when we call Max-Heapify on the root, the value will have to be exchanged down with its child at every level, until it reaches the lowest level.
This is because, after every swapping, the value will still be smaller than both its children (since it is the minimum), until it reaches the lowest level where it has no more children.
In such a heap, the number of exchanges to max-heapify the root will be equal to the
height of the tree, which is log(n). So the worst case running time is O(log n).
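This argument can be checked with a small sketch (assuming a max heap stored in a 0-indexed Python list, so the children of index i sit at 2i + 1 and 2i + 2, and at least two elements; names are my own):

```python
def extract_max(heap):
    """Remove and return the root of a max heap stored in a list, also
    counting the sift-down swaps; the worst case equals the tree height."""
    top = heap[0]
    heap[0] = heap.pop()            # move the last element to the root: O(1)
    swaps, i, n = 0, 0, len(heap)
    while True:                     # Max-Heapify: sift the new root down
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:            # both children are smaller: heap restored
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest
        swaps += 1
    return top, swaps

# Worst case: the last array element (which becomes the new root) is the
# minimum, so it sinks all the way back down to the bottom level.
print(extract_max([7, 6, 5, 4, 3, 2, 1]))  # (7, 2): 2 swaps, the height of the 6-node heap
```

The swap count matches floor(log2 n) for the heap that remains after the pop, which is exactly the height argument above.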
Assuming a min priority queue:
Disregarding the actual extraction, which is O(1): remember that when you extract the smallest value from a heap, you swap the last element with the first one, then restore the heap property by comparing this element to its children and swapping accordingly. By the heap layout, the children of any node p are at indexes 2p and 2p + 1. So in the worst case, to find the children of any node, one would have to check nodes:
{ 2p, 2*2p, 2*2*2p, ..., N - 1 }
{ 2p, 4p, 8p, ..., (2^lg2(N))p }
or
{ (2^1)p, (2^2)p, (2^3)p, ..., (2^lg2(N))p }
So as you can see, the maximum number of elements to check in the worst case is lg2(N).
This was a pretty rough way of showing this property, I apologize

Perfect Balancing, linear time [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
How can I rebuild a binary search tree into a perfectly balanced one in linear time?
I think I'll do rotations to find the median, but I'm not sure if it's a good way.
Thanks for any ideas.
At the very least, you can do it in two steps:
Extract a sorted array from the tree using in-order traversal.
Construct a near-perfect binary tree, for example by just capping the height at h = ceil(log2(n)), where n is the number of elements. You will get only a part of the perfect tree if n is not equal to 2^k - 1 for some integer k, but the height will still be the minimal possible.
Here is an explanatory image for constructing the tree of values 1, 2, 3, ... 10:
        8
    4       10
  2   6   9
 1 3 5 7
Alternatively, on step 2, you can put the middle element of the array as root, divide what remains into two equally sized parts, and proceed recursively. An example:
       5
   2       8
 1   3   7   10
      4 6    9
Each of the steps can be performed in linear time.
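Both steps can be sketched as follows (a minimal Python sketch of the median variant; the recursive slicing in build_balanced is shown for clarity and costs extra copies, so for strictly linear time you would pass index bounds instead of slices):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(root, out):
    """Step 1: flatten the BST into a sorted list in O(n)."""
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)

def build_balanced(keys):
    """Step 2 (median variant): the middle element becomes the root,
    and the two halves are built recursively."""
    if not keys:
        return None
    mid = len(keys) // 2
    return Node(keys[mid],
                build_balanced(keys[:mid]),
                build_balanced(keys[mid + 1:]))

def height(root):
    return -1 if root is None else 1 + max(height(root.left), height(root.right))

# A degenerate BST of 1..10 (every node has only a right child):
unbalanced = None
for k in range(10, 0, -1):
    unbalanced = Node(k, right=unbalanced)

keys = []
inorder(unbalanced, keys)          # step 1: [1, 2, ..., 10]
balanced = build_balanced(keys)    # step 2
print(height(unbalanced), height(balanced))  # 9 3
```

The chain of 10 nodes has height 9; the rebuilt tree reaches the minimal height floor(log2(10)) = 3.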

Proposing an O(logm) algorithm for the following [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I need to propose an algorithm for the following: assume we have an array consisting of zeros and ones. The array is filled with zeros from the beginning of the array up to index m, and all remaining indexes are filled with ones. I need to find this index m in O(log m) time.

Here is what I thought: this is like binary search. First I look at the middle element of the array; if it is zero, then I forget about the left part of the array and do the same for the right part, continuing like this until I encounter a one. If the middle element is one, then I forget about the right part and do the same for the left part. Is this a correct O(log m) solution? Thanks
It is not "like" a binary search - it is a binary search. Unfortunately, it is O(logN), not O(logM).
To find the borderline in O(logM), start from the other end: try positions {1, 2, 4, 8, 16, ... 2^i} and so on, until you hit a 1. Then do a binary search on the interval between 2^i and 2^(i+1), where 2^(i+1) is the first position at which you discovered a 1.
Finding the first 1 takes O(logM), because the index is doubled on each iteration. After that, the binary search takes another O(logM), because the length of the interval 2^i..2^(i+1) is less than M as well.
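A sketch of this doubling-then-bisecting strategy (often called exponential or galloping search; assumes the array contains at least one 1, and the function name is my own):

```python
def find_first_one(a):
    """Find the index m of the first 1 in an array of the form 0...0 1...1
    using O(log m) probes, independent of len(a)."""
    if a[0] == 1:
        return 0
    hi = 1
    while hi < len(a) and a[hi] == 0:   # gallop: probe 1, 2, 4, 8, ... until a 1
        hi *= 2
    hi = min(hi, len(a) - 1)            # a[hi] is a 1 (at least one 1 exists)
    lo = hi // 2                        # a[lo] is known to be a 0
    while lo + 1 < hi:                  # binary search inside (lo, hi]
        mid = (lo + hi) // 2
        if a[mid] == 0:
            lo = mid
        else:
            hi = mid
    return hi

a = [0] * 5 + [1] * 100
print(find_first_one(a))  # 5
```

The galloping phase makes O(log m) probes because the index doubles each time, and the binary search then works on an interval of length at most m, giving O(log m) overall.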

What algorithm can be used here [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
This is an interview question, not a homework.
N friends are playing a game. Each of them has a list of numbers in front of himself.
Each of N friends chooses a number from his list and reports it to the game administrator. Then the game administrator sorts the reported numbers and shouts the K-th largest number.
I must count all possible numbers that the game administrator can shout.
For example, if N = 3 and K = 3, and the lists for the 3 friends are {2 5 3}, {8 1 6}, {7 4 9}, the output is 6, since the possible values are 4 5 6 7 8 9.
Can anybody suggest a decent algorithm for this problem? What I am doing is to create all possible combinations, taking one number from each list, then sorting the result and printing the K-th largest. But that takes too much time.
To know whether a number can be in the result, you need to know, for each other list, whether it contains numbers above and whether it contains numbers below. Lists that contain both numbers above and numbers below are not a problem, since you can choose a number in them as it suits you. The problematic lists are those with only numbers above, or only numbers below: the first kind must number at most N - K, the second kind at most K. If this is not true, your number cannot be picked. If it is true, you can always choose numbers in the lists that have both numbers above and below so that your number is picked.
This can be checked in linear time per candidate, or better still if you first sort your lists, giving an overall complexity of O(n log n), where n is the total count of numbers. Without sorting you get an n^2 complexity.
In your example with lists :
{2 5 3}, {8 1 6}, {7 4 9}
say we are looking for the 2nd-greatest number. For each number we ask whether it can be shouted by the administrator, i.e., we look at whether each other list has both numbers below it and numbers above it. Let's look at some of them:
For "5": there are numbers above and below in both other lists, so "5" can be shouted by the administrator.
For "2": there are numbers above and below in the second list, so I can freely choose a number above or below there. But there are only numbers above in the third list, so the picked number from that list will always be greater. Since I can freely choose a number below in the second list, I can still make my "2" the 2nd-greatest number.
For "1": there are only numbers above in the second list, so "1" will always be the smallest element.
For "9": it is the other way round; it is always the greatest.
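A small quadratic sketch of this feasibility check (names are my own; precomputing each list's min and max, or sorting, is what gets you toward the stated O(n log n)). Note that it interprets the shout as the K-th number of the sorted report in ascending order, which is the reading that matches the question's example output, and it assumes all numbers are distinct:

```python
def possible_shouts(lists, k):
    """Return the set of values the administrator can shout, where the shout
    is the k-th number of the sorted report in ascending order.

    A candidate v from one list is feasible iff, among the OTHER lists,
    at most n - k lie entirely above v and at most k - 1 lie entirely below
    (one less than in the prose above, since v's own list is excluded here)."""
    n = len(lists)
    result = set()
    for i, lst in enumerate(lists):
        for v in lst:
            above = sum(1 for j, o in enumerate(lists) if j != i and min(o) > v)
            below = sum(1 for j, o in enumerate(lists) if j != i and max(o) < v)
            if above <= n - k and below <= k - 1:
                result.add(v)
    return result

print(sorted(possible_shouts([[2, 5, 3], [8, 1, 6], [7, 4, 9]], 3)))
# [4, 5, 6, 7, 8, 9]: the six values from the question's example
```

For every feasible v, the mixed lists (numbers both above and below) can be steered either way, which is exactly the freedom the argument above relies on.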
Take the smallest number from each set and find the K-th largest of these; this is the smallest number that can appear in the result. Similarly, find the largest number that can appear in the result. Then prove that each number from the lists lying between those two is in the result.
