Finding the largest common integer in two unsorted lists

There are two unsorted lists containing integers, and I need to find the largest integer common to both lists.
My idea: first find the largest element in the first list, then do a linear search through the second list for that element. Is this logic correct? If it's not, can someone help me with the logic for this question?

The problem with your first thought is that if the largest element in the first list does not occur in the second, you would never try another candidate.
The most efficient way I can think of in a short time is this:
1. Sort both arrays in descending order
2. Grab the first element of the first array
3. Compare it to the first element of the second array
4. If they are equal, you are done
5. If the first array's element is larger, pop it off array 1 and repeat from step 2
6. If the first array's element is smaller, pop the first element off array 2 and repeat from step 2
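The sorted two-pointer walk above can be sketched in Python (a minimal illustration; the function name, and advancing indices instead of popping, are my own choices):

```python
def largest_common(a, b):
    """Return the largest integer present in both lists, or None."""
    a = sorted(a, reverse=True)   # step 1: sort both descending
    b = sorted(b, reverse=True)
    i = j = 0                     # "pop" by advancing indices instead
    while i < len(a) and j < len(b):
        if a[i] == b[j]:          # equal heads: largest common value
            return a[i]
        elif a[i] > b[j]:         # larger head can't appear later in the other list
            i += 1
        else:
            j += 1
    return None                   # no common element
```

The comparison walk is O(n + m) after the two sorts; building a hash set of one list and scanning the other for its maximum common value would avoid sorting entirely.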

Related

What should the quick sort pointer positions be if the pivot is the first or last element?

It has been discussed in many places many times, I believe, but I have been searching for 3 days now and still do not see how this works. My question is:
Where is the second pointer if we take the first or the last element as the pivot?
What I literally mean:
Case 1
Central array element as a pivot
This is all clear: 5 is in the center, so we go ahead and find the elements that meet the conditions:
1.) element < pivot on the left side; if found, we stop the pointer
2.) element > pivot on the right side; if found, we stop the pointer
3.) swap the elements where the pointers stopped during steps 1 and 2
Case 2: (the unclear one)
First or last element as the pivot
Here it is unclear where I should place the two pointers to start searching for elements, and in which direction relative to the pivot each should move. Should there still be exactly two pointers, and how should they be moved?
If two pointers are being used, it's some variation of the Hoare partition scheme, where the pointers start at the first and last elements of the array and advance toward each other while scanning for "out of place" elements. If the first or last element is chosen as the pivot, the first swap will involve the pivot.
Note that the Hoare algorithm normally includes the pivot in one of the partitions. This creates an issue when using the first or last element as the pivot: on data patterns like already-sorted input, the splits become maximally unbalanced, so the recursion depth grows to O(n) and can overflow the stack. To resolve this issue, the algorithm would need to exclude the pivot from the recursive calls.
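A minimal Python sketch of the scheme described here, with the first element as the pivot (names are my own; this variant keeps the pivot inside the left partition, which is correct, but on already-sorted input the splits degenerate to 1 vs. n-1, so the recursion depth reaches O(n)):

```python
def hoare_partition(a, lo, hi):
    pivot = a[lo]                 # first element as pivot
    i, j = lo - 1, hi + 1         # pointers start just outside both ends
    while True:
        i += 1
        while a[i] < pivot:       # scan right for an out-of-place element
            i += 1
        j -= 1
        while a[j] > pivot:       # scan left for an out-of-place element
            j -= 1
        if i >= j:
            return j              # a[lo..j] <= pivot <= a[j+1..hi]
        a[i], a[j] = a[j], a[i]   # the first swap may move the pivot itself

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        j = hoare_partition(a, lo, hi)
        quicksort(a, lo, j)       # pivot stays inside one of the partitions
        quicksort(a, j + 1, hi)
```

Note that returning `j` and recursing on `[lo, j]` and `[j+1, hi]` guarantees both partitions are non-empty, which is what keeps this first-element-pivot variant from looping forever.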

Quick sort where the first element in the array is smaller than the pivot

Consider an array of elements {10,5,20,15,25,22,21}.
Here I take the pivot element as 21 (last in the array). According to most of the quick sort algorithms I have seen on the Internet, the first element is compared with the pivot; if it is smaller, it gets swapped with the index element. But the algorithm seems to break when the first element of the array is already smaller, which makes it difficult for me to write down the intermediate steps the quick sort goes through.
Everyone on the Internet explains with an example array whose first element is greater than the pivot, so no swap happens and they move on to the next element.
Please help.
My suggestion on how to understand the quick sort:
The key to understanding quick sort is the partition procedure, which is usually a for loop. Keep in mind that:
Our goal is for the array, at the end of the loop, to consist of three parts: the first part is smaller than the pivot, the second part is equal to or larger than the pivot, and the last part is the unscanned part (which by then has no elements).
At the very beginning of the loop, we also have three parts: the first part (which has no elements) is smaller than the pivot, the second part (which has no elements) is equal to or larger than the pivot, and the last part is the unscanned part (which has array.length - 1 elements, everything except the pivot).
During the loop, we compare and swap as needed to ensure that after each iteration we still have those three parts, with the first two parts growing and the last part shrinking.
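The three-part invariant described above is the Lomuto partition scheme. A sketch in Python (my own naming), which can be run on the array {10,5,20,15,25,22,21} from the question with pivot 21: when the first element 10 is smaller than the pivot, it is simply swapped with itself and the boundary advances, so nothing breaks:

```python
def lomuto_partition(a, lo, hi):
    pivot = a[hi]                     # last element as pivot
    i = lo - 1                        # invariant: a[lo..i] < pivot,
    for j in range(lo, hi):           #   a[i+1..j-1] >= pivot,
        if a[j] < pivot:              #   a[j..hi-1] unscanned
            i += 1
            a[i], a[j] = a[j], a[i]   # may swap an element with itself
    a[i + 1], a[hi] = a[hi], a[i + 1] # place the pivot between the two parts
    return i + 1
```

On [10, 5, 20, 15, 25, 22, 21] this returns index 4 and leaves the array as [10, 5, 20, 15, 21, 22, 25]: the first four elements were each "swapped" with themselves.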
On your request in the comment:
Check this link: https://www.cs.rochester.edu/~gildea/csc282/slides/C07-quicksort.pdf
Read the three example figures VERY carefully and make sure you have understood them.

Time Complexity of searching

There is a sorted array of very large size. Every element is repeated more than once, except one element. How much time will it take to find that element?
Options:
1. O(1)
2. O(n)
3. O(log n)
4. O(n log n)
The answer to the question is O(n) and here's why.
Let's first summarize the knowledge we're given:
A large array containing elements
The array is sorted
Every item except for one occurs more than once
The question is: what is the time complexity of searching for the one item that occurs only once?
Can we use the sorted property of the array to speed up the search for the item? Yes and no.
First of all, since the array isn't sorted by the property we must use to look for the item (occurring only once), we cannot exploit the ordering directly. This means that optimized search algorithms, such as binary search, are out.
However, we know that if the array is sorted, all items with the same value are grouped together. This means that when we look at an item we are seeing for the first time, we only have to compare it to the following item. If they differ, we've found the item we're looking for.
"Seeing for the first time" is important; otherwise we would stop at the first boundary between two groups of items, where two adjacent items also differ.
So we have to move from one end of the array to the other, and compare each item to the following item, and this is an O(n) operation.
Basically, since the array isn't sorted by the property we're looking at, we're back to a linear search.
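The neighbour-comparison scan described above can be written as follows (an illustrative sketch; the function name is mine):

```python
def find_unique(a):
    """In a sorted list where every value repeats except one,
    return the value that occurs exactly once. Runs in O(n)."""
    i, n = 0, len(a)
    while i < n:
        j = i
        while j < n and a[j] == a[i]:   # walk to the end of the current run
            j += 1
        if j - i == 1:                  # run of length 1: the unique value
            return a[i]
        i = j                           # skip to the start of the next run
    return None
```

Skipping whole runs at a time (rather than comparing every adjacent pair) also handles runs of any length, since the question only says "more than once", not "exactly twice".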
Must be O(n).
The fact that it's sorted doesn't help. Suppose you tried a binary method, jumping into the middle somewhere. You see that the value there has a neighbour that is the same. Now which half do you go to?
How would you write a program to find the value? You'd start at one end and check for an element whose neighbour is not the same. You'd have to walk the whole array until you found the value, so it's O(n).

Understanding these questions about binary search on linear data structures?

The answers are (1) and (5), but I am not sure why. Could someone please explain this to me and why the other answers are incorrect? How can I understand how things like binary/linear search behave on different data structures?
Thank you
I am hoping you already know about binary search.
(1) True-
Explanation
To perform binary search, we have to get to the middle of the sorted list. In a linked list, to get to the middle we have to traverse half the list starting from the head, while in an array we can jump directly to the middle index if we know the length. So a linked list takes O(n/2) time for a step an array does in O(1). Therefore a linked list is not an efficient way to implement binary search.
(2)False
Same explanation as above
(3)False
Explanation
As explained in point 1, a linked list cannot be used efficiently for binary search, but an array can.
(4) False
Explanation
Binary search's worst-case time is O(log n), since we don't need to traverse the whole list. In the first iteration, if the key is less than the middle value, we discard the second half of the list; we then operate on the remaining half in the same way. With every iteration we discard the part of the list we don't need to traverse, so clearly it takes less than O(n).
(5)True
Explanation
If the element is found in O(1) time, that means only one iteration of the loop ran. In the first iteration we always compare against the middle element of the list, so the search takes O(1) time only if the middle element is the key.
In short, binary search is an elimination-based searching technique that can be applied when the elements are sorted. The idea is to eliminate half the keys from consideration at each step by keeping the keys in sorted order. If the search key is not equal to the middle element, one of the two sets of keys (to the left or to the right of the middle element) can be eliminated from further consideration.
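For reference, the elimination step looks like this over an array (a standard sketch, not tied to any particular answer above):

```python
def binary_search(a, key):
    """Return an index of key in the sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # O(1) only with random access, i.e. an array
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            lo = mid + 1          # eliminate the left half
        else:
            hi = mid - 1          # eliminate the right half
    return -1
```

The `a[mid]` lookup is the operation a linked list cannot do in constant time, which is exactly why answers (1) and (5) hinge on the underlying data structure.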
Now coming to your specific question,
True
The basic binary search requires that the mid-point can be found in O(1) time, which is not possible in a linked list and can be far more expensive if the size of the list is unknown.
True.
False
False
In binary search, the mid-point calculation should be done in O(1) time, which is only possible in arrays, since array indices allow direct access. Secondly, binary search can only be applied to arrays that are in sorted order.
False
The answer by Vaibhav Khandelwal explained it nicely. But I wanted to add some variations of the array to which binary search can still be applied. If the given array is sorted but rotated by X positions and contains duplicates, for example,
3 5 6 7 1 2 3 3 3
then binary search still applies, but in the worst case we need to go through the list linearly to find the required element, which is O(n).
True
If the element is found on the first attempt, i.e. it is situated at the mid-point, then it is found in O(1) time.
MidPointOfArray = (LeftSideOfArray + RightSideOfArray)/ 2
The best way to understand binary search is to think of exam papers sorted by last name. To find a particular student's paper, the teacher searches within that student's name category and rules out the papers that are not alphabetically close to the student's name.
For example, if the name is Alex Bob, the teacher starts her search directly at "B", takes out the papers with surnames starting with "B", then repeats the process, skipping papers up to the letter "o", and so on until the paper is found (or found to be missing).

Divide the list into 2 equal Parts

I have a list which contains random numbers such that number >= 0. Now I have to divide the list into 2 equal parts (assume the list contains an even number of elements) such that all the numbers in the first list are less than or equal to the numbers in the second list. This can easily be done with any sorting mechanism in O(n log n). But I don't need the data in the two lists to be sorted; the only condition is that all elements in the first list <= all elements in the second list.
So is there a way or hack to reduce the complexity, since we don't require sorted data here?
If the problem is actually solvable (the data is right), you can find the median using a selection algorithm. Once you have it, create 2 equally sized arrays and iterate over the original list element by element, putting each element into one of the new lists depending on whether it is bigger or smaller than the median. This runs in linear time.
Edit: as gen-y-s pointed out, if you write the selection algorithm yourself or use a proper library, it may already partition the input list, so there is no need for the second pass.
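A sketch of the selection-based approach in Python (quickselect is my own helper here, since the standard library has no linear-time selection; random pivoting gives expected O(n), and duplicates equal to the median are used to pad both halves to equal length):

```python
import random

def quickselect(nums, k):
    """Return the k-th smallest element (0-based) in expected O(n) time."""
    lst = list(nums)
    while True:
        pivot = random.choice(lst)
        less = [x for x in lst if x < pivot]
        equal = [x for x in lst if x == pivot]
        if k < len(less):
            lst = less                        # answer is among the smaller elements
        elif k < len(less) + len(equal):
            return pivot                      # answer is the pivot itself
        else:
            k -= len(less) + len(equal)       # answer is among the larger elements
            lst = [x for x in lst if x > pivot]

def split_halves(nums):
    """Split an even-length list into two equal halves such that
    every element of the first is <= every element of the second."""
    n = len(nums)
    median = quickselect(nums, n // 2 - 1)    # top element of the lower half
    lower = [x for x in nums if x < median]
    upper = [x for x in nums if x > median]
    lower += [median] * (n // 2 - len(lower))         # pad with median copies
    upper = [median] * (n // 2 - len(upper)) + upper
    return lower, upper
```

The padding step handles duplicates of the median, which a plain "smaller goes left, bigger goes right" pass would misplace.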
