K Closest with unsorted array - algorithm

I am prepping for interview LeetCode-style problems and I came across the k closest problem, but given a sorted array. This problem requires finding the k elements closest in value to an input value from the array. The answer was fairly straightforward and I did not have any issues determining a linear-time algorithm to solve it.
However, working on this problem got me thinking. Is it possible to solve this problem given an unsorted array in linear time? My first thought was to use a heap, which would give an O(n log k) time complexity solution, but I am trying to determine if it's possible to come up with an O(n) solution. I was thinking about possibly using something like Quickselect, but the issue is that this has an expected time of O(n), not a worst-case time of O(n).
Is this even possible?

The median-of-medians algorithm makes Quickselect take O(n) time in the worst case.
It is used to select a pivot:
Divide the array into groups of 5 (O(n))
Find the median of each group (O(n))
Use Quickselect to find the median of the n/5 medians (O(n))
The resulting pivot is guaranteed to be greater than at least 30% of the elements and less than at least 30% of the elements, so each recursive step discards a constant fraction of the array and Quickselect runs in linear time.
After selecting the pivot, of course, you have to continue on with the rest of Quickselect, which includes a recursive call like the one we made to select the pivot.
The worst-case total time is T(n) = O(n) + T(0.7n) + T(n/5), which still solves to O(n), i.e. linear. Compared to the expected time of normal Quickselect, though, it's pretty slow (the constant factors are large), which is why we don't often use it in practice.
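For concreteness, here is a rough C++ sketch of that scheme folded into a complete selection routine (my own illustrative code; the name mom_select is made up, and a production version would partition in place instead of copying):

#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative median-of-medians Quickselect: returns the k-th smallest
// element of v (0-based k). The tail recursion of Quickselect is written
// as a loop; only the pivot selection recurses.
int mom_select(std::vector<int> v, std::size_t k) {
    assert(k < v.size());
    while (true) {
        if (v.size() <= 5) {
            std::sort(v.begin(), v.end());
            return v[k];
        }
        // 1. Median of each group of 5.
        std::vector<int> medians;
        for (std::size_t i = 0; i < v.size(); i += 5) {
            std::size_t end = std::min(i + 5, v.size());
            std::sort(v.begin() + i, v.begin() + end);
            medians.push_back(v[i + (end - i) / 2]);
        }
        // 2. Recursive call: the median of the n/5 medians becomes the pivot.
        int pivot = mom_select(medians, medians.size() / 2);
        // 3. Three-way partition around the pivot, then continue on one side.
        std::vector<int> lo, eq, hi;
        for (int x : v) {
            if (x < pivot) lo.push_back(x);
            else if (x > pivot) hi.push_back(x);
            else eq.push_back(x);
        }
        if (k < lo.size()) v = std::move(lo);
        else if (k < lo.size() + eq.size()) return pivot;
        else { k -= lo.size() + eq.size(); v = std::move(hi); }
    }
}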

Your heap solution would be very welcome at an interview, I'm sure.
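For reference, a minimal sketch of that heap idea (my own illustrative code, assuming the task is: given an unsorted array a, a target value x, and k, return the k values closest to x). It keeps a max-heap of size k keyed on distance to x, giving O(n log k):

#include <cstdlib>
#include <queue>
#include <utility>
#include <vector>

// Keep the k best candidates in a max-heap ordered by |value - x|,
// so the worst of the current k is always on top and easy to evict.
std::vector<int> k_closest_heap(const std::vector<int>& a, int x, std::size_t k) {
    std::priority_queue<std::pair<long long, int>> heap;   // (distance, value)
    for (int v : a) {
        heap.push({std::llabs(static_cast<long long>(v) - x), v});
        if (heap.size() > k) heap.pop();                    // drop the farthest
    }
    std::vector<int> out;
    while (!heap.empty()) { out.push_back(heap.top().second); heap.pop(); }
    return out;                                             // farthest first
}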
If you really want to get rid of the log k, which in practical applications should seldom be a problem, then yes, using Quickselect would be another option. Something like this:
Partition your array into values smaller than and values larger than x. <- O(n)
For the lower half, run Quickselect to find the kth largest number, then take the right-side partition, which holds the k largest numbers of that half (the k closest to x from below). <- O(n)
Repeat step 2 for the higher half, but for the k smallest numbers. <- O(n)
Merge your k smallest and k largest candidates and extract the k closest numbers. <- O(k)
This gives you a total time complexity of O(n), as you said.
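As an illustrative shortcut (my own sketch, not the only way to realize the steps above), a single selection pass keyed on distance to x achieves the same expected O(n) result; std::nth_element uses introselect/Quickselect internally:

#include <algorithm>
#include <cstdlib>
#include <vector>

// Move the k values closest to x into the first k positions (unordered),
// using one Quickselect-style pass keyed on absolute distance to x.
std::vector<int> k_closest_select(std::vector<int> a, int x, std::size_t k) {
    if (k > a.size()) k = a.size();
    std::nth_element(a.begin(), a.begin() + k, a.end(),
                     [x](int l, int r) {
                         return std::llabs((long long)l - x) < std::llabs((long long)r - x);
                     });
    return std::vector<int>(a.begin(), a.begin() + k);
}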
However, a few points about your worry about expected time vs worst-case time. I understand that if an interview question explicitly insists on worst-case O(n), then this solution might not be accepted, but otherwise, this can well be considered O(n) in practice.
The key point is that, for randomized Quickselect on random or well-behaved input, the probability that the running time exceeds O(n) decreases exponentially as the input grows. Already at moderately large inputs, that probability is about as small as that of picking one specific atom in the known universe at random. The assumption of well-behaved input means the data is somewhat random in nature and not adversarial. See this discussion on a similar (not identical) problem.

Related

A linear algorithm for this specification?

This is a question I got from somewhere:
Given a list of numbers in random order, write a linear time algorithm to find the kth smallest number in the list. Explain why your algorithm is linear.
I have searched a good part of the web, and what I found is that a linear-time algorithm is one whose time complexity is O(n). (I may be wrong somewhere.)
We can solve the above problem with different algorithms, e.g.:
Sort the array and select the element at index k-1 [O(n log n)]
Use a min-heap [O(n + k log n)]
etc.
Now the problem is that I couldn't find any algorithm with O(n) time complexity, i.e. one that actually is linear.
What can be the solution for this problem?
This is std::nth_element
From cppreference:
Notes
The algorithm used is typically introselect although other selection algorithms with suitable average-case complexity are allowed.
Given a list of numbers
Note, though, that it is not compatible with std::list; among the standard containers it works with std::vector, std::deque, and std::array, since it requires random-access iterators.
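A minimal usage example (my own illustrative snippet with made-up values) that finds the k-th smallest element in expected O(n):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {9, 1, 8, 2, 7, 3, 6, 4, 5};
    std::size_t k = 3;  // k-th smallest, 1-based
    // After the call, v[k-1] holds the k-th smallest element; everything
    // before it is <= v[k-1] and everything after it is >= v[k-1].
    std::nth_element(v.begin(), v.begin() + (k - 1), v.end());
    std::cout << k << "-th smallest: " << v[k - 1] << '\n';  // prints 3
}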
A linear scan that remembers the k smallest values seen so far is O(n*k), but if k is considered constant then it is O(n) time.
However, if k is not considered constant, then using a histogram leads to O(n + m.log(m)) time and O(m) space complexity, where m is the number of possible distinct values (the value range) in your input data. The algorithm goes like this:
create histogram counters for each possible value and set them to zero O(m)
process all the data and count the values O(n)
sort the histogram O(m.log(m))
walk the histogram to pick the k-th element O(m)
In case we are talking about unsigned integers in the range 0 to m-1, the histogram is computed like this:
int data[n] = { /* your data */ }, cnt[m], i;
for (i = 0; i < m; i++) cnt[i] = 0;        // clear the histogram
for (i = 0; i < n; i++) cnt[data[i]]++;    // count occurrences of each value
However, if your input data values do not satisfy the above condition, you need to remap the range by interpolation or hashing. And if m is huge (or the values contain huge gaps), this is a no-go, because such a histogram either has to use buckets (which is not usable for this problem) or has to keep a list of values per bin, which is no longer linear.
So, putting all this together, your problem is solvable with linear complexity when:
n >= m.log(m)
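As a side note, in the dense 0..m-1 case above the bins are already in value order, so the final step is just a prefix-sum walk over the counters (the sort only matters when the histogram stores sparse value/count pairs). An illustrative C++ sketch (the function name is mine):

#include <vector>

// Return the k-th smallest value (1-based k) from a dense histogram,
// where cnt[value] is the number of occurrences of value. O(m) time.
int kth_from_histogram(const std::vector<int>& cnt, long long k) {
    for (std::size_t value = 0; value < cnt.size(); ++value) {
        k -= cnt[value];
        if (k <= 0) return static_cast<int>(value);  // the k-th element falls in this bin
    }
    return -1;  // k exceeds the total number of elements counted
}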

Time complexity of finding the quantile of a given probability

Assuming a random process X of n elements: X={x1,...,xn}.
For a given probability p, the corresponding quantile is determined through the quantile function Q defined as:
Q(p)={x|Pr(X<=x)=p}
What is the time complexity of finding the quantile of a given probability p?
Quickselect can use median of medians pivot selection to get O(n) worst-case runtime to select the kth order statistic. This appears to be more or less what you are after unless I am misinterpreting (you want the (n*p)'th smallest element).
You cannot possibly improve upon this in the general case: you must at least look at the whole array or else the element you didn't look at might be the answer you need.
Therefore, Quickselect with this pivot rule is theoretically optimal in the worst case. Note, though, that median-of-medians pivot selection has bad constants and is rarely used in practice; random pivot selection gives good expected performance but an O(n^2) worst case.
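A rough sketch of that idea (my own illustrative code; the exact rounding convention for quantiles varies): select the ceil(n*p)-th smallest sample. std::nth_element gives expected O(n); swapping in a median-of-medians Quickselect would make it worst-case O(n):

#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Empirical p-quantile taken as the ceil(n*p)-th smallest sample, 0 < p <= 1.
double quantile(std::vector<double> x, double p) {
    assert(!x.empty() && p > 0.0 && p <= 1.0);
    std::size_t k = static_cast<std::size_t>(std::ceil(x.size() * p)) - 1;  // 0-based rank
    std::nth_element(x.begin(), x.begin() + k, x.end());  // expected O(n) selection
    return x[k];
}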

Finding ratio objects/key which Counting sort worst case faster than Quicksort worst case

I have a question which I am trying to solve for my personal understanding of comparing algorithms, as follows. Let n be the number of objects and m be the number of keys. I want to find the ratios m/n of keys to objects for which CountingSort is faster in the worst case than QuickSort when QuickSort chooses the last element as the pivot.
So I have that the worst-case running time for CountingSort is O(n+k) (with k = m keys here), and the worst case when QuickSort chooses the last element as the pivot is Theta(n^2). I'm confused about how to approach this question and would welcome some guidance so I can reach a solution.
My idea is that it will be when the number of keys is double the number of objects? So we have that the worst-case runtime of QuickSort is O(n^2), so we want CountingSort to be less than this. Therefore, if we have ratios of m/n < m^2/n, will this be the case?
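One hedged way to set up the comparison (my own sketch of the algebra, ignoring constant factors and taking the worst cases as stated above): CountingSort's n + m is below QuickSort's n^2 exactly when
n + m < n^2  <=>  m < n^2 - n  <=>  m/n < n - 1
so the threshold on m/n grows with n rather than being a fixed constant.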

Best and worst case time for Algorithm S when time complexity changes in accordance with n being even/odd

The following is a homework assignment, so I would rather get hints or bits of information that would help me figure this out, and not complete answers.
Consider S an algorithm solution to a problem that takes as input an array A of size n. After analysis, the following conclusion was obtained:
Algorithm S executes an O(n)-time computation for each even number in A.
Algorithm S executes an O(logn)-time computation for each odd number in A.
What are the best and worst case time for algorithm S?
From this I understand that the time complexity changes in accordance to n being even or odd. In other words, if n is even, S takes O(n) time and when n is odd, S takes O(logn).
Is it a simple matter of taking the best case and the worst case of both growth-rates, and choosing their boundaries? Meaning:
Best case of O(n) is O(1), and worst case is O(n).
Best case of O(logn) is O(logn) and worst case is O(logn).
Therefore the best case for Algorithm S is O(logn) and the worst case is O(n)?
Am I missing something? Or am I wrong in assessing the different best/worst cases of the two big-O bounds?
1st attempt:
OK, so I completely misunderstood the problem. Thanks to candu, I now better understand what is required of me, so I'll try to work out the best and worst case again.
It seems that Algorithm S changes its runtime according to EACH number in A. If the number is even, the runtime is O(n), and if the number is odd, we get O(logn).
The worst case will be composed of an array A of n even numbers, and for each the algorithm will run O(n). In other words, the worst case runtime for Algorithm S should be n*O(n).
The best case will be composed of an array A of n odd numbers, and for each the algorithm will run O(logn). The best case runtime for algorithm S should be n*O(logn).
Am I making any sense? Is it true then that:
the best case of Algorithm S is nO(logn) and the worst case is nO(n)?
If that is true, can it be rewritten? For example, as O(log^n(n)) and O(n^n)? Or is that an arithmetic mistake?
2nd attempt:
Following JuanLopes' response, it seems like I can rewrite nO(n) as O(n*n) or O(n^2), and nO(logn) as O(n logn).
Does it make sense now that Algorithm S runs in O(n logn) in the best case and O(n^2) in the worst case?
There's a bit of confusion here: the algorithm runtime doesn't depend on n being even or odd, but on whether the numbers in A are even or odd.
With that in mind, what sort of input A would make Algorithm S run faster? Slower?
Also: it doesn't make sense to say that the best case of O(n) is O(1). Suppose I have an algorithm ("Algorithm Q") that is O(n); all I can say is that there exists a constant c such that, for any input of size n, Algorithm Q takes less than cn time. There is no guarantee that I can find specific inputs for which Algorithm Q is O(1).
To give a concrete example, this takes linear time no matter what input it is passed:
def length(A):
    len = 0
    for x in A:
        len += 1
    return len
A few thoughts.
First, there is no mention of the bounds being asymptotically tight. So an algorithm described as O(n) could actually also be, say, O(logn), since big-O is only an upper bound. Just imagine what the best-case running time could be in that case. I know this is a little picky, but since this is homework, I guess it's always welcome to mention all the possibilities.
Second, even if a bound is asymptotically tight, it doesn't necessarily mean it's tight for every element. Consider insertion sort. For each new element to insert, we need to find the correct position in the already-sorted prefix. The time is proportional to the number of elements in that prefix, which has the upper bound O(n). But that doesn't mean each new element needs exactly n comparisons to insert. Actually, the shorter the prefix, the quicker the insertion.
Back to this question. "Executes an O(logn)-time computation for each odd number in A." Let's assume all numbers are odd. It could be that the first odd number takes O(log 1), the second takes O(log 2), ..., and the nth takes O(logn). In total, that is O(log(n!)), which is O(n logn) by Stirling's approximation, and it doesn't contradict "O(logn) for each odd number".
As to the worst case, you can analyze it in much the same way.

Algorithm to solve variation of k-partition

This brain teaser came up in my algorithm design class homework:
Given a list of n distinct positive integers, partition the list into two
sublists of size n/2 such that the difference between the sums of the sublists
is maximized.
Assume that n is even and determine the time complexity.
At first glance, the solution seems to be:
sort the list via mergesort
select the element at position n/2
add all elements greater than it to the high array
add all elements less than it to the low array
This would have a time complexity of O((n log n) + n).
Are there any better algorithm choices for this problem?
Since you can calculate the median in O(n) time, you can also solve this problem in O(n) time: calculate the median and, using it as a threshold, create the high array and the low array.
See http://en.wikipedia.org/wiki/Median_search on calculating the median in O(n) time.
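A sketch of that in C++ (my own illustrative code; std::nth_element is expected O(n), and a median-of-medians selection would give a worst-case O(n) guarantee):

#include <algorithm>
#include <utility>
#include <vector>

// Split a into its n/2 smallest and n/2 largest elements (n assumed even),
// which maximizes the difference between the two sublist sums.
std::pair<std::vector<int>, std::vector<int>> max_diff_split(std::vector<int> a) {
    std::size_t half = a.size() / 2;
    std::nth_element(a.begin(), a.begin() + half, a.end());  // median as the threshold
    std::vector<int> low(a.begin(), a.begin() + half);
    std::vector<int> high(a.begin() + half, a.end());
    return {low, high};
}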
Try
http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm
What you're effectively doing is finding the median. The trick is that once you've found it, you never needed the first n/2 and the last n/2 elements to be sorted in the first place.

Resources