Are there any computational problems with a Θ((log n)^2) algorithm?

Are there any real computational problems which can be solved with a time complexity of log(n) * log(n)?
This is different from finding the smallest element in a sorted matrix, which is log(n) + log(n), i.e. 2 log(n).
There may be some kind of pattern-printing algorithm which can be made Θ((log n)^2), but I'm not sure whether such things are classified as computational problems.

A range query on a d-dimensional range tree that reports k results runs in O(log^d(n) + k) time. So a query on a 2-d range tree that you know will return a bounded number of results runs in O(log^2(n)) time.
See https://en.wikipedia.org/wiki/Range_tree
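To make the "one O(log n) loop nested inside another" pattern concrete, here is a hedged sketch of a related (but simpler) structure, a 2-d Fenwick tree, where both a point update and a prefix-sum query cost O(log^2 n) when n = m. This is not the range tree from the answer above; the struct and its names are illustrative.

#include <vector>

struct Fenwick2D {
    int n, m;
    std::vector<std::vector<long long>> t;   // 1-based internal indexing
    Fenwick2D(int n, int m) : n(n), m(m), t(n + 1, std::vector<long long>(m + 1, 0)) {}
    void add(int x, int y, long long v) {    // point update, O(log n * log m)
        for (int i = x; i <= n; i += i & -i)
            for (int j = y; j <= m; j += j & -j)
                t[i][j] += v;
    }
    long long prefixSum(int x, int y) const { // sum over [1..x] x [1..y], O(log n * log m)
        long long s = 0;
        for (int i = x; i > 0; i -= i & -i)
            for (int j = y; j > 0; j -= j & -j)
                s += t[i][j];
        return s;
    }
};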

Dichotomic (binary) search in a sorted array when the indexes are processed as binary strings (bignums): each of the O(log n) probes then costs O(log n) bit operations to compute and compare an index, giving O((log n)^2) overall.


A linear algorithm for this specification?

This is a question I came across somewhere.
Given a list of numbers in random order, write a linear time algorithm to find the k-th smallest number in the list. Explain why your algorithm is linear.
I have searched almost half the web, and what I learned is that a linear-time algorithm is one whose time complexity is O(n). (I may be wrong somewhere.)
We can solve the above question with different algorithms, e.g.:
Sort the array and select the element at index k-1 [O(n log n)]
Using a min-heap [O(n + k log n)]
etc.
Now the problem is that I couldn't find any algorithm with O(n) time complexity, i.e. one that satisfies the requirement of being linear.
What can be the solution for this problem?
This is std::nth_element
From cppreference:
Notes
The algorithm used is typically introselect although other selection algorithms with suitable average-case complexity are allowed.
As for "given a list of numbers": it is not compatible with std::list, only std::vector, std::deque and std::array, as it requires a RandomAccessIterator.
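A minimal usage sketch (the values here are just an illustration): after the call, v[k] holds the element that would sit at index k if v were fully sorted, in average-case O(n) time.

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {9, 1, 8, 2, 7, 3, 6, 4, 5};
    std::size_t k = 3;                                 // 0-based: the 4th smallest
    std::nth_element(v.begin(), v.begin() + k, v.end());
    std::cout << v[k] << "\n";                         // prints 4
}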
A linear search remembering the k smallest values is O(n*k), but if k is considered constant then it's O(n) time.
However, if k is not considered constant, then using a histogram leads to O(n + m.log(m)) time and O(m) space complexity, where m is the number of possible distinct values (the range) in your input data. The algorithm is like this:
create histogram counters for each possible value and set them to zero O(m)
process all the data and count the values O(n)
sort the histogram O(m.log(m))
pick the k-th element from the histogram O(1)
In case we are talking about unsigned integers from 0 to m-1, the histogram is computed like this:
int data[n] = { /* your data */ }, cnt[m], i;
for (i = 0; i < m; i++) cnt[i] = 0;      // clear all m counters
for (i = 0; i < n; i++) cnt[data[i]]++;  // count occurrences of each value
However, if your input data values do not satisfy the above condition, you need to remap the range by interpolation or hashing. And if m is huge (or contains huge gaps), this is a no-go, as such a histogram would either use buckets (which is not usable for your problem) or need a list of values, which no longer gives linear complexity.
So, putting all this together, your problem is solvable with linear complexity when:
n >= m.log(m)
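As a hedged sketch of the final pick step (a simpler O(m) variant of steps 3-4 above, assuming the values have already been remapped into [0, m) as in the counting snippet; the function name is illustrative):

// walk the histogram until the running count reaches k (k is 1-based)
int kth_from_histogram(const int* cnt, int m, int k) {
    for (int v = 0; v < m; ++v) {
        k -= cnt[v];
        if (k <= 0) return v;   // v is the k-th smallest value
    }
    return -1;                  // fewer than k elements in total
}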

One problem to cover all the time complexities

A college instructor here. I am trying to find a meaningful (practical) code example to illustrate different time complexities for beginners in an ELI5 manner. The code should start with constant complexity and then, by adding small pieces of code, incrementally increase in complexity: ..., log n, n, n log n, n^2, 2^n, ...
I think I can explain it better with one example that has small incremental changes, rather than switching context from searching to sorting to brute-force algorithms.
Any example will be artificial. But here is one that does reasonably well.
Let vec be a sorted array of numbers, i an integer, and x another number. In order, answer the following questions.
O(1) What is the value of vec[i]?
O(n) Is x an element of vec, by linear search?
O(log(n)) Is x an element of vec, by binary search?
O(n^2) Is x the sum of two elements of vec, by a double loop?
O(n log(n)) Is x the sum of two elements of vec, by a linear search on the first with a binary search on the second? (Simplifying trick: do the linear search on the first element and the binary search on the second; then reuse your code from 3. See the sketch after this list.)
O(2^n) Is x the sum of any subset of elements of vec by recursion?
(pseudopolynomial) Memoize the previous solution. Discuss memory vs speed tradeoffs.
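A minimal sketch of items 3 and 5 from the list above, assuming vec is sorted (names are illustrative; for simplicity the pair test may pair an element with itself):

#include <algorithm>
#include <vector>

// item 3: O(log n) membership test on the sorted vec
bool contains(const std::vector<int>& vec, int x) {
    return std::binary_search(vec.begin(), vec.end(), x);
}

// item 5: O(n log n) -- linear search on the first element,
// binary search (reusing item 3) on the second
bool pairSum(const std::vector<int>& vec, int x) {
    for (int a : vec)
        if (contains(vec, x - a)) return true;
    return false;
}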

Time complexity for n-ary search.

I am studying time complexity for binary search, ternary search and k-ary search in N elements, and have come up with their respective asymptotic worst-case run-times. However, I started to wonder what would happen if I divide the N elements into N ranges (i.e. an N-ary search on N elements). Would that be a sorted linear search in an array, which would result in a run-time of O(N)? This is a little confusing. Please help me out!
What you say is right.
For a k-ary search we have:
Do k-1 checks at boundaries to isolate one of the k ranges.
Recurse into the range obtained above.
Hence the time complexity is essentially O((k-1)*log_k(N)) where log_k(N) means 'log(N) to base k'. This has a minimum when k=2.
If k = N, the time complexity will be: O((N-1) * log_N(N)) = O(N-1) = O(N), which is the same algorithmically and complexity-wise as linear search.
Translated to the algorithm above, it is:
Do N-1 checks in boundaries (each of the first N-1 elements) to isolate one of the N ranges. This is the same as a linear search in the first N-1 elements.
Jump into the range obtained from above. This is the same as checking the last element (in constant time).
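A hedged sketch of a generic k-ary search under these assumptions (k >= 2, sorted input; the function name is illustrative):

#include <algorithm>
#include <vector>

// Returns an index of x in the sorted vector a, or -1 if absent.
int kary_search(const std::vector<int>& a, int x, int k) {
    int lo = 0, hi = (int)a.size();              // candidate range [lo, hi)
    while (hi - lo > 1) {
        int step = (hi - lo + k - 1) / k;        // width of each of the k subranges
        int next = lo;
        for (int b = lo + step; b < hi; b += step) {
            if (a[b] <= x) next = b;             // up to k-1 boundary checks
            else break;
        }
        hi = std::min(hi, next + step);          // narrow to the chosen subrange
        lo = next;
    }
    return (lo < (int)a.size() && a[lo] == x) ? lo : -1;
}

With k = 2 this is ordinary binary search; with k = N the boundary checks degenerate into the linear scan described above.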

Knapsack with max element count in subsets

I have an integer array, and I need to find a subset of this array, with at most 3 elements, whose sum is equal to W.
Can I solve this problem using knapsack, or do I need to check every 1-, 2- and 3-element combination of the array?
Thanks in advance.
To look for 3 elements that sum to W, this is exactly the 3SUM problem.
It can be solved in O(n^2) time by either:
Inserting each number into a hash table, then, for each combination of two numbers a and b, checking whether W-a-b exists in the hash table.
Sorting the array, then, for each element a, looking to its right for two elements that sum to W-a, using 2 iterators moving inward from either side.
If the integers are in the range [-u, u], you can solve your problem using a fast Fourier transform in O(n + u log u). See the above link for more details on this or on one of the above approaches.
Since your problem depends on solving 3SUM, which is a well-known problem, you're very unlikely to find a solution with a better running time than the above well-known solutions for 3SUM.
To look for 1 element:
You can do a simple linear search (O(n)).
To look for 2 elements:
This can be solved by simply checking each combination of 2 elements in O(n^2) (you needn't do anything more complex, as the asymptotic running time of the 3-element part will make the total O(n^2) regardless of how efficient this part is).
It can also be solved in O(n) or O(n log n) using methods identical to those described above for 3SUM.
Overall running time:
O(n^2) or O(n + u log u), depending on which method was used to solve the 3SUM part.
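A hedged sketch of the second method above (sort, then two pointers; the function name is illustrative):

#include <algorithm>
#include <vector>

// Returns true if some 3 elements of v (at distinct indices) sum to W. O(n^2).
bool threeSum(std::vector<int> v, long long W) {
    std::sort(v.begin(), v.end());                       // O(n log n)
    for (std::size_t i = 0; i + 2 < v.size(); ++i) {     // fix the first element a
        std::size_t lo = i + 1, hi = v.size() - 1;
        while (lo < hi) {                                // two pointers looking for W - a
            long long s = (long long)v[i] + v[lo] + v[hi];
            if (s == W) return true;
            if (s < W) ++lo; else --hi;
        }
    }
    return false;
}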

Is it possible to find two numbers whose difference is minimum in O(n) time

Given an unsorted integer array, and without making any assumptions on the numbers in the array: is it possible to find two numbers whose difference is minimum in O(n) time?
Edit: Difference between two numbers a, b is defined as abs(a-b)
Find the smallest and largest elements in the list. The difference smallest - largest will be minimal.
If you're looking for the nonnegative difference, then this is of course at least as hard as checking whether the array has two identical elements. This is called the element uniqueness problem, and without any additional assumptions (like limiting the size of the integers, or allowing operations other than comparison) it requires Ω(n log n) time. It is the 1-dimensional case of finding the closest pair of points.
I don't think you can do it in O(n). The best I can come up with off the top of my head is to sort them (which is O(n log n)) and find the minimum difference of adjacent pairs in the sorted list (which adds another O(n)).
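A minimal sketch of that O(n log n) baseline (assumes at least two elements; the function name is illustrative):

#include <algorithm>
#include <vector>

int minAdjacentDiff(std::vector<int> v) {
    std::sort(v.begin(), v.end());                  // O(n log n)
    int best = v[1] - v[0];                         // differences are >= 0 after sorting
    for (std::size_t i = 2; i < v.size(); ++i)      // O(n) adjacent scan
        best = std::min(best, v[i] - v[i - 1]);
    return best;
}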
I think it is possible. The secret is that you don't actually have to sort the list; you just need to create a tally of which numbers exist. This may count as "making an assumption" from an algorithmic perspective, but not from a practical perspective: we know the ints are bounded by a min and a max.
So, create an array of 2-bit elements, one pair for each int from INT_MIN to INT_MAX inclusive, and set all of them to 00.
Iterate through the entire list of numbers. For each number in the list, if the corresponding 2 bits are 00, set them to 01. If they're 01, set them to 10. Otherwise ignore it. This is obviously O(n).
Next, if any of the 2-bit entries is set to 10, that is your answer: the minimum distance is 0 because the list contains a repeated number. If not, scan through the tally array and find the minimum distance between marked entries. Many people have already pointed out that there are simple O(n) algorithms for this.
So O(n) + O(n) = O(n).
Edit: responding to comments.
Interesting points. I think you could achieve the same results without making any assumptions by first finding the min/max of the list and then using a sparse array ranging from min to max to hold the data. That takes care of the INT_MIN/INT_MAX assumption, the space complexity, and the O(m) time complexity of scanning the array.
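A hedged sketch of that idea with the min/max refinement (O(n + m) time, O(m) space with m = max - min + 1; assumes m is small enough to allocate and at least two input elements; names are illustrative):

#include <algorithm>
#include <vector>

int minDiffByTally(const std::vector<int>& v) {
    int lo = *std::min_element(v.begin(), v.end());
    int hi = *std::max_element(v.begin(), v.end());
    std::vector<unsigned char> seen(hi - lo + 1, 0);    // per-value counts, capped at 2
    for (int x : v)
        if (seen[x - lo] < 2) ++seen[x - lo];
    int prev = -1, best = hi - lo;                      // hi - lo is an upper bound
    for (int i = 0; i <= hi - lo; ++i) {
        if (!seen[i]) continue;
        if (seen[i] >= 2) return 0;                     // repeated value
        if (prev >= 0) best = std::min(best, i - prev);
        prev = i;
    }
    return best;
}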
The best I can think of is to counting-sort the array (possibly combining equal values) and then do the sorted comparisons; bin sort is O(n + M) (M being the number of distinct values). This has a heavy memory requirement, however. Some form of bucket or radix sort would be intermediate in time and more efficient in space.
Sort the list with radix sort (which is O(n) for fixed-width integers), then iterate and keep track of the smallest distance so far.
(I assume your integers are of a fixed-bit type. If they can hold arbitrarily large mathematical integers, radix sort will be O(n log n) as well.)
It seems to be possible to sort an unbounded set of integers in O(n*sqrt(log(log(n)))) time. After sorting, it is of course trivial to find the minimal difference in linear time.
But I can't think of any algorithm to make it faster than this.
No, not without making assumptions about the numbers/ordering.
It would be possible given a sorted list though.
I think the answer is no, and the proof is similar to the proof that you cannot sort faster than n lg n: you have to compare all of the elements, i.e. build a comparison tree, which implies an Ω(n lg n) lower bound.
EDIT: OK, if you really want to argue, then the question does not say whether it should be a Turing machine or not. With quantum computers, you can do it in linear time :)
