Complexity analysis of kth smallest element in a min heap

I am working on finding the kth smallest element in a min heap. I have code for this whose complexity is O(k log k), and I tried to improve it to O(k).
Below is the code.
struct heap {
    int *array;
    int count;
    int capacity;
};

int kthsmallestelement(struct heap *h, int i, int k) {
    if (i < 0 || i >= h->count)
        return INT_MIN;
    if (k == 1)
        return h->array[i];
    k--;
    int j = 2 * i + 1;
    int m = 2 * i + 2;
    if (h->array[j] < h->array[m])
    {
        int x = kthsmallestelement(h, j, k);
        if (x == INT_MIN)
            return kthsmallestelement(h, m, k);
        return x;
    }
    else
    {
        int x = kthsmallestelement(h, m, k);
        if (x == INT_MIN)
            return kthsmallestelement(h, j, k);
        return x;
    }
}
My code traverses k elements of the heap, so its complexity is O(k).
Is it correct?

Your code, and in fact its entire approach, is completely wrong, if I understand correctly.
In a classic min-heap, the only thing you know is that each path from the root down to a leaf is non-decreasing. There are no other constraints, and in particular no constraints between different paths.
It follows that the k-th smallest element can be anywhere among the first 2^k elements. If you are just using the heap's array, built and maintained with the classic heap algorithms, any solution will necessarily be Ω(min(n, 2^k)). Anything below that will require additional guarantees on the array's structure, an additional data structure, or both.
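For reference, here is a sketch of the usual O(k log k) technique the question alludes to (this is not the asker's original code, just one common way to reach that bound): keep an auxiliary min-heap of (value, array-index) candidates, seeded with the root; every time a node is popped, push its two children. The snippet reuses the heap struct above and assumes the array stores a valid min-heap of count elements.
#include <climits>
#include <cstdio>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct heap { int *array; int count; int capacity; };

// Auxiliary-heap approach: after the kth pop, the popped value is the kth smallest.
int kthsmallestelement(struct heap *h, int k) {
    if (h->count == 0 || k < 1 || k > h->count) return INT_MIN;
    typedef std::pair<int, int> Candidate;  // (value, index in h->array)
    std::priority_queue<Candidate, std::vector<Candidate>, std::greater<Candidate>> pq;
    pq.push(Candidate(h->array[0], 0));     // start from the root
    while (true) {
        Candidate top = pq.top();
        pq.pop();
        if (--k == 0) return top.first;     // this was the kth pop
        int left = 2 * top.second + 1, right = left + 1;
        if (left < h->count) pq.push(Candidate(h->array[left], left));
        if (right < h->count) pq.push(Candidate(h->array[right], right));
    }
}

int main() {
    int a[] = {1, 3, 2, 7, 4, 6, 5};            // a valid min-heap stored as an array
    struct heap h = {a, 7, 7};
    printf("%d\n", kthsmallestelement(&h, 3));  // prints 3
}
At most k nodes are popped and at most 2k pushed, and each heap operation on the auxiliary heap costs O(log k), giving the O(k log k) total.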

Related

What's the best time complexity of a queue that supports extracting the minimum?

I ran into the following very difficult interview question:
Consider a queue data structure with three operations:
- Add at the front of the list (note: additions go at the front, not the back)
- Delete from the tail of the list (the end of the list)
- Extract-Min (find and remove the minimum)
The best implementation of this data structure has amortized time:
A) All three operations in O(1)
B) All three operations in O(log n)
C) Add and delete in O(1), and Extract-Min in O(log n)
D) Add and delete in O(log n), and Extract-Min in O(n)
After the interview I saw that (C) is the correct answer. Why is this the case?
The first challenge is comparing the options: which options are better than which, and how do we arrive at the final correct one?
Of the given running times, A is faster than C is faster than B is faster than D.
A is impossible in a comparison-based data structure (the unstated norm here) because it would violate the known Ω(n log n)-time lower bound for comparison sorts by allowing a linear-time sorting algorithm that inserts n elements and then extracts the min n times.
C can be accomplished using an augmented finger tree. Finger trees support queue-like push and pop in amortized constant time, and it's possible to augment each node with the minimum in its sub-tree. To extract the min, we use the augmentations to find the minimum value in the tree, which will be at depth O(log n). Then we extract this minimum by issuing two splits and an append, all of which run in amortized time O(log n).
Another possibility is to represent the sequence as a splay tree whose nodes are augmented by subtree min. Push and pop are O(1) amortized by the dynamic finger theorem.
Fibonacci heaps do not accomplish the same time bound without further examination because deletes cost Θ(log n) amortized regardless of whether the deleted element is the min.
Since C is feasible, there is no need to consider B or D.
Given the limitations on the data structure, we actually don't need the full power of finger trees. The C++ below works by maintaining a list of winner trees, where each tree has size a power of two (ignoring deletion, which we can implement as soft delete without blowing up the amortized running time). The sizes of the trees increase and then decrease, and there are O(log n) of them. This gives the flavor of finger trees with much less implementation headache.
To push on the left, we make a size-1 tree and then merge it until the invariant is restored. The time required is O(1) amortized by the same logic as increasing a binary number by one.
To pop on the right, we split the rightmost winner tree until we find a single element. This may take a while, but we can charge it all to the corresponding push operations.
To extract the max (changed from min for convenience because nullopt is minus infinity, not plus infinity), find the winner tree containing the max (O(log n) since there are O(log n) trees) and then soft delete the max from that winner tree (O(log n) because that's the height of that tree).
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <list>
#include <optional>
class Node {
 public:
  using List = std::list<Node *>;
  virtual ~Node() = default;
  virtual int Rank() const = 0;
  virtual std::optional<int> Max() const = 0;
  virtual void RemoveMax() = 0;
  virtual std::optional<int> PopRight(List &nodes, List::iterator position) = 0;
};

class Leaf : public Node {
 public:
  explicit Leaf(int value) : value_(value) {}
  int Rank() const override { return 0; }
  std::optional<int> Max() const override { return value_; }
  void RemoveMax() override { value_ = std::nullopt; }
  std::optional<int> PopRight(List &nodes, List::iterator position) override {
    nodes.erase(position);
    return value_;
  }

 private:
  std::optional<int> value_;
};

class Branch : public Node {
 public:
  Branch(Node *left, Node *right)
      : left_(left), right_(right),
        rank_(std::max(left->Rank(), right->Rank()) + 1) {
    UpdateMax();
  }
  int Rank() const override { return rank_; }
  std::optional<int> Max() const override { return max_; }
  void RemoveMax() override {
    if (left_->Max() == max_) {
      left_->RemoveMax();
    } else {
      right_->RemoveMax();
    }
    UpdateMax();
  }
  std::optional<int> PopRight(List &nodes, List::iterator position) override {
    nodes.insert(position, left_);
    auto right_position = nodes.insert(position, right_);
    nodes.erase(position);
    return right_->PopRight(nodes, right_position);
  }

 private:
  void UpdateMax() { max_ = std::max(left_->Max(), right_->Max()); }
  Node *left_;
  Node *right_;
  int rank_;
  std::optional<int> max_;
};

class Queue {
 public:
  void PushLeft(int value) {
    Node *first = new Leaf(value);
    while (!nodes_.empty() && first->Rank() == nodes_.front()->Rank()) {
      first = new Branch(first, nodes_.front());
      nodes_.pop_front();
    }
    nodes_.insert(nodes_.begin(), first);
  }
  std::optional<int> PopRight() {
    while (!nodes_.empty()) {
      auto last = --nodes_.end();
      if (auto value = (*last)->PopRight(nodes_, last)) {
        return value;
      }
    }
    return std::nullopt;
  }
  std::optional<int> ExtractMax() {
    std::optional<int> max = std::nullopt;
    for (Node *node : nodes_) {
      max = std::max(max, node->Max());
    }
    for (Node *node : nodes_) {
      if (node->Max() == max) {
        node->RemoveMax();
        break;
      }
    }
    return max;
  }

 private:
  std::list<Node *> nodes_;
};

int main() {
  Queue queue;
  int choice;
  while (scanf("%d", &choice) == 1) {
    switch (choice) {
      case 1: {
        int value;
        if (scanf("%d", &value) != 1) {
          return EXIT_FAILURE;
        }
        queue.PushLeft(value);
        break;
      }
      case 2: {
        if (auto value = queue.PopRight()) {
          printf("%d\n", *value);
        } else {
          puts("null");
        }
        break;
      }
      case 3: {
        if (auto value = queue.ExtractMax()) {
          printf("%d\n", *value);
        } else {
          puts("null");
        }
        break;
      }
    }
  }
}
It sounds like they were probing you for knowledge of priority queues implemented with Fibonacci heaps.
Such implementations have the running times described in answer C.
Add and delete run in O(1): only one operation is needed to locate the right position, so we can add at the start and delete from the end in a single step each.
Extract-Min runs in O(log n): as with divide-and-conquer procedures such as binary search, the heap structure lets us avoid an O(n) scan and pay only a logarithmic cost to extract the minimum.
You might start by thinking of a min-heap for the extract-min operation. That gives O(log n) extract-min, but add and delete would also be O(log n). The question to ask is: can either of those two operations be made constant time? Is there a data structure that can do so?
The closest answer is the Fibonacci heap, used for implementing priority queues (quite popular for implementing Dijkstra's algorithm). It has amortized complexities of O(1) for insert, O(log n) for delete (though since the operation here is always a delete from the tail, we may be able to achieve O(1) by maintaining a pointer to the last node and using the O(1) decrease-key operation), and O(log n) for delete-min.
Internally, a Fibonacci heap is a collection of trees, all satisfying the standard min-heap property (a parent's value is never greater than its children's), with the roots of all the trees linked in a circular doubly linked list. This section explains the implementation of each operation and its run-time complexity in further detail.
Have a look at this great answer, which explains the intuition behind Fibonacci heaps.
Edit: Regarding your question about choosing among B, C, and D, let's discuss them one by one.
(B) would be your first-glance answer, since the problem immediately suggests a min-heap. That also eliminates (D), which claims extract-min takes O(n) time, because we clearly know we can do better. That leaves (C), which improves the add/delete operations to O(1). If you can imagine combining multiple min-heaps (roots and children) through a circular doubly linked list, while keeping a pointer to the root holding the minimum key, i.e. a Fibonacci heap, then you know that option (C) is achievable, and since it is better than option (B), you have your answer.
Let's explore all the answers.
A is impossible because you can't find the min in O(1): obviously you have to find it before removing it, and that takes more than a constant number of operations.
B is also not optimal, because we know adding can be done in O(1); deleting is O(1) as well, since we can access the first and last elements directly.
By the same argument, D is not optimal either.
So we're left with C.

What would be the complexity of this Sorting Algorithm? What are the demerits of using the same?

The sorting algorithm can be described as follows:
1. Create a binary search tree from the array data.
(For multiple occurrences, increment the occurrence counter of the matching node.)
2. Traverse the BST in inorder fashion.
(The inorder traversal returns the elements of the array in sorted order.)
3. At each node of the inorder traversal, overwrite the array element at the current index (starting at index 0) with the current node's value.
Here's a Java implementation for the same:
Structure of Node Class
class Node {
    Node left;
    int data;
    int occurence;
    Node right;
}
inorder function
(the return type is int only so that each call can report the updated index back to its caller; it serves no other purpose)
public int inorder(Node root, int[] arr, int index) {
    if (root == null) return index;
    index = inorder(root.left, arr, index);
    for (int i = 0; i < root.getOccurence(); i++)
        arr[index++] = root.getData();
    index = inorder(root.right, arr, index);
    return index;
}
main()
public static void main(String[] args) {
    int[] arr = new int[]{100, 100, 1, 1, 1, 7, 98, 47, 13, 56};
    BinarySearchTree bst = new BinarySearchTree(new Node(arr[0]));
    for (int i = 1; i < arr.length; i++)
        bst.insert(bst.getRoot(), arr[i]);
    int dummy = bst.inorder(bst.getRoot(), arr, 0);
    System.out.println(Arrays.toString(arr));
}
The space complexity is terrible, I know, but it should not be a big issue unless the sort is used on an extremely huge dataset. However, as I see it, isn't the time complexity O(n)? (Insertion into and retrieval from a BST is O(log n), and each element is touched once, making it O(n).) Correct me if I am wrong, as I haven't studied Big-O thoroughly yet.
Assuming that the amortized (average) complexity of an insertion is O(log n), N inserts (construction of the tree) give O(log(1) + log(2) + ... + log(N)) = O(log(N!)) = O(N log N) (Stirling's approximation). To read back the sorted array, perform an in-order depth-first traversal, which visits each node once and is hence O(N). Combining the two, you get O(N log N).
However this requires that the tree is always balanced! This will not be the case in general for the most basic binary tree, as insertions do not check the relative depths of each child tree. There are many variants which are self-balancing - the two most famous being Red-Black trees and AVL trees. However the implementation of balancing is quite complicated and often leads to a higher constant factor in real-life performance.
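As a small illustration of the balanced-tree version (in C++ for brevity, and assuming nothing beyond the standard library): std::multiset is typically implemented as a red-black tree, so inserting all n elements and reading them back in order performs the same sort in guaranteed O(n log n), with the multiset's handling of duplicates playing the role of the occurence counter above.
#include <iostream>
#include <set>
#include <vector>

// Tree sort on top of a self-balancing tree: insertions stay O(log n)
// even for already-sorted input, and reading the elements back is an
// inorder traversal.
void treeSort(std::vector<int> &a) {
    std::multiset<int> tree(a.begin(), a.end());   // build the tree: O(n log n)
    int index = 0;
    for (int value : tree)                         // inorder traversal: O(n)
        a[index++] = value;
}

int main() {
    std::vector<int> a = {100, 100, 1, 1, 1, 7, 98, 47, 13, 56};
    treeSort(a);
    for (int x : a) std::cout << x << ' ';          // 1 1 1 7 13 47 56 98 100 100
    std::cout << '\n';
}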
"The goal was to implement an O(n) algorithm to sort an array of n elements, with each element in the range [1, n^2]."
In that case radix sort (counting variation) would be O(n), taking a fixed number of passes (log_b(n^2)), where b is the "base" used for the digits and is chosen as a function of n: with b == n it takes two passes, with b == sqrt(n) it takes four passes, and if n is small enough, b == n^2 lets it finish in a single pass, in which case counting sort could be used. b can be rounded up to the next power of 2 so that division and modulo become a binary shift and a binary AND. Radix sort needs O(n) extra space, but so do the links for a binary tree.
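A minimal sketch of that counting-variation radix sort, assuming every key lies in [1, n^2] and choosing base b = n, so each key (shifted to value - 1 in [0, n^2 - 1]) has exactly two base-n digits and two stable counting passes suffice: O(n) time, O(n) extra space.
#include <iostream>
#include <vector>

void radixSortBaseN(std::vector<long long> &a) {
    if (a.empty()) return;
    const long long n = (long long)a.size();
    std::vector<long long> buffer(a.size());
    for (int pass = 0; pass < 2; ++pass) {
        auto digit = [&](long long v) {            // low digit on pass 0, high digit on pass 1
            return pass == 0 ? (v - 1) % n : (v - 1) / n;
        };
        std::vector<long long> count(n + 1, 0);
        for (long long v : a) count[digit(v) + 1]++;                 // histogram
        for (long long d = 0; d < n; ++d) count[d + 1] += count[d];  // prefix sums -> start offsets
        for (long long v : a) buffer[count[digit(v)]++] = v;         // stable scatter
        a.swap(buffer);
    }
}

int main() {
    std::vector<long long> a = {9, 1, 16, 4, 64, 2, 12, 3};  // n = 8, keys <= n^2 = 64
    radixSortBaseN(a);
    for (long long v : a) std::cout << v << ' ';             // 1 2 3 4 9 12 16 64
    std::cout << '\n';
}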

Number of Binary Search Trees of a given Height

How can I find the number of BSTs up to a given height h (discarding all BSTs with height greater than h) for a given set of unique numbers?
I have worked out the code using a recursive approach
static int bst(int h, int n) {
    if (h == 0 && n == 0) return 1;
    else if (h == 0 && n == 1) return 1;
    else if (h == 0 && n > 1) return 0;
    else if (h > 0 && n == 0) return 1;
    else {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum += bst(h - 1, i - 1) * bst(h - 1, n - i);
        return sum;
    }
}
You can speed it up by adding memoization, as @DavidEisenstat suggested in the comments.
You create a memoization table to store the values of already computed results.
In the example, -1 indicates the value has not been computed yet.
Example in C++:
#include <cstring>

const int MAX_H = 128, MAX_N = 128;      // assumed table bounds; adjust as needed

long long memo[MAX_H][MAX_N];

long long bst(int h, int n) {
    if (h == 0) return n <= 1 ? 1 : 0;   // base cases from the recursion above
    if (n == 0) return 1;
    if (memo[h][n] == -1) {              // -1 means "not computed yet"
        long long sum = 0;
        for (int i = 1; i <= n; i++)     // pick the i-th smallest value as the root
            sum += bst(h - 1, i - 1) * bst(h - 1, n - i);
        memo[h][n] = sum;
    }
    return memo[h][n];
}

int main() {
    memset(memo, -1, sizeof memo);
    bst(102, 89);                        // note: exact counts overflow long long for large n
}
This fills at most h*n table entries, each computed only once with an O(n) loop, so the whole computation takes O(h*n^2) time. Another advantage of this technique is that once the table is filled, bst answers in O(1) for any values within the table's range.
Be careful not to call the function with values at or above MAX_H and MAX_N. Also keep in mind that memoization is a memory-time tradeoff: your program will run faster, but it will use more memory too.
More info: https://en.wikipedia.org/wiki/Memoization

Minimal Number of Extract + Inserts required to sort a list

Context
This problem arises from trying to minimize the number of expensive function calls.
Problem Definition
Please note that extract_and_insert != swap. In particular, we take the element from position "from", insert it at position "to", and SHIFT all intermediate elements.
int n;
int A[n]; // all elements are integer and distinct

function extract_and_insert(from, to) {
    int old_value = A[from]
    if (from < to) {
        for (int i = from; i < to; ++i)
            A[i] = A[i+1];
        A[to] = old_value;
    } else {
        for (int i = from; i > to; --i)
            A[i] = A[i-1];
        A[to] = old_value;
    }
}
Question
We know there are O(n log n) algorithms for sorting a list of numbers.
Now: is there an O(n log n) function, which returns the minimum number of calls to extract_and_insert required to sort the list?
The answer is Yes.
This problem is essentially equivalent to finding the longest increasing subsequence (LIS) in an array, and you can use algorithms to solve that.
Why is this question equivalent to longest increasing subsequence?
Because each extract_and_insert operation can, at best, fix the relative position of exactly one element: the elements of a longest increasing subsequence never have to move, while every element outside it needs at least one move, and one move per such element is enough. In other words, each operation can increase the length of the longest increasing subsequence of the array by at most 1. So, the minimum number of required calls is:
length_of_array - length_of_LIS
and therefore by finding the length of LIS, we will be able to find the minimum number of operations required.
Do read up the linked Wikipedia page to see how to implement the algorithm.
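A short sketch of the whole answer, assuming distinct elements as stated in the problem: compute the LIS length with the standard O(n log n) patience/binary-search method and subtract it from n.
#include <algorithm>
#include <iostream>
#include <vector>

// Minimum number of extract_and_insert calls = n - LIS(A).
int minExtractAndInserts(const std::vector<int> &a) {
    std::vector<int> tails;  // tails[i] = smallest possible tail of an increasing subsequence of length i+1
    for (int x : a) {
        auto it = std::lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) tails.push_back(x);  // x extends the longest subsequence found so far
        else *it = x;                               // x gives a smaller tail for that length
    }
    return (int)a.size() - (int)tails.size();       // everything outside the LIS must be moved once
}

int main() {
    std::vector<int> a = {3, 1, 2, 5, 4};
    std::cout << minExtractAndInserts(a) << '\n';   // prints 2 (LIS {1, 2, 4} has length 3)
}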

Find the k largest elements in order

What is the fastest way to find the k largest elements in an array in order (i.e. starting from the largest element to the kth largest element)?
One option would be the following:
Using a linear-time selection algorithm like median-of-medians or introselect, find the kth largest element and rearrange the array so that every element from its position onward is at least as large as it; the last k positions then hold the k largest elements.
Sort the elements from that position onward using a fast sorting algorithm like heapsort or quicksort.
Step (1) takes time O(n), and step (2) takes time O(k log k). Overall, the algorithm runs in time O(n + k log k), which is very, very fast.
Hope this helps!
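A compact sketch of the same two-step idea with the standard library (partitioning the k largest to the front for convenience): std::nth_element, an introselect with expected linear time, moves the k largest elements to the front in arbitrary order, and then only those k elements are sorted, for O(n + k log k) overall.
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

std::vector<int> kLargestInOrder(std::vector<int> a, int k) {
    // Step 1: partition so the first k positions hold the k largest elements (unordered).
    std::nth_element(a.begin(), a.begin() + k, a.end(), std::greater<int>());
    // Step 2: sort just those k elements in descending order.
    std::sort(a.begin(), a.begin() + k, std::greater<int>());
    return std::vector<int>(a.begin(), a.begin() + k);
}

int main() {
    std::vector<int> a = {4, 3, 7, 12, 23, 1, 8, 5, 9, 2};
    for (int x : kLargestInOrder(a, 3)) std::cout << x << ' ';  // 23 12 9
    std::cout << '\n';
}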
C++ also provides the partial_sort algorithm, which solves the problem of selecting the smallest k elements (sorted), with a time complexity of O(n log k). No algorithm is provided for selecting the greatest k elements since this should be done by inverting the ordering predicate.
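For example, inverting the predicate with std::greater makes partial_sort leave the k largest elements, already in descending order, at the front of the range, in O(n log k) time.
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {4, 3, 7, 12, 23, 1, 8, 5, 9, 2};
    const int k = 3;
    // partial_sort with the inverted ordering: the first k positions end up
    // holding the k largest elements in descending order.
    std::partial_sort(a.begin(), a.begin() + k, a.end(), std::greater<int>());
    for (int i = 0; i < k; ++i) std::cout << a[i] << ' ';  // 23 12 9
    std::cout << '\n';
}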
For Perl, the module Sort::Key::Top, available from CPAN, provides a set of functions to select the top n elements from a list using several orderings and custom key extraction procedures. Furthermore, the Statistics::CaseResampling module provides a function to calculate quantiles using quickselect.
Python's standard library (since 2.4) includes heapq.nsmallest() and nlargest(), returning sorted lists, the former in O(n + k log n) time, the latter in O(n log k) time.
Radix sort solution:
Sort the array in descending order, using radix sort;
Print first K elements.
Time complexity: O(N*L), where L is the number of digits of the largest element; we can usually assume L = O(1).
Space used: O(N) for radix sort.
However, I think radix sort has costly overhead, making its linear time complexity less attractive.
1) Build a max-heap in O(n).
2) Use extract-max k times to get the k maximum elements from the max-heap: O(k log n).
Time complexity: O(n + k log n)
A C++ implementation using STL is given below:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int arr[] = {4, 3, 7, 12, 23, 1, 8, 5, 9, 2};
    // Let's extract the 3 maximum elements
    int k = 3;
    // First copy the array into a vector to use the STL heap functions
    vector<int> vec;
    for (int i = 0; i < 10; i++) {
        vec.push_back(arr[i]);
    }
    // Build the heap in O(n)
    make_heap(vec.begin(), vec.end());
    // Extract the max k times
    for (int i = 0; i < k; i++) {
        cout << vec.front() << " ";
        pop_heap(vec.begin(), vec.end());
        vec.pop_back();
    }
    return 0;
}
@templatetypedef's solution is probably the fastest one, assuming you can modify or copy the input.
Alternatively, you can use a heap or a BST (std::set in C++) to store the k largest elements seen at any given moment, and then read the array's elements one by one. While this is O(n lg k), it doesn't modify the input and only uses O(k) additional memory. It also works on streams (when you don't know all the data from the beginning).
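A sketch of that streaming approach with a bounded heap: a std::priority_queue configured as a min-heap holds the k largest values seen so far, so each element costs O(log k), the input is never modified, and only O(k) extra memory is used.
#include <algorithm>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

std::vector<int> kLargestStreaming(const std::vector<int> &a, int k) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;  // min-heap of the current top k
    for (int x : a) {
        if ((int)heap.size() < k) heap.push(x);
        else if (x > heap.top()) { heap.pop(); heap.push(x); }  // replace the smallest of the top k
    }
    std::vector<int> result;
    while (!heap.empty()) { result.push_back(heap.top()); heap.pop(); }
    std::reverse(result.begin(), result.end());                 // largest first
    return result;
}

int main() {
    std::vector<int> a = {4, 3, 7, 12, 23, 1, 8, 5, 9, 2};
    for (int x : kLargestStreaming(a, 3)) std::cout << x << ' ';  // 23 12 9
    std::cout << '\n';
}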
Here's a solution with O(N + k lg k) complexity.
int[] kLargest_Dremio(int[] A, int k) {
    int[] result = new int[k];
    shouldGetIndex = true;
    int q = AreIndicesValid(0, A.Length - 1) ? RandomizedSelet(0, A.Length - 1, A.Length - k + 1)
                                             : -1;
    Array.Copy(A, q, result, 0, k);
    Array.Sort(result, (a, b) => b.CompareTo(a));   // sort the copied k elements in descending order
    return result;
}
AreIndicesValid and RandomizedSelet are defined in this github source file.
There was a question about performance and restricted resources.
Make a value class for the top 3 values, and use such an accumulator for the reduction in a parallel stream. Limit the parallelism according to the context (memory, power).
class BronzeSilverGold {
    int[] values = new int[] {Integer.MIN_VALUE, Integer.MIN_VALUE, Integer.MIN_VALUE};

    // For reduction
    void add(int x) {
        ...
    }

    // For combining the results of two threads.
    void merge(BronzeSilverGold other) {
        ...
    }
}
The parallelism must be restricted in your setting, hence specify an N_THREADS in:
try {
    ForkJoinPool threadPool = new ForkJoinPool(N_THREADS);
    threadPool.submit(() -> {
        BronzeSilverGold result = IntStream.of(...).parallel().collect(
                BronzeSilverGold::new,
                BronzeSilverGold::add,
                (bsg1, bsg2) -> bsg1.merge(bsg2));
        ...
    });
} catch (InterruptedException | ExecutionException e) {
    prrtl();
}
