How does the Root Method Work in Quick-Union? [duplicate]

I've been studying the quick-union algorithm. The code below was given as the example implementation.
Can someone explain to me what happens inside the root method, please?
public class QuickUnionUF {
    private int[] id;

    public QuickUnionUF(int N) {
        id = new int[N];
        for (int i = 0; i < N; i++) {
            id[i] = i;
        }
    }

    private int root(int i) {
        while (i != id[i]) {
            i = id[i];
        }
        return i;
    }

    public boolean connected(int p, int q) {
        return root(p) == root(q);
    }

    public void union(int p, int q) {
        int i = root(p);
        int j = root(q);
        id[i] = j;
    }
}

The core principle of union find is that each element belongs to a disjoint set of elements. This means that, if you draw a forest (set of trees), the forest will contain all the elements, and no element will be in two different trees.
When building these trees, you can imagine that any node either has a parent or is the root. In this implementation of union find (and in most union find implementations), the parent of each element is stored in an array at that element's index. Thus id[i] is the parent of i.
You might ask: what if i has no parent (aka is a root)? In this case, the convention is to set id[i] = i (i is its own parent). Thus, id[i] == i simply checks whether we have reached the root of the tree.
Putting this all together, the root function traverses, from the start node, all the way up the tree (parent by parent) until it reaches the root. Then it returns the root.
As an aside:
In order for this algorithm to get to the root more quickly, general implementations will 'flatten' the tree: the fewer parents you need to get through to get to the root, the faster the root function will return. Thus, in many implementations, you will see an additional step where you set the parent of an element to its original grandparent (id[i] = id[id[i]]).
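For example, a common variant of root with that grandparent shortcut added (a sketch, not part of the question's code):
private int root(int i) {
    while (i != id[i]) {
        id[i] = id[id[i]];  // path compression: point i at its grandparent
        i = id[i];
    }
    return i;
}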

The main point of the algorithm here is: always keep the root of a tree pointing to itself.
Initialization: init id[i] = i, so each vertex is its own root.
Merging roots:
If we merge roots 5 and 6, and we want to merge root 6 into root 5, then id[6] = 5 and id[5] = 5 --> 5 is the root.
If we continue by merging 4 and 6: id[4] = 4 -> base root. id[6] = 5 -> not a base root, so we continue following: id[5] = 5 -> base root. So we assign id[4] = 5 (the base root of 6).
In all cases we keep the convention: if x is a base root, then id[x] == x. That is the main point of the algorithm.
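For illustration, here is a small sketch of that trace using the question's class (N = 7 is my own choice; the comments show what id[] would hold):
QuickUnionUF uf = new QuickUnionUF(7);
// initially every vertex is its own root:           id = [0, 1, 2, 3, 4, 5, 6]

uf.union(6, 5);   // root(6) = 6, root(5) = 5  ->  id[6] = 5:  [0, 1, 2, 3, 4, 5, 5]
uf.union(4, 6);   // root(4) = 4, root(6) = 5  ->  id[4] = 5:  [0, 1, 2, 3, 5, 5, 5]

// in every state a base root x still satisfies id[x] == x (here 5, plus the untouched 0..3)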

From the PDF file provided in the course (Union Find):
The root of i is id[id[id[...id[i]...]]].
According to the given example:
public int root(int p) {
    while (p != id[p]) {
        p = id[p];
    }
    return p;
}
Let's consider a situation where id[] holds some example values and we call root(3). (The example array and the dry run of the loop inside the root method were shown as images in the original answer and are not reproduced here.)
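As an illustrative stand-in (array values of my own choosing, not the course's), the walk looks like this:
int[] id = {0, 1, 1, 2, 3, 5, 6, 7, 8, 9};   // here 3 -> 2 -> 1, and 1 is a root
// dry run of root(3):
//   p = 3, id[3] = 2  -> p != id[p], so p = 2
//   p = 2, id[2] = 1  -> p != id[p], so p = 1
//   p = 1, id[1] = 1  -> loop stops
// root(3) returns 1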

To understand the role of the root method, one needs to understand how this data structure is helping to organise values into disjoint sets.
It does so by building trees. Whenever two independent values 𝑝 and 𝑞 are said to belong to the same set, 𝑝 is made a child of 𝑞 (which then is the parent of 𝑝). If however 𝑝 already has a parent, then we first move to that parent of 𝑝, and the parent of that parent, ...until we find an ancestor which has no parent. This is root(p); let's call it 𝑝'. We do the same with 𝑞 if it has a parent. Let's call that ancestor 𝑞'. Finally, 𝑝' is made a child of 𝑞'. By doing that, we implicitly make the original 𝑝 and 𝑞 members of the same tree.
How can we know that 𝑝 and 𝑞 are members of the same tree? By looking up their roots. If they happen to have the same root, then they are necessarily in the same tree, i.e. they belong to the same set.
Example
Let's look at an example run:
QuickUnionUF array = new QuickUnionUF(10);
This will create the following array:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
This array represents edges. The from-side of an edge is the index in the array (0..9), and the to-side of the same edge is the value found at that index (also 0..9). As you can see the array is initialised in a way that all edges are self-references (loops). You could say that every value is the root of its own tree (which has no other values).
Calling root on any of the values 0..9, will return the same number, as for all i we have id[i] == i. So at this stage root does not give us much.
Now, let's indicate that two values actually belong to the same set:
array.union(2, 9);
This will result in the assignment id[2] = 9 and so we get this array:
[0, 1, 9, 3, 4, 5, 6, 7, 8, 9]
Graphically, this established link can be represented as:
  9
 /
2
If now we call root(2) we will get 9 as return value. This tells us that 2 is in the same set (i.e. tree) as 9, and 9 happens to get the role of root of that tree (that was an arbitrary choice; it could also have been 2).
Let's also link 3 and 4 together. This is a very similar case as above:
array.union(3, 4);
This assigns id[3] = 4 and results in this array and tree representation:
[0, 1, 9, 4, 4, 5, 6, 7, 8, 9]
  9    4
 /    /
2    3
Now let's make it more interesting. Let's indicate that 4 and 9 belong to the same set:
array.union(4, 9);
Still root(4) and root(9) just return those same numbers (4 and 9). Nothing special yet... The assignment is id[4] = 9. This results in this array and graph:
[0, 1, 9, 4, 9, 5, 6, 7, 8, 9]
  9
 / \
2   4
   /
  3
Note how this single assignment has joined two distinct trees into one tree. If now we want to check whether 2 and 3 are in the same tree, we call
if (connected(2, 3)) /* do something */
Although we never explicitly said that 2 and 3 belong to the same set, it is implied by the previous actions. connected will now use calls to root to infer that fact. root(2) will return 9, and root(3) will also return 9. Here we get to see what root is doing: it walks upwards in the graph towards the root node of the tree it is in. The array has all the information needed to make that walk. Given an index, we can read from the array which index is the parent of that number. This may have to be repeated to get to the grandparent, and so on. It can be a short or long walk, depending on how many "edges" there are between the given node and the root of the tree it is in.
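As a concrete trace of that walk on the array above ([0, 1, 9, 4, 9, 5, 6, 7, 8, 9]):
// root(2): id[2] = 9, id[9] = 9            -> returns 9 (one step up)
// root(3): id[3] = 4, id[4] = 9, id[9] = 9 -> returns 9 (two steps up)
// so connected(2, 3) compares 9 == 9 and returns true,
// even though union(2, 3) was never called explicitly
boolean sameSet = array.connected(2, 3);    // true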

/**
* Quick Find Java implementation (eager approach)
*/
package com.weekone.union.quickfind;
import java.util.Random;
/**
* @author Ishwar Singh
*
*/
public class UnionQuickFind {
private int[] itemsArr;
public UnionQuickFind() {
System.out.println("Calling: " + UnionQuickFind.class);
}
public UnionQuickFind(int n) {
itemsArr = new int[n];
}
// p and q are indexes
public void unionOperation(int p, int q) {
// displayArray(itemsArr);
int tempValue = itemsArr[p];
if (!isConnected(p, q)) {
itemsArr[p] = itemsArr[q];
for (int i = 0; i < itemsArr.length; i++) {
if (itemsArr[i] == tempValue) {
itemsArr[i] = itemsArr[q];
}
}
displayArray(p, q);
} else {
displayArray(p, q, "Already Connected");
}
}
public boolean isConnected(int p, int q) {
return (itemsArr[p] == itemsArr[q]);
}
public void connected(int p, int q) {
if (isConnected(p, q)) {
displayArray(p, q, "Already Connected");
} else {
displayArray(p, q, "Not Connected");
}
}
private void displayArray(int p, int q) {
System.out.println();
System.out.print("{" + p + " " + q + "} -> ");
for (int i : itemsArr) {
System.out.print(i + ", ");
}
}
private void displayArray(int p, int q, String message) {
System.out.println();
System.out.print("{" + p + " " + q + "} -> " + message);
}
public void initializeArray() {
Random random = new Random();
for (int i = 0; i < itemsArr.length; i++) {
itemsArr[i] = random.nextInt(9);
}
}
public void initializeArray(int[] receivedArr) {
itemsArr = receivedArr;
}
public void displayArray() {
System.out.println("INDEXES");
System.out.print("{p q} -> ");
for (int i : itemsArr) {
System.out.print(i + ", ");
}
System.out.println();
}
}
Main class:
/**
*
*/
package com.weekone.union.quickfind;
/**
* @author Ishwar Singh
*
*/
public class UQFClient {
/**
* @param args
*/
public static void main(String[] args) {
int[] arr = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
int n = 10;
UnionQuickFind unionQuickFind = new UnionQuickFind(n);
// unionQuickFind.initializeArray();
unionQuickFind.initializeArray(arr);
unionQuickFind.displayArray();
unionQuickFind.unionOperation(4, 3);
unionQuickFind.unionOperation(3, 8);
unionQuickFind.unionOperation(6, 5);
unionQuickFind.unionOperation(9, 4);
unionQuickFind.unionOperation(2, 1);
unionQuickFind.unionOperation(8, 9);
unionQuickFind.connected(5, 0);
unionQuickFind.unionOperation(5, 0);
unionQuickFind.connected(5, 0);
unionQuickFind.unionOperation(7, 2);
unionQuickFind.unionOperation(6, 1);
}
}
Output:
INDEXES
{p q} -> 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
{4 3} -> 0, 1, 2, 3, 3, 5, 6, 7, 8, 9,
{3 8} -> 0, 1, 2, 8, 8, 5, 6, 7, 8, 9,
{6 5} -> 0, 1, 2, 8, 8, 5, 5, 7, 8, 9,
{9 4} -> 0, 1, 2, 8, 8, 5, 5, 7, 8, 8,
{2 1} -> 0, 1, 1, 8, 8, 5, 5, 7, 8, 8,
{8 9} -> Already Connected
{5 0} -> Not Connected
{5 0} -> 0, 1, 1, 8, 8, 0, 0, 7, 8, 8,
{5 0} -> Already Connected
{7 2} -> 0, 1, 1, 8, 8, 0, 0, 1, 8, 8,
{6 1} -> 1, 1, 1, 8, 8, 1, 1, 1, 8, 8,

Related

How to filter a flux for all elements having the highest value

How do I filter a publisher for the elements having the highest value without knowing the highest value beforehand?
Here is a little test to illustrate what I'm trying to achieve:
@Test
fun filterForHighestValuesTest() {
val numbers = Flux.just(1, 5, 7, 2, 8, 3, 8, 4, 3)
// what operators to apply to numbers to make the test pass?
StepVerifier.create(numbers)
.expectNext(8)
.expectNext(8)
.verifyComplete()
}
I've started with the reduce operator:
@Test
fun filterForHighestValuesTestWithReduce() {
val numbers = Flux.just(1, 5, 7, 2, 8, 3, 8, 4, 3)
.reduce { a: Int, b: Int -> if( a > b) a else b }
StepVerifier.create(numbers)
.expectNext(8)
.verifyComplete()
}
and of course that test passes, but it only emits a single Mono, whereas I would like to obtain a Flux containing all the elements having the highest value, e.g. 8 and 8 in this simple example.
First of all, you'll need state for this so you need to be careful to have per-Subscription state. One way of ensuring that while combining operators is to use compose.
Proposed solution
Flux<Integer> allMatchingHighest = numbers.compose(f -> {
AtomicInteger highestSoFarState = new AtomicInteger(Integer.MIN_VALUE);
AtomicInteger windowState = new AtomicInteger(Integer.MIN_VALUE);
return f.filter(v -> {
int highestSoFar = highestSoFarState.get();
if (v > highestSoFar) {
highestSoFarState.set(v);
return true;
}
if (v == highestSoFar) {
return true;
}
return false;
})
.bufferUntil(i -> i != windowState.getAndSet(i), true)
.log()
.takeLast(1)
.flatMapIterable(Function.identity());
});
Note the whole compose lambda can be extracted into a method, making the code use a method reference and be more readable.
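For example (a sketch, not from the answer): assuming the lambda above has been extracted into a hypothetical static helper keepAllHighest, the question's test could be wired up like this in Java:
Flux<Integer> numbers = Flux.just(1, 5, 7, 2, 8, 3, 8, 4, 3);
// MaxOps::keepAllHighest stands for the extracted compose lambda shown above (hypothetical name)
Flux<Integer> allMatchingHighest = numbers.compose(MaxOps::keepAllHighest);

StepVerifier.create(allMatchingHighest)
        .expectNext(8, 8)
        .verifyComplete();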
Explanation
The solution is done in 4 steps, with the two first each having their own AtomicInteger state:
Incrementally find the new "highest" element (so far) and filter out elements that are smaller. This results in a Flux<Integer> of (monotonically) increasing numbers, like 1 5 7 8 8.
Buffer by chunks of equal numbers. We use bufferUntil instead of window* or groupBy because the most degenerate case, where the numbers are all different and already sorted, would fail with those.
skip all buffers but one (takeLast(1))
"replay" that last buffer, which represents the number of occurrences of our highest value (flatMapIterable)
This correctly passes your StepVerifier test by emitting 8 8. Note the intermediate buffers emitted are:
onNext([1])
onNext([5])
onNext([7])
onNext([8, 8])
More advanced testing, justifying bufferUntil
A far more complex source that would fail with groupBy but not this solution:
Random rng = new Random();
//generate 258 numbers, each randomly repeated 1 to 10 times
//also, shuffle the whole thing
Flux<Integer> numbers = Flux
.range(1, 258)
.flatMap(i -> Mono.just(i).repeat(rng.nextInt(10)))
.collectList()
.map(l -> {
Collections.shuffle(l);
System.out.println(l);
return l;
})
.flatMapIterable(Function.identity())
.hide();
This is one example of what sequence of buffers it could filter into (keep in mind only the last one gets replayed):
onNext([192])
onNext([245])
onNext([250])
onNext([256, 256])
onNext([257])
onNext([258, 258, 258, 258, 258, 258, 258, 258, 258])
onComplete()
Note: If you remove the map that shuffles, then you obtain the "degenerate case" where even windowUntil wouldn't work (the takeLast would result in too many open yet unconsumed windows).
This was a fun one to come up with!
One way to do it is to map the flux of ints to a flux of lists with one int in each, reduce the result, and end with flatMapMany, i.e.
final Flux<Integer> numbers = Flux.just(1, 5, 7, 2, 8, 3, 8, 4, 3);
final Flux<Integer> maxValues =
numbers
.map(
n -> {
List<Integer> list = new ArrayList<>();
list.add(n);
return list;
})
.reduce(
(l1, l2) -> {
if (l1.get(0).compareTo(l2.get(0)) > 0) {
return l1;
} else if (l1.get(0).equals(l2.get(0))) {
l1.addAll(l2);
return l1;
} else {
return l2;
}
})
.flatMapMany(Flux::fromIterable);
One simple solution that worked for me -
Flux<Integer> flux =
Flux.just(1, 5, 7, 2, 8, 3, 8, 4, 3).collectSortedList(Comparator.reverseOrder()).flatMapMany(Flux::fromIterable);
StepVerifier.create(flux).expectNext(8).expectNext(8).expectNext(7).expectNext(5);
One possible solution is to group the Flux prior to the reduction and flatmap the GroupedFlux afterwards like this:
@Test
fun filterForHighestValuesTest() {
val numbers = Flux.just(1, 5, 7, 2, 8, 3, 8, 4, 3)
.groupBy { it }
.reduce { t: GroupedFlux<Int, Int>, u: GroupedFlux<Int, Int> ->
if (t.key()!! > u.key()!!) t else u
}
.flatMapMany {
it
}
StepVerifier.create(numbers)
.expectNext(8)
.expectNext(8)
.verifyComplete()
}

Making sure that two lists have same elements

I have two lists, A and B. I want to check A with B and make sure that A contains only the elements that B contains.
Example: In A={1, 2, 3, 4}, B ={3, 4, 5, 6}. At the end, I want A to be {3, 4, 5, 6}.
Conditions: I don't want to replace A completely with B and I don't want to change B.
public void setA(List B)
{
foreach(x in B)
{
if(!A.Contains(x))
A.Add(x)
}
foreach(x in A)
{
if(!B.Contains(x))
A.Delete(x)
}
}
Is there any better way to do this? (May be in a single for loop or even better)
Try the following:
var listATest = new List<int>() { 1, 2, 3, 4, 34, 3, 2 };
var listBTest = new List<int>() { 3, 4, 5, 6 };
// Make sure listATest is no longer than listBTest first
while (listATest.Count > listBTest.Count)
{
// Remove from end; as I understand it, remove from beginning is O(n)
// and remove from end is O(1) in Microsoft's implementation
// See http://stackoverflow.com/questions/1433307/speed-of-c-sharp-lists
listATest.RemoveAt(listATest.Count - 1);
}
for (int i = 0; i < listBTest.Count; i++)
{
// Handle the case where the listATest is shorter than listBTest
if (i >= listATest.Count)
listATest.Add(listBTest[i]);
// Make sure that the items are different before doing the copy
else if (listATest[i] != listBTest[i])
listATest[i] = listBTest[i];
}

How many white keys are between two keys?

Let's take a look at these types:
public enum KeyTone : int
{
C = 0,
Cs = 1,
D = 2,
Ds = 3,
E = 4,
F = 5,
Fs = 6,
G = 7,
Gs = 8,
A = 9,
As = 10,
H = 11
}
public class PianoKey
{
public KeyTone Tone { get; set; }
public int Octave { get; set; }
}
PianoKey represents a - well, you guessed - key on the piano. I need to check how many white keys (i.e. those without an "s" suffix) are between two specified white keys. The problem is that the keyboard is not regular: there are no black keys between E and F, or between H and C of the next octave.
There is an obvious brute-force solution - to jump to next white key until the requested second key is reached. But maybe there is some simpler way to calculate that?
First, calculate the base tone for a given key (I assume all keys to be represented by their integers):
if tone <= KeyTone.E
    baseTone = tone / 2
else
    baseTone = (tone + 1) / 2
Then, add the octave:
baseTone += Octave * 7
Then, you can find the difference in white keys by simple subtraction:
diffInWhiteKeys = baseTone(key1) - baseTone(key2)
E.g. the difference between E2 and G3 is:
baseTone(E2) = 4 / 2 + 2 * 7 = 16
baseTone(G3) = (7 + 1) / 2 + 3 * 7 = 25
diff = 25 - 16 = 9
, which is exactly the number of white keys you have to advance from E2 to reach G3. If you're interested in the number of white keys strictly between the two keys, just subtract one.
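As a hedged sketch (in Java rather than the question's C#, and with names of my own), the pseudocode above could look like this:
class WhiteKeyDistance {
    // tone is the KeyTone ordinal from the question (C = 0 ... E = 4, F = 5 ... H = 11)
    static int baseTone(int tone, int octave) {
        int base = (tone <= 4) ? tone / 2        // C..E
                               : (tone + 1) / 2; // F..H
        return base + octave * 7;                // 7 white keys per octave
    }

    public static void main(String[] args) {
        int e2 = baseTone(4, 2);     // 16
        int g3 = baseTone(7, 3);     // 25
        System.out.println(g3 - e2); // 9 white-key steps from E2 up to G3
    }
}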
Create two static arrays that map
each black key to the next white key to the left or right,
and each white key to itself,
numbering white keys from 0 to 6.
static int[] whiteLeft = new int[] {0, 0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 6};
static int[] whiteRight = new int[] {0, 1, 1, 2, 2, 3, 4, 4, 5, 5, 6, 6};
Then the number of white keys between two keys can be computed as simply as
public static int WhiteKeysBetween(PianoKey left, PianoKey right)
{
int wl = whiteLeft[(int)left.Tone];
int wr = whiteRight[(int)right.Tone];
return (right.Octave - left.Octave) * 7 + (wr - wl - 1);
}
Edit: This also works for white keys between any two keys (white or black). I didn't realize that the problem statement only asks for white keys between white keys.

Replacing part of std::vector by smaller std::vector

I wonder what the correct way would be to replace (overwrite) part of a given std::vector "input" with another, smaller std::vector.
I do need to keep the rest of the original vector unchanged.
Also, I do not care what was previously in the overwritten part of the original vector, and
I don't need to keep the smaller vectors afterwards.
Say I have this:
std::vector<int> input = { 0, 0, 1, 1, 2, 22, 3, 33, 99 };
std::vector<int> a = { 1, 2, 3 };
std::vector<int> b = { 4, 5, 6, 7, 8 };
And I want to achieve that:
input = { 1, 2, 3, 4, 5, 6, 7, 8, 99}
What is the right way to do it? I thought of something like
input.replace(input.begin(), input.begin()+a.size(), a);
// intermediate input would look like that: input = { 1, 2, 3, 1, 2, 22, 3, 33, 99 };
input.replace(input.begin()+a.size(), input.begin()+a.size()+b.size(), b);
There should be a standard way to do it, shouldn't it?
My thoughts on this so far are the following:
I can not use std::vector::assign for it destroys all elements of input
std::vector::push_back would not replace but enlarge the input --> not what I want
std::vector::insert also creates new elements and enlarges the input vector, but I know for sure that a.size() + b.size() <= input.size()
std::vector::swap would not work, since some content of input needs to remain there (in the example, the last element); also it would not work for adding b that way
std::vector::emplace also increases input.size() -> seems wrong as well
Also, I would prefer a solution that does not waste performance on unnecessary clears or on writing values back into the vectors a or b. My vectors will be very large in practice, and this is about performance in the end.
Any competent help would be appreciated very much.
You seem to be after std::copy(). This is how you would use it in your example (live demo on Coliru):
#include <algorithm> // Necessary for `std::copy`...
// ...
std::vector<int> input = { 0, 0, 1, 1, 2, 22, 3, 33, 99 };
std::vector<int> a = { 1, 2, 3 };
std::vector<int> b = { 4, 5, 6, 7, 8 };
std::copy(std::begin(a), std::end(a), std::begin(input));
std::copy(std::begin(b), std::end(b), std::begin(input) + a.size());
As Zyx2000 notes in the comments, in this case you can also use the iterator returned by the first call to std::copy() as the insertion point for the next copy:
auto last = std::copy(std::begin(a), std::end(a), std::begin(input));
std::copy(std::begin(b), std::end(b), last);
This way, random-access iterators are no longer required, as they were when we had the expression std::begin(input) + a.size().
The first two arguments to std::copy() denote the source range of elements you want to copy. The third argument is an iterator to the first element you want to overwrite in the destination container.
When using std::copy(), make sure that the destination container is large enough to accommodate the number of elements you intend to copy.
Also, the source and the destination ranges must not overlap.
Try this:
#include <iostream>
#include <vector>
#include <algorithm>
int main() {
std::vector<int> input = { 0, 0, 1, 1, 2, 22, 3, 33, 99 };
std::vector<int> a = { 1, 2, 3 };
std::vector<int> b = { 4, 5, 6, 7, 8 };
std::set_union( a.begin(), a.end(), b.begin(), b.end(), input.begin() );
for ( std::vector<int>::const_iterator iter = input.begin();
iter != input.end();
++iter )
{
std::cout << *iter << " ";
}
return 0;
}
It outputs:
1 2 3 4 5 6 7 8 99

Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations

I came across this question:
Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations.
I initially thought of using a min-heap data structure which has O(1) complexity for a get_min(). But push_rear() and pop_front() would be O(log(n)).
Does anyone know what would be the best way to implement such a queue which has O(1) push(), pop() and min()?
I googled about this, and wanted to point out this Algorithm Geeks thread. But it seems that none of the solutions follow constant time rule for all 3 methods: push(), pop() and min().
Thanks for all the suggestions.
You can implement a stack with O(1) pop(), push() and get_min(): just store the current minimum together with each element. So, for example, the stack [4,2,5,1] (1 on top) becomes [(4,4), (2,2), (5,2), (1,1)].
Then you can use two stacks to implement the queue. Push to one stack, pop from another one; if the second stack is empty during the pop, move all elements from the first stack to the second one.
E.g. for a pop request: after moving all the elements from the first stack [(4,4), (2,2), (5,2), (1,1)], the second stack will be [(1,1), (5,1), (2,1), (4,1)]; now return the top element of the second stack.
To find the minimum element of the queue, look at the smallest two elements of the individual min-stacks, then take the minimum of those two values. (Of course, there's some extra logic here in case one of the stacks is empty, but that's not too hard to work around.)
It will have O(1) get_min() and push() and amortized O(1) pop().
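A minimal Java sketch of this two-stack idea (class and method names are my own; error handling for an empty queue is omitted):
import java.util.ArrayDeque;
import java.util.Deque;

class MinQueue {
    // each stack entry is [value, minimum of that stack up to this entry]
    private final Deque<int[]> in = new ArrayDeque<>();   // push_rear() pushes here
    private final Deque<int[]> out = new ArrayDeque<>();  // pop_front() pops from here

    public void pushRear(int x) {
        int min = in.isEmpty() ? x : Math.min(x, in.peek()[1]);
        in.push(new int[]{x, min});
    }

    public int popFront() {
        if (out.isEmpty()) {                  // move everything over; each element moves once,
            while (!in.isEmpty()) {           // so this is amortized O(1)
                int v = in.pop()[0];
                int min = out.isEmpty() ? v : Math.min(v, out.peek()[1]);
                out.push(new int[]{v, min});
            }
        }
        return out.pop()[0];
    }

    public int getMin() {                     // the smaller of the two stack minima
        if (in.isEmpty())  return out.peek()[1];
        if (out.isEmpty()) return in.peek()[1];
        return Math.min(in.peek()[1], out.peek()[1]);
    }
}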
Okay - I think I have an answer that gives you all of these operations in amortized O(1), meaning that any one operation could take up to O(n), but any sequence of n operations takes O(1) time per operation.
The idea is to store your data as a Cartesian tree. This is a binary tree obeying the min-heap property (each node is no bigger than its children) and is ordered in a way such that an inorder traversal of the nodes gives you back the nodes in the same order in which they were added. For example, here's a Cartesian tree for the sequence 2 1 4 3 5:
  1
 / \
2   3
   / \
  4   5
It is possible to insert an element into a Cartesian tree in O(1) amortized time using the following procedure. Look at the right spine of the tree (the path from the root to the rightmost leaf formed by always walking to the right). Starting at the rightmost node, scan upward along this path until you find the first node smaller than the node you're inserting.
Change that node so that its right child is this new node, then make that node's former right child the left child of the node you just added. For example, suppose that we want to insert another copy of 2 into the above tree. We walk up the right spine past the 5 and the 3, but stop below the 1 because 1 < 2. We then change the tree to look like this:
  1
 / \
2   2
   /
  3
 / \
4   5
Notice that an inorder traversal gives 2 1 4 3 5 2, which is the sequence in which we added the values.
This runs in amortized O(1) because we can create a potential function equal to the number of nodes in the right spine of the tree. The real time required to insert a node is 1 plus the number of nodes in the spine we consider (call this k). Once we find the place to insert the node, the size of the spine shrinks by length k - 1, since each of the k nodes we visited are no longer on the right spine, and the new node is in its place. This gives an amortized cost of 1 + k + (1 - k) = 2 = O(1), for the amortized O(1) insert. As another way of thinking about this, once a node has been moved off the right spine, it's never part of the right spine again, and so we will never have to move it again. Since each of the n nodes can be moved at most once, this means that n insertions can do at most n moves, so the total runtime is at most O(n) for an amortized O(1) per element.
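Here is a minimal sketch of just that insertion step, keeping the right spine on an explicit stack (the names are mine; dequeue and find-min are omitted):
import java.util.ArrayDeque;
import java.util.Deque;

class CartesianTreeQueueSketch {
    static final class Node {
        final int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    Node root;
    // bottom of the stack = tree root, top of the stack = rightmost node
    private final Deque<Node> rightSpine = new ArrayDeque<>();

    void insert(int x) {
        Node node = new Node(x);
        Node lastPopped = null;
        // walk up the right spine until we find a node no bigger than x
        while (!rightSpine.isEmpty() && rightSpine.peek().value > x) {
            lastPopped = rightSpine.pop();
        }
        node.left = lastPopped;              // everything we walked past hangs off x's left
        if (rightSpine.isEmpty()) {
            root = node;                     // x is the new root
        } else {
            rightSpine.peek().right = node;  // attach x below the node we stopped at
        }
        rightSpine.push(node);               // x is now the rightmost node
    }
}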
To do a dequeue step, we simply remove the leftmost node from the Cartesian tree. If this node is a leaf, we're done. Otherwise, the node can only have one child (the right child), and so we replace the node with its right child. Provided that we keep track of where the leftmost node is, this step takes O(1) time. However, after removing the leftmost node and replacing it with its right child, we might not know where the new leftmost node is. To fix this, we simply walk down the left spine of the tree starting at the new node we just moved to the leftmost child. I claim that this still runs in O(1) amortized time. To see this, I claim that a node is visited at most once during any one of these passes to find the leftmost node. To see this, note that once a node has been visited this way, the only way that we could ever need to look at it again would be if it were moved from a child of the leftmost node to the leftmost node. But all the nodes visited are parents of the leftmost node, so this can't happen. Consequently, each node is visited at most once during this process, and the pop runs in O(1).
We can do find-min in O(1) because the Cartesian tree gives us access to the smallest element of the tree for free; it's the root of the tree.
Finally, to see that the nodes come back in the same order in which they were inserted, note that a Cartesian tree always stores its elements so that an inorder traversal visits them in sorted order. Since we always remove the leftmost node at each step, and this is the first element of the inorder traversal, we always get the nodes back in the order in which they were inserted.
In short, we get O(1) amortized push and pop, and O(1) worst-case find-min.
If I can come up with a worst-case O(1) implementation, I'll definitely post it. This was a great problem; thanks for posting it!
Ok, here is one solution.
First we need something that provides push_back(), push_front(), pop_back() and pop_front() in O(1). It's easy to implement with an array and two iterators: the first iterator points to the front, the second to the back. Let's call this structure a deque.
Here is pseudo-code:
class MyQueue // Our data structure
{
    deque D;   // We need 2 deque objects
    deque Min;

    push(element) // pushing element to MyQueue
    {
        D.push_back(element);
        while (Min.is_not_empty() and Min.back() > element)
            Min.pop_back();
        Min.push_back(element);
    }

    pop() // popping MyQueue
    {
        if (Min.front() == D.front())
            Min.pop_front();
        D.pop_front();
    }

    min()
    {
        return Min.front();
    }
}
Explanation:
Example: let's push the numbers [12, 5, 10, 7, 11, 19] onto our MyQueue:
1) pushing 12
D[12]
Min[12]
2) pushing 5
D[12,5]
Min[5] // 12 > 5, so 12 is removed
3) pushing 10
D[12,5,10]
Min[5,10]
4) pushing 7
D[12,5,10,7]
Min[5,7]
5) pushing 11
D[12,5,10,7,11]
Min[5,7,11]
6) pushing 19
D[12,5,10,7,11,19]
Min[5,7,11,19]
Now let's call pop_front()
we got
D[5,10,7,11,19]
Min[5,7,11,19]
The minimum is 5
Let's call pop_front() again
Explanation: pop_front will remove 5 from D, but it will pop the front element of Min too, because it equals D's front element (5).
D[10,7,11,19]
Min[7,11,19]
And minimum is 7. :)
Use one deque (A) to store the elements and another deque (B) to store the minimums.
When x is enqueued, push_back it to A, and keep pop_backing B while the back of B is larger than x; then push_back x to B.
When dequeuing A, pop_front A as the return value, and if it is equal to the front of B, pop_front B as well.
When getting the minimum of A, use the front of B as the return value.
dequeue and getmin are obviously O(1). For the enqueue operation, consider the push_back of n elements. There are n push_backs to A, n push_backs to B and at most n pop_backs from B, because each element either stays in B or is popped out of B once. Overall there are O(3n) operations, and therefore the amortized cost is O(1) for enqueue as well.
Lastly, the reason this algorithm works is that when you enqueue x to A, if there are elements in B that are larger than x, they can never be minimums now, because x will stay in the queue A longer than any element already in B (a queue is FIFO). Therefore we need to pop out the elements in B (from the back) that are larger than x before we push x into B.
from collections import deque
class MinQueue(deque):
    def __init__(self):
        deque.__init__(self)
        self.minq = deque()

    def push_rear(self, x):
        self.append(x)
        while len(self.minq) > 0 and self.minq[-1] > x:
            self.minq.pop()
        self.minq.append(x)

    def pop_front(self):
        x = self.popleft()
        if self.minq[0] == x:
            self.minq.popleft()
        return x

    def get_min(self):
        return self.minq[0]
If you don't mind storing a bit of extra data, it should be trivial to store the minimum value. Push and pop can update the value if the new or removed element is the minimum, and returning the minimum value is as simple as getting the value of the variable.
This is assuming that get_min() does not change the data; if you would rather have something like pop_min() (i.e. remove the minimum element), you can simply store a pointer to the actual element and the element preceding it (if any), and update those accordingly with push_rear() and pop_front() as well.
Edit after comments:
Obviously this leads to O(n) push and pop in the case that the minimum changes on those operations, and so does not strictly satisfy the requirements.
You can actually use a LinkedList to maintain the queue.
Each element in LinkedList will be of Type
class LinkedListElement
{
LinkedListElement next;
int currentMin;
}
You can have two pointers: one points to the start and the other points to the end.
If you add an element to the start of the queue, examine the start pointer and the node to insert. If the inserted node's currentMin is less than the start node's currentMin, then the inserted node's currentMin is the minimum; otherwise update the inserted node's currentMin with the start node's currentMin.
Repeat the same for enqueue.
JavaScript implementation
(Credit to adamax's solution for the idea; I loosely based an implementation on it. Jump to the bottom to see fully commented code or read through the general steps below. Note that this finds the maximum value in O(1) constant time rather than the minimum value--easy to change up):
The general idea is to create two Stacks upon construction of the MaxQueue (I used a linked list as the underlying Stack data structure--not included in the code; but any Stack will do as long as it's implemented with O(1) insertion/deletion). One we'll mostly pop from (dqStack) and one we'll mostly push to (eqStack).
Insertion: O(1) worst case
For enqueue, if the MaxQueue is empty, we'll push the value to dqStack along with the current max value in a tuple (the same value since it's the only value in the MaxQueue); e.g.:
const m = new MaxQueue();
m.enqueue(6);
/*
the dqStack now looks like:
[6, 6] - [value, max]
*/
If the MaxQueue is not empty, we push just the value to eqStack;
m.enqueue(7);
m.enqueue(8);
/*
dqStack: eqStack: 8
[6, 6] 7 - just the value
*/
then, update the maximum value in the tuple.
/*
dqStack: eqStack: 8
[6, 8] 7
*/
Deletion: O(1) amortized
For dequeue we'll pop from dqStack and return the value from the tuple.
m.dequeue();
> 6
// equivalent to:
/*
const tuple = m.dqStack.pop() // [6, 8]
tuple[0];
> 6
*/
Then, if dqStack is empty, move all values in eqStack to dqStack, e.g.:
// if we build a MaxQueue
const maxQ = new MaxQueue(3, 5, 2, 4, 1);
/*
the stacks will look like:
dqStack: eqStack: 1
4
2
[3, 5] 5
*/
As each value is moved over, we'll check if it's greater than the max so far and store it in each tuple:
maxQ.dequeue(); // pops from dqStack (now empty), so move all from eqStack to dqStack
> 3
// as dequeue moves one value over, it checks if it's greater than the ***previous max*** and stores the max at tuple[1], i.e., [data, max]:
/*
dqStack:                         eqStack: (now empty)
[5, 5] => 5 > 4 - update
[2, 4] => 2 < 4 - no update
[4, 4] => 4 > 1 - update
[1, 1] => 1st value moved over, so max is itself
Because each value is moved to dqStack at most once, we can say that dequeue has O(1) amortized time complexity.
Finding the maximum value: O(1) worst case
Then, at any point in time, we can call getMax to retrieve the current maximum value in O(1) constant time. As long as the MaxQueue is not empty, the maximum value is easily pulled out of the next tuple in dqStack.
maxQ.getMax();
> 5
// equivalent to calling peek on the dqStack and pulling out the maximum value:
/*
const peekedTuple = maxQ.dqStack.peek(); // [5, 5]
peekedTuple[1];
> 5
*/
Code
class MaxQueue {
constructor(...data) {
// create a dequeue Stack from which we'll pop
this.dqStack = new Stack();
// create an enqueue Stack to which we'll push
this.eqStack = new Stack();
// if enqueueing data at construction, iterate through data and enqueue each
if (data.length) for (const datum of data) this.enqueue(datum);
}
enqueue(data) { // O(1) constant insertion time
// if the MaxQueue is empty,
if (!this.peek()) {
// push data to the dequeue Stack and indicate it's the max;
this.dqStack.push([data, data]); // e.g., enqueue(8) ==> [data: 8, max: 8]
} else {
// otherwise, the MaxQueue is not empty; push data to enqueue Stack
this.eqStack.push(data);
// save a reference to the tuple that's next in line to be dequeued
const next = this.dqStack.peek();
// if the enqueueing data is > the max in that tuple, update it
if (data > next[1]) next[1] = data;
}
}
moveAllFromEqToDq() { // O(1) amortized as each value will move at most once
// start max at -Infinity for comparison with the first value
let max = -Infinity;
// until enqueue Stack is empty,
while (this.eqStack.peek()) {
// pop from enqueue Stack and save its data
const data = this.eqStack.pop();
// if data is > max, set max to data
if (data > max) max = data;
// push to dequeue Stack and indicate the current max; e.g., [data: 7: max: 8]
this.dqStack.push([data, max]);
}
}
dequeue() { // O(1) amortized deletion due to calling moveAllFromEqToDq from time-to-time
// if the MaxQueue is empty, return undefined
if (!this.peek()) return;
// pop from the dequeue Stack and save its data
const [data] = this.dqStack.pop();
// if there's no data left in dequeue Stack, move all data from enqueue Stack
if (!this.dqStack.peek()) this.moveAllFromEqToDq();
// return the data
return data;
}
peek() { // O(1) constant peek time
// if the MaxQueue is empty, return undefined
if (!this.dqStack.peek()) return;
// peek at dequeue Stack and return its data
return this.dqStack.peek()[0];
}
getMax() { // O(1) constant time to find maximum value
// if the MaxQueue is empty, return undefined
if (!this.peek()) return;
// peek at dequeue Stack and return the current max
return this.dqStack.peek()[1];
}
}
#include <iostream>
#include <queue>
#include <deque>
using namespace std;
queue<int> main_queue;
deque<int> min_queue;
void clearQueue(deque<int> &q)
{
while(q.empty() == false) q.pop_front();
}
void PushRear(int elem)
{
main_queue.push(elem);
if(min_queue.empty() == false && elem < min_queue.front())
{
clearQueue(min_queue);
}
while(min_queue.empty() == false && elem < min_queue.back())
{
min_queue.pop_back();
}
min_queue.push_back(elem);
}
void PopFront()
{
int elem = main_queue.front();
main_queue.pop();
if (elem == min_queue.front())
{
min_queue.pop_front();
}
}
int GetMin()
{
return min_queue.front();
}
int main()
{
PushRear(1);
PushRear(-1);
PushRear(2);
cout<<GetMin()<<endl;
PopFront();
PopFront();
cout<<GetMin()<<endl;
return 0;
}
This solution contains 2 queues:
1. main_q - stores the input numbers.
2. min_q - stores the minimum numbers, according to rules described in the functions MainQ.enqueue(x), MainQ.dequeue() and MainQ.get_min().
Here's the code in Python. The queue is implemented using a list.
The main idea lies in the MainQ.enqueue(x), MainQ.dequeue() and MainQ.get_min() functions.
One key assumption is that clearing a queue takes O(1).
A test is provided at the end.
import numbers
class EmptyQueueException(Exception):
    pass

class BaseQ():
    def __init__(self):
        self.l = list()

    def enqueue(self, x):
        assert isinstance(x, numbers.Number)
        self.l.append(x)

    def dequeue(self):
        return self.l.pop(0)

    def peek_first(self):
        return self.l[0]

    def peek_last(self):
        return self.l[len(self.l)-1]

    def empty(self):
        return self.l==None or len(self.l)==0

    def clear(self):
        self.l=[]

class MainQ(BaseQ):
    def __init__(self, min_q):
        super().__init__()
        self.min_q = min_q

    def enqueue(self, x):
        super().enqueue(x)
        if self.min_q.empty():
            self.min_q.enqueue(x)
        elif x > self.min_q.peek_last():
            self.min_q.enqueue(x)
        else:  # x <= self.min_q.peek_last():
            self.min_q.clear()
            self.min_q.enqueue(x)

    def dequeue(self):
        if self.empty():
            raise EmptyQueueException("Queue is empty")
        x = super().dequeue()
        if x == self.min_q.peek_first():
            self.min_q.dequeue()
        return x

    def get_min(self):
        if self.empty():
            raise EmptyQueueException("Queue is empty, NO minimum")
        return self.min_q.peek_first()
INPUT_NUMS = (("+", 5), ("+", 10), ("+", 3), ("+", 6), ("+", 1), ("+", 2), ("+", 4), ("+", -4), ("+", 100), ("+", -40),
("-",None), ("-",None), ("-",None), ("+",-400), ("+",90), ("-",None),
("-",None), ("-",None), ("-",None), ("-",None), ("-",None), ("-",None), ("-",None), ("-",None))
if __name__ == '__main__':
    min_q = BaseQ()
    main_q = MainQ(min_q)
    try:
        for operator, i in INPUT_NUMS:
            if operator=="+":
                main_q.enqueue(i)
                print("Added {} ; Min is: {}".format(i,main_q.get_min()))
                print("main_q = {}".format(main_q.l))
                print("min_q = {}".format(main_q.min_q.l))
                print("==========")
            else:
                x = main_q.dequeue()
                print("Removed {} ; Min is: {}".format(x,main_q.get_min()))
                print("main_q = {}".format(main_q.l))
                print("min_q = {}".format(main_q.min_q.l))
                print("==========")
    except Exception as e:
        print("exception: {}".format(e))
The output of the above test is:
"C:\Program Files\Python35\python.exe" C:/dev/python/py3_pocs/proj1/priority_queue.py
Added 5 ; Min is: 5
main_q = [5]
min_q = [5]
==========
Added 10 ; Min is: 5
main_q = [5, 10]
min_q = [5, 10]
==========
Added 3 ; Min is: 3
main_q = [5, 10, 3]
min_q = [3]
==========
Added 6 ; Min is: 3
main_q = [5, 10, 3, 6]
min_q = [3, 6]
==========
Added 1 ; Min is: 1
main_q = [5, 10, 3, 6, 1]
min_q = [1]
==========
Added 2 ; Min is: 1
main_q = [5, 10, 3, 6, 1, 2]
min_q = [1, 2]
==========
Added 4 ; Min is: 1
main_q = [5, 10, 3, 6, 1, 2, 4]
min_q = [1, 2, 4]
==========
Added -4 ; Min is: -4
main_q = [5, 10, 3, 6, 1, 2, 4, -4]
min_q = [-4]
==========
Added 100 ; Min is: -4
main_q = [5, 10, 3, 6, 1, 2, 4, -4, 100]
min_q = [-4, 100]
==========
Added -40 ; Min is: -40
main_q = [5, 10, 3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 5 ; Min is: -40
main_q = [10, 3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 10 ; Min is: -40
main_q = [3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 3 ; Min is: -40
main_q = [6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Added -400 ; Min is: -400
main_q = [6, 1, 2, 4, -4, 100, -40, -400]
min_q = [-400]
==========
Added 90 ; Min is: -400
main_q = [6, 1, 2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 6 ; Min is: -400
main_q = [1, 2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 1 ; Min is: -400
main_q = [2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 2 ; Min is: -400
main_q = [4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 4 ; Min is: -400
main_q = [-4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed -4 ; Min is: -400
main_q = [100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 100 ; Min is: -400
main_q = [-40, -400, 90]
min_q = [-400, 90]
==========
Removed -40 ; Min is: -400
main_q = [-400, 90]
min_q = [-400, 90]
==========
Removed -400 ; Min is: 90
main_q = [90]
min_q = [90]
==========
exception: Queue is empty, NO minimum
Process finished with exit code 0
Java Implementation
import java.io.*;
import java.util.*;
public class queueMin {
static class stack {
private Node<Integer> head;
public void push(int data) {
Node<Integer> newNode = new Node<Integer>(data);
if(null == head) {
head = newNode;
} else {
Node<Integer> prev = head;
head = newNode;
head.setNext(prev);
}
}
public int pop() {
int data = -1;
if(null == head){
System.out.println("Error Nothing to pop");
} else {
data = head.getData();
head = head.getNext();
}
return data;
}
public int peek(){
if(null == head){
System.out.println("Error Nothing to pop");
return -1;
} else {
return head.getData();
}
}
public boolean isEmpty(){
return null == head;
}
}
static class stackMin extends stack {
private stack s2;
public stackMin(){
s2 = new stack();
}
public void push(int data){
if(data <= getMin()){
s2.push(data);
}
super.push(data);
}
public int pop(){
int value = super.pop();
if(value == getMin()) {
s2.pop();
}
return value;
}
public int getMin(){
if(s2.isEmpty()) {
return Integer.MAX_VALUE;
}
return s2.peek();
}
}
static class Queue {
private stackMin s1, s2;
public Queue(){
s1 = new stackMin();
s2 = new stackMin();
}
public void enQueue(int data) {
s1.push(data);
}
public int deQueue() {
if(s2.isEmpty()) {
while(!s1.isEmpty()) {
s2.push(s1.pop());
}
}
return s2.pop();
}
public int getMin(){
return Math.min(s1.isEmpty() ? Integer.MAX_VALUE : s1.getMin(), s2.isEmpty() ? Integer.MAX_VALUE : s2.getMin());
}
}
static class Node<T> {
private T data;
private T min;
private Node<T> next;
public Node(T data){
this.data = data;
this.next = null;
}
public void setNext(Node<T> next){
this.next = next;
}
public T getData(){
return this.data;
}
public Node<T> getNext(){
return this.next;
}
public void setMin(T min){
this.min = min;
}
public T getMin(){
return this.min;
}
}
public static void main(String args[]){
try {
FastScanner in = newInput();
PrintWriter out = newOutput();
// System.out.println(out);
Queue q = new Queue();
int t = in.nextInt();
while(t-- > 0) {
String[] inp = in.nextLine().split(" ");
switch (inp[0]) {
case "+":
q.enQueue(Integer.parseInt(inp[1]));
break;
case "-":
q.deQueue();
break;
case "?":
out.println(q.getMin());
default:
break;
}
}
out.flush();
out.close();
} catch(IOException e){
e.printStackTrace();
}
}
static class FastScanner {
static BufferedReader br;
static StringTokenizer st;
FastScanner(File f) {
try {
br = new BufferedReader(new FileReader(f));
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
public FastScanner(InputStream f) {
br = new BufferedReader(new InputStreamReader(f));
}
String next() {
while (st == null || !st.hasMoreTokens()) {
try {
st = new StringTokenizer(br.readLine());
} catch (IOException e) {
e.printStackTrace();
}
}
return st.nextToken();
}
String nextLine(){
String str = "";
try {
str = br.readLine();
} catch (IOException e) {
e.printStackTrace();
}
return str;
}
int nextInt() {
return Integer.parseInt(next());
}
long nextLong() {
return Long.parseLong(next());
}
double nextDoulbe() {
return Double.parseDouble(next());
}
}
static FastScanner newInput() throws IOException {
if (System.getProperty("JUDGE") != null) {
return new FastScanner(new File("input.txt"));
} else {
return new FastScanner(System.in);
}
}
static PrintWriter newOutput() throws IOException {
if (System.getProperty("JUDGE") != null) {
return new PrintWriter("output.txt");
} else {
return new PrintWriter(System.out);
}
}
}
We know that push and pop are constant time operations [O(1) to be precise].
But when we think of get_min() [i.e. finding the current minimum number in the queue], generally the first thing that comes to mind is searching the whole queue every time the minimum element is requested. But that will never be a constant time operation, which is the main aim of the problem.
This is asked very frequently in interviews, so you must know the trick.
To do this we have to use two more queues which keep track of the minimum element, and we have to keep modifying these two queues as we push to and pop from the main queue, so that the minimum element is obtained in O(1) time.
Here is the self-descriptive pseudo code based on the above approach mentioned.
Queue q, minq1, minq2;
isMinq1Current=true;
void push(int a)
{
q.push(a);
if(isMinq1Current)
{
if(minq1.empty) minq1.push(a);
else
{
while(!minq1.empty && minq1.top <= a) minq2.push(minq1.pop());
minq2.push(a);
while(!minq1.empty) minq1.pop();
isMinq1Current=false;
}
}
else
{
//mirror if(isMinq1Current) branch.
}
}
int pop()
{
int a = q.pop();
if(isMinq1Current)
{
if(a==minq1.top) minq1.pop();
}
else
{
//mirror if(isMinq1Current) branch.
}
return a;
}
