Data mapping and lazy propagation in segment tree

Looks like there is only one good article about lazy propagation in Segment Tree on the entire internet, and it is:
http://www.spoj.pl/forum/viewtopic.php?f=27&t=8296
I understood the concept of updating only the query node and marking its children.
But my question is: what if I query a child node first and the parent node later?
In this tree (along with each node's location in the heap array):
0->[0 9]
1->[0 4] 2->[5 9]
3->[0 2] 4->[3 4] 5->[5 7] 6->[8 9]
.....................................
First query: if I update [0 4], its data will be changed and its children will be flagged.
Second query: read the state of segment [0 9].
Here I face the issue. My segment tree implementation is such that the value of each node is the sum of its left and right child. So when I update a node's value, I have to update all of its parents.
To fix the logical issue, I'm now updating all parents of the node (until I reach the root of the tree).
But this is taking a performance toll, and my whole purpose of using a segment tree for fast batch updates is getting killed.
Can anyone please explain where I'm going wrong in using the segment tree?

I will contrast a lazy update operation with a normal update operation, and show how this changes the query operation.
In a normal single update operation you update the root of a tree and then recursively update only the needed part of the tree (thus giving you O(log(n)) speed). If you try to use the same logic for range updates, you can see how it deteriorates to O(n) (consider very broad ranges and see that you will mostly need to update both parts of the tree).
So in order to overcome this O(n), the idea is to update the tree only when you really need it (a query/update on a segment which was previously updated, thus making your updates lazy). Here is how it works:
creation of the tree stays absolutely the same. The only minor difference is that you also create an array which holds information about potential (pending) updates.
when you update a node of the tree, you also check whether it needs to be updated (from a previous update operation) and if it does, you update it, mark its children to be updated in the future, and unmark the node (being lazy).
when you query the tree, you also check whether the node needs to be updated and if so, update it, mark its children, and unmark it afterwards.
Here is an example of update and query (solving a maximum range query). For the full code, check this article.
void update_tree(int node, int a, int b, int i, int j, int value) {
    if(lazy[node] != 0) { // This node needs to be updated
        tree[node] += lazy[node]; // Update it
        if(a != b) {
            lazy[node*2] += lazy[node]; // Mark child as lazy
            lazy[node*2+1] += lazy[node]; // Mark child as lazy
        }
        lazy[node] = 0; // Reset it
    }
    if(a > b || a > j || b < i) // Current segment is not within range [i, j]
        return;
    if(a >= i && b <= j) { // Segment is fully within range
        tree[node] += value;
        if(a != b) { // Not leaf node
            lazy[node*2] += value;
            lazy[node*2+1] += value;
        }
        return;
    }
    update_tree(node*2, a, (a+b)/2, i, j, value); // Updating left child
    update_tree(1+node*2, 1+(a+b)/2, b, i, j, value); // Updating right child
    tree[node] = max(tree[node*2], tree[node*2+1]); // Updating root with max value
}
and query:
int query_tree(int node, int a, int b, int i, int j) {
    if(a > b || a > j || b < i) return -inf; // Out of range
    if(lazy[node] != 0) { // This node needs to be updated
        tree[node] += lazy[node]; // Update it
        if(a != b) {
            lazy[node*2] += lazy[node]; // Mark child as lazy
            lazy[node*2+1] += lazy[node]; // Mark child as lazy
        }
        lazy[node] = 0; // Reset it
    }
    if(a >= i && b <= j) // Current segment is totally within range [i, j]
        return tree[node];
    return max(query_tree(node*2, a, (a+b)/2, i, j), query_tree(1+node*2, 1+(a+b)/2, b, i, j));
}

When you query a node in the segment tree, you need to make sure that all its ancestors, and the node itself, are properly updated. You do this while visiting the query node(s).
While visiting a query node, you traverse the path from the root to the query node, taking care of all the pending updates along the way. Since there are O(log N) ancestors to visit for any given query node, you do only O(log N) work.
Here is my code for a segment tree with lazy propagation.
// interval updates, interval queries (lazy propagation)
const int SN = 256; // must be a power of 2
struct SegmentTree {
    // T[x] is the (properly updated) sum of indices represented by node x
    // U[x] is the pending increment for _each_ index in the subtree rooted at x
    int T[2*SN], U[2*SN];
    SegmentTree() { memset(T, 0, sizeof T), memset(U, 0, sizeof U); }
    // increment every index in [ia,ib) by incr
    // the current node is x, which represents the interval [a,b)
    void update(int incr, int ia, int ib, int x = 1, int a = 0, int b = SN) { // [a,b)
        ia = max(ia,a), ib = min(ib,b); // intersect [ia,ib) with [a,b)
        if(ia >= ib) return; // [ia,ib) is empty
        if(ia == a && ib == b) { // We push the increment to 'pending increments'
            U[x] += incr; // And stop recursing
            return;
        }
        T[x] += incr * (ib - ia); // Update the current node
        update(incr,ia,ib,2*x,a,(a+b)/2); // And push the increment to its children
        update(incr,ia,ib,2*x+1,(a+b)/2, b);
    }
    int query(int ia, int ib, int x = 1, int a = 0, int b = SN) {
        ia = max(ia,a), ib = min(ib,b); // intersect [ia,ib) with [a,b)
        if(ia >= ib) return 0; // [ia,ib) is empty
        if(ia == a && ib == b)
            return U[x]*(b - a) + T[x];
        T[x] += (b - a) * U[x]; // Carry out the pending increments
        U[2*x] += U[x], U[2*x+1] += U[x]; // Push to the children's 'pending increments'
        U[x] = 0;
        return query(ia,ib,2*x,a,(a+b)/2) + query(ia,ib,2*x+1,(a+b)/2,b);
    }
};


When can we have duplicate nodes in the heap in Dijkstra's algorithm?

This question came to my mind when solving some Dijkstra-based LeetCode problems.
We have (node, distance) pairs in the priority queue at each step.
Having duplicate nodes in the heap depends on one thing: when we mark a node as visited (i.e. confirm that we have found the shortest distance to it). If we mark it while pushing into the queue, we will not have any duplicates; if we mark it after popping from the queue, we may have duplicates in the queue.
https://leetcode.com/problems/network-delay-time/
In this question we can mark a node as visited only after popping it from the priority queue, or we will miss some edge that may lead to a shorter path.
Ex:
times = [[1,2,1],[2,3,2],[1,3,4]], n = 3, k = 1
If we mark nodes as visited while inserting, we get the wrong answer. While exploring node 1's neighbors we do:
1->2: queue = {(2, 1)}, visited = {1, 2}
1->3: queue = {(2, 1), (3, 4)}, visited = {1, 2, 3}
Since all nodes are now visited, we will never consider the path 1->2->3 with distance = 1 + 2 = 3.
But in other questions we can do Dijkstra with visited marked before insertion into the priority queue, e.g.:
https://leetcode.com/problems/swim-in-rising-water/
Why is Dijkstra with visited marked before insertion valid here?
BFS visits nodes blindly (you may assume all weights are 1); Dijkstra prioritizes the least-weighted path.
When can we have duplicate nodes in the heap in Dijkstra's algorithm?
        a
     4 / \ 2
      /   \
     b-----c
     |   1
   4 |
     d
1. Start Dijkstra from a: queue = (a, 0)
2. b, c pushed with their path costs: queue = (b, 4), (c, 2)
3. c popped; b pushed again with another cost: queue = (b, 4), (b, 3), where the new (b, 3) has cost (ac + cb)
4. The cheaper b popped; d pushed: queue = (b, 4), (d, 7)
5. Since we check and mark after popping, (b, 4) is popped next: queue = (d, 7)
6. But b is already visited, so it is skipped
7. Process d
But in other questions we can do Dijkstra with visited marked before insertion into the priority queue, e.g.: https://leetcode.com/problems/swim-in-rising-water
Why is Dijkstra with visited marked before insertion valid here?
It depends largely on the problem itself. For this particular problem, we get the weight directly from the node, and no matter whether we push before or after marking, the node will be popped at the same time; checking visited before pushing merely prevents redundant pushes.
Here is my accepted implementation, where you can comment out either the after-pop or the before-push marking and checking of visited nodes, and both versions get accepted.
class Solution {
    struct PosWeight {
        pair<int, int> pos;
        int weight;
        PosWeight(pair<int, int> a, int weight): pos(a), weight(weight) {}
    };
    int visited[51][51] = {0};
    int traverse(vector<vector<int>>& grid) {
        int row = size(grid);
        int column = size(grid[0]);
        vector<pair<int, int>> directions = { {0,1}, {1,0}, {-1,0}, {0,-1} };
        auto isValidTo = [&grid, row, column]
                         (pair<int, int> direction, pair<int, int> pos) {
            if (pos.first + direction.first >= row) return false;
            if (pos.first + direction.first < 0) return false;
            if (pos.second + direction.second >= column) return false;
            if (pos.second + direction.second < 0) return false;
            return true;
        };
        auto comp = [](PosWeight &a, PosWeight &b) {
            return a.weight > b.weight;
        };
        int maxx = INT_MIN;
        priority_queue<PosWeight, vector<PosWeight>, decltype(comp)> pq(comp);
        pq.emplace(make_pair(0,0), grid[0][0]);
        while (!pq.empty()) {
            auto elem = pq.top();
            pq.pop();
            // After-pop marking: comment this out if you use the before-push version below
            if (visited[elem.pos.first][elem.pos.second]) continue;
            visited[elem.pos.first][elem.pos.second] = 1;
            // ---
            maxx = max(maxx, elem.weight);
            if (elem.pos.first == row - 1 && elem.pos.second == column - 1)
                return maxx;
            for (auto direction: directions)
                if (isValidTo(direction, elem.pos)) {
                    pair<int,int> toGo = make_pair(direction.first + elem.pos.first,
                                                   direction.second + elem.pos.second);
                    auto weight = grid[toGo.first][toGo.second];
                    // Before-push marking: uncomment this and remove the after-pop version above
                    // if (visited[toGo.first][toGo.second]) continue;
                    // visited[toGo.first][toGo.second] = 1;
                    // -----
                    pq.emplace(toGo, weight);
                }
        }
        return maxx;
    }
public:
    int swimInWater(vector<vector<int>>& grid) {
        return traverse(grid);
    }
};
But for https://leetcode.com/problems/network-delay-time, the check must happen after popping, because doing it before pushing causes all nodes to be visited early instead of in prioritized shortest order, as you stated in your question.
class Solution {
public:
    int networkDelayTime(vector<vector<int>>& times, int n, int k) {
        auto comp = [](pair<int,int> &a, pair<int,int> &b) {
            return a.second > b.second;
        };
        priority_queue<pair<int,int>, vector<pair<int,int>>, decltype(comp)> que(comp);
        vector<vector<int>> rel(n, vector<int>(n, -1));
        for (auto &time: times)
            rel[time[0] - 1][time[1] - 1] = time[2];
        vector<int> visit(n, 0);
        que.push({k-1, 0});
        int sz = n;
        while (size(que)) {
            auto now = que.top();
            que.pop();
            if (visit[now.first]) continue;
            visit[now.first] = 1;
            if (!--sz) return now.second;
            auto id = now.first, val = now.second;
            for (int i = 0; i < n; ++i)
                if (rel[id][i] != -1) {
                    // can't do the visited check here
                    que.push({i, val + rel[id][i]});
                }
        }
        return -1;
    }
};
So, bottom line: it depends on the nature and requirements of the problem.

Find the Size of Each Node

I have a tree rooted at node 1, and I need to find the size of each node's subtree.
I am using this recursive function to do it:
void find_size(int curr, int parent) {
    S[curr] = 1;
    for (int j : Children[curr]) {
        if (j == parent) continue;
        find_size(j, curr);
        S[curr] += S[j];
    }
}
How can I convert my solution to a non-recursive one, using stacks or something similar? The recursive solution does not work for large data sets.
You denote the nodes by indices, therefore I guess you have the tree represented as two arrays as follows:
int[] parent; // index of parent (the parent of the root is negative, e.g. -1)
int[][] children; // indices of children for each node
You can collect the sums starting from the leaf nodes and proceed upwards as soon as the results of all children are known, in O(n) overall:
int[] s = new int[parent.length];
int[] processed = new int[parent.length]; // the number of children that are processed
for (int i = 0; i < parent.length; i++)   // initialize
    s[i] = 1;
for (int i = 0; i < parent.length; i++) {
    if (children[i].length == 0) { // leaf node
        int p = parent[i], j = i;
        while (p >= 0 && processed[j] == children[j].length) { // all children are processed
            s[p] += s[j];    // adjust parent score
            processed[p]++;  // increase the number of processed child nodes for the parent
            j = p;           // the parent becomes the current node
            p = parent[j];   // and its parent the parent
        }
    }
}
I will describe one possible iterative approach, which consists of two steps:
use a queue to determine the depth of each node.
process the nodes in decreasing depth order.
This approach is based on a BFS traversal of the tree, thus it does not directly mimic the DFS traversal which is done recursively, and has the advantage of being easier to implement iteratively.
For step 1:
initially, add into the queue only the root node, mark it with depth = 0.
while the queue is not empty, extract the first node from the queue, look at its depth (denoted here as currentDepth), and add its children to the end of the queue by marking each with childDepth = currentDepth + 1.
For step 2:
process the nodes in the reverse depth order. The processing of a node involves computing its sub-tree size, by adding the sizes of all children (plus 1, for the current node).
note that each time a node is processed, the children were already processed (because all nodes with higher depth were already processed), thus we already know the sizes of the children sub-trees.
Remark:
For step 2, sorting the nodes in decreasing depth order can be done efficiently by implementing the queue from step 1 with a list from which we never actually remove elements (e.g. the queue head can be kept using a pointer, and only this pointer can be incremented when polling).
Processing this list in reverse order is all that is needed in order to traverse through the nodes in decreasing depth order. Thus, it is not really necessary to explicitly use the depth field.
The implementation of the above ideas would look like this:
void find_size() {
    // Step 1: BFS to collect the nodes in increasing depth order
    // (assumes Children[node] holds only the children of node, not its parent)
    int queue[numNodes];
    queue[0] = 1; // add the root to the queue
    int start = 0;
    int end = 1;
    while (start < end) {
        int node = queue[start++];     // poll one node from the queue
        for (int i : Children[node]) { // add its children to the end
            queue[end++] = i;
        }
    }
    // Step 2: process the nodes in reverse (decreasing depth) order
    for (int i = end - 1; i >= 0; i--) {
        int node = queue[i];
        S[node] = 1;
        for (int j : Children[node]) {
            S[node] += S[j];
        }
    }
}

Find all the nodes which can be reached from a given node within certain distance

Multiple nodes can also be traversed in sequence to create a path.
struct Node {
    float pos [2];
    bool visited = false;
};

// Search Space
Node nodes [MAX_NODE_COUNT];

// Function to return all reachable nodes
// nodeCount : Number of nodes
// nodes : Array of nodes
// indexCount : Size of the returned array
// Return : Returns array of all reachable node indices
int* GetReachableNodes (int nodeCount, Node* nodes, int* indexCount)
{
    // This is the naive approach
    queue <int> indices;
    vector <int> indexList;
    indices.push (0); // push the start node's index, not the node itself
    nodes [0].visited = true;
    // Loop while queue is not empty
    while (!indices.empty ())
    {
        // Pop the front of the queue
        int parent = indices.front ();
        indices.pop ();
        // Loop through all children
        for (int i = 0; i < nodeCount; ++i)
        {
            if (!nodes [i].visited && i != parent && DistanceSqr (nodes [i], nodes [parent]) < maxDistanceSqr)
            {
                indices.push (i);
                nodes [i].visited = true;
                indexList.push_back (i);
            }
        }
    }
    int* returnData = new int [indexList.size ()];
    std::move (indexList.begin (), indexList.end (), returnData);
    *indexCount = indexList.size ();
    // Caller is responsible for delete
    return returnData;
}
The problem is that I can't use a graph, since all nodes are connected to each other.
All the data is just an array of nodes with their positions, and the start node.
I can solve this problem easily with BFS, but it's O(n^2).
I have been trying to figure out how to solve this in less than O(n^2) time complexity.
Some of the approaches I tried were parallel BFS and Dijkstra, but they don't help much.
Any help will be really appreciated.

Efficient data structure for storing a long sequence of (mostly consecutive) integers

I'd like a data structure to efficiently store a long sequence of numbers. The numbers should always be whole integers, let's say Longs.
The feature of the inputs I'd like to leverage (to claim 'efficiency') is that the longs will be mostly consecutive. There can be missing values. And the values can be interacted with out of order.
I'd like the data structure to support the following operations:
addVal(n): add a single value, n
addRange(n, m): add all values between n and m, inclusive
delVal(n): remove a single value, n
delRange(n, m): remove all values between n and m, inclusive
containsVal(n): return whether a single value, n, exists in the structure
containsRange(n, m): return whether all values between n and m, inclusive, exist in the structure
In essence this is a more specific kind of Set data structure that can leverage the continuity of the data to use less than O(n) memory, where n is the number of values stored.
To be clear, while I think an efficient implementation of such a data structure will require that we store intervals internally, that isn't visible or relevant to the user. There are some interval trees that store multiple intervals separately and allow operations to find the number of intervals that overlap with a given point or interval. But from the user perspective this should behave exactly like a set (except for the range-based operations so bulk additions and deletions can be handled efficiently).
Example:
dataStructure = ...
dataStructure.addRange(1,100) // [(1, 100)]
dataStructure.addRange(200,300) // [(1, 100), (200, 300)]
dataStructure.addVal(199) // [(1, 100), (199, 300)]
dataStructure.delRange(50,250) // [(1, 49), (251, 300)]
My assumption is this would best be implemented by some tree-based structure but I don't have a great impression about how to do that yet.
I wanted to learn if there was some commonly used data structure that already satisfies this use case, as I'd rather not reinvent the wheel. If not, I'd like to hear how you think this could best be implemented.
If you don't care about duplicates, then your intervals are non-overlapping, and you need a structure that maintains that invariant. If you need a query like numIntervalsContaining(n), that is a different problem.
You could use a BST that stores pairs of endpoints, as in a C++ std::set<std::pair<long,long>>. The interpretation is that each entry corresponds to the interval [n,m]. You need a weak ordering - it is the usual integer ordering on the left endpoint. A single int or long n is inserted as [n,n]. We have to maintain the property that all node intervals are non-overlapping. A brief evaluation of the order of your operations is as follows. Since you've already designated n I use N for the size of the tree.
addVal(n): add a single value n: O(log N), same as std::set<int>. Since the intervals are non-overlapping, you need to find the predecessor of n, which can be done in O(log N) time (break it down by cases as in https://www.quora.com/How-can-you-find-successors-and-predecessors-in-a-binary-search-tree-in-order). Look at the right endpoint of this predecessor, and extend the interval or add a new node [n,n] if necessary, which by left-endpoint ordering will always be a right child. Note that if an interval is extended (inserting [n+1,n+1] into a tree with node [a,n], forming the node [a,n+1]), it may now bump into the next left endpoint, requiring another merge. So there are a few edge cases to consider. A little more complicated than a simple BST, but still O(log N).
addRange(n, m): O(log N); the process is similar. If the inserted interval intersects others nontrivially, merge the intervals so that the non-overlapping property is maintained. The worst case is O(N), as there is no upper limit on the number of stored subintervals consumed by the one we are inserting.
delVal(n): O(log N); removing a single value splits at most one stored interval.
delRange(n, m): remove all values between n and m, inclusive: O(log N), but O(N) in the worst case, since we don't know how many stored intervals are contained in the interval we are deleting.
containsVal(n): return whether a single value, n, exists in the structure: O(log N)
containsRange(n, m): return whether all values between n and m, inclusive, exist in the structure: O(log N)
Note that we can maintain the non-overlapping property with correct addVal() and addRange() methods; it is automatically preserved by the delete methods. We need O(N) storage at worst, where N is the number of stored intervals.
Note that, apart from the worst cases above, the operations are O(log N); in particular, inserting the range [n,m] is nothing like O(m-n) or O(log(m-n)) or anything like that.
I assume you don't care about duplicates, just membership. Otherwise you may need an interval tree or KD-tree or something, but those are more relevant for float data...
Another alternative might be a rope data structure ( https://en.m.wikipedia.org/wiki/Rope_(data_structure) ), which seems to support the operations you are asking for, implemented in O(log n) time. As opposed to the example in Wikipedia, yours would store [start,end] rather than string subsequences.
What's interesting about the rope is its efficient lookup of index-within-interval. It accomplishes this by ordering all value positions from left to right - a lower to higher positioning (of which your intervals would be a straightforward representation) can be either upwards or downwards as long as the movement is to the right - as well as relying on storing subtree size, which orients current position based on the weight on the left. Engulfing partial intervals by larger encompassing intervals could be accomplished in O(log n) time by updating and unlinking relevant tree segments.
The problem with storing each interval as a (start,end) couple is that if you add a new range that encompasses N previously stored intervals, you have to destroy each of these intervals, which takes N steps, whether the intervals are stored in a tree, a rope or a linked list.
(You could leave them for automatic garbage collection, but that will take time too, and only works in some languages.)
A possible solution for this could be to store the values (not the start and end points of intervals) in an N-ary tree, where each node represents a range and stores two N-bit maps, representing N sub-ranges and whether the values in those sub-ranges are all present, all absent, or mixed. In the mixed case, there would be a pointer to a child node which represents this sub-range.
Example: (using a tree with branching factor 8 and height 2)
full range: 0-511 ; store interval 100-300
0-511:
0- 63 64-127 128-191 192-255 256-319 320-383 384-447 448-511
0 mixed 1 1 mixed 0 0 0
64-127:
64- 71 72- 79 80- 87 88- 95 96-103 104-111 112-119 120-127
0 0 0 0 mixed 1 1 1
96-103:
96 97 98 99 100 101 102 103
0 0 0 0 1 1 1 1
256-319:
256-263 264-271 272-279 280-287 288-295 296-303 304-311 312-319
1 1 1 1 1 mixed 0 0
296-303:
296 297 298 299 300 301 302 303
1 1 1 1 1 0 0 0
So the tree would contain these five nodes:
- values: 00110000, mixed: 01001000, 2 pointers to sub-nodes
- values: 00000111, mixed: 00001000, 1 pointer to sub-node
- values: 00001111, mixed: 00000000
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
The point of storing the interval this way is that you can discard an interval without having to actually delete it. Let's say you add a new range 200-400; in that case, you'd change the range 256-319 in the root node from "mixed" to "1", without deleting or updating the 256-319 and 296-303 nodes themselves; these nodes can be kept for later re-use (or disconnected and put in a queue of re-usable sub-trees, or deleted in a programmed garbage-collection when the programme is idling or running low on memory).
When looking up an interval, you only have to go as deep down the tree as necessary; when looking up e.g. 225-275, you'd find that 192-255 is all-present, 256-319 is mixed, 256-263 and 264-271 and 272-279 are all-present, and you'd know the answer is true. Since these values would be stored as bitmaps (one for present/absent, one for mixed/solid), all the values in a node could be checked with only a few bitwise comparisons.
Re-using nodes:
If a node has a child node, and the corresponding value is later set from mixed to all-absent or all-present, the child node no longer holds relevant values (but it is being ignored). When the value is changed back to mixed, the child node can be updated by setting all its values to its value in the parent node (before it was changed to mixed) and then making the necessary changes.
In the example above, if we add the range 0-200, this will change the tree to:
- values: 11110000, mixed: 00001000, 2 pointers to sub-nodes
- (values: 00000111, mixed: 00001000, 1 pointer to sub-node)
- (values: 00001111, mixed: 00000000)
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
The second and third node now contain outdated values, and are being ignored. If we then delete the range 80-95, the value for range 64-127 in the root node is changed to mixed again, and the node for range 64-127 is re-used. First we set all values in it to all-present (because that was the previous value of the parent node), and then we set the values for 80-87 and 88-95 to all-absent. The third node, for range 96-103 remains out-of-use.
- values: 00110000, mixed: 01001000, 2 pointers to sub-nodes
- values: 11001111, mixed: 00000000, 1 pointer to sub-node
- (values: 00001111, mixed: 00000000)
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
If we then added value 100, the value for range 96-103 in the second node would be changed to mixed again, and the third node would be updated to all-absent (its previous value in the second node) and then value 100 would be set to present:
- values: 00110000, mixed: 01001000, 2 pointers to sub-nodes
- values: 11000111, mixed: 00001000, 1 pointer to sub-node
- values: 00001000, mixed: 00000000
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
At first it may seem that this data structure uses a lot of storage space compared to solutions which store the intervals as (start,end) pairs. However, let's look at the (theoretical) worst-case scenario, where every even number is present and every odd number is absent, across the whole 64-bit range:
Total range: 0 ~ 18,446,744,073,709,551,615
Intervals: 9,223,372,036,854,775,808
A data structure which stores these as (start,end) pairs would use:
Nodes: 9,223,372,036,854,775,808
Size of node: 16 bytes
TOTAL: 147,573,952,589,676,412,928 bytes
If the data structure uses nodes which are linked via (64-bit) pointers, that would add:
Data: 147,573,952,589,676,412,928 bytes
Pointers: 73,786,976,294,838,206,456 bytes
TOTAL: 221,360,928,884,514,619,384 bytes
An N-ary tree with branching factor 64 (and 16 for the last level, to get a total range of 10×6 + 1×4 = 64 bits) would use:
Nodes (branch): 285,942,833,483,841
Size of branch: 528 bytes
Nodes (leaf): 18,014,398,509,481,984
Size of leaf: 144 bytes
TOTAL: 2,745,051,201,444,873,744 bytes
which is 53.76 times less than (start,end) pair structures (or 80.64 times less including pointers).
The calculation was done with the following N-ary tree:
Branch (9 levels):
value: 64-bit map
mixed: 64-bit map
sub-nodes: 64 pointers
TOTAL: 528 bytes
Leaf:
value: 64-bit map
mixed: 64-bit map
sub-nodes: 64 16-bit maps (more efficient than pointing to sub-node)
TOTAL: 144 bytes
This is of course a worst-case comparison; the average case would depend very much on the specific input.
Here's a first code example I wrote to test the idea. The nodes have branching factor 16, so that every level stores 4 bits of the integers, and common bit depths can be obtained without different leaves and branches. As an example, a tree of depth 3 is created, representing a range of 4×4 = 16 bits.
function setBits(pattern, value, mask) { // helper function (make inline)
    return (pattern & ~mask) | (value ? mask : 0);
}

function Node(value) { // CONSTRUCTOR
    this.value = value ? 65535 : 0; // set all values to 1 or 0
    this.mixed = 0;                 // set all to non-mixed
    this.sub = null;                // no pointer array yet
}

Node.prototype.prepareSub = function(pos, mask, value) {
    if ((this.mixed & mask) == 0) { // not mixed, child possibly outdated
        var prev = (this.value & mask) >> pos;
        if (value == prev) return false;  // child doesn't require setting
        if (!this.sub) this.sub = [];     // create array of pointers
        if (this.sub[pos]) {
            this.sub[pos].value = prev ? 65535 : 0; // update child node values
            this.sub[pos].mixed = 0;
        }
        else this.sub[pos] = new Node(prev); // create new child node
    }
    return true; // child requires setting
}

Node.prototype.set = function(value, a, b, step) {
    var posA = Math.floor(a / step), posB = Math.floor(b / step);
    var maskA = 1 << posA, maskB = 1 << posB;
    a %= step; b %= step;
    if (step == 1) { // node is leaf
        var vMask = (maskB | (maskB - 1)) ^ (maskA - 1); // bits posA to posB inclusive
        this.value = setBits(this.value, value, vMask);
    }
    else if (posA == posB) { // only 1 sub-range to be set
        if (a == 0 && b == step - 1) { // a-b is the full sub-range
            this.value = setBits(this.value, value, maskA);
            this.mixed = setBits(this.mixed, 0, maskA);
        }
        else if (this.prepareSub(posA, maskA, value)) { // child node requires setting
            var solid = this.sub[posA].set(value, a, b, step >> 4);     // recurse
            this.value = setBits(this.value, solid ? value : 0, maskA); // set value
            this.mixed = setBits(this.mixed, solid ? 0 : 1, maskA);     // set mixed
        }
    }
    else { // multiple sub-ranges to set
        var vMask = (maskB - 1) ^ (maskA | (maskA - 1)); // bits between posA and posB
        this.value = setBits(this.value, value, vMask);  // set inbetween values
        this.mixed &= ~vMask;                            // set inbetween to solid
        var solidA = true, solidB = true;
        if (a != 0 && this.prepareSub(posA, maskA, value)) { // child needs setting
            solidA = this.sub[posA].set(value, a, step - 1, step >> 4);
        }
        if (b != step - 1 && this.prepareSub(posB, maskB, value)) { // child needs setting
            solidB = this.sub[posB].set(value, 0, b, step >> 4);
        }
        this.value = setBits(this.value, solidA ? value : 0, maskA); // set value
        this.value = setBits(this.value, solidB ? value : 0, maskB);
        if (solidA) this.mixed &= ~maskA; else this.mixed |= maskA;  // set mixed
        if (solidB) this.mixed &= ~maskB; else this.mixed |= maskB;
    }
    return (this.mixed == 0 && this.value == 0) || this.value == 65535; // solid, not mixed
}

Node.prototype.check = function(a, b, step) {
    var posA = Math.floor(a / step), posB = Math.floor(b / step);
    var maskA = 1 << posA, maskB = 1 << posB;
    a %= step; b %= step;
    var vMask = (maskB - 1) ^ (maskA | (maskA - 1)); // bits between posA and posB
    if (step == 1) {
        vMask = posA == posB ? maskA : vMask | maskA | maskB;
        return (this.value & vMask) == vMask;
    }
    if (posA == posB) {
        var present = (this.mixed & maskA) ? this.sub[posA].check(a, b, step >> 4) : this.value & maskA;
        return !!present;
    }
    var present = (this.mixed & maskA) ? this.sub[posA].check(a, step - 1, step >> 4) : this.value & maskA;
    if (!present) return false;
    present = (this.mixed & maskB) ? this.sub[posB].check(0, b, step >> 4) : this.value & maskB;
    if (!present) return false;
    return (this.value & vMask) == vMask;
}

function NaryTree(factor, depth) { // CONSTRUCTOR
    this.root = new Node();
    this.step = Math.pow(factor, depth);
}

NaryTree.prototype.addRange = function(a, b) {
    this.root.set(1, a, b, this.step);
}
NaryTree.prototype.delRange = function(a, b) {
    this.root.set(0, a, b, this.step);
}
NaryTree.prototype.hasRange = function(a, b) {
    return this.root.check(a, b, this.step);
}

var intervals = new NaryTree(16, 3); // create tree for 16-bit range

// CREATE RANDOM DATA AND RUN TEST
document.write("Created N-ary tree for 16-bit range.<br>Randomly adding/deleting 100000 intervals...");
for (var test = 0; test < 100000; test++) {
    var a = Math.floor(Math.random() * 61440);
    var b = a + Math.floor(Math.random() * 4096);
    if (Math.random() > 0.5) intervals.addRange(a, b);
    else intervals.delRange(a, b);
}
document.write("<br>Checking a few random intervals:<br>");
for (var test = 0; test < 8; test++) {
    var a = Math.floor(Math.random() * 65280);
    var b = a + Math.floor(Math.random() * 256);
    document.write("Tree has interval " + a + "-" + b + " ? " + intervals.hasRange(a, b) + ".<br>");
}
I ran a test to check how many nodes are created, and how many of these are active or dormant. I used a total range of 24 bits (so that I could test the worst case without running out of memory), divided into 6 levels of 4 bits (so each node has 16 sub-ranges); the number of nodes that need to be checked or updated when adding, deleting or checking an interval is 11 or fewer. The maximum number of nodes in this scheme is 1,118,481.
The graph below shows the number of active nodes when you keep adding/deleting random intervals with range 1 (single integers), 1~16, 1~256 ... 1~16M (the full range).
Adding and deleting single integers (dark green line) creates active nodes up to nearly the maximum of 1,118,481, with almost no nodes being made dormant. The maximum is reached after adding and deleting around 16M integers (= the number of integers in the range).
If you add and delete random intervals in a larger range, roughly the same number of nodes is created, but more of them are made dormant. If you add random intervals in the full 1~16M range (bright yellow line), fewer than 64 nodes are active at any time, no matter how many intervals you keep adding or deleting.
This already gives an idea of where this data structure could be useful as opposed to others: the more nodes are being made dormant, the more intervals/nodes would need to be deleted in other schemes.
On the other hand, it shows how this data structure may be too space-inefficient for certain ranges, and types and amounts of input. You could introduce a dormant node recycling system, but that takes away the advantage of the dormant nodes being immediately reusable.
A lot of space in the N-ary tree is taken up by pointers to child nodes. If the complete range is small enough, you could store the tree in an array. For a 32-bit range that would take 580 MB (546 MB for the "value" bitmaps and 34 MB for the "mixed" bitmaps). This is more space-efficient because you only store the bitmaps, and you don't need pointers to child nodes, because everything has a fixed place in the array. You'd have the advantage of a tree with depth 7, so any operation could be done by checking 15 "nodes" or fewer, and no nodes need to be created or deleted during add/delete/check operations.
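As a sanity check on those space figures, the node counts for the array-based layout follow directly from a geometric series. A quick sketch in plain JavaScript (purely illustrative arithmetic, not part of the data structure itself):

```javascript
// Count nodes in a complete 16-ary tree covering a 32-bit range:
// levels 0..7 each carry a 16-bit "value" bitmap; only the non-leaf
// levels 0..6 need a "mixed" bitmap.
var valueWords = 0, mixedWords = 0;
for (var k = 0; k <= 7; k++) valueWords += Math.pow(16, k);
for (var k = 0; k <= 6; k++) mixedWords += Math.pow(16, k);
console.log(valueWords.toString(16)); // "11111111" -> matches the value[] array size
console.log(mixedWords.toString(16)); // "1111111"  -> matches the mixed[] array size
// Each bitmap is 2 bytes; convert to MB:
console.log(Math.round(valueWords * 2 / (1 << 20))); // ~546
console.log(Math.round(mixedWords * 2 / (1 << 20))); // ~34
```

Because every level-k node occupies a fixed slot, the child of node `offset` at position `pos` lives at `offset * 16 + pos + 1`, which is exactly why no child pointers are needed.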
Here's a code example I used to try out the N-ary-tree-in-an-array idea; it uses 580 MB to store an N-ary tree with branching factor 16 and depth 7, for a 32-bit range (unfortunately, a range above 40 bits or so is probably beyond the memory capabilities of any normal computer). In addition to the requested functions, it can also check whether an interval is completely absent, using notValue() and notRange().
#include <iostream>
#include <cstdint> // uint8_t, uint16_t, uint32_t

inline uint16_t setBits(uint16_t pattern, uint16_t mask, uint16_t value) {
    return (pattern & ~mask) | (value & mask);
}

class NaryTree32 {
    uint16_t value[0x11111111], mixed[0x01111111];

    bool set(uint32_t a, uint32_t b, uint16_t value = 0xFFFF, uint8_t bits = 28, uint32_t offset = 0) {
        uint8_t posA = a >> bits, posB = b >> bits;
        uint16_t maskA = 1 << posA, maskB = 1 << posB;
        uint16_t mask = maskB ^ (maskA - 1) ^ (maskB - 1);
        // IF NODE IS LEAF: SET VALUE BITS AND RETURN WHETHER VALUES ARE MIXED
        if (bits == 0) {
            this->value[offset] = setBits(this->value[offset], mask, value);
            return this->value[offset] != 0 && this->value[offset] != 0xFFFF;
        }
        uint32_t offsetA = offset * 16 + posA + 1, offsetB = offset * 16 + posB + 1;
        uint32_t subMask = ~(0xFFFFFFFF << bits);
        a &= subMask; b &= subMask;
        // IF SUB-RANGE A IS MIXED OR HAS WRONG VALUE
        if (((this->mixed[offset] & maskA) != 0 || (this->value[offset] & maskA) != (value & maskA))
                && (a != 0 || (posA == posB && b != subMask))) {
            // IF SUB-RANGE WAS PREVIOUSLY SOLID: UPDATE SUB-NODE TO PREVIOUS VALUE
            if ((this->mixed[offset] & maskA) == 0) {
                this->value[offsetA] = (this->value[offset] & maskA) ? 0xFFFF : 0x0000;
                if (bits != 4) this->mixed[offsetA] = 0x0000;
            }
            // RECURSE AND IF SUB-NODE IS MIXED: SET MIXED BIT AND REMOVE A FROM MASK
            if (this->set(a, posA == posB ? b : subMask, value, bits - 4, offsetA)) {
                this->mixed[offset] |= maskA;
                mask ^= maskA;
            }
        }
        // IF SUB-RANGE B IS MIXED OR HAS WRONG VALUE
        if (((this->mixed[offset] & maskB) != 0 || (this->value[offset] & maskB) != (value & maskB))
                && b != subMask && posA != posB) {
            // IF SUB-RANGE WAS PREVIOUSLY SOLID: UPDATE SUB-NODE TO PREVIOUS VALUE
            if ((this->mixed[offset] & maskB) == 0) {
                this->value[offsetB] = (this->value[offset] & maskB) ? 0xFFFF : 0x0000;
                if (bits > 4) this->mixed[offsetB] = 0x0000;
            }
            // RECURSE AND IF SUB-NODE IS MIXED: SET MIXED BIT AND REMOVE B FROM MASK
            if (this->set(0, b, value, bits - 4, offsetB)) {
                this->mixed[offset] |= maskB;
                mask ^= maskB;
            }
        }
        // SET VALUE AND MIXED BITS THAT HAVEN'T BEEN SET YET AND RETURN WHETHER NODE IS MIXED
        if (mask) {
            this->value[offset] = setBits(this->value[offset], mask, value);
            this->mixed[offset] &= ~mask;
        }
        return this->mixed[offset] != 0 || (this->value[offset] != 0 && this->value[offset] != 0xFFFF);
    }

    bool check(uint32_t a, uint32_t b, uint16_t value = 0xFFFF, uint8_t bits = 28, uint32_t offset = 0) {
        uint8_t posA = a >> bits, posB = b >> bits;
        uint16_t maskA = 1 << posA, maskB = 1 << posB;
        uint16_t mask = maskB ^ (maskA - 1) ^ (maskB - 1);
        // IF NODE IS LEAF: CHECK BITS A TO B INCLUSIVE AND RETURN
        if (bits == 0) {
            return (this->value[offset] & mask) == (value & mask);
        }
        uint32_t subMask = ~(0xFFFFFFFF << bits);
        a &= subMask; b &= subMask;
        // IF SUB-RANGE A IS MIXED AND PART OF IT NEEDS CHECKING: RECURSE AND RETURN IF FALSE
        if ((this->mixed[offset] & maskA) && (a != 0 || (posA == posB && b != subMask))) {
            if (this->check(a, posA == posB ? b : subMask, value, bits - 4, offset * 16 + posA + 1)) {
                mask ^= maskA;
            }
            else return false;
        }
        // IF SUB-RANGE B IS MIXED AND PART OF IT NEEDS CHECKING: RECURSE AND RETURN IF FALSE
        if (posA != posB && (this->mixed[offset] & maskB) && b != subMask) {
            if (this->check(0, b, value, bits - 4, offset * 16 + posB + 1)) {
                mask ^= maskB;
            }
            else return false;
        }
        // CHECK IN-BETWEEN BITS (AND A AND/OR B IF NOT YET CHECKED) WHETHER SOLID AND CORRECT
        return (this->mixed[offset] & mask) == 0 && (this->value[offset] & mask) == (value & mask);
    }

public:
    NaryTree32() { // CONSTRUCTOR: INITIALISES ROOT NODE
        this->value[0] = 0x0000;
        this->mixed[0] = 0x0000;
    }
    void addValue(uint32_t a) {this->set(a, a);}
    void addRange(uint32_t a, uint32_t b) {this->set(a, b);}
    void delValue(uint32_t a) {this->set(a, a, 0);}
    void delRange(uint32_t a, uint32_t b) {this->set(a, b, 0);}
    bool hasValue(uint32_t a) {return this->check(a, a);}
    bool hasRange(uint32_t a, uint32_t b) {return this->check(a, b);}
    bool notValue(uint32_t a) {return this->check(a, a, 0);}
    bool notRange(uint32_t a, uint32_t b) {return this->check(a, b, 0);}
};

int main() {
    NaryTree32 *tree = new NaryTree32();
    tree->addRange(4294967280, 4294967295);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    tree->delValue(4294967290);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    tree->addRange(1000000000, 4294967295);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    tree->delRange(2000000000, 4294967280);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    return 0;
}
Interval trees seem to be geared toward storing overlapping intervals, while in your case that doesn't make sense. An interval tree could hold millions of small overlapping intervals, which together form only a handful of longer non-overlapping intervals.
If you want to store only non-overlapping intervals, then adding or deleting an interval may involve deleting a number of consecutive intervals that fall within the new interval. So quickly finding consecutive intervals, and efficient deletion of a potentially large number of intervals are important.
That sounds like a job for the humble linked list. When inserting a new interval, you'd:
Search the position of the new interval's starting point.
If it is inside an existing interval, go on to find the position of the end point, while extending the existing interval and deleting all intervals you pass on the way.
If it is in between existing intervals, check whether the end point comes before the next existing interval. If it does, create a new interval. If the end point comes after the start of the next existing interval, change the starting point of the next interval, and then go on to find the end point as explained in the previous paragraph.
Deleting an interval would be largely the same: you truncate the intervals that the starting point and end point fall inside of, and delete all the intervals in between.
The average and worst-case complexity of this are N/2 and N, where N is the number of intervals in the linked list. You could improve this by adding a method to avoid having to iterate over the whole list to find the starting point; if you know the range and distribution of the values, this could be something like a hash table; e.g. if the values are from 1 to X and the distribution is uniform, you'd store a table of length Y, where each item points to the interval that starts before the value X/Y. When adding an interval (A,B), you'd look up table[A/Y] and start iterating over the linked list from there. The choice of value for Y would be determined by how much space you want to use, versus how close you want to get to the actual position of the starting point. The complexities would then drop by a factor Y.
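A minimal sketch of that lookup-table idea, using plain arrays instead of the linked list; the range X, table size Y, interval data and bucket function here are all illustrative choices, not part of the answer above:

```javascript
// Hypothetical sketch: accelerate the "find starting point" scan with a
// coarse lookup table over a known value range [0, X).
var X = 1000, Y = 10;
// Sorted, non-overlapping intervals as [start, end] pairs.
var ivs = [[50, 80], [200, 250], [400, 480], [700, 900]];
// table[t] = index of the last interval starting at or before t * X / Y
// (or index 0 if no interval starts that early; scanning from there is
// still correct, it just saves fewer steps).
var table = [];
for (var t = 0, i = 0; t < Y; t++) {
    while (i + 1 < ivs.length && ivs[i + 1][0] <= t * X / Y) i++;
    table[t] = i;
}
// An add/delete/check for a value a can begin its scan at this index
// instead of at the head of the list:
function scanStart(a) {
    return table[Math.min(Y - 1, Math.floor(a * Y / X))];
}
console.log(scanStart(720)); // 3 -> skip straight to interval [700, 900]
```

In the linked-list version each table entry would hold a node reference rather than an index, and entries have to be patched whenever the interval they point at is merged away or deleted.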
(If you work in a language where you can short-circuit a linked list, and just leave the chain of objects you cut loose to be garbage-collected, you could find the locations of the starting point and end point independently, connect them, and skip the deletion of all the intervals in between. I don't know whether this would actually increase speed in practice.)
Here's a start of a code example, with the three range functions, but without further optimisation:
function Interval(a, b, n) {
    this.start = a;
    this.end = b;
    this.next = n;
}
function IntervalList() {
    this.first = null;
}
IntervalList.prototype.addRange = function(a, b) {
    if (!this.first || b < this.first.start - 1) {
        this.first = new Interval(a, b, this.first); // insert as first element
        return;
    }
    var i = this.first;
    while (a > i.end + 1 && i.next && b >= i.next.start - 1) {
        i = i.next; // locate starting point
    }
    if (a > i.end + 1) { // insert as new element
        i.next = new Interval(a, b, i.next);
        return;
    }
    var j = i.next;
    while (j && b >= j.start - 1) { // locate end point
        i.end = j.end;
        i.next = j = j.next; // discard overlapping interval
    }
    if (a < i.start) i.start = a; // update interval start
    if (b > i.end) i.end = b; // update interval end
}
IntervalList.prototype.delRange = function(a, b) {
    if (!this.first || b < this.first.start) return; // range before first interval
    var i = this.first;
    while (i.next && a > i.next.start) i = i.next; // a in or after interval i
    if (a > i.start) { // a in interval
        if (b < i.end) { // range in interval -> split
            i.next = new Interval(b + 1, i.end, i.next);
            i.end = a - 1;
            return;
        }
        if (a <= i.end) i.end = a - 1; // truncate interval
    }
    var j = a > i.start ? i.next : i;
    while (j && b >= j.end) j = j.next; // b before or in interval j
    if (a <= this.first.start) this.first = j; // short-circuit list
    else i.next = j;
    if (j && b >= j.start) j.start = b + 1; // truncate interval
}
IntervalList.prototype.hasRange = function(a, b) {
    if (!this.first) return false; // empty list
    var i = this.first;
    while (i.next && a > i.end) i = i.next; // a before or in interval i
    return a >= i.start && b <= i.end; // range in interval ?
}
IntervalList.prototype.addValue = function(a) {
    this.addRange(a, a); // could be optimised
}
IntervalList.prototype.delValue = function(a) {
    this.delRange(a, a); // could be optimised
}
IntervalList.prototype.hasValue = function(a) {
    return this.hasRange(a, a); // could be optimised
}
IntervalList.prototype.print = function() {
    var i = this.first;
    if (i) do document.write("(" + i.start + "-" + i.end + ") "); while (i = i.next);
    document.write("<br>");
}

var intervals = new IntervalList();
intervals.addRange(100, 199);
document.write("+ (100-199) → "); intervals.print();
intervals.addRange(300, 399);
document.write("+ (300-399) → "); intervals.print();
intervals.addRange(200, 299);
document.write("+ (200-299) → "); intervals.print();
intervals.delRange(225, 275);
document.write("− (225-275) → "); intervals.print();
document.write("(150-200) ? " + intervals.hasRange(150, 200) + "<br>");
document.write("(200-300) ? " + intervals.hasRange(200, 300) + "<br>");
I'm surprised no one has suggested segment trees over the integer domain of stored values. (When used in geometric applications like graphics in 2D and 3D, they're called quadtrees and octrees, respectively.) Insert, delete, and lookup have time and space complexity proportional to the number of bits in (maxval - minval), that is, log_2 (maxval - minval), where maxval and minval are the max and min values of the integer data domain.
In a nutshell, we are encoding a set of integers in [minval, maxval]. A node at topmost level 0 represents that entire range. Each successive level k's nodes represent sub-ranges of approximate size (maxval - minval) / 2^k. When a node is included, some subset of its corresponding values is part of the represented set. When it's a leaf, all of its values are in the set. When it's absent, none are.
E.g. if minval=0 and maxval=7, then the k=1 children of the k=0 node represent [0..3] and [4..7]. Their children at level k=2 are [0..1], [2..3], [4..5] and [6..7], and the k=3 nodes represent individual elements. The set {[1..3], [6..7]} would be the tree (levels left to right):
[0..7] -- [0..3] -- [0..1]
  |          |        `- [1]
  |          `- [2..3]
  `- [4..7]
       `- [6..7]
It's not hard to see that space for the tree will be O(m log (maxval - minval)) where m is the number of intervals stored in the tree.
It's not common to use segment trees with dynamic insert and delete, but the algorithms turn out to be fairly simple. It takes some care to ensure the number of nodes is minimized.
Here is some very lightly tested Java code.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SegmentTree {
    // Shouldn't differ by more than Long.MAX_VALUE to prevent overflow.
    static final long MIN_VAL = 0;
    static final long MAX_VAL = Long.MAX_VALUE;

    Node root;

    static class Node {
        Node left;
        Node right;

        Node(Node left, Node right) {
            this.left = left;
            this.right = right;
        }
    }

    private static boolean isLeaf(Node node) {
        return node != null && node.left == null && node.right == null;
    }

    private static Node reset(Node node, Node left, Node right) {
        if (node == null) {
            return new Node(left, right);
        }
        node.left = left;
        node.right = right;
        return node;
    }

    /**
     * Accept an arbitrary subtree rooted at a node representing a subset S of the range [lo,hi] and
     * transform it into a subtree representing S + [a,b]. It's assumed a >= lo and b <= hi.
     */
    private static Node add(Node node, long lo, long hi, long a, long b) {
        // If the range is empty, the interval tree is always null.
        if (lo > hi) return null;
        // If this is a leaf or insertion is outside the range, there's no change.
        if (isLeaf(node) || a > b || b < lo || a > hi) return node;
        // If insertion fills the range, return a leaf.
        if (a == lo && b == hi) return reset(node, null, null);
        // Insertion doesn't cover the range. Get the children, if any.
        Node left = null, right = null;
        if (node != null) {
            left = node.left;
            right = node.right;
        }
        // Split the range and recur to insert in halves.
        long mid = lo + (hi - lo) / 2;
        left = add(left, lo, mid, a, Math.min(b, mid));
        right = add(right, mid + 1, hi, Math.max(a, mid + 1), b);
        // Build a new node, coalescing to leaf if both children are leaves.
        return isLeaf(left) && isLeaf(right) ? reset(node, null, null) : reset(node, left, right);
    }

    /**
     * Accept an arbitrary subtree rooted at a node representing a subset S of the range [lo,hi] and
     * transform it into a subtree representing range(s) S - [a,b]. It's assumed a >= lo and b <= hi.
     */
    private static Node del(Node node, long lo, long hi, long a, long b) {
        // If the tree is null, we can't remove anything, so it's still null,
        // or if the range is empty, the tree is null.
        if (node == null || lo > hi) return null;
        // If the deletion is outside the range, there's no change.
        if (a > b || b < lo || a > hi) return node;
        // If deletion fills the range, return an empty tree.
        if (a == lo && b == hi) return null;
        // Deletion doesn't fill the range.
        // Replace a leaf with a tree that has the deleted portion removed.
        if (isLeaf(node)) {
            return add(add(null, lo, hi, b + 1, hi), lo, hi, lo, a - 1);
        }
        // Not a leaf. Get children, if any.
        Node left = node.left, right = node.right;
        long mid = lo + (hi - lo) / 2;
        // Recur to delete in child ranges.
        left = del(left, lo, mid, a, Math.min(b, mid));
        right = del(right, mid + 1, hi, Math.max(a, mid + 1), b);
        // Build a new node, coalescing to empty tree if both children are empty.
        return left == null && right == null ? null : reset(node, left, right);
    }

    private static class NotContainedException extends Exception {}

    private static void verifyContains(Node node, long lo, long hi, long a, long b)
            throws NotContainedException {
        // If this is a leaf or the query is empty, it's always contained.
        if (isLeaf(node) || a > b) return;
        // If the tree or search range is empty, the query is never contained.
        if (node == null || lo > hi) throw new NotContainedException();
        long mid = lo + (hi - lo) / 2;
        verifyContains(node.left, lo, mid, a, Math.min(b, mid));
        verifyContains(node.right, mid + 1, hi, Math.max(a, mid + 1), b);
    }

    SegmentTree addRange(long a, long b) {
        root = add(root, MIN_VAL, MAX_VAL, Math.max(a, MIN_VAL), Math.min(b, MAX_VAL));
        return this;
    }

    SegmentTree addVal(long a) {
        return addRange(a, a);
    }

    SegmentTree delRange(long a, long b) {
        root = del(root, MIN_VAL, MAX_VAL, Math.max(a, MIN_VAL), Math.min(b, MAX_VAL));
        return this;
    }

    SegmentTree delVal(long a) {
        return delRange(a, a);
    }

    boolean containsVal(long a) {
        return containsRange(a, a);
    }

    boolean containsRange(long a, long b) {
        try {
            verifyContains(root, MIN_VAL, MAX_VAL, Math.max(a, MIN_VAL), Math.min(b, MAX_VAL));
            return true;
        } catch (NotContainedException expected) {
            return false;
        }
    }

    private static final boolean PRINT_SEGS_COALESCED = true;

    /** Gather a list of possibly coalesced segments for printing. */
    private static void gatherSegs(List<Long> segs, Node node, long lo, long hi) {
        if (node == null) {
            return;
        }
        if (node.left == null && node.right == null) {
            if (PRINT_SEGS_COALESCED && !segs.isEmpty() && segs.get(segs.size() - 1) == lo - 1) {
                segs.remove(segs.size() - 1);
            } else {
                segs.add(lo);
            }
            segs.add(hi);
        } else {
            long mid = lo + (hi - lo) / 2;
            gatherSegs(segs, node.left, lo, mid);
            gatherSegs(segs, node.right, mid + 1, hi);
        }
    }

    SegmentTree print() {
        List<Long> segs = new ArrayList<>();
        gatherSegs(segs, root, MIN_VAL, MAX_VAL);
        Iterator<Long> it = segs.iterator();
        while (it.hasNext()) {
            long a = it.next();
            long b = it.next();
            System.out.print("[" + a + "," + b + "]");
        }
        System.out.println();
        return this;
    }

    public static void main(String[] args) {
        SegmentTree tree = new SegmentTree()
            .addRange(0, 4).print()
            .addRange(6, 7).print()
            .delVal(2).print()
            .addVal(5).print()
            .addRange(0, 1000).print()
            .addVal(5).print()
            .delRange(22, 777).print();
        System.out.println(tree.containsRange(3, 20));
    }
}

Find the least common parent in a binary tree?

This question might have been asked many times before, but this one is a little different. We have a binary tree, and you are given two nodes p & q. We have to find the least common parent, but you don't have a pointer to the root node. You are provided with two built-in functions:
1) BOOL same(node *p, node *q); -> returns true if the two nodes are the same, otherwise false.
2) node* parentNode(node *c); -> returns the parent of the current node.
If node c is the root, the parentNode function returns a NULL value.
Using these functions, we have to find the least common parent of the two nodes.
Step 1: Using the parentNode function, find the distance d1 of node p from the root; similarly, find the distance d2 of node q from the root (say d2 turns out to be greater than d1).
Step 2: Move the farther node's pointer (whichever had the greater d-value) d2-d1 steps towards the root.
Step 3: Simultaneously move pointers p and q towards the root until they point to the same node, and return that node.
Basically it is like finding the merge point of two linked lists. Check the link below:
Check if two linked lists merge. If so, where?
Time complexity: O(N)
Your code would look somewhat along the lines:
node* LCP(node* p, node* q) {
    int d1 = 0, d2 = 0;
    for (node* t = p; t; t = parentNode(t), ++d1);  // distance of p from root
    for (node* t = q; t; t = parentNode(t), ++d2);  // distance of q from root
    if (d1 > d2) {
        swap(d1, d2);
        swap(p, q);
    }
    for (int i = 0; i < (d2 - d1); ++i)  // level the deeper node
        q = parentNode(q);
    if (same(p, q)) {  // one node is an ancestor of the other
        return parentNode(p);
    }
    while (!same(p, q)) {
        p = parentNode(p);
        q = parentNode(q);
    }
    return p;
}
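For quick experimentation, the same three steps can be sketched in JavaScript with an explicit parent map (the tree shape, `parent` map and node names here are made up for the demo; unlike the C version above, this sketch returns the shared node itself when one node is an ancestor of the other):

```javascript
// Hypothetical demo tree: A -> (B, C), B -> (D, E); the root's parent is null.
var parent = { B: "A", C: "A", D: "B", E: "B", A: null };
function parentNode(n) { return parent[n]; }
function same(p, q) { return p === q; }

function lcp(p, q) {
    var d1 = 0, d2 = 0, t;
    for (t = p; t; t = parentNode(t)) d1++;          // distance of p from root
    for (t = q; t; t = parentNode(t)) d2++;          // distance of q from root
    if (d1 > d2) { t = p; p = q; q = t; t = d1; d1 = d2; d2 = t; }
    for (var i = 0; i < d2 - d1; i++) q = parentNode(q);  // level the deeper node
    while (!same(p, q)) { p = parentNode(p); q = parentNode(q); }  // climb together
    return p;
}
console.log(lcp("D", "E")); // "B"
console.log(lcp("D", "C")); // "A"
```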
Assuming C++:
node* leastCommonParent(node *p, node *q)
{
    node *pParent = parentNode(p);
    while (pParent != 0)
    {
        node *qParent = parentNode(q);
        while (qParent != 0)
        {
            if (same(pParent, qParent))  // first match is the least common parent
                return pParent;
            qParent = parentNode(qParent);
        }
        pParent = parentNode(pParent);
    }
    return 0;
}
UPDATE: A version without explicitly declared variables using recursion follows. I'm sure it can be improved and would probably never use it in production code in the current form.
node* qParent(node *p, node *q)
{
    if (p == 0 || q == 0)
        return 0;
    if (same(p, q))               // p found among q's ancestors
        return p;
    return qParent(p, parentNode(q));
}

node* pParent(node *p, node *q)
{
    return qParent(p, q) ? qParent(p, q) : pParent(parentNode(p), q);
}
node * result = pParent(p, q);
