How to find nodes fast in an unordered tree - algorithm

I have an unordered tree in the form of, for example:
Root
  A1
    A1_1
      A1_1_1
      A1_1_2
        A1_1_2_1
        A1_1_2_2
        A1_1_2_3
      A1_1_3
      A1_1_n
    A1_2
    A1_3
    A1_n
  A2
    A2_1
    A2_2
    A2_3
    A2_n
The tree is unordered:
each node can have an arbitrary number N of children,
each node stores a unique long value,
the value required can be at any position.
My problem: if I need the long value of A1_1_2_3, the first time I traverse the nodes I do a depth-first search to get it. However, on later calls for the same node I must get its value without a recursive search. Why? If this tree had hundreds of thousands of nodes before reaching my A1_1_2_3 node, the search would take too much time.
What I thought of is to leave some pointers behind after the first traversal. E.g. in my case, when I give back the long value for A1_1_2_3, I also give back an array with information for future searches of the same node, saying: to get to A1_1_2_3, I need:
first child of Root, which is A1
first child of A1, which is A1_1
second child of A1_1, which is A1_1_2
third child of A1_1_2, which is what I need: A1_1_2_3
So I figured I would store this information along with the value of A1_1_2_3 as an array of indexes: [0, 0, 1, 2]. By doing so, I could easily re-locate the node on subsequent calls for A1_1_2_3 and avoid recursion each time.
However, the nodes can change. On subsequent calls I might have a new structure, so my stored indexes would not match anymore. But I thought that whenever I don't find the element anymore, I would recursively go back up a level and search for the item, and so on, until I find it again and store the new indexes for future reference:
e.g. if my A1_1_2_3 is now situated in this new structure:
A1_1
  A1_1_0
  A1_1_1
  A1_1_2
    A1_1_2_1
    A1_1_2_2
    A1_1_21_22
    A1_1_2_3
... in this case the new element A1_1_0 ruined my stored path, so I would go back up a level and search the children again recursively until I find the node again.
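A minimal sketch of this cached-path lookup (assuming each node exposes a children list and stores its long as value; depth_first_search stands in for the existing recursive search, and for brevity this fallback re-runs the full search instead of walking back up level by level):

    def resolve(root, path):
        # follow the stored child indexes; None if the path no longer matches
        node = root
        for index in path:
            if index >= len(node.children):
                return None
            node = node.children[index]
        return node

    def find_with_cache(root, target_value, path):
        # try the cheap cached path first, fall back to the full search
        node = resolve(root, path)
        if node is not None and node.value == target_value:
            return node
        # the real version would also rebuild and store the new index path
        return depth_first_search(root, target_value)  # O(n) fallback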
Does what I thought of here even make sense, or am I overcomplicating things? I'm talking about an unordered tree which can have at most around three hundred thousand nodes, and it is vital that I can jump to nodes as fast as possible. But the tree can also be very small, under 10 nodes.
Is there a more efficient way to search in such a situation?
Thank you for any idea.
edit:
I forgot to add: what I need on subsequent calls is not just the same value; the node's position also matters, because I must get the next page of children after that child (since it's a tree structure, I'm paging through the nodes after the initially selected one). I hope it makes more sense now.

Related

Is there a heap or heap-like structure that works with pointers, in other words nodes not in an array?

I currently have a double-linked list of objects in descending sorted order. (The list is intrusive--pointers in the objects.) I have a very limited set of operations:
1. add a node with the highest possible key
2. remove a node with the highest possible key (doesn't matter which one)
3. remove a node with key 0 (doesn't matter which one)
4. increment the key of a node with the highest current key (doesn't matter which one)
5. decrement the key of any given node whose key is above 0
Operations 1-4 will be constant time, but operation 5 is O(n), where n = number of nodes with the same key value. This is because such nodes, when incremented, have to be moved past their siblings with the same key value and placed after that range. And finding that re-insert place will be O(n).
I thought of the heap (heapsort heap, not malloc heap) as a solution where the worst case would be O(log n) (where n = number of nodes). However, based on my recollection and what Google is finding me, it seems invariably implemented in an array, as opposed to a binary tree. So:
Question: is there an implementation of a heap that uses pointers in the manner of a binary tree, as opposed to an array, that maintains O() of the typical array implementation?
One common way to do this is to use an array-based heap, but:
In the heap you store pointers to nodes;
In each node you store its index in the heap; and
Whenever you swap elements in the heap, you update the indexes in the corresponding nodes.
This preserves the complexity of all the heap operations, and costs around 1.5 pointers and 1 integer per node (the extra 0.5 is because of the way growable arrays are implemented).
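A minimal sketch of that bookkeeping, assuming a max-heap ordered on a key field (only push and the increment path are shown; the sift-down needed for decrements is analogous):

    class Node:
        def __init__(self, key):
            self.key = key
            self.heap_index = -1   # this node's position in the heap array

    class IndexedHeap:
        def __init__(self):
            self.heap = []         # stores references to nodes

        def _swap(self, i, j):
            self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
            # keep each node's stored index in sync after every swap
            self.heap[i].heap_index = i
            self.heap[j].heap_index = j

        def _sift_up(self, i):
            while i > 0 and self.heap[i].key > self.heap[(i - 1) // 2].key:
                self._swap(i, (i - 1) // 2)
                i = (i - 1) // 2

        def push(self, node):
            node.heap_index = len(self.heap)
            self.heap.append(node)
            self._sift_up(node.heap_index)

        def increase_key(self, node, new_key):
            # O(log n): we already know where the node sits, no search needed
            node.key = new_key
            self._sift_up(node.heap_index)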
Alternatively, you can just link the nodes together into a tree with pointers. To support the operations you want, though, this requires 3 pointers per node (parent, left, right).
Both ways work fine, but the array implementation is simpler, faster, and uses a bit less memory.
ETA:
I should point out, though, that if you use pointers then you can use different kinds of heaps. A Fibonacci heap will let you decrement the value of a node in amortized constant time. It's kinda complicated, though, and slow in practice: https://en.wikipedia.org/wiki/Fibonacci_heap
Unfortunately, the answer to the problem as written isn't an answer to the question in the headline title.
Solution 1: amortized O(1) data structure
A solution was found with amortized O(1) implementations of all required operations.
It is simply a double-linked list of double-linked lists. The "main" double-linked list nodes are called parents, and we have at most one parent per key value. The parent nodes keep a double-linked list of child nodes with the same key value. Each child additionally points to its parent.
add a node with the highest possible value: If there is no list head, or its value is not the max, add the new node to the head of the main linked list. Otherwise, add it to the tail of the head node's child list.
remove a (any) node with the highest possible value: In the case of multiple items with the highest value, it doesn't matter which we remove. So, if the head parent has children, remove the tail child from the child list. Otherwise, remove the parent from the main list.
remove a (any) node with value 0: Same operations, at the tail of the main list.
increment the value of a (any) node with the highest current value: In the case of multiple nodes with the same key value we can choose any, so choose the head parent's tail child. Remove it from the child list. If incrementing its value exceeds the max value, then you're done. Otherwise it's a new head node. If instead there are no children, then increment the head parent in place, and if it exceeds the maximum value, remove it.
decrement the value of any node above 0: If the node is a child, remove it from the child list, then either add it to the parent's successor's child list or insert it as a new node after the parent. For a parent with no children: if the successor in the main list still has a smaller key, you're done; otherwise remove it and add it as the successor's tail child. For a parent with children: same, but promote the head child to take its place. This is O(n), where n = number of nodes with the given value, because you must change the parent pointer for all children. However, if the odds of the node selected for decrement being the parent node of all nodes with the given value are 1/n, this amortizes to O(1).
The main downside is that we logically have 7 different pointers for each node. In the parent role we need previous and next parent, plus head and tail child. In the child role we need previous and next child, plus parent. These can be placed in a union of two alternate substructures of 4 and 3 pointers, which saves storage but not CPU time (except perhaps the need to zero out unused pointers for cleanliness). Updating them all won't be fast.
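For concreteness, a sketch of the two node roles described above (Python classes shown for illustration; the union trick applies in a language like C, where both roles would overlay the same storage):

    class ParentNode:
        # one per distinct key value, linked into the main list
        def __init__(self, key):
            self.key = key
            self.prev_parent = self.next_parent = None   # main list links
            self.head_child = self.tail_child = None     # same-key child list

    class ChildNode:
        # extra nodes sharing the parent's key value
        def __init__(self):
            self.prev_child = self.next_child = None
            self.parent = None                           # back-pointer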
Solution 2: Sloppy is Good Enough
Another approach is simply to be sloppy. The application benefits from finding nodes with higher scores, but it's not critical that they be in absolutely perfect order. So rather than an O(n) operation that moves nodes potentially from one end of the chain to the other, we could accept a solution that does an O(1), albeit at times imperfect, job.
This could keep the current doubly-linked list implementation. It supports all operations except decrement in O(1), and it can handle a decrement of a unique key value in O(1). Only a decrement of a non-unique key value goes O(n), as we need to skip the remaining nodes with the previous key value to find the first with the same or a higher key. In the worst case, we could simply cap that search at, say, 5 or 10 links. This too would provide a nominally O(1) solution. However, some pernicious usage patterns may slowly cause the entire list to become quite unordered.
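A minimal sketch of the capped decrement, assuming hypothetical unlink and insert_before helpers on the doubly-linked list (all names are illustrative; the list is kept in descending key order):

    MAX_SCAN = 8   # arbitrary cap; tune for your workload

    def sloppy_decrement(lst, node):
        node.key -= 1
        # scan at most MAX_SCAN successors for the new insert position;
        # if the budget runs out we accept a slightly out-of-order list
        cur, budget = node.next, MAX_SCAN
        while cur is not None and cur.key > node.key and budget > 0:
            cur = cur.next
            budget -= 1
        if cur is not node.next:           # only relink if the node must move
            unlink(lst, node)
            insert_before(lst, node, cur)  # cur is None means append at tail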

Segment tree with easy shifting

I need a data structure based on a segment tree, but with one difference from the classical segment tree: it should support easy shifting of elements. I mean I would like to have a data structure on which I could:
make queries on segments (i.e. sum of elements from index l to index r)
insert new elements before any index, shifting all elements to the right of the insertion point
It would be nice if all these operations worked in O(log n).
Greetings
Yes, but I'm not sure you can keep the tree balanced.
The basic structure is like this. Every node keeps track of only the distance between the start of the interval it covers and the split between its children. For example, if a node covers the interval [A, B] and has children covering [A, C] and [C+1, B], then the node should only store the value C - A. This allows you to easily change the sizes of the intervals without having to touch the entire structure. It also means that you invalidate any existing iterators when you shift anything inside, and that each iterator must keep track of its interval.
To do a shift operation:
do a search for the insertion point.
pick an appropriate node on the path.
insert a new node above the selected node. This node should cover the old + new interval, so set its split value to the size of the shift. Now the new space is the left child and the old space is the right child.
add any children you want to keep for the new space.
update all the parents where the split point was on the left, since there are now more values before their split.
Any other operation should be done the same way. You should pick a node where the new interval is roughly equal to the size of the node, so that you keep operations O(log n). Obviously, inserting 1 element over and over again can cause some paths to be considerably longer than others, unless you also add a step to rebalance the tree after a shift.
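A rough sketch of the relative-offset node and the splice step, assuming parent pointers and sum queries (splice_block, joint and the field names are illustrative, not from the answer):

    class Node:
        def __init__(self, split, total, left=None, right=None):
            self.split = split    # size of the left child's interval
            self.total = total    # aggregate of the whole interval (here: sum)
            self.left, self.right = left, right
            self.parent = None

    def splice_block(node, new_block, new_count):
        # insert a fresh parent above `node`: the inserted elements become
        # the left child, the old subtree the right child
        old_parent = node.parent
        joint = Node(split=new_count, total=new_block.total + node.total,
                     left=new_block, right=node)
        new_block.parent = node.parent = joint
        joint.parent = old_parent
        if old_parent is not None:
            if old_parent.left is node:
                old_parent.left = joint
            else:
                old_parent.right = joint
        # walk up: every ancestor whose split point lies to the right of
        # the insertion (i.e. we arrive from its left child) now has
        # new_count more elements before its split
        child, cur = joint, old_parent
        while cur is not None:
            if cur.left is child:
                cur.split += new_count
            cur.total += new_block.total
            child, cur = cur, cur.parent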
However, if you know the shifts in advance, I would simply go through the shifts backwards and compute the final location of all the data and queries in O(N). Then you can do a regular segment tree and not worry about the shifts.

A red black tree with the same key multiple times: store collections in the nodes or store them as multiple nodes?

Apparently you could do either, but the former is more common.
Why would you choose the latter and how does it work?
I read this: http://www.drdobbs.com/cpp/stls-red-black-trees/184410531, which made me think that they did the latter. It says:
insert_always is a status variable that tells rb_tree whether multiple instances of the same key value are allowed. This variable is set by the constructor and is used by the STL to distinguish between set and multiset and between map and multimap. set and map can only have one occurrence of a particular key, whereas multiset and multimap can have multiple occurrences.
Although now I think it doesn't necessarily mean that. They might still be using containers.
I'm thinking all the nodes with the same key would have to be in a row, because you have to store all nodes with the same key either on the right side or on the left side. So if you store equal nodes to the right and insert 1000 1s and one 2, you'd basically have a linked list, which would ruin the properties of the red-black tree.
Is the reason why I can't find much on it that it's just a bad idea?
Downsides of storing as multiple nodes:
it expands the tree size, which makes searches slower.
if you want to retrieve all values for key K, you need M*log(N) time, where N is the total number of nodes and M is the number of values for key K, unless you introduce extra code (which complicates the data structure) to maintain a linked list of these values. (If storing a collection, it only takes log(N), and it's simple to implement.)
deletion is more costly. With the multi-node method you'll need to remove a node on every delete, but with collection storage you only need to remove node K when the last value for key K is deleted.
I can't think of any upside to the multi-node method.
Binary search trees by definition cannot contain duplicates. If you used one to produce a sorted list, throwing out the duplicates would produce an incorrect result.
I was working on an implementation of red-black trees in PHP when I ran into the duplicate issue. We are going to use the tree for sorting and searching.
I am considering adding an occurrence value to the node data type. When a duplicate is encountered, just increment the occurrence count. When walking the tree to produce output, just repeat the value by the number of occurrences. I think I would still have a valid BST, and I'd avoid having a whole chain of duplicate values, which preserves the optimal search time.
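A minimal sketch of the occurrence-counter idea (a plain unbalanced BST shown for brevity; a red-black tree would additionally rebalance on the first insert of a key):

    class Node:
        def __init__(self, key):
            self.key = key
            self.count = 1            # occurrences of this key
            self.left = self.right = None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key == root.key:
            root.count += 1           # duplicate: just bump the counter
        elif key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def in_order(root):
        # repeat each key by its number of occurrences
        if root:
            yield from in_order(root.left)
            for _ in range(root.count):
                yield root.key
            yield from in_order(root.right)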

return inserted items for a given interval

How would one design a memory-efficient system which accepts items added into it and allows items to be retrieved for a given time interval (i.e. return items inserted between time T1 and time T2)? There is no DB involved; items are stored in memory. What data structure is involved, and what is the associated algorithm?
Updated:
Assume an extremely high insertion rate compared to the query rate.
You can use a sorted data structure, where the key is the time of arrival. Note the following:
items are not removed
items are inserted in order [if item i was inserted after item j then key(i)>key(j)].
For this reason, a tree is discouraged, since it is "overpowered": insertion into it is O(log n), where you can get O(1) insertion. I suggest using one of the following:
(1) Array: the array is always filled up at its end. When the allocated array is full, reallocate a bigger [double-sized] array and copy the existing array into it.
Advantages: good caching is usually expected with arrays; O(1) amortized insertion; used space is at most 2*elementSize*#elements.
Disadvantages: high latency: when the array is full, it will take O(n) to add an element, so you need to expect that once in a while there will be a costly operation.
(2) Skip list: the skip list also gives you O(log n) seek and O(1) insertion at the end, and it doesn't have the latency issue. However, it will suffer more from cache misses than an array. Space used is on average elementSize*#elements + 2*pointerSize*#elements for a skip list.
Advantages: O(1) insertion, no costly ops.
Disadvantages: bad caching is expected.
Suggestion:
I suggest using an array if latency is not an issue. If it is, you'd better use a skip list.
In both, finding the desired interval is two seeks plus a scan. A runnable Python version of the pseudocode, assuming data is kept sorted by timestamp:

    from bisect import bisect_left, bisect_right

    def find_interval(data, t1, t2):
        # binary search for both boundaries, then yield everything between
        start = bisect_left(data, t1)
        end = bisect_right(data, t2)
        for element in data[start:end]:
            yield element
Either a B-tree or a binary search tree would be a good in-memory data structure to accomplish the above. Just save the timestamp in each node and you can do a range query.
You can add them all to a simple array and sort them.
Do a binary search to locate both T1 and T2. All the array elements between them are what you are looking for.
This is helpful if the searching is done only after all the elements are added. If not, you can use an AVL or red-black tree.
How about a relational interval tree (encode your items as intervals containing only a single element, e.g., [a, a])? Although it has been said already that the ratio of the anticipated operations matters (a lot, actually). But here's my two cents:
I suppose an item X that is inserted at time t(X) is associated with that timestamp, right? Meaning you don't insert an item now which has a timestamp from a week ago or something. If that's the case, go for the simple array and do interpolation search or something similar (your items will already be sorted according to the attribute that your query refers to, i.e., the time t(X)).
We already have an answer that suggests trees, but I think we need to be more specific: the only situation in which a tree is really a good solution is if you are very specific about how you build it up (and then I would say it's on par with the skip lists suggested in a different answer). The objective is to keep the tree as full as possible to the left; I'll make clearer what that means in the following. Make sure each node has a pointer to its (up to) two children and to its parent, and knows the depth of the subtree rooted at that node.
Keep a pointer to the root node so that you are able to do lookups in O(log(n)), and keep a pointer to the last inserted node N (which is necessarily the node with the highest key - its timestamp will be the highest). When you are inserting a node, check how many children N has:
If 0, then replace N with the new node you are inserting and make N its left child. (At this point you'll need to update the tree depth field of at most O(log(n)) nodes.)
If 1, then add the new node as its right child.
If 2, then things get interesting. Go up the tree from N until either you find a node that has only 1 child, or the root. If you find a node with only 1 child (this is necessarily the left child), then add the new node as its new right child. If all nodes up to the root have two children, then the current tree is full. Add the new node as the new root node and the old root node as its left child. Don't change the old tree structure otherwise.
Addendum: to improve cache behaviour and memory overhead, the best solution is probably a tree or skip list of arrays. Instead of every node having a single timestamp and a single value, make every node hold an array of, say, 1024 timestamps and values. When an array fills up you add a new one in the top-level data structure, but in most steps you just add a single element to the end of the "current array". This doesn't affect big-O behaviour with respect to either memory or time, but it reduces the overhead by a factor of 1024, while latency stays very small.
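A minimal sketch of that chunking, with a plain Python list standing in for the top-level tree or skip list (CHUNK and the class name are illustrative):

    CHUNK = 1024   # elements per node, as suggested above

    class ChunkedLog:
        def __init__(self):
            self.chunks = [[]]

        def append(self, timestamp, value):
            if len(self.chunks[-1]) == CHUNK:
                self.chunks.append([])    # rare: new node in the top level
            self.chunks[-1].append((timestamp, value))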

Indexing count of buckets

So, here is my little problem.
Let's say I have a list of buckets a0 ... an which respectively contain L <= c0 ... cn < H items. I can decide on the L and H limits. I could even update them dynamically, though I don't think it would help much.
The order of the buckets matters; I can't go and swap them around.
Now, I'd like to index these buckets so that:
I know the total count of items
I can look up the ith element
I can add/remove items from any bucket and update the index efficiently
Seems easy, right? Seeing these criteria I immediately thought of a Fenwick tree. That's what they are meant for, really.
However, when you think about the use cases, a few other use cases creep in:
if a bucket count drops below L, the bucket must disappear (don't worry about the items yet)
if a bucket count reaches H, then a new bucket must be created because this one is full
I haven't figured out how to edit a Fenwick tree efficiently, i.e. how to remove or add a node without rebuilding the whole tree...
Of course we could set L = 0, so that removal becomes unnecessary; however, adding items cannot really be avoided.
So here is the question:
Do you know either a better structure for this index, or how to update a Fenwick tree?
The primary concern is efficiency, and because I do plan to implement it cache/memory considerations are worth worrying about.
Background:
I am trying to come up with a structure somewhat similar to B-trees and ranked skip lists, but with a localized index. The problem with those two structures is that the index is kept along with the data, which is inefficient in terms of cache (i.e. you need to fetch multiple pages from memory). Database implementations suggest that keeping the index isolated from the actual data is more cache-friendly, and thus more efficient.
I have understood your problem as:
Each bucket has an internal order and buckets themselves have an order, so all the elements have some ordering and you need the ith element in that ordering.
To solve that:
What you can do is maintain a 'cumulative value' tree where the leaf nodes (x1, x2, ..., xn) are the bucket sizes. The value of a node is the sum of the values of its immediate children. Keeping n a power of 2 will make it simple (you can always pad with zero-size buckets at the end), and the tree will be a complete tree.
Corresponding to each bucket you will maintain a pointer to the corresponding leaf node.
E.g., say the bucket sizes are 2, 1, 4, 8.
The tree will look like:

        15
       /  \
      3    12
     / \   / \
    2   1 4   8
If you want the total count, read the value of the root node.
If you want to modify some xk (i.e. change the corresponding bucket size), you can walk up the tree following parent pointers, updating the values.
For instance, if you add 4 items to the second bucket it will be (the nodes marked with * are the ones that changed):

        19*
       /  \
      7*   12
     / \   / \
    2   5* 4  8
If you want to find the ith element, you walk down the tree, effectively doing a binary search: each node already gives you its left and right child counts. If i > the left child's value, you subtract the left child's value from i and recurse into the right subtree. If i <= the left child's value, you recurse into the left subtree.
Say you wanted to find the 9th element in the above tree:
Since the left child of the root is 7 and 7 < 9,
you subtract 7 from 9 (to get 2) and go right.
Since 2 < 4 (the left child of 12), you go left.
You are at the leaf node corresponding to the third bucket. You now need to pick the second element in that bucket.
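A compact array-based sketch of this cumulative tree (a minimal sketch; the class and method names are illustrative, and the bucket count is padded to a power of two):

    class CumulativeTree:
        def __init__(self, sizes):
            n = 1
            while n < len(sizes):
                n *= 2                       # pad to a power of two
            self.n = n
            self.tree = [0] * (2 * n)        # tree[n + i] = size of bucket i
            for i, s in enumerate(sizes):
                self.tree[n + i] = s
            for i in range(n - 1, 0, -1):    # build internal sums bottom-up
                self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

        def total(self):
            return self.tree[1]              # root holds the total count

        def add(self, bucket, delta):
            # change a bucket size and walk up, updating the parents
            i = self.n + bucket
            while i >= 1:
                self.tree[i] += delta
                i //= 2

        def find_ith(self, i):
            # 1-based rank; returns (bucket index, offset within bucket)
            node = 1
            while node < self.n:
                left = self.tree[2 * node]
                if i > left:
                    i -= left
                    node = 2 * node + 1      # go right
                else:
                    node = 2 * node          # go left
            return node - self.n, i

    # matches the worked example above:
    # CumulativeTree([2, 5, 4, 8]).find_ith(9) == (2, 2),
    # i.e. the third bucket, second element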
If you have to add a new bucket, you double the size of your tree (if needed) by adding a new root, making the existing tree the left child, and adding a new subtree of all-zero buckets except the one you added (which will be the leftmost leaf of the new subtree). This is amortized O(1) time for adding a new value to the tree. The caveat is that you can only add a bucket at the end, and not anywhere in the middle.
Getting the total count is O(1).
Update of a single bucket and lookup of an item are O(log n).
Adding new bucket is amortized O(1).
Space usage is O(n).
Instead of a binary tree, you can probably do the same with a B-Tree.
I still hope for more answers; however, here is what I could come up with so far, following Moron's suggestion.
Apparently my little Fenwick tree idea cannot be easily adapted. It's easy to append new buckets at the end of the Fenwick tree, but not in the middle, so it's kind of a lost cause.
We're left with 2 data structures: Binary Indexed Trees (ironically the very name Fenwick used to describe his structure) and Ranked Skip List.
Typically, these do not separate the data from the index; however, we can get this behavior by:
Use indirection: the element held by the node is a pointer to a bucket, not the bucket itself
Use pool allocation so that the index elements, even though allocated independently from one another, are still close in memory, which should help the cache
I tend to prefer Skip Lists to Binary Trees because they are self-organizing, so I'm spared the trouble of constantly re-balancing my tree.
These structures would allow getting to the ith element in O(log N); I don't know if it's possible to get better asymptotic performance.
Another interesting implementation detail: I have a pointer to a given element, but other elements might have been inserted or removed in the meantime, so how do I know the rank of my element now?
It's possible if the bucket points back to the node that owns it. But this means that either the node should not move or it should update the bucket's pointer when moved around.