Design a data structure - algorithm

I am trying to design a data structure that stores elements according to some prescribed ordering, each element with its own value, and that supports each of the following
four operations in logarithmic time (amortized or worst-case, your choice):
insert a new element with value v at the kth position
delete the kth element
return the sum of the values of elements i through j
increase by x the values of elements i through j
Any idea will be appreciated.
Thanks

I suspect you could do it with a red-black tree. Over the classic red-black tree, each node would need the following additional fields:
size
sum
increment
The size field would track the total number of nodes in its subtree, allowing for log(n)-time insertion and deletion at a given position.
The sum field would track the sum of the values in its subtree, allowing for log(n)-time range sums.
The increment field would track a pending increment applied to every node in its subtree, to be added on when calculating sums. This is the trickiest one: when calculating the sum for a subtree, we would return sum + size*increment. I think by adding positive and negative increments at the appropriate nodes, it would be possible to alter the returned sum correctly in all cases by altering only log(n) nodes.
Needless to say, implementation would be very tricky. Sum and increment fields would have to be updated after each insertion and deletion, and each would have at least five cases to deal with.
Update: I'm not going to try to solve this completely, but I would note that incrementing elements i through j by n is equivalent to incrementing the whole tree by n, then decrementing the elements before i by n and decrementing the elements after j by n. A global increment can be done in constant time, and the other two operations are a 'left side decrement' and a 'right side decrement', which are symmetrical. Doing a left side decrement up to i would be something like: 'take the count of the left subtree of the root node. If the count is less than i, decrement the increment field on the left child of the root by n, then apply a left decrement of n to the right subtree of the root up to i - count(left subtree) elements. Alternatively, if the count is greater than i, decrement the increment field of the left-left grandchild of the root by n, then apply a left decrement of n to the left-right subtree of the root up to i - count(left-left subtree) elements.' As the tree is balanced, the left decrement operation need only be applied recursively log(n) times. The right decrement would be similar, but mirrored.
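The sum/increment bookkeeping described above (a cached subtree sum plus a pending increment, with the effective sum being sum + size*increment) can be sketched without the rebalancing machinery by using a fixed-size segment tree. This is an illustrative sketch only, not the red-black implementation the answer has in mind; insertion and deletion by position are exactly the part it omits:

```python
# Sketch of the sum/increment fields on a fixed-size segment tree.
# Each node covers a range, caches its sum, and holds a pending
# increment that applies to every element in its range.

class RangeTree:
    def __init__(self, values):
        self.n = len(values)
        self.sum = [0] * (4 * self.n)   # cached sum of the node's range
        self.inc = [0] * (4 * self.n)   # pending increment for the range
        self._build(1, 0, self.n - 1, values)

    def _build(self, node, lo, hi, values):
        if lo == hi:
            self.sum[node] = values[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, values)
        self._build(2 * node + 1, mid + 1, hi, values)
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]

    def add(self, i, j, x):             # increase elements i..j by x
        self._add(1, 0, self.n - 1, i, j, x)

    def _add(self, node, lo, hi, i, j, x):
        if j < lo or hi < i:
            return
        if i <= lo and hi <= j:         # range fully covered: just defer
            self.inc[node] += x
            return
        mid = (lo + hi) // 2
        self._add(2 * node, lo, mid, i, j, x)
        self._add(2 * node + 1, mid + 1, hi, i, j, x)
        # recompute this node's sum from the children's effective sums
        self.sum[node] = (self.sum[2 * node] + self.inc[2 * node] * (mid - lo + 1)
                          + self.sum[2 * node + 1] + self.inc[2 * node + 1] * (hi - mid))

    def query(self, i, j):              # sum of elements i..j
        return self._query(1, 0, self.n - 1, i, j)

    def _query(self, node, lo, hi, i, j):
        if j < lo or hi < i:
            return 0
        if i <= lo and hi <= j:
            # "return sum + size * increment"
            return self.sum[node] + self.inc[node] * (hi - lo + 1)
        mid = (lo + hi) // 2
        left = self._query(2 * node, lo, mid, i, j)
        right = self._query(2 * node + 1, mid + 1, hi, i, j)
        # the overlap with i..j still owes this node's pending increment
        overlap = min(hi, j) - max(lo, i) + 1
        return left + right + self.inc[node] * overlap
```

Both add and query touch only the O(log n) nodes whose ranges straddle the query boundaries, which is the same node count the left/right decrement argument above relies on.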

What you're asking for isn't feasible.
Requirement #3 might be possible, but #4 just can't be done in logarithmic time: in the worst case you have to edit every node. Imagine i is 0 and j is n-1; you'd have to touch every element, and even with constant-time access that's linear time.
Edit:
Upon further consideration, if you kept track of "mass increases" you could potentially control access to a node, decorating it on the way out with whatever mass increases it required. I still think it would be entirely unwieldy, but I suppose it's possible.

Requirements 1, 2 and 3 can be satisfied by a Binary Indexed Tree (BIT, also known as a Fenwick tree):
http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees
I am thinking of a way to modify the BIT to handle #4 in logarithmic time as well.
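For reference, a minimal Fenwick tree supporting point updates and range sums in O(log n) each; this is a generic textbook sketch (with fixed element positions), not code from the tutorial linked above:

```python
# Minimal Fenwick tree (BIT): point update and prefix/range sums in O(log n).
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)       # 1-based internal array

    def update(self, i, delta):         # add delta to element i (1-based)
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)               # climb to the next responsible node

    def prefix(self, i):                # sum of elements 1..i
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)               # strip the lowest set bit
        return s

    def range_sum(self, i, j):          # sum of elements i..j
        return self.prefix(j) - self.prefix(i - 1)
```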

Related

proving level of median in heap

I need to prove that the median of a binary heap (it doesn't matter whether it is a min-heap or a max-heap) can be in the lowest level of the heap (at a leaf). I am not sure how to prove it. I thought about using the fact that a heap is a complete binary tree, but I am not sure how. How can I prove it?
As @Evg mentioned in the comments, if all elements are the same, this is trivially true. Assume instead that all elements are different, and let us focus on the case with an odd number of nodes, 2H+1, and a min-heap (the max-heap case is similar). To create a min-heap where the median is at the bottom, first insert the smallest H elements.
There are two cases. Case 1: after doing this, the binary tree formed by these H elements is completely filled (every layer is full). Then you can just insert the remaining H+1 elements on the last layer (which you can do, since the maximum capacity of the last layer equals (#total_nodes+1)/2, which is precisely H+1).
Case 2: the last layer still has some unfilled spaces. In this case, take the smallest remaining nodes from the largest H elements until this layer is filled (note that there will be no upward movement in your heap, since these elements are already larger than whatever is in the tree). Then start the next layer by inserting the median. Finally, insert the remaining nodes, which won't be moved upwards either, since they are larger than whatever is in the layer above, by construction. By the same argument about the capacity of the last layer, you will not need to start a new layer during this process.
In the case where there is an even number of nodes, 2H, you can argue similarly, but you would have to define the median as the (H+1)th smallest node (otherwise the statement you want to prove is false, as you can see by noticing that the only possible min-heap for the set {1,2} has root 1 and leaf 2).
Easiest way to prove it is just to make one:
    1
  2   3
 4 5 6 7
Any complete heap with nodes in level order will have the median at the left-most leaf, but you don't have to prove that.
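A quick sanity check of this example in plain Python, assuming the usual array layout where node i has children 2i+1 and 2i+2:

```python
heap = [1, 2, 3, 4, 5, 6, 7]              # level order: 1 / 2 3 / 4 5 6 7

# It is a valid min-heap: every parent is <= both of its children.
for i in range(len(heap)):
    for child in (2 * i + 1, 2 * i + 2):
        if child < len(heap):
            assert heap[i] <= heap[child]

median = sorted(heap)[len(heap) // 2]     # 4th smallest of 7 elements = 4
leftmost_leaf = heap[len(heap) // 2]      # first index with no children
assert median == 4 and median == leftmost_leaf
```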

Data structure for inverting a subarray in log(n)

Build a Data structure that has functions:
set(arr,n) - initialize the structure with array arr of length n. Time O(n)
fetch(i) - fetch arr[i]. Time O(log(n))
invert(k,j) - (where 0 <= k <= j < n) inverts the sub-array [k,j]. For example, [4,7,2,8,5,4] with invert(2,5) becomes [4,7,4,5,8,2]. Time O(log(n))
How about saving the indices in a binary search tree and using a flag saying the index is inverted? But if I do more than one invert, it messes things up.
Here is how we can approach designing such a data structure.
Indeed, using a balanced binary search tree is a good idea to start.
First, let us store array elements as pairs (index, value).
Naturally, the elements are sorted by index, so that the in-order traversal of a tree will yield the array in its original order.
Now, if we maintain a balanced binary search tree, and store the size of the subtree in each node, we can already do fetch in O(log n).
Next, let us only pretend we store the index.
Instead, we still arrange elements as we did with (index, value) pairs, but store only the value.
The index is now stored implicitly and can be calculated as follows.
Start from the root and go down to the target node.
Whenever we move to a left subtree, the index does not change.
When moving to a right subtree, add the size of the left subtree plus one (the size of the current vertex) to the index.
What we got at this point is a fixed-length array stored in a balanced binary search tree. It takes O(log n) to access (read or write) any element, as opposed to O(1) for a plain fixed-length array, so it is about time to get some benefit for all the trouble.
The next step is to devise a way to split our array into left and right parts in O(log n) given the required size of the left part, and merge two arrays by concatenation.
This step introduces dependency on our choice of the balanced binary search tree.
Treap is the obvious candidate since it is built on top of the split and merge primitives, so this improvement comes for free.
Perhaps it is also possible to split a Red-black tree or a Splay tree in O(log n) (though I admit I didn't try to figure out the details myself).
Right now, the structure is already more powerful than an array: it allows splitting and concatenation of "arrays" in O(log n), although element access is as slow as O(log n) too.
Note that this would not be possible if we still stored index explicitly at this point, since indices would be broken in the right part of a split or merge operation.
Finally, it is time to introduce the invert operation.
Let us store a flag in each node to signal whether the whole subtree of this node has to be inverted.
This flag will be lazily propagating: whenever we access a node, before doing anything, check if the flag is true.
If this is the case, swap the left and right subtrees, toggle (true <-> false) the flag in the root nodes of both subtrees, and set the flag in the current node to false.
Now, when we want to invert a subarray:
split the array into three parts (before the subarray, the subarray itself, and after the subarray) by two split operations,
toggle (true <-> false) the flag in the root of the middle (subarray) part,
then merge the three parts back in their original order by two merge operations.
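The whole scheme can be sketched with a treap over implicit keys. Everything below (names, structure) is an illustrative sketch rather than a reference implementation; set() builds by repeated merges in O(n log n) here, while fetch and invert are O(log n) expected:

```python
# Implicit treap: in-order position acts as the key, a random priority
# keeps the tree balanced, and a lazy "rev" flag implements invert.
import random

class Node:
    __slots__ = ("value", "prio", "size", "rev", "left", "right")
    def __init__(self, value):
        self.value = value
        self.prio = random.random()
        self.size = 1
        self.rev = False              # lazy flag: subtree order is reversed
        self.left = None
        self.right = None

def size(t):
    return t.size if t else 0

def update(t):
    t.size = 1 + size(t.left) + size(t.right)

def push(t):
    # Lazily propagate the reverse flag before touching children.
    if t.rev:
        t.left, t.right = t.right, t.left
        for child in (t.left, t.right):
            if child:
                child.rev = not child.rev
        t.rev = False

def merge(a, b):                      # concatenate sequences a and b
    if a is None: return b
    if b is None: return a
    if a.prio > b.prio:
        push(a)
        a.right = merge(a.right, b)
        update(a)
        return a
    push(b)
    b.left = merge(a, b.left)
    update(b)
    return b

def split(t, k):                      # (first k elements, the rest)
    if t is None:
        return None, None
    push(t)
    if size(t.left) >= k:
        l, r = split(t.left, k)
        t.left = r
        update(t)
        return l, t
    l, r = split(t.right, k - size(t.left) - 1)
    t.right = l
    update(t)
    return t, r

def build(arr):                       # set(arr, n)
    t = None
    for v in arr:
        t = merge(t, Node(v))
    return t

def fetch(t, i):                      # arr[i], 0-based
    push(t)
    if i < size(t.left):
        return fetch(t.left, i)
    if i == size(t.left):
        return t.value
    return fetch(t.right, i - size(t.left) - 1)

def invert(t, k, j):                  # reverse arr[k..j], inclusive
    left, rest = split(t, k)          # split off the prefix
    mid, right = split(rest, j - k + 1)
    mid.rev = not mid.rev             # toggle the flag on the middle part
    return merge(merge(left, mid), right)
```

Running the question's example: build([4,7,2,8,5,4]) followed by invert(t, 2, 5) yields the sequence [4,7,4,5,8,2].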

RB tree with sum

I have some questions about augmenting data structures:
Let S = {k1, . . . , kn} be a set of numbers. Design an efficient
data structure for S that supports the following two operations:
Insert(S, k), which inserts the number k into S (you can assume that k is not contained in S yet), and TotalGreater(S, a),
which returns the sum of all keys ki ∈ S which are larger than a, that is, Σ_{ki ∈ S, ki > a} ki.
Argue the running time of both operations and give pseudo-code for TotalGreater(S, a) (do not give pseudo-code for Insert(S, k)).
I don't understand how to do this, I was thinking of adding an extra field to the RB-tree called sum, but then it doesn't work because sometimes I need only the sum of the left nodes and sometimes I need the sum of the right nodes too.
So I was thinking of adding 2 fields called leftSum and rightSum and if the current node is > GivenValue then add the cached value of the sum of the sub nodes to the current sum value.
Can someone please help me with this?
You can just add a variable sum to each node, which is the sum of the keys in the subtree rooted at that node. When searching for the value a, two things can happen on the path: you can go left or right. Every time you go left (the current key is larger than a), you add the current node's key plus the sum of its right child to the running total. Every time you go right, you do nothing.
There are two conditions for termination. 1) We find a node containing the exact value a, in which case we add the sum of its right child to the total. 2) We fall off the tree below a leaf; any key larger than a along the way was already counted by the rule above.
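A minimal sketch of TotalGreater on a BST where each node caches the sum of the keys in its subtree. Balancing (the red-black machinery) is omitted here; rotations would only require recomputing the cached sums of the two rotated nodes:

```python
# BST augmented with per-subtree key sums; TotalGreater runs in O(height).

class Node:
    def __init__(self, key):
        self.key = key
        self.subtree_sum = key        # sum of all keys in this subtree
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    root.subtree_sum += key           # key lands somewhere in this subtree
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def subtree_sum(node):
    return node.subtree_sum if node else 0

def total_greater(root, a):
    total = 0
    node = root
    while node:
        if node.key > a:
            # this node and its entire right subtree are all > a
            total += node.key + subtree_sum(node.right)
            node = node.left
        else:
            node = node.right
    return total
```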
As Jordi describes: The key-word could be augmented red-black tree.

IOI Qualifier INOI task 2

I can't figure out how to solve question 2 in the following link in an efficient manner:
http://www.iarcs.org.in/inoi/2012/inoi2012/inoi2012-qpaper.pdf
You can do this in O(n log n) time (or linear if you really care to). First, pad the input array out to the next power of two using some very large negative number. Now build an interval-tree-like data structure: recursively partition your array by dividing it in half. Each node in the tree represents a subarray whose length is a power of two and which begins at a position that is a multiple of its length, and each non-leaf node has a "left half" child and a "right half" child.
Compute, for each node in your tree, what happens when you add 0,1,2,3,... to that subarray and take the maximum element. Notice that this is trivial for the leaves, which represent subarrays of length 1. For an internal node, it is simply the maximum of the left child's value and the right child's value plus length/2. So you can build this tree in linear time.
Now we want to run a sequence of n queries on this tree and print out the answers. The queries are of the form "what happens if I add k,k+1,k+2,...,n,1,...,k-1 to the array and report the maximum?"
Notice that, when we add that sequence to the whole array, the break between n and 1 either occurs at the beginning/end, or smack in the middle, or somewhere in the left half, or somewhere in the right half. So, partition the array into the k,k+1,k+2,...,n part and the 1,2,...,k-1 part. If you identify all of the nodes in the tree that represent subarrays lying completely inside one of the two sequences but whose parents either don't exist or straddle the break-point, you will have O(log n) nodes. You need to look at their values, add various constants, and take the maximum. So each query takes O(log n) time.
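The build step can be sketched as follows. The sample array and the dict-based node storage are illustrative assumptions; the O(log n) per-query walk over the straddling nodes is not shown:

```python
# Each node stores max over its subarray of a[i] + (offset of i within
# the subarray), i.e. "add 0,1,2,... to the subarray and take the max".

def build(a):
    # len(a) must be a power of two (pad with a very negative number).
    val = {}                          # (start, length) -> node value

    def go(start, length):
        if length == 1:               # leaf: adding 0 changes nothing
            val[(start, length)] = a[start]
            return a[start]
        half = length // 2
        left = go(start, half)
        right = go(start + half, half)
        # the right half receives offsets shifted up by half
        v = max(left, right + half)
        val[(start, length)] = v
        return v

    go(0, len(a))
    return val
```

Each internal node is computed in O(1) from its children, so the whole table is built in linear time, matching the claim above.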

What is the best way to implement this type of binary tree?

Let's say I have a binary tree of height h that has elements x1, x2, ..., xn.
Element xi is initially at the ith leftmost leaf. The tree should support the following methods in O(h) time:
add(i, j, k) where 1 <= i <= j <= n. This operation adds value k to the values of all leaves between the ith and the jth. For example, add(2,5,3) increments all leaves between the 2nd and 5th leaves by 3.
get(i): return the value of ith leftmost leaf.
What should be stored at the internal nodes for that properties?
Note: I am not looking for an exact answer but any hint on how to approach the problem would be great.
As I understand the question, the position of element xi never changes, and the tree isn't a search tree; lookups are based solely on the position of the node.
You can store an offset in the non-leaf vertices, indicating a value change for all of the node's descendants.
add(i,j,k) will start from the root and descend the tree, increasing a node's offset by k if and only if all its descendant leaves are within the range [i,j]. Once a node's offset has been increased, there is no need to descend further below it.
Note 1: In a single add() operation, you might need to add more than one number.
Note 2: You actually need to update at most O(log n) = O(h) values [convince yourself why. Hint: the binary representation of a number up to n requires O(log n) bits], which in turn gives you [again, make sure you understand why] the needed O(h) complexity.
get(i) is then trivial: sum the offsets on the path from the root down to the ith leaf, and return that sum.
Since this seems to be homework I will not post pseudo-code, but these guidelines should get you started with the assignment.
