Data structure to represent piecewise continuous range? - data-structures

Say that I have an integer-indexed array of length 400, and I want to drop out a few elements from the beginning, lots from the end, and something from the middle too, but without actually altering the original array. That is, instead of looping through the array using indices {0...399}, I want to use a piecewise continuous range such as
{3...15} ∪ {18...243} ∪ {250...301} ∪ {305...310}
What is a good data structure to describe this kind of index range? An obvious solution is to make another "index mediator" array, containing mappings from continuous zero-based indexing to the new coordinates above, but it feels quite wasteful, since almost all elements in it would be simply sequential numbers, with just a few occasional "jumps". Besides, what if I find that, oh, I want to modify the range a bit? The whole index array would have to be rebuilt. Not nice.
A few points to note:
The ranges never overlap. If a new range is added to the data structure, and it overlaps with existing ranges, the whole thing should get merged. That is, if I add to the above example the range {300...308}, it should instead replace the last two ranges with {250...310}.
It should be quite cheap to simply loop through the whole range.
It should also be relatively cheap to query a value directly: "Give me the original index corresponding to the 42nd index in the mapped coordinates".
It should be possible (though maybe not quite cheap) to work the other way round: "Give me the mapped coordinate corresponding to 42 in the original coordinates, or tell me if it's mapped at all."
Before rolling my own solution, I'd like to know if there exists a well-known data structure that solves this class of problems elegantly.
Thanks!

Seems like an array or list of integer pairs would be the best data structure. Your choice as to whether the second integer of the pair is an end point or a count from the first integer.
Edit: On further reflection, this problem is exactly what a database index has to do. If the integer pairs don't have to be in numeric order, you can handle splits more easily. If the number sequence has to remain in order, you need a data structure that allows you to add integer pairs to the middle of the array or list.
As an example, a split would be changing the (6, 12) integer pair to (6, 9) (11, 12) when 10 is removed.
Besides, what if I find that, oh, I want to modify the range a bit? The whole index array would have to be rebuilt. Not nice.
True. Perhaps one integer pair needs to change. Worst case, you'd have to rebuild the entire array or list.
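For illustration, here is a minimal sketch of that idea in C++ (the names and exact interface are mine, not prescribed by the answer): the ranges are kept as a sorted vector of inclusive pairs, merged on insertion, with linear walks for the two kinds of lookup.
#include <algorithm>
#include <vector>

struct Range { int first, last; };                     // inclusive endpoints

struct RangeSet {
    std::vector<Range> ranges;                         // sorted and disjoint

    // Merge a new range in, coalescing with anything it overlaps or touches.
    void add(Range r) {
        std::vector<Range> kept;
        for (const Range& x : ranges) {
            if (x.last + 1 < r.first || r.last + 1 < x.first)
                kept.push_back(x);                     // no contact, keep as is
            else {                                     // overlap/adjacency: absorb into r
                r.first = std::min(r.first, x.first);
                r.last  = std::max(r.last, x.last);
            }
        }
        kept.push_back(r);
        std::sort(kept.begin(), kept.end(),
                  [](const Range& a, const Range& b) { return a.first < b.first; });
        ranges = std::move(kept);
    }

    // "Give me the original index corresponding to the i-th mapped index."
    int toOriginal(int i) const {
        for (const Range& x : ranges) {
            int len = x.last - x.first + 1;
            if (i < len) return x.first + i;
            i -= len;
        }
        return -1;                                     // i is past the end
    }

    // "Give me the mapped index for an original index, or -1 if it isn't mapped."
    int toMapped(int orig) const {
        int offset = 0;
        for (const Range& x : ranges) {
            if (orig < x.first) return -1;
            if (orig <= x.last) return offset + (orig - x.first);
            offset += x.last - x.first + 1;
        }
        return -1;
    }
};
Looping over the whole mapped range is just a nested loop over ranges and offsets; if the number of ranges grows, a binary search over cumulative lengths would make toOriginal O(log k) in the number of ranges.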

Related

Algorithm for selection the most frequent object during factorization

I have N objects, and M sets of those objects. Sets are non-empty, different, and may intersect. Typically M and N are of the same order of magnitude, usually M > N.
Historically my sets were encoded as-is, each just contained a table (array) of its objects, but I'd like to create a more optimized encoding. Typically some objects are present in most of the sets, and I want to take advantage of this.
My idea is to represent sets as stacks (i.e. singly-linked lists), whose bottom parts can be shared across different sets. It can also be seen as a tree, where each node/leaf has a pointer to its parent, but not to its children.
Such a data structure allows the most common subsets of objects to be used as roots, which all the appropriate sets may "inherit".
The most efficient encoding is computed by the following algorithm, written here as recursive pseudo-code.
BuildAllChains()
{
    BuildSubChains(allSets, NULL);
}

BuildSubChains(sets, pParent)
{
    if (sets is empty)
        return;

    trgObj = the most frequent object from sets;

    pNode = new Node;
    pNode->Object = trgObj;
    pNode->pParent = pParent;

    newSets = empty;

    for (each set in sets that contains trgObj)
    {
        remove trgObj from set;
        remove set from sets;
        if (set is empty)
            set->pHead = pNode;
        else
            newSets.Insert(set);
    }

    BuildSubChains(sets, pParent);
    BuildSubChains(newSets, pNode);
}
Note: the pseudo-code is written in a recursive manner, but technically naive recursion should not be used, because at each point the splitting is not balanced, and in a degenerate case (which is likely, since the source data isn't random) the recursion depth would be O(N).
In practice I use a combination of a loop and recursion, where the recursion is always invoked on the smaller part.
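A rough sketch of what that loop + recursion combination could look like (the container types and helper logic here are illustrative, not my actual code): the larger sub-family is handled by continuing the loop, the smaller one by the recursive call, so the depth stays logarithmic even for very skewed splits.
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct Node { int object; Node* parent; };

using Set    = std::vector<int>;
using Family = std::vector<Set>;

void BuildSubChains(Family sets, Node* parent)
{
    while (!sets.empty())
    {
        // Pick the most frequent object by a direct histogram pass.
        std::map<int, int> freq;
        for (const Set& s : sets)
            for (int obj : s) ++freq[obj];
        int trg = std::max_element(freq.begin(), freq.end(),
                      [](const auto& a, const auto& b) { return a.second < b.second; })->first;

        Node* node = new Node{trg, parent};

        // Split the family: sets containing trg (with trg removed) vs. the rest.
        Family with, without;
        for (Set& s : sets) {
            auto it = std::find(s.begin(), s.end(), trg);
            if (it == s.end()) { without.push_back(std::move(s)); continue; }
            s.erase(it);
            if (!s.empty()) with.push_back(std::move(s));
            // An emptied set would record 'node' as its chain head here.
        }

        // Recurse on the smaller family, keep looping on the larger one.
        if (with.size() <= without.size()) {
            BuildSubChains(std::move(with), node);
            sets = std::move(without);               // parent stays the same
        } else {
            BuildSubChains(std::move(without), parent);
            sets = std::move(with);
            parent = node;
        }
    }
}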
So, the idea is to select the most common object each time and create a "subset" node that inherits from its parent subset; all the sets that include this object, together with all the predecessors selected so far, should be based on this subset.
Now, I'm trying to figure out an efficient way to select the most frequent object from the sets. Initially my idea was to compute the histogram of all the objects and sort it once. Then, during the recursion, whenever we remove an object and select only the sets that contain/don't contain it, deduce the sorted histogram of the remaining sets. But then I realized that this is not trivial, because we remove many sets, each containing many objects.
Of course we can select the most frequent object directly each time, i.e. in O(N*M). But that also looks inferior: in a degenerate case, where an object exists in either almost all or almost no sets, we may need to repeat this O(N) times. On the other hand, for those specific cases an in-place adjustment of the sorted histogram may be the preferred way to go.
So far I couldn't come up with a good enough solution. Any ideas would be appreciated. Thanks in advance.
Update:
@Ivan: first of all, thanks a lot for the answer and the detailed analysis.
I do store the list of elements within the histogram, rather than the count only. Actually I use pretty sophisticated data structures (not related to the STL) with intrusive containers, cross-linked pointers, etc. I planned this from the beginning, because back then it seemed to me that the histogram adjustment after removing elements would be trivial.
I think the main point of your suggestion, which I didn't figure out myself, is that at each step the histograms should only contain elements that are still present in the family, i.e. they must not contain zeroes. I thought that in cases where the splitting is very uneven, creating a new histogram for the smaller part would be too expensive. But restricting it to only the existing elements is a really good idea.
So we remove sets of the smaller family, adjust the "big" histogram and build the "small" one. Now, I need some clarifications about how to keep the big histogram sorted.
One idea, which I thought about first, was an immediate fix of the histogram after every single element removal: for every set we remove, for every object in that set, remove it from the histogram, and if the sort order is broken, swap the histogram element with its neighbor until the order is restored.
This seems good if we remove a small number of objects: we don't need to traverse the whole histogram, we just do a "micro-bubble" sort.
However, when removing a large number of objects it seems better to just remove them all and then re-sort the array with quicksort.
So, do you have a better idea regarding this?
Update2:
I'm thinking about the following: the histogram should be a binary search tree (auto-balanced, of course), where each element of the tree contains the appropriate object ID and the list of the sets it belongs to (so far). The comparison criterion is the size of this list.
Each set should contain the list of objects it currently contains, where each "object" has a direct pointer to its histogram element. In addition, each set should contain the number of objects matched so far, set to 0 at the beginning.
Technically we need a cross-linked list node, i.e. a structure that exists in two linked lists simultaneously: in the list of a histogram element, and in the list of the set. This node should also contain pointers to both the histogram item and the set. I call it a "cross-link".
Picking the most frequent object is just finding the maximum in the tree.
Adjusting such a histogram is O(M log(N)), where M here is the number of elements currently affected, which is smaller than N if only a few are affected.
And I'll also use your idea to build the smaller histogram and adjust the bigger.
Sounds right?
I denote the total size of sets with T. The solution I present works in time O(T log T log N).
For clarity, I will call the initial sets simply sets and the set of these sets the family.
Indeed, let's store a histogram. In the BuildSubChains function we maintain a histogram of all elements which are present in the sets at the moment, sorted by frequency. It may be something like an std::set of pairs (frequency, value), maybe with cross-references so you can find an element by value. Now taking the most frequent element is straightforward: it is the first element in the histogram. However, maintaining it is trickier.
You split your family of sets into two subfamilies, one containing the most frequent element, one not. Let their total sizes be T' and T''. Take the family with the smaller total size and remove all elements of its sets from the histogram, building the new histogram on the run. Now you have a histogram for both families, and it is built in time O(min(T', T'') log N), where the log N comes from operations on std::set.
At first glance it seems that this works in quadratic time. However, it is faster. Take a look at any single element. Every time we explicitly remove this element from the histogram, the size of its family at least halves, so each element directly participates in no more than log T removals. So there will be O(T log T) operations on histograms in total.
There might be a better solution if I knew the total size of sets. However, no solution can be faster than O(T), and this is only logarithmically slower.
There may be one more improvement: if you store in the histogram not only elements and frequencies, but also the sets that contain the element (simply another std::set for each element) you'll be able to efficiently select all sets that contain the most frequent element.
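A minimal sketch of such a histogram, assuming an std::set of (frequency, object) pairs plus a cross-reference map from object to its current frequency (this layout is only one possible reading of the answer, not the answer's own code):
#include <map>
#include <set>
#include <utility>

struct Histogram {
    std::set<std::pair<int, int>> byFreq;   // (frequency, object id), ordered by frequency
    std::map<int, int> freqOf;              // object id -> current frequency

    // Most frequent object; assumes the histogram is non-empty.
    int mostFrequent() const { return byFreq.rbegin()->second; }

    void add(int obj)    { change(obj, +1); }
    void remove(int obj) { change(obj, -1); }

    // O(log N) per update: re-key the (frequency, object) pair.
    void change(int obj, int delta) {
        auto it = freqOf.find(obj);
        int oldFreq = (it == freqOf.end()) ? 0 : it->second;
        if (oldFreq > 0) byFreq.erase({oldFreq, obj});
        int newFreq = oldFreq + delta;
        if (newFreq > 0) { byFreq.insert({newFreq, obj}); freqOf[obj] = newFreq; }
        else freqOf.erase(obj);
    }
};
Splitting the family then means calling remove() once for every element of every set in the smaller subfamily, while inserting the same elements into a fresh Histogram built for that subfamily.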

Find first unique number in an unsorted array

I came across this question while going through previous interview questions. Any direction on how to approach this?
Find first unique number in an unsorted array of 32 bit numbers
without using hash tables or array of counters.
Seeing that the input array is unsorted, you can solve the problem by sorting it. This is a bit silly - why give an answer to the question in the question itself? - but the technicalities of the sorting are a little interesting, so maybe this answer isn't trivial after all.
When looking at the array after sorting, you will find several numbers that are equal to neither their predecessor nor their successor; from these, you want to choose the one that comes first in the original array.
To do that efficiently, in the temporary array which is being sorted, store, for each number, also its index in the original array. So, at the end, choose the number which is equal to neither its predecessor nor its successor, and which has the lowest index in the original array.
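A small sketch of that approach, assuming the result should be the original index of the first unique number (the function name is mine):
#include <algorithm>
#include <cstdint>
#include <vector>

// Sort (value, original index) pairs, then among values that differ from both
// sorted neighbours pick the one with the smallest original index.
// Returns -1 if every number repeats.
int firstUniqueIndex(const std::vector<uint32_t>& a) {
    std::vector<std::pair<uint32_t, size_t>> v;
    v.reserve(a.size());
    for (size_t i = 0; i < a.size(); ++i) v.push_back({a[i], i});
    std::sort(v.begin(), v.end());

    long long best = -1;
    for (size_t i = 0; i < v.size(); ++i) {
        bool prevSame = (i > 0 && v[i - 1].first == v[i].first);
        bool nextSame = (i + 1 < v.size() && v[i + 1].first == v[i].first);
        if (!prevSame && !nextSame &&
            (best < 0 || v[i].second < static_cast<size_t>(best)))
            best = static_cast<long long>(v[i].second);
    }
    return static_cast<int>(best);
}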
When you have to "do X without using Y", you can sometimes use Z, which has the same effect as Y, and argue that you were not using Y. Or you can disguise Y well enough so no one would recognize using it at first sight.
With that in mind, consider storing repetition counters for all the numbers in a trie. To choose the first number from the set of all unique numbers, store also the indices together with repetition counters.
I can claim that a trie is not an array of repetition counters, because you don't have to allocate and initialize 2^32 memory cells for the array. This is more like a glorified hashtable, but looks different enough.
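For illustration, a bare-bones bitwise trie over the 32 bits of each number, with a repetition counter and the index of the first occurrence stored in the leaves (the node layout is only an assumption, and the sketch deliberately skips cleanup to stay short):
#include <cstddef>
#include <cstdint>

struct TrieNode {
    TrieNode* child[2] = {nullptr, nullptr};
    int count = 0;           // how many times this number has occurred
    size_t firstIndex = 0;   // original index of its first occurrence
};

// Walk the 32 bits from the most significant down, creating nodes as needed.
void insert(TrieNode* root, uint32_t value, size_t index) {
    TrieNode* node = root;
    for (int bit = 31; bit >= 0; --bit) {
        int b = (value >> bit) & 1;
        if (!node->child[b]) node->child[b] = new TrieNode();
        node = node->child[b];
    }
    if (node->count++ == 0) node->firstIndex = index;
}
After inserting every element, a second pass over the original array (looking each value up in the trie) picks the first one whose count is 1.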

A Hash that finds results that are close

I have a table that has about 10000 entries, and each entry has almost 100 boolean values. A user checks a bunch of the booleans and hopes to get a result that matches their request. If that record doesn't exist, I want to show them maybe 5 records that are close (have only one or two values different). Is there a good hash scheme or data structure that can help me find these results?
Bitmap indices. Google for the paper if you want the complete background; it's not easy but worth a read. Basically, build bitmaps for your boolean values like this:
010110101010
110100010100
000101001100
And then just XOR your filter through them, sort by number of matches, and return. Since all operations are insanely fast (about one cycle per element, and the data structure uses about 100 bits of memory per element), this will usually work even though it's linear.
Addendum: How to XOR. (fixed a bug)
000101001100 source
000101001010 target
000000000110 result of XOR
// count the set bits of v (Kernighan's method)
int n = 0;
if (v) do { n++; } while (v &= (v - 1));
return n;
The two 1's tell you that there are 2 errors and m-2 matches, where m is the number of bits.
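Putting it together, a sketch of the whole linear scan using std::bitset (the 100-bit size and the function name are just placeholders):
#include <algorithm>
#include <bitset>
#include <cstddef>
#include <vector>

constexpr std::size_t kBits = 100;   // one bit per boolean column

// Rank every record by its Hamming distance to the query (XOR, then popcount)
// and return the indices of the closest 'howMany' records.
std::vector<std::size_t> closestRecords(const std::vector<std::bitset<kBits>>& records,
                                        const std::bitset<kBits>& query,
                                        std::size_t howMany = 5) {
    std::vector<std::pair<std::size_t, std::size_t>> scored;   // (distance, record index)
    scored.reserve(records.size());
    for (std::size_t i = 0; i < records.size(); ++i)
        scored.push_back({(records[i] ^ query).count(), i});
    std::sort(scored.begin(), scored.end());

    std::vector<std::size_t> result;
    for (std::size_t i = 0; i < scored.size() && i < howMany; ++i)
        result.push_back(scored[i].second);
    return result;
}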
What you describe is a nearest neighbor search: based on a record, find the 5 closest records based on an arbitrary distance function (such as the number of different values).
A hashing function intentionally discards any information except "these values are equal", so it's not really the way to go.
Consider using instead a data structure optimized for nearest neighbor searching, such as a kd-tree or vp-tree. If there's a high probability that a record already exists in the list, you could first use a hash table to look for it, and then fall back on the kd-tree if it does not exist.
This builds on the answer from Kdansky.
Create a dynamic array of entries.
Create a cache.
For each lookup:
    check the cache;
    return the cached entry if the value exists;
    otherwise, for each value in the dynamic array,
        if the Hamming distance is less than the threshold, add it to the result list;
    cache the result against the looked-up value;
    return the result.
To find the Hamming distance: XOR the two values and compute the Hamming weight of the result (http://en.wikipedia.org/wiki/Hamming_weight).
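A small sketch of that caching layer; it assumes the closestRecords() scan from the previous answer's sketch and uses the standard hash support for std::bitset as the cache key:
#include <bitset>
#include <cstddef>
#include <unordered_map>
#include <vector>

constexpr std::size_t kBits = 100;

// The linear Hamming-distance scan, as sketched in the previous answer.
std::vector<std::size_t> closestRecords(const std::vector<std::bitset<kBits>>& records,
                                        const std::bitset<kBits>& query,
                                        std::size_t howMany = 5);

std::unordered_map<std::bitset<kBits>, std::vector<std::size_t>> cache;

std::vector<std::size_t> lookup(const std::vector<std::bitset<kBits>>& records,
                                const std::bitset<kBits>& query) {
    auto it = cache.find(query);
    if (it != cache.end()) return it->second;          // cached result
    std::vector<std::size_t> result = closestRecords(records, query);
    cache.emplace(query, result);                      // remember for next time
    return result;
}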

Designing small comparable objects

Intro
Consider you have a list of key/value pairs:
(0,a) (1,b) (2,c)
You have a function, that inserts a new value between two current pairs, and you need to give it a key that keeps the order:
(0,a) (0.5,z) (1,b) (2,c)
Here the new key was chosen as the average of the keys of the two bounding pairs.
The problem is that your list may see millions of inserts. If these inserts all land close to each other, you may end up with keys such as 2^(-1000000), which are not easily storable in any standard or special number class.
The problem
How can you design a system for generating keys that:
Gives the correct result (larger/smaller than) when compared to all the rest of the keys.
Takes up only O(log n) memory (where n is the number of items in the list).
My attempts
First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts.
Then I thought about saving pointers to a number of other keys, and saving the lower/greater-than relationship, but that would always require at least O(sqrt(n)) memory and time for a comparison.
Extra info: Ideally the algorithm shouldn't break when pairs are deleted from the list.
I agree with snowlord. A tree would be ideal in this case. A red-black tree would prevent things from getting unbalanced. If you really need keys, though, I'm pretty sure you can't do better than using the average of the keys on either side of the value you need to insert. That will increase your key length by 1 bit each time. What I recommend is renormalizing the keys periodically. Every x inserts, or whenever you detect keys being generated too close together, renumber everything from 1 to n.
Edit:
You don't need to compare keys if you're inserting by position instead of key. The compare function for the red-black tree would just use the order in the conceptual list, which lines up with in-order in the tree. If you're inserting in position 4 in the list, insert a node at position 4 in the tree (using in-ordering). If you're inserting after a certain node (such as "a"), it's the same. You might have to use your own implementation if whatever language/library you're using requires a key.
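A tiny sketch of the periodic renumbering mentioned above: keep integer keys with a large gap, insert at the midpoint between neighbours, and renumber the whole list whenever two neighbouring keys leave no room (the gap constant and the names are arbitrary):
#include <cstddef>
#include <vector>

constexpr long long kGap = 1LL << 20;   // spacing left between consecutive keys

struct Pair { long long key; char value; };

// Reassign evenly spaced keys to the whole list; O(n), done rarely.
void renormalize(std::vector<Pair>& list) {
    long long k = kGap;
    for (Pair& p : list) { p.key = k; k += kGap; }
}

// Insert a value right after position 'pos', renumbering first if no key fits.
void insertAfter(std::vector<Pair>& list, std::size_t pos, char value) {
    long long lo = list[pos].key;
    long long hi = (pos + 1 < list.size()) ? list[pos + 1].key : lo + 2 * kGap;
    if (hi - lo < 2) {                  // no integer strictly between lo and hi
        renormalize(list);
        lo = list[pos].key;
        hi = (pos + 1 < list.size()) ? list[pos + 1].key : lo + 2 * kGap;
    }
    list.insert(list.begin() + pos + 1, Pair{(lo + hi) / 2, value});
}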
I don't think you can avoid getting keys of size O(n) without reassigning keys during operation.
As a practical solution I would build an inverted search tree, with pointers from the children to the parents, where each pointer is marked whether it is coming from a left or right child. To compare two elements you need to find the closest common ancestor, where the path to the elements diverges.
Reassigning keys then amounts to rebalancing the tree; you can do that with rotations that don't change the order.

Remembering the "original" index of elements after sorting

Say, I employ merge sort to sort an array of Integers. Now I need to also remember the positions that elements had in the unsorted array, initially. What would be the best way to do this?
A very naive and space-consuming way to do this would be (in C) to maintain each number as a "structure" with another member storing its index:
struct integer {
    int value;
    int orig_pos;
};
But, obviously there are better ways. Please share your thoughts and solution if you have already tackled such problems. Let me know if you would need more context. Thank you.
Clearly for an N-long array you do need to store SOMEwhere N integers -- the original position of each item, for example; any other way to encode "1 out of N!" possibilities (i.e., what permutation has in fact occurred) will also take at least O(N) space (since, by Stirling's approximation, log(N!) is about N log(N)...).
So, I don't see why you consider it "space consuming" to store those indices most simply and directly. Of course there are other possibilities (taking similar space): for example, you might make a separate auxiliary array of the N indices and sort THAT auxiliary array (based on the value at that index) leaving the original one alone. This means an extra level of indirectness for accessing the data in sorted order, but can save you a lot of data movement if you're sorting an array of large structures, so there's a performance tradeoff... but the space consumption is basically the same!-)
Is the struct such a bad idea? The alternative, to me, would be an array of pointers.
It feels to me that in this question you have to consider the age-old trade-off: speed vs. size. In either case, you are keeping both a new representation of your data (the sorted array) and an old representation of your data (the way the array used to look), so your solution will inherently involve some data replication. If you are sorting n numbers, and you need to remember after they were sorted where those n numbers were, you will have to store n pieces of information somewhere; there is no getting around that.
As long as you accept that you are doubling the amount of space needed to keep this old data, you should consider the specific application and decide what will be faster. One option is to just make a copy of the array before you sort it, though resolving which element was where might then turn into an O(N) problem. From that point of view, your suggestion of adding another int to your struct doesn't seem like such a bad idea, if it fits with the way you will be using the data later.
This looks like the case where I use an index sort. The following C# example shows how to do it with a lambda expression. I am new at using lambdas, but they can do some complex tasks very easily.
// first, some data to work with
List<double> anylist = new List<double>();
anylist.Add(2.18); // add a value
... // add many more values
// index sort
IEnumerable<int> serial = Enumerable.Range(0, anylist.Count);
int[] index = serial.OrderBy(item => (anylist[item])).ToArray();
// how to use
double FirstValue = anylist[index[0]];
double SecondValue = anylist[index[1]];
And, of course, anylist is still in the original order.
you can do it the way you proposed
you can also retain a copy of the original unsorted array (which means you may use a not-in-place sorting algorithm)
you can create an additional array containing only the original indices
All three ways are equally space-consuming; there is no "better" way. You may use short instead of int to save space if your array won't exceed 65k elements (but be aware of structure padding with your suggestion).
