We have two offline systems that normally can not communicate with each other. Both systems maintain the same ordered list of items. Only rarely will they be able to communicate with each other to synchronize the list.
Items are marked with a modification timestamp to detect edits. Items are identified by UUIDs to avoid conflicts when inserting new items (as opposed to using auto-incrementing integers). When synchronizing, new UUIDs are detected and copied to the other system. Likewise for deletions.
The above data structure is fine for an unordered list, but how can we handle ordering? If we added an integer "rank", that would need renumbering when inserting a new item (thus requiring synchronizing all successor items due to only 1 insertion). Alternatively, we could use fractional ranks (use the average of the ranks of the predecessor and successor item), but that doesn't seem like a robust solution as it will quickly run into accuracy problems when many new items are inserted.
We also considered implementing this as a doubly linked list with each item holding the UUID of its predecessor and successor item. However, that would still require synchronizing 3 items when 1 new item was inserted (or synchronizing the 2 remaining items when 1 item was deleted).
Preferably, we would like to use a data structure or algorithm where only the newly inserted item needs to be synchronized. Does such a data structure exist?
Edit: we need to be able to handle moving an existing item to a different position too!
There is really no problem with the interpolated rank approach. Just define your own numbering system based on variable length bit vectors representing binary fractions between 0 and 1 with no trailing zeros. The binary point is to the left of the first digit.
The only inconvenience of this system is that the minimum possible key is 0 given by the empty bit vector. Therefore you use this only if you're positive the associated item will forever be the first list element. Normally, just give the first item the key 1. That's equivalent to 1/2, so random insertions in the range (0..1) will tend to minimize bit usage. To interpolate an item before and after,
01 < newly interpolated = 1/4
1
11 < newly interpolated = 3/4
To interpolate again:
001 < newly interpolated = 1/8
01
011 < newly interpolated = 3/8
1
101 < newly interpolated = 5/8
11
111 < newly interpolated = 7/8
Note that if you wish you can omit storing the final 1! All keys (except 0, which you won't normally use) end in 1, so storing it is superfluous.
Comparison of binary fractions is a lot like lexical comparison: 0<1 and the first bit difference in a left-to-right scan tells you which is less. If no differences occur, i.e. one vector is a strict prefix of the other, then the shorter one is smaller.
With these rules it's pretty simple to come up with an algorithm that accepts two bit vectors and computes a result that's roughly (or exactly in some cases) between them. Just add the bit strings, and shift right 1, dropping unnecessary trailing bits, i.e. take the average of the two to split the range between.
In the example above, if deletions had left us with:
01
111
and we need to interpolate these, add 01(0) and 111 to obtain 1.001, then shift to get 1001. This works fine as an interpolant. But note the final 1 unnecessarily makes it longer than either of the operands. An easy optimization is to drop the final 1 bit along with trailing zeros to get simply 1. Sure enough, 1 is about half way between, as we'd hope.
Of course if you do many inserts in the same location (think e.g. of successive inserts at the start of the list), the bit vectors will get long. This is exactly the same phenomenon as inserting at the same point in a binary tree. It grows long and stringy. To fix this, you must "rebalance" during a synchronization by renumbering with the shortest possible bit vectors, e.g. for 14 you'd use the sequence above.
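For illustration, here is a minimal Python sketch of this interpolation, assuming keys are stored as '0'/'1' strings with no trailing zeros (an empty string stands in for the 0 and 1 endpoints); names are illustrative, not a reference implementation:

```python
def midpoint(lo: str = "", hi: str = "") -> str:
    """Key roughly halfway between lo and hi (lo < hi as binary fractions).
    An empty lo acts as 0, an empty hi acts as 1."""
    n = max(len(lo), len(hi), 1)
    a = int(lo.ljust(n, "0"), 2) if lo else 0       # lo as a multiple of 1/2^n
    b = int(hi.ljust(n, "0"), 2) if hi else 2 ** n  # empty hi acts as 1.0
    # add, then "shift right 1": the (n+1)-bit sum read as a fraction is the average
    bits = format(a + b, "0%db" % (n + 1))
    return bits.rstrip("0")                         # drop superfluous trailing zeros

# midpoint("", "1")     == "01"    (1/4: insert before the first key)
# midpoint("1", "")     == "11"    (3/4: insert after the last key)
# midpoint("01", "111") == "1001"  (matches the worked example above)
```

Since the keys never end in 0, ordinary string comparison already gives the correct order, so they can be stored and sorted as plain text.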
Addition
Though I haven't tried it, the Postgres bit string type seems to suffice for the keys I've described. The thing I'd need to verify is that the collation order is correct.
Also, the same reasoning works just fine with base-k digits for any k>=2. The first item gets key k/2. There is also a simple optimization that prevents the very common cases of appending and prepending elements at the end and front respectively from causing keys of length O(n). It maintains O(log n) for those cases (though inserting at the same place internally can still produce O(p) keys after p insertions). I'll let you work that out. With k=256, you can use indefinite length byte strings. In SQL, I believe you'd want varbinary(max). SQL provides the correct lexicographic sort order. Implementation of the interpolation ops is easy if you have a BigInteger package similar to Java's. If you like human-readable data, you can convert the byte strings to e.g. hex strings (0-9a-f) and store those. Then normal UTF8 string sort order is correct.
You can add two fields to each item - 'creation timestamp' and 'inserted after' (containing the id of the item after which the new item was inserted). Once you synchronize a list, send all the new items. That information is enough for you to be able to construct the list on the other side.
With the list of newly added items received, do this (on the receiving end): sort by creation timestamp, then go one by one, and use the 'inserted after' field to add the new item in the appropriate place.
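As a rough sketch of that receiving side (field names like uuid, created_at and inserted_after are just illustrative):

```python
def merge_new_items(local_list, new_items):
    """local_list: items (dicts) already in order; new_items: items received from
    the other system, each carrying uuid, created_at and inserted_after."""
    for item in sorted(new_items, key=lambda it: it["created_at"]):
        if item["inserted_after"] is None:
            local_list.insert(0, item)              # inserted at the head of the list
            continue
        # find the predecessor by uuid and splice the new item in right after it;
        # if the predecessor was deleted in the meantime (see the caveat below),
        # this lookup fails and the operation log is needed instead
        idx = next(i for i, it in enumerate(local_list)
                   if it["uuid"] == item["inserted_after"])
        local_list.insert(idx + 1, item)
    return local_list
```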
You may face trouble if an item A is added, then B is added after A, then A is removed. If this can happen, you will need to sync A as well (basically syncing the operations that took place on the list since the last sync, and not just the content of the current list). It's basically a form of log-shipping.
You could have a look at "lenses", which is a bidirectional programming concept.
For instance, your problem seems to be solved by "matching lenses", described in this paper.
I think the data structure that is appropriate here is an order statistic tree. In an order statistic tree you also maintain the sizes of subtrees along with the other data; the size field makes it easy to find an element by rank, which is what you need here. All operations (rank, delete, change position, insert) are O(log n).
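To give a feel for how the size field is used, here is a minimal Python sketch (unbalanced for brevity; a real implementation would use a self-balancing tree to keep the O(log n) bounds):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
        self.size = 1                    # number of nodes in this subtree

def size(node):
    return node.size if node else 0

def insert(node, index, value):
    """Insert value so that it becomes the index-th (0-based) element."""
    if node is None:
        return Node(value)
    if index <= size(node.left):
        node.left = insert(node.left, index, value)
    else:
        node.right = insert(node.right, index - size(node.left) - 1, value)
    node.size += 1
    return node

def get(node, index):
    """Return the value currently at position index."""
    left = size(node.left)
    if index < left:
        return get(node.left, index)
    if index == left:
        return node.value
    return get(node.right, index - left - 1)

# root = None
# for i, v in enumerate("ABCD"): root = insert(root, i, v)
# root = insert(root, 1, "X")    # list is now A, X, B, C, D; get(root, 1) == "X"
```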
I think you can try a kind of transactional approach here. For example, you do not delete items physically but mark them for deletion, and commit changes only during synchronization. I'm not absolutely sure which data type you should choose; it depends on which operations you want to be most efficient (insertions, deletions, search or iteration).
Say we have the following initial state on both systems:
|1| |2|
--- ---
|A| |A|
|B| |B|
|C| |C|
|D| |D|
After that, the first system marks element B for deletion and the second system inserts element BC between B and C:
|1          |    |2            |
-------------    ---------------
|A          |    |A            |
|B[deleted] |    |B            |
|C          |    |BC[inserted] |
|D          |    |C            |
                 |D            |
Both systems continue processing, taking local changes into account: System 1 ignores element B and System 2 treats element BC as a normal element.
When synchronization occurs:
As I understand it, each system receives the list snapshot from the other system and both systems freeze processing until synchronization is finished.
So each system iterates sequentially through the received snapshot and the local list and writes changes to the local list (resolving possible conflicts according to the modification timestamp). After that the 'transaction is committed': all local changes are finally applied and the information about them is erased.
For example for system one:
|1 pre-sync |                  |2-SNAPSHOT   |   |1 result|
-------------                  ---------------   ----------
|A          |   <the same>     |A            |   |A       |
|B[deleted] |   <delete B>     |B            |
                <insert BC>    |BC[inserted] |   |BC      |
|C          |   <same>         |C            |   |C       |
|D          |   <same>         |D            |   |D       |
Systems wake up and continue processing.
Items are sorted by insertion order; moving can be implemented as a simultaneous deletion and insertion. Also, I think it would be possible not to transfer the whole list snapshot but only the list of items that were actually modified.
I think, broadly, Operational Transformation could be related to the problem you are describing here. For instance, consider the problem of Real-Time Collaborative text editing.
We essentially have a sorted list of items (words) which needs to be kept synchronized, and which could be added/modified/deleted at random within the list. The only major difference I see is in the periodicity of modifications to the list. (You say it does not happen often.)
Operational Transformation happens to be a well-studied field. I could find this blog article giving pointers and an introduction. Plus, for all the problems Google Wave had, they actually made significant advancements to the domain of Operational Transformation. Check this out. There is quite a bit of literature available on this subject. Look at this Stack Overflow thread, and at Differential Synchronization.
Another parallel that struck me was the data structure used in Text Editors - Ropes.
So if you have a log of operations, let's say "Index 5 deleted", "Index 6 modified to ABC", "Index 8 inserted", what you might have to do is transmit the log of changes from System A to System B, and then replay the operations sequentially on the other side.
The other, "pragmatic engineer" choice would be to simply reconstruct the entire list on System B whenever System A changes. Depending on the actual frequency and size of changes, this might not be as bad as it sounds.
I have tentatively solved a similar problem by including a PrecedingItemID (which can be null if the item is the top/root of the ordered list) on each item, and then having a sort of local cache that keeps a list of all items in sorted order (this is purely for efficiency—so you don't have to recursively query for or build the list based on PrecedingItemIDs every time there is a re-ordering on the local client). Then when it comes time to sync I do the slightly more expensive operation of looking for cases where two items are requesting the same PrecedingItemID. In those cases, I simply order by creation time (or however you want to reconcile which one wins and comes first), put the second (or others) behind it, and move on ordering the list. I then store this new ordering in the local ordering cache and go on using that until the next sync (just making sure to keep the PrecedingItemID updated as you go).
I haven't unit tested this approach yet—so I'm not 100% sure I'm not missing some problematic conflict scenario—but it appears at least conceptually to handle my needs, which sound similar to those of the OP.
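A sketch of one way to read that reconciliation step (field names are illustrative, and I'm assuming an item's own chain comes before the next item claiming the same predecessor; adjust to taste):

```python
from collections import defaultdict

def build_order(items):
    """items: dicts with 'id', 'preceding_item_id' (None = top) and 'created_at'."""
    followers = defaultdict(list)
    for it in items:
        followers[it["preceding_item_id"]].append(it)
    for group in followers.values():
        group.sort(key=lambda it: it["created_at"])   # conflict: earlier creation wins
    ordered = []
    def walk(pred_id):
        for it in followers[pred_id]:
            ordered.append(it)
            walk(it["id"])        # emit an item's chain before the next claimant
    walk(None)
    return ordered                # items whose predecessor is missing are dropped here
```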
Related
I'm currently playing with some ideas wrt CRF-ish work and I have an idea that I need help with.
Minimal Problem
I've got a bunch of function objects (think something expensive like neural nets). They are applied onto a linear buffer (think an array of floats or bytes), but at varying intervals. So they look like this (think of Start and End as "apply Object to buf[Start:End]"):
| Object | Start | End |
|--------|-------|-----|
| A | 0 | 4 |
| B | 4 | 10 |
| C | 13 | 15 |
Interval Characteristics
There may be some skips (for example, see the start of C vs the end of B)
There will definitely be changes to the intervals, both positive and negative (for example, B may change from [4:10] to [4:12]).
When this happens, the object(s) associated with the intervals may have to be reapplied.
If the interval change overlaps with another interval, both objects will have to be reapplied. For example, if B changes from [4:10] to [3:12], A would have to be applied to the range [0:3] and B would have to be applied to the range [3:12].
Depending on the operation, downstream intervals will have to be updated as well, but the objects will not necessarily have to be reapplied. For example, if it were an insertion that changed the interval range for B, then the interval range for C will also increment by 2, but this will not trigger a reapplication of C.
Program Characteristics
The intervals change a lot (it's a machine learning training loop).
Supported forms of interval updates are: insert, delete, shiftleft, shiftright. The latter two are the same as insert/delete but applied at the ends of the intervals.
Changes to the intervals typically come as a tuple (index, size) or as a single index.
Applying a function is a fairly expensive operation and is CPU-bound.
However, since I am using Go, a couple of mutexes plus goroutines solve a majority of the problem (there are some finer points, but by and large they can be ignored).
One epoch can have anywhere from 5-60ish interval-object pairs.
Buffer is linear, but not necessarily contiguous.
Task
The tasks can be summarized as follows:
Query by index: returns the interval and the object associated with the interval
Update interval: must also update downstream if necessary (which is the majority case)
Insertion of new intervals: must also update downstream
What I've Tried
Map with intervals as keys. This was a bad idea because I had to know whether a given index that changed was within an interval or not.
Linear structure to keep track of Starts. Discovered a bug immediately when I realized there may be skips.
Linear structures with "holes" to keep track of Starts. This turns out to be similar to a rope.
Ropes and Skip lists. Ended up refactoring what I had into the skiprope package that works for strings. More yak shaving. Yay.
Interval/Segment trees. Implementation is a bitch. I also tried a concrete variant of gods/augmentedtree but couldn't actually get the call-backing to work properly to evaluate it.
The Question
Is there any good data structure that I'm missing out on that would make these tasks easier?
Am I missing out on something blindingly obvious?
A friend suggested I look up incremental compilation methods because it's similar. An analogy used would be that Roslyn would parse/reparse fragments of text in a ranged fashion. That would be quite similar to my problem - just replace linear buffer of floats with linear buffer of tokens.
The problem is I couldn't find any solid useful information about how Roslyn does it.
This solution isn't particularly memory-efficient, but if I understand you correctly, it should allow for a relatively simple implementation of the functionality you want.
Keep an array or slice funcs of all your function objects, so that they each have a canonical integer index, and can be looked up by that index.
Keep a slice of ints s that is always the same size as your buffer of floats; it maps a particular index in your buffer to a "function index" in the slice of functions. You can use -1 to represent a number that is not part of any interval.
Keep a slice of (int, int) pairs intervals such that intervals[i] contains the start-end indices for the function stored at funcs[i].
I believe this enables you to implement your desired functionality without too much hassle. For example, to query by index i, look up s[i], then return funcs[s[i]] and intervals[s[i]]. When changes occur to the buffer, change s as well, cross-referencing between s and the intervals slice to figure out if neighboring intervals are affected. I'm happy to explain this part in more detail, but I don't totally understand the requirements for interval updates. (When you do an interval insert, does it correspond to an insert in the underlying buffer? Or are you just changing which buffer elements are associated with which functions? In which case, does an insert cause a deletion at the beginning of the next interval? Most schemes should work, but it changes the procedure.)
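To make the layout concrete, here is a small sketch (in Python for brevity; the three slices map directly onto Go slices, and all names are illustrative):

```python
NO_FUNC = -1   # buffer index not covered by any interval

class IntervalIndex:
    def __init__(self, buf_len, assignments):
        """assignments: list of (func_obj, start, end) with end exclusive."""
        self.funcs = [f for f, _, _ in assignments]
        self.intervals = [(s, e) for _, s, e in assignments]
        self.s = [NO_FUNC] * buf_len               # buffer index -> function index
        for fi, (start, end) in enumerate(self.intervals):
            for i in range(start, end):
                self.s[i] = fi

    def query(self, i):
        """Return (function, (start, end)) covering buffer index i, or None."""
        fi = self.s[i]
        return None if fi == NO_FUNC else (self.funcs[fi], self.intervals[fi])
```

Updates then amount to rewriting the affected stretch of s and the touched entries of intervals, which is where the cross-referencing mentioned above comes in.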
I would like a simple way to represent the order of a list of objects. When an object changes position in that list I would like to update just one record. I don't know if this can be done but I'm interested to ask the SO hive...
Wish-list constraints
the algorithm (or data structure) should allow for items to be repositioned in the list by updating the properties of a single item
the algorithm (or data structure) should require no housekeeping to maintain the integrity of the list
the algorithm (or data structure) should allow for the insertion of new items or the removal of existing items
Why I care about only updating one item at a time...
[UPDATED to clarify question]
The use-case for this algorithm is a web application with a CRUDy, resourceful server setup and a clean (Angular) client.
It's good practice to keep to the pure CRUD actions where possible and makes for cleaner code all round. If I can do this operation in a single resource#update request then I don't need any additional serverside code to handle the re-ordering and it can all be done using CRUD with no alterations.
If more than one item in the list needs to be updated for each move then I need a new action on my controller to handle it. It's not a showstopper but it starts spilling over into Angular and everything becomes less clean than it ideally should be.
Example
Let's say we have a magazine and the magazine has a number of pages :
Original magazine
- double page advert for Ford (page=1)
- article about Jeremy Clarkson (page=2)
- double page advert for Audi (page=3)
- article by James May (page=4)
- article by Richard Hammond (page=5)
- advert for Volkswagen (page=6)
Option 1: Store integer page numbers
... in which we update up to N records per move
If I want to pull Richard Hammond's page up from page 5 to page 2 I can do so by altering its page number. However I also have to alter all the pages which it then displaces:
Updated magazine
- double page advert for Ford (page=1)
- article by Richard Hammond (page=2)(old_value=5)*
- article about Jeremy Clarkson (page=3)(old_value=2)*
- double page advert for Audi (page=4)(old_value=3)*
- article by James May (page=5)(old_value=4)*
- advert for Volkswagen (page=6)
* properties updated
However I don't want to update lots of records
- it doesn't fit my architecture
Let's say this is being done using javascript drag-n-drop re-ordering via Angular.js. I would ideally like to just update a value on the page which has been moved and leave the other pages alone. I want to send an http request to the CRUD resource for Richard Hammond's page saying that it's now been moved to the second page.
- and it doesn't scale
It's not a problem for me yet but at some point I may have 10,000 pages. I'd rather not update 9,999 of them when I move a new page to the front page.
Option 2: a linked list
... in which we update 3 records per move
If instead of storing the page's position, I instead store the page that comes before it then I reduce the number of actions from a maximum of N to 3.
Original magazine
- double page advert for Ford (id = ford, page_before = nil)
- article about Jeremy Clarkson (id = clarkson, page_before = ford)
- article by James May (id = captain_slow, page_before = clarkson)
- double page advert for Audi (id = audi, page_before = captain_slow)
- article by Richard Hammond (id = hamster, page_before = audi)
- advert for Volkswagen (id = vw, page_before = hamster)
again we move the cheeky hamster up...
Updated magazine
- double page advert for Ford (id = ford, page_before = nil)
- article by Richard Hammond (id = hamster, page_before = ford)*
- article about Jeremy Clarkson (id = clarkson, page_before = hamster)*
- article by James May (id = captain_slow, page_before = clarkson)
- double page advert for Audi (id = audi, page_before = captain_slow)
- advert for volkswagen (id = vw, page_before = audi)*
* properties updated
This requires updating three rows in the database: the page we moved, the page just below its old position and the page just below its new position.
It's better but it still involves updating three records and doesn't give me the resourceful CRUD behaviour I'm looking for.
Option 3: Non-integer positioning
...in which we update only 1 record per move (but need to housekeep)
Remember though, I still want to update only one record for each repositioning. In my quest to do this I take a different approach. Instead of storing the page position as an integer I store it as a float. This allows me to move an item by slipping it between two others:
Original magazine
- double page advert for Ford (page=1.0)
- article about Jeremy Clarkson (page=2.0)
- double page advert for Audi (page=3.0)
- article by James May (page=4.0)
- article by Richard Hammond (page=5.0)
- advert for Volkswagen (page=6.0)
and then we move Hamster again:
Updated magazine
- double page advert for Ford (page=1.0)
- article by Richard Hammond (page=1.5)*
- article about Jeremy Clarkson (page=2.0)
- double page advert for Audi (page=3.0)
- article by James May (page=4.0)
- advert for Volkswagen (page=6.0)
* properties updated
Each time we move an item, we choose a value somewhere between the item above and the item below it (say by taking the average of the two items we're slipping between).
Eventually though you need to reset...
Whatever algorithm you use for slipping pages between each other will eventually run out of decimal places, since you have to keep using smaller and smaller numbers. As you move items more and more times you gradually exhaust the floating-point precision and eventually need a new position which is smaller than any gap available.
Every now and then you therefore have to do a reset to re-index the list and bring it all back within range. This is ok but I'm interested to see whether there is a way to encode the ordering which doesn't require this housekeeping.
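For what it's worth, the precision limit is easy to see with standard doubles; this little experiment keeps slipping a new page in just above a fixed neighbour:

```python
lo, hi = 1.0, 2.0
for _ in range(60):
    hi = (lo + hi) / 2      # repeatedly insert between lo and the last inserted page
print(hi == lo)             # True: after ~53 halvings there is no representable value left
```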
Is there an algorithm which requires only 1 update and no housekeeping?
Does an algorithm (or perhaps more accurately, a data encoding) exist for this problem which requires only one update and no housekeeping? If so, can you explain in plain English how it works (i.e. no reference to directed graphs or vertices...)? Many thanks.
UPDATE (post points-awarding)
I've awarded the bounty on this to the question I feel had the most interesting answer. Nobody was able to offer a solution (since from the looks of things there isn't one) so I've not marked any particular question as correct.
Adjusting the no-housekeeping criterion
After having spent even more time thinking about this problem, it occurs to me that the housekeeping criterion should actually be adjusted. The real danger with housekeeping is not that it's a hassle to do but that it should ideally be robust to a client who has an outstanding copy of a pre-housekept set.
Let's say that Joe loads up a page containing a list (using Angular) and then goes off to make a cup of tea. Just after he downloads it, the housekeeping happens and re-indexes all items (1000, 2000, 3000 etc.). After he comes back from his cup of tea, he moves an item from 1010 to 1011. There is a risk at this point that the re-indexing will place his item into a position it wasn't intended to go.
As a note for the future - any housekeeping algorithm should ideally be robust to items submitted across different housekept versions of the list too. Alternatively you should version the housekeeping and create an error if someone tries to update across versions.
Issues with the linked list
While the linked list requires only a few updates it's got some drawbacks too:
it's not trivial to deal with deletions from the list (and you may have to adjust your #destroy method accordingly)
it's not easy to order the list for retrieval
The method I would choose
Having seen all the discussion, I think I would choose the non-integer (or string) positioning:
it's robust to inserts and deletions
it works with a single update
It does however need housekeeping and as mentioned above, if you're going to be complete you will also need to version each housekeeping and raise an error if someone tries to update based on a previous list version.
You should add one more sensible constraint to your wish-list:
max O(log N) space for each item (N being total number of items)
For example, the linked-list solution holds to this: you need at least N possible values for a pointer, so the pointer takes up log N space. If you don't have this limit, the trivial solution (growing strings) already mentioned by Lasse Karlsen and tmyklebu is a solution to your problem, but the memory grows by one character (in the worst case) for each operation. You need some limit, and this is a sensible one.
Then, hear the answer:
No, there is no such algorithm.
Well, this is a strong statement, and not easy to hear, so I guess a proof is required :) I tried to figure out a general proof and posted a question on Computer Science Theory, but the general proof is really hard to do. Say we make it easier and explicitly assume there are two classes of solutions:
absolute addressing - address of each item is specified by some absolute reference (integer, float, string)
relative addressing - address of each item is specified relatively to other items (e.g. the linked list, tree, etc.)
To disprove the existence of absolute addressing algorithm is easy. Just take 3 items, A, B, C, and keep moving the last one between the first two. You will soon run out of the possible combinations for the address of the moved element and will need more bits. You will break the constraint of the limited space.
Disproving the existence of relative addressing is also easy. For any non-trivial arrangement, there certainly exist two different positions that some other items refer to. Then if you move some item between these two positions, at least two items have to be changed: the one which referred to the old position and the one which will refer to the new position. This violates the constraint of only one item changed.
Q.E.D.
Don't be fascinated by complexity - it doesn't work
Now that we (and you) can admit your desired solution does not exist, why would you complicate your life with complex solutions that do not work? They can't work, as we proved above. I think we got lost here. People here spent immense effort just to end up with overly complicated solutions that are even worse than the simplest solutions proposed:
Gene's rational numbers - they grow 4-6 bits in his example, instead of just 1 bit which is required by the most trivial algorithm (described below). 9/14 has 4 + 4 = 8 bits, 19/21 has 5 + 5 = 10 bits, and the resultant number 65/84 has 7 + 7 = 14 bits!! And if we just look at those numbers, we see that 10/14 or 2/3 are much better solutions. It can be easily proven that the growing string solution is unbeatable, see below.
mhelvens' solution - in the worst case he will add a new correcting item after each operation. This will for sure occupy much more than one bit more.
These guys are very clever but obviously cannot bring something sensible. Someone has to tell them - STOP, there's no solution, and what you do simply can't be better than the most trivial solution you are afraid to offer :-)
Go back to square one, go simple
Now, go back to the list of your restrictions. One of them must be broken, you know that. Go through the list and ask, which one of these is least painful?
1) Violate memory constraint
This is hard to violate infinitely, because you have limited space... so be prepared to also violate the housekeeping constraint from time to time.
The solution to this is the one already proposed by tmyklebu and mentioned by Lasse Karlsen: growing strings. Just consider binary strings of 0s and 1s. You have items A, B and C, and you are moving C between A and B. If there is no space between A and B, i.e. they look like
A xxx0
B xxx1
Then just add one more bit for C:
A xxx0
C xxx01
B xxx1
In the worst case, you need 1 extra bit after every operation. You can also work with bytes instead of bits; then, in the worst case, you will have to add one byte for every 8 operations. It's all the same. And it can easily be seen that this solution cannot be beaten: you must add at least one bit, and you cannot add less. In other words, no matter how complex the solution is, it can't be better than this.
Pros:
you have one update per item
can compare any two elements, but slow
Cons:
comparing or sorting will get very very slow as the strings grow
there will be a housekeeping
2) Violate one item modified constraint
This leads to the original linked-list solution. Also, there are plenty of balanced tree data structures, which are even better if you need to look up or compare items (which you didn't mention).
These can go with 3 items modified; balanced trees sometimes need more (when balancing operations are needed), but as it is amortized O(1), over a long run of operations the number of modifications per operation is constant. In your case, I would use a tree solution only if you need to look up or compare items. Otherwise, the linked-list solution rocks. Throwing it out just because it needs 3 updates instead of 1? C'mon :)
Pros:
optimal memory use
fast generation of ordered list (one linear pass), no need to sort
fast operations
no housekeeping
Cons:
cannot easily compare two items. Can easily generate the order of all the items, but given two items randomly, comparing them will take O(N) for list and O(log N) for balanced trees.
3 modified items instead of 1 (... letting up to you how much of a "con" this is)
3) Violate "no housekeeping" constraint
These are the solution with integers and floats, best described by Lasse Karlsen here. Also, the solutions from point 1) will fall here :). The key question was already mentioned by Lasse:
How often will housekeeping have to take place?
If you use k-bit integers, then starting from the optimal state, when items are spread evenly in the integer space, housekeeping will have to take place every k - log N operations in the worst case. You might then use more or less sophisticated algorithms to restrict the number of items you "housekeep".
Pros:
optimal memory use
fast operation
can compare any two elements
one item modified per operation
Cons:
housekeeping
Conclusion - hope never dies
I think the best way, and the answers here prove that, is to decide which one of those constraints is the least painful and just take one of those simple solutions formerly frowned upon.
But hope never dies. While writing this, I realized that your desired solution would exist if only we were able to ask the server!! It depends on the type of server of course, but a classical SQL server already has the trees/linked lists implemented - for indices. The server is already doing operations like "move this item before this one in the tree"!! But it does so based on the data, not based on our request. If we were somehow able to ask the server to do this without having to create perverse, endlessly growing data, that would be your desired solution! As I said, the server already does it - the solution is so close, yet so far. If you can write your own server, you can do it :-)
#tmyklebu has the answer, but he never quite got to the punch line: The answer to your question is "no" unless you are willing to accept a worst case key length of n-1 bits to store n items.
This means that total key storage for n items is O(n^2).
There is an "adversary" information-theoretic argument that says no matter what scheme for assigning keys you choose for a database of n items, I can always come up with a series of n item re-positionings ("Move item k to position p.") that will force you to use a key with n-1 bits. Or by extension, if we start with an empty database, and you give me items to insert, I can choose a sequence of insertion positions that will require you to use at least zero bits for the first, one for the second, etc. indefinitely.
Edit
I earlier had an idea here about using rational numbers for keys. But it was more expensive than just adding one bit of length to split the gap between pairs of keys that differ by one. So I've removed it.
You can also interpret option 3 as storing positions as an unbounded-length string. That way you don't "run out of decimal places" or anything of that nature. Give the first item, say 'foo', position 1. Recursively partition your universe into "the stuff that's less than foo", which gets a 0 prefix, and "the stuff that's bigger than foo", which gets a 1 prefix.
This sucks in a lot of ways, notably that the position of an object can need as many bits to represent as you've done object moves.
I was fascinated by this question, so I started working on an idea. Unfortunately, it's complicated (you probably knew it would be) and I don't have time to work it all out. I just thought I'd share my progress.
It's based on a doubly-linked list, but with extra bookkeeping information in every moved item. With some clever tricks, I suspect that each of the n items in the set will require less than O(n) extra space, even in the worst case, but I have no proof of this. It will also take extra time to figure out the view order.
For example, take the following initial configuration:
A (-,B|0)
B (A,C|0)
C (B,D|0)
D (C,E|0)
E (D,-|0)
The top-to-bottom ordering is derived purely from the meta-data, which consists of a sequence of states (predecessor,successor|timestamp) for each item.
When moving D between A and B, you push a new state (A,B|1) to the front of its sequence with a fresh timestamp, which you get by incrementing a shared counter:
A (-,B|0)
D (A,B|1) (C,E|0)
B (A,C|0)
C (B,D|0)
E (D,-|0)
As you see, we keep the old information around in order to connect C to E.
Here is roughly how you derive the proper order from the meta-data:
You keep a pointer to A.
A agrees it has no predecessor. So insert A. It leads you to B.
B agrees it wants to be successor to A. So insert B after A. It leads you to C.
C agrees it wants to be successor to B. So insert C after B. It leads you to D.
D disagrees. It wants to be successor to A. Start recursion to insert it and find the real successor:
D wins from B because it has a more recent timestamp. Insert D after A. It leads you to B.
B is already D's successor. Look back in D's history, which leads you to E.
E agrees it wants to be successor to D with timestamp 0. So return E.
So the successor is E. Insert E after C. It tells you it has no successor. You are finished.
This is not exactly an algorithm yet, because it doesn't cover all cases. For example, when you move an item forwards instead of backwards. When moving B between D and E:
A (-,B|0)
C (B,D|0)
D (C,E|0)
B (D,E|1)(A,C|0)
E (D,-|0)
The 'move' operation is the same. But the algorithm to derive the proper order is a bit different. From A it will run into B, able to get the real successor C from it, but with no place to insert B itself yet. You can keep it in reserve as a candidate for insertion after D, where it will eventually match timestamps against E for the privilege of that position.
I wrote some Angular.js code on Plunker that can be used as a starting-point to implement and test this algorithm. The relevant function is called findNext. It doesn't do anything clever yet.
There are optimizations to reduce the amount of metadata. For example, when moving an item away from where it was recently placed, and its neighbors are still linked of their own accord, you won't have to preserve its newest state but can just replace it. And there are probably situations where you can discard all of an item's sufficiently old states (when you move it).
It's a shame I don't have time to fully work this out. It's an interesting problem.
Good luck!
Edit: I felt I needed to clarify the above-mentioned optimization ideas. First, there is no need to push a new history configuration if the original links still hold. For example, it is fine to go from here (moved D between A and B):
A (-,B|0)
D (A,B|1) (C,E|0)
B (A,C|0)
C (B,D|0)
E (D,-|0)
to here (then moved D between B and C):
A (-,B|0)
B (A,C|0)
D (B,C|2) (C,E|0)
C (B,D|0)
E (D,-|0)
We are able to discard the (A,B|1) configuration because A and B were still connected by themselves. Any number of 'unrelated' movements can come in between without changing that.
Secondly, imagine that eventually C and E are moved away from each other, so the (C,E|0) configuration can be dropped the next time D is moved. This is trickier to prove, though.
All of this considered, I believe there is a good chance that the list requires less than O(n+k) space (n being the number of items in the list, k being the number of operations) in the worst case; especially in the average case.
The way to prove any of this is to come up with a simpler model for this data-structure, most likely based on graph theory. Again, I regret that I don't have time to work on this.
Your best option is "Option 3", although "non-integer" doesn't necessarily have to be involved.
"Non-integer" can mean anything that have some kind of accuracy definition, which means:
Integers (you just don't use 1, 2, 3, etc.)
Strings (you just tuck on more characters to ensure the proper "sort order")
Floating point values (adding more decimal points, somewhat the same as strings)
In each case you're going to have accuracy problems. For floating point types, there might be a hard limit in the database engine, but for strings, the limit will be the amount of space you allow for this. Please note that your question can be understood to mean "with no limits", meaning that for such a solution to work, you really need infinite accuracy/space for the keys.
However, I think that you don't need that.
Let's assume that you initially allocate indexes 1000 apart to the rows, meaning you will have:
1000 A
2000 B
3000 C
4000 D
... and so on
Then you move as follows:
D up between A and B (gets index 1500)
C up between A and D (gets index 1250)
B up between A and C (gets index 1125)
D up between A and B (gets index 1062)
C up between A and D (gets index 1031)
B up between A and C (gets index 1015)
D up between A and B (gets index 1007)
C up between A and D (gets index 1004)
B up between A and C (gets index 1002)
D up between A and B (gets index 1001)
At this point, the list looks like this:
1000 A
1001 D
1002 B
1004 C
Now, then you want to move C up between A and D.
This is currently not possible, so you're going to have to renumber some items.
You can get by with renumbering just a few rows (shifting D to 1002 and B to 1003 so that C can take 1001), trying to update the minimum number of rows, and thus you get:
1000 A
1001 C
1002 D
1003 B
but now, if you want to move B up between A and C, you're going to renumber everything except A.
The question is this: How likely is it that you have this pathological sequence of events?
If the answer is very likely then you will have problems, regardless of what you do.
If the answer is likely seldom, then you might decide that the "problems" with the above approach are manageable. Note that renumbering and ordering more than one row will likely be the exceptions here, and you would get something like "amortized 1 row updated per move". Amortized means that you spread the cost of those occasions where you have to update more than one row out over all the other occasions where you don't.
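A rough sketch of that scheme, with an illustrative spacing of 1000 and the renumbering done as an occasional, explicit housekeeping pass:

```python
SPACING = 1000

def midpoint(lo: int, hi: int):
    """Integer strictly between lo and hi, or None when the gap is exhausted."""
    mid = (lo + hi) // 2
    return mid if lo < mid < hi else None

def renumber(ordered_ids):
    """Housekeeping: hand out fresh, evenly spaced positions (touches every row)."""
    return {item_id: (i + 1) * SPACING for i, item_id in enumerate(ordered_ids)}

# Typical move: pos = midpoint(positions[prev_id], positions[next_id])
# If pos is None, run renumber(...) over the intended order and retry the midpoint.
```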
What if you store the original order once, never change it after saving, and then store the number of increments each item has moved up or down the list?
Then moving something up 3 levels would store only that action.
In the database you can then order by a computed column.
First time insert:
ord1 | ord2 | value
-----+------+--------
1 | 0 | A
2 | 0 | B
3 | 0 | C
4 | 0 | D
5 | 0 | E
6 | 0 | F
Update order, move D up 2 levels
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
2 | 0 | B | 2
3 | 0 | C | 3
4 | -2 | D | 2
5 | 0 | E | 5
6 | 0 | F | 6
Order by ord1 + ord2
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
2 | 0 | B | 2
4 | -2 | D | 2
3 | 0 | C | 3
5 | 0 | E | 5
6 | 0 | F | 6
Order by ord1 + ord2 ASC, ord2 ASC
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
4 | -2 | D | 2
2 | 0 | B | 2
3 | 0 | C | 3
5 | 0 | E | 5
6 | 0 | F | 6
Move E up 4 levels
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
5 | -4 | E | 1
1 | 0 | A | 1
4 | -2 | D | 2
2 | 0 | B | 2
3 | 0 | C | 3
6 | 0 | F | 6
Something like relative ordering, where ord1 is the absolute order while ord2 is the relative order.
Along with the same idea of just storing the history of movements and sorting based on that.
Not tested, not tried, just wrote down what I thought at this moment, maybe it can point you in some direction :)
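A quick sanity check of the idea in Python, reproducing the last table above:

```python
rows = [  # (ord1, ord2, value)
    (1, 0, "A"), (2, 0, "B"), (3, 0, "C"),
    (4, -2, "D"),          # D was moved up 2 levels
    (5, -4, "E"),          # E was moved up 4 levels
    (6, 0, "F"),
]
ordered = sorted(rows, key=lambda r: (r[0] + r[1], r[1]))
print([v for _, _, v in ordered])   # ['E', 'A', 'D', 'B', 'C', 'F']
```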
I am unsure if you will call this cheating, but why not create a separate page list resource that references the page resources?
If you change the order of the pages you need not update any of the pages, just the list that stores the order of the IDs.
Original page list
[ford, clarkson, captain_slow, audi, hamster, vw]
Update to
[ford, hamster, clarkson, captain_slow, audi, vw]
Leave the page resources untouched.
You could always store the ordering permutation separately as a ln(num_records!)/ln(2) bit bitstring and figure out how to transform/CRUD that yourself so that you'd only need to update a single bit for simple operations, if updating 2/3 records is not good enough for you.
What about the following very simple algorithm:
(let's take the analogy with page numbers in a book)
If you move a page to become the "new" page 3, you now have "at least" one page 3, possibly two, or even more. So, which one is the "right" page 3?
Solution: the "newest". So, we make use of the fact that a record also has an "updated date/time", to determine who the real page 3 is.
If you need to represent the entire list in its right order, you have to sort with two keys, one for the page number, and one for the "updated date/time" field.
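As a small illustration of the two-key sort (field names are made up):

```python
pages = [
    {"title": "Clarkson article", "page": 3, "updated_at": 100},
    {"title": "Hammond article",  "page": 3, "updated_at": 250},  # moved to page 3 later
    {"title": "May article",      "page": 4, "updated_at": 100},
]
# within the same page number, the most recently updated record wins the spot
display = sorted(pages, key=lambda p: (p["page"], -p["updated_at"]))
print([p["title"] for p in display])
# ['Hammond article', 'Clarkson article', 'May article']
```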
Background:
I have read that many DBMSs use write-ahead logging to preserve atomicity and durability of transactions by storing updates as a group of write operations. What I'm trying to accomplish is to create a dbms model with improved concurrency by allowing reads to proceed on 'old' data while writes are pending.
Question:
Is there a data structure that allows me to efficiently (ideally O(1) amortized, at most O(log n)) look up array elements (or memory locations, if you like), which may or may not have been overwritten by write actions, in reference to some point in time? This would be for about 1TB of data total.
Here is some ascii art to make this a little clearer. The dashes are data, with version 0 being the oldest version. The arrows indicate write operations.
^ ___________________________________Snapshot 2
| V | | V
| -- --- | | -------- Version 2
| | | __________________Snapshot 1
| V | | V
T| -------- | | --------- Version 1
I| | | ___________Snapshot 0
M| V V V V
E|------------------------------------- Version 0
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>
SPACE/ADDRESS
Attempts at solution:
Let N be the data size, M be the number of versions, and P be the average number of updates per version.
The naive algorithm (searching each update) is O(M*P).
Dividing the data into buckets, updating only entire buckets, and searching a bitmask of buckets would be O(N/B*M), where B is bucket size, which isn't much better.
A Bloom filter seems like a good candidate at first glance, except that it requires more data than a simple bitmask of each memory location (which would be bad anyway, since it requires M*N/8 bytes to store.)
A standard hash table also comes to mind, but what would the key be?
Actually, now that I've gone to the trouble of writing this all up, I've thought of a solution that uses a binary search tree. I'll submit it as an answer in a bit, but it's still O(M*log2(P)) in space and time which is not ideal. See below.
The following is the best solution I could come up with, though it is still suboptimal.
The idea is to place each region into a binary search tree, one tree per version, where each inner node contains a memory location, and each leaf node is either a Hit or a Miss (and possibly lookup information), depending on whether updated data exists there. This is O(P*log(P)) to construct for each version, and O(M*log(P)) to look up in.
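Here is a compact sketch of the lookup side, using sorted lists and bisect as a stand-in for the per-version search tree (same O(M*log(P)) lookup, one structure per version; names are illustrative):

```python
import bisect

class Version:
    """Updates belonging to one version: disjoint [start, end) regions, sorted by start."""
    def __init__(self):
        self.starts, self.ends, self.payloads = [], [], []

    def add(self, start, end, payload):
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.ends.insert(i, end)
        self.payloads.insert(i, payload)

    def lookup(self, addr):
        """Payload covering addr in this version, or None (a 'Miss')."""
        i = bisect.bisect_right(self.starts, addr) - 1
        if i >= 0 and addr < self.ends[i]:
            return self.payloads[i]
        return None

def read(versions, addr):
    """versions[0] is the oldest; search newest-to-oldest until some version hits."""
    for v in reversed(versions):
        hit = v.lookup(addr)
        if hit is not None:
            return hit
    return None   # fall through to the base data
```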
This is suboptimal for two reasons:
The tree is balanced, but Misses are much more likely than Hits in practice, so it would make sense to put Miss nodes higher in the tree, or arrange nodes by their size. Some kind of Huffman coding comes to mind, but Huffman's algorithm does not preserve the search tree invariants.
It requires M trees (hence O(M*log(P)) lookup). Maybe there is some way to combine the trees.
Please explain the advantages of a linked list over an array. Also, are there any advantages of using an array compared to a linked list?
Regards,
Shoaib
Both store a sequence of elements, but using different techniques.
An array stores elements in successive order in memory, i.e. it looks like follows:
--------------------------------------------------------------------------------------
| item 1 | item 2 | item 3 | ... ... | item x | //here comes other stuff
--------------------------------------------------------------------------------------
This means the elements are stored consecutively in memory, one after another.
A ((doubly) linked) list, on the other hand, stores the items the following way: it creates its own "list item" for each element; this "list item" holds the actual element and a pointer/reference/hint/etc. to the next "list item". If it is doubly linked, it also contains a pointer/reference/hint/etc. to the previous "list item":
------------
------------ ---------- | item 4 |
| item 1 | | item 2 | | next ---+----...
| next ---+------->| next | ------------
------------ ---+------ ^
| |
| |
v |
---------- |
| item 3 | |
| next --+---+
----------
This means, the elements can be spread all over the memory and are not stored at specific memory locations.
Now that we know this, we can compare some usual operations on sequences of elements:
Accessing an element at a specific index: Using an array, we simply compute the offset for this index and have the element at the index.
This is very cheap. With a list on the other hand, we do not know a priori where the element at index is stored (since all elements can be anywhere in memory), so we have to walk the list item by item until we find the element at the index. This is an expensive operation.
Adding a new element at the end of the sequence: Using an array this can lead to the following problem: Arrays are usually of fixed size, so if our array is already completely filled (see //here comes other stuff), we probably can't put the new element there because there might already be something else. So, maybe we have to copy the whole array. However, if the array is not filled, we can simply put the element there.
Using a list, we have to generate a new "list item", put the element into it and set the next pointer of the currently last element to this new list item. Usually, we store a reference to the currently last element so that we don't have to search through the whole list to find it. Thus, inserting a new element is no real problem with lists.
Adding a new element somewhere in the middle: Let's first consider arrays: here, we might get into the following situation: We have an array with elements 1 to 1000:
1 | 2 | 3 | 4 | 5 | 6 | ... | 999 | 1000 | free | free
Now, we want to insert 4.5 between 4 and 5: This means, we have to move all elements from 5 to 1000 one position right in order to make space for the 4.5:
1 | 2 | 3 | 4 | free | 5 | 6 | ... | 999 | 1000 | free
||
vv
1 | 2 | 3 | 4 | 4.5 | 5 | 6 | ... | 999 | 1000 | free
Moving all these elements is a very expensive operation, so better don't do this too often.
Now let's consider how a list can perform this task. Say we currently have the following list:
1 -> 2 -> 3 -> 4 -> 5 -> ... -> 999 -> 1000
Again, we want to insert 4.5 between 4 and 5. This can be done very easily: We generate a new list item and update the pointers/references:
1 -> 2 -> 3 -> 4 5 -> ... -> 999 -> 1000
| ^
+-> 4.5 -+
We have simply created a new list element and generated a sort of "bypass" to insert it - very cheap (as long as we have a pointer/reference to the list item the new element will be inserted after).
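In code, the whole "bypass" is a single pointer update (Python sketch):

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def insert_after(node, value):
    """Splice a new node in after 'node' in O(1): nothing else is moved."""
    node.next = Node(value, node.next)
```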
So, let's sum up: Linked lists are really nice when it comes to inserting at random positions (as long as you have a pointer to the adequate list item). If your operation involves adding lots of elements dynamically and traversing all elements anyway, a list might be a good choice.
An array is very good when it comes to index accesses. If your application needs to access elements at specific positions very often, you should rather use an array.
Notable things:
Solving the fixed-size problem for arrays: As mentioned before, (raw) arrays are usually of fixed size. However, this is no longer a real issue nowadays, since almost all programming languages provide mechanisms to generate arrays that grow (and possibly shrink) dynamically, just as needed. This growing and shrinking can be implemented such that we have amortized runtime of O(1) for inserting and removing elements (at the end of the array) and such that the programmer doesn't have to call grow and shrink explicitly.
Since lists provide such nice properties for insertions, they can be used as underlying data structures for search trees, etc. I.e. you construct a search tree, whose lowest level consists of the linked list.
Arrays have a fixed size but are faster to access: they are allocated in one place and the location of each element is known (you can jump to the right element).
Lists are not limited in size except by the amount of available memory. They are slower to access since to find an element you have to traverse the list.
This is a very short explanation: I would suggest you get a book on data structures and read it. These are basic concepts that you will need to fully understand.
Since you tagged the question with "data structures" I will answer this in that context.
An array is fixed in size when you declare/create it meaning you can't add more elements to it. Thus, if I have an array of, say, 5 elements, you can do whatever you want with it but you can't add more elements to it.
A linked list is basically a way to represent a list where you can have as many "items" as you'd like. It consists of a head (the first element), a tail (the last element), and elements (called nodes) in between.
There are many types of linked-lists that you will probably encounter in any data structures class.
The key thing you will learn with linked lists is how to create fields in your classes that point to other objects; to construct the list, each node must point to the next node.
Obviously, this is a very generalized answer. It should give you an idea for your class.
Advantages of Array over Linked List
The array has a specific address for each element stored in it, and thus we can access any element directly.
As we know the position of the middle element, and other elements are easily accessible too, we can easily perform BINARY SEARCH on the array.
Disadvantages of Array over Linked List
The total number of elements needs to be specified, or the memory allocation needs to be done, at the time of array creation.
The size of the array, once declared, cannot be increased in the program. If the number of elements entered exceeds the size of the array, an ARRAY OVERFLOW EXCEPTION occurs.
Advantages of Linked List over Array
The size of the list doesn't need to be declared at the beginning of the program.
As the linked list doesn't have a size limit, we can go on adding new nodes (elements) and increasing the size of the list to any extent.
Disadvantages of Linked List over Array
Nodes cannot be addressed directly. Only the address of the first node is stored, and in order to reach any node we need to traverse the list from the beginning to the desired node.
Since nodes cannot be accessed directly by position, BINARY SEARCH cannot be performed.
If you don't know the amount of objects you need to store beforehand, a list is probably what you want, since it's very easy to dynamically shrink or grow the list as needed. With this also comes the advantage of being able to easily insert elements mid-list without any need for reallocation.
The disadvantage of a list compared to an array, on the other hand, is that it's slower to select individual elements, since you need to iterate. With an array, you won't have this problem. Arrays, however, are troublesome to use if you need to resize them, as this operation is more costly than adding or removing elements from a linked list.
Lists should be used more commonly, since the ease of use is often more beneficial than the small performance gain from using a static size array.
While it has been mentioned that arrays have better performance than linked lists, I am surprised to see no mention of the word "cache" anywhere. The problem with linked lists is mainly that they pretty much guarantee that every jump from node to node will be a cache miss, which is incredibly, brutally expensive, performance-wise.
To put it crudely, if performance matters even the slightest bit in your program, you should never use a linked list. Ever. There is no excuse for it. They are far beyond "slow", they are catastrophically slow, and nothing about them can make up for that fact. Using them is like an intentional sabotage to your performance, like inserting a "wait()" function into your code for absolutely no reason.
I'm currently preparing for an interview, and it reminded me of a question I was once asked in a previous interview that went something like this:
"You have been asked to design some software to continuously display the top 10 search terms on Google. You are given access to a feed that provides an endless real-time stream of search terms currently being searched on Google. Describe what algorithm and data structures you would use to implement this. You are to design two variations:
(i) Display the top 10 search terms of all time (i.e. since you started reading the feed).
(ii) Display only the top 10 search terms for the past month, updated hourly.
You can use an approximation to obtain the top 10 list, but you must justify your choices."
I bombed in this interview and still have really no idea how to implement this.
The first part asks for the 10 most frequent items in a continuously growing sub-sequence of an infinite list. I looked into selection algorithms, but couldn't find any online versions to solve this problem.
The second part uses a finite list, but due to the large amount of data being processed, you can't really store the whole month of search terms in memory and calculate a histogram every hour.
The problem is made more difficult by the fact that the top 10 list is being continuously updated, so somehow you need to be calculating your top 10 over a sliding window.
Any ideas?
Frequency Estimation Overview
There are some well-known algorithms that can provide frequency estimates for such a stream using a fixed amount of storage. One is Frequent, by Misra and Gries (1982). From a list of n items, it finds all items that occur more than n / k times, using k - 1 counters. This is a generalization of Boyer and Moore's Majority algorithm (Fischer-Salzberg, 1982), where k is 2.
The important thing to remember is that these algorithms can only provide frequency estimates. Specifically, the Misra-Gries estimate can under-count the actual frequency by (n / k) items.
Suppose that you had an algorithm that could positively identify an item only if it occurs more than 50% of the time. Feed this algorithm a stream of N distinct items, and then add another N - 1 copies of one item, x, for a total of 2N - 1 items. If the algorithm tells you that x exceeds 50% of the total, it must have been in the first stream; if it doesn't, x wasn't in the initial stream. In order for the algorithm to make this determination, it must store the initial stream (or some summary proportional to its length)! So, we can prove to ourselves that the space required by such an "exact" algorithm would be Ω(N).
Instead, these frequency algorithms described here provide an estimate, identifying any item that exceeds the threshold, along with some items that fall below it by a certain margin. For example the Majority algorithm, using a single counter, will always give a result; if any item exceeds 50% of the stream, it will be found. But it might also give you an item that occurs only once. You wouldn't know without making a second pass over the data (using, again, a single counter, but looking only for that item).
The Frequent Algorithm
Here's a simple description of Misra-Gries' Frequent algorithm. Demaine (2002) and others have optimized the algorithm, but this gives you the gist.
Specify the threshold fraction, 1 / k; any item that occurs more than n / k times will be found. Create an empty map (like a red-black tree); the keys will be search terms, and the values will be a counter for that term.
Look at each item in the stream.
If the term exists in the map, increment the associated counter.
Otherwise, if the map has fewer than k - 1 entries, add the term to the map with a counter of one.
However, if the map already has k - 1 entries, decrement the counter in every entry. If any counter reaches zero during this process, remove its entry from the map.
Note that you can process an infinite amount of data with a fixed amount of storage (just the fixed-size map). The amount of storage required depends only on the threshold of interest, and the size of the stream does not matter.
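For concreteness, here is a minimal Python sketch of the steps above (the name frequent_items and its signature are my own, not from the original papers):

    # Minimal sketch of the Frequent algorithm as described above. The returned
    # counters are estimates that may under-count the true frequency by up to n/k.
    def frequent_items(stream, k):
        counters = {}                          # at most k - 1 entries
        for term in stream:
            if term in counters:
                counters[term] += 1
            elif len(counters) < k - 1:
                counters[term] = 1
            else:                              # map is full: decrement everything
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    # Any term occurring in more than 1/5 of the stream is guaranteed to be a key here.
    candidates = frequent_items(["a", "b", "a", "c", "a", "d", "a", "e"], k=5)

The fixed-size map is the only state, which is what makes the fixed-storage claim above work.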
Counting Searches
In this context, perhaps you buffer one hour of searches, and perform this process on that hour's data. If you can take a second pass over this hour's search log, you can get an exact count of occurrences of the top "candidates" identified in the first pass. Or, maybe it's okay to make a single pass, and report all the candidates, knowing that any item that should be there is included, and any extras are just noise that will disappear in the next hour.
Any candidates that really do exceed the threshold of interest get stored as a summary. Keep a month's worth of these summaries, throwing away the oldest each hour, and you would have a good approximation of the most common search terms.
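As a rough sketch of that bookkeeping (hourly_summaries, end_of_hour, and monthly_top10 are my own names, and a Python Counter stands in for each small summary):

    from collections import Counter, deque

    hourly_summaries = deque(maxlen=24 * 30)   # the oldest hour falls off automatically

    def end_of_hour(verified_counts):
        # verified_counts: exact counts for this hour's verified candidates
        hourly_summaries.append(Counter(verified_counts))

    def monthly_top10():
        total = Counter()
        for hour in hourly_summaries:
            total += hour                      # sum the small per-hour summaries
        return total.most_common(10)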
Well, this looks like an awful lot of data, with a perhaps prohibitive cost to store all the frequencies. When the amount of data is so large that we cannot hope to store it all, we enter the domain of data stream algorithms.
Useful book in this area:
Muthukrishnan - "Data Streams: Algorithms and Applications"
Closely related reference to the problem at hand which I picked from the above:
Manku, Motwani - "Approximate Frequency Counts over Data Streams" [pdf]
By the way, Motwani, of Stanford, was an author of the very important "Randomized Algorithms" book. The 11th chapter of this book deals with this problem. Edit: sorry, bad reference; that particular chapter is on a different problem. After checking, I instead recommend section 5.1.2 of Muthukrishnan's book, available online.
Heh, nice interview question.
This is one of the research projects I am currently working on. The requirement is almost exactly the same as yours, and we have developed nice algorithms to solve the problem.
The Input
The input is an endless stream of English words or phrases (we refer to them as tokens).
The Output
Output the top N tokens we have seen so far (from all the tokens we have seen!)
Output the top N tokens in a historical window, say, the last day or the last week.
An application of this research is to find the hot topics or trending topics on Twitter or Facebook. We have a crawler that crawls the websites and generates a stream of words, which is fed into the system. The system then outputs the words or phrases with the top frequency, either overall or historically. Imagine that in the last couple of weeks the phrase "World Cup" would appear many times on Twitter. So would "Paul the octopus". :)
Strings into Integers
The system has an integer ID for each word. Though there is an almost infinite number of possible words on the Internet, after accumulating a large set of words the chance of finding new words becomes lower and lower. We have already found 4 million different words and assigned a unique ID to each. This whole set of data can be loaded into memory as a hash table, consuming roughly 300 MB of memory. (We implemented our own hash table; Java's implementation has a huge memory overhead.)
Each phrase then can be identified as an array of integers.
This is important, because sorting and comparison on integers are much, much faster than on strings.
Archive Data
The system keeps archive data for every token. Basically it's pairs of (Token, Frequency). However, the table that stores the data would be so huge that we have to partition it physically. One partitioning scheme is based on the n-gram length of the token. If the token is a single word, it is a 1-gram. If the token is a two-word phrase, it is a 2-gram, and so on. At roughly 4-grams we have 1 billion records, with the table sized at around 60 GB.
Processing Incoming Streams
The system absorbs incoming sentences until memory becomes fully utilized (yes, we need a MemoryManager). After taking N sentences and storing them in memory, the system pauses and starts tokenizing each sentence into words and phrases. Each token (word or phrase) is counted.
Highly frequent tokens are always kept in memory. Less frequent tokens are sorted by ID (remember we translate the string into an array of integers) and serialized to a disk file.
(However, for your problem, since you are counting only words, you can keep the whole word-frequency map in memory. A carefully designed data structure would consume only 300 MB of memory for 4 million different words (hint: use ASCII chars to represent strings), which is quite acceptable.)
Meanwhile, there is another process that is activated once it finds any disk file generated by the system, and starts merging it. Since the disk files are sorted, merging works much like merge sort. Some care is needed in the design here, since we want to avoid too many random disk seeks. The idea is to avoid reading (merge process) and writing (system output) at the same time: let the merge process read from one disk while writing to a different disk. This is similar to implementing locking.
End of Day
At the end of the day, the system has many frequent tokens with their frequencies stored in memory, and many other less frequent tokens stored in several disk files (each file sorted).
The system flushes the in-memory map to a sorted disk file. Now the problem becomes merging a set of sorted disk files. Using a similar process, we end up with one sorted disk file.
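As a sketch of that k-way merge, assuming each run file holds one "token<TAB>count" line per token, sorted by token (the file format and the name merge_runs are assumptions, not the actual implementation):

    import heapq
    from itertools import groupby

    def merge_runs(run_paths, out_path):
        files = [open(p) for p in run_paths]
        records = ((line.rstrip("\n").split("\t") for line in f) for f in files)
        merged = heapq.merge(*records, key=lambda rec: rec[0])   # streaming merge
        with open(out_path, "w") as out:
            for token, group in groupby(merged, key=lambda rec: rec[0]):
                total = sum(int(count) for _, count in group)
                out.write(f"{token}\t{total}\n")
        for f in files:
            f.close()

heapq.merge never loads a whole file into memory, so this keeps the sequential-read behaviour the answer is after.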
Then, the final task is to merge the sorted disk file into the archive database.
Depending on the size of the archive database, if it is big enough the algorithm works as below:
for each record in the sorted disk file:
    update the archive database, increasing the frequency
    if rowcount == 0, put the record into a list
end for

for each record in the list with rowcount == 0:
    insert it into the archive database
end for
The intuition is that after some time, the number of inserts becomes smaller and smaller; more and more operations are updates only, and updates are not penalized by the index.
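The same update-then-insert pass could look roughly like this with sqlite3 (the table and column names are made up for illustration):

    import sqlite3

    def merge_into_archive(conn, sorted_records):
        # sorted_records yields (token_id, frequency) pairs from the sorted run file
        cur = conn.cursor()
        to_insert = []
        for token_id, freq in sorted_records:
            cur.execute(
                "UPDATE archive SET frequency = frequency + ? WHERE token_id = ?",
                (freq, token_id),
            )
            if cur.rowcount == 0:              # no existing row: remember it for later
                to_insert.append((token_id, freq))
        cur.executemany(
            "INSERT INTO archive (token_id, frequency) VALUES (?, ?)", to_insert
        )
        conn.commit()

cur.rowcount being 0 after the UPDATE is exactly the "rowcount == 0" test in the pseudocode above.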
Hope this entire explanation would help. :)
You could use a hash table combined with a binary search tree. Implement a <search term, count> dictionary which tells you how many times each search term has been searched for.
Obviously iterating the entire hash table every hour to get the top 10 is very bad. But this is Google we're talking about, so you can assume that the top ten will all get, say, over 10 000 hits (it's probably a much larger number though). So every time a search term's count exceeds 10 000, insert it in the BST. Then every hour, you only have to get the first 10 from the BST, which should contain relatively few entries.
This solves the problem of top-10-of-all-time.
The really tricky part is dealing with one term taking another's place in the monthly report (for example, "stack overflow" might have 50 000 hits for the past two months, but only 10 000 the past month, while "amazon" might have 40 000 for the past two months but 30 000 for the past month. You want "amazon" to come before "stack overflow" in your monthly report). To do this, I would store, for all major (above 10 000 all-time searches) search terms, a 30-day list that tells you how many times that term was searched for on each day. The list would work like a FIFO queue: you remove the first day and insert a new one each day (or each hour, but then you might need to store more information, which means more memory / space. If memory is not a problem do it, otherwise go for that "approximation" they're talking about).
This looks like a good start. You can then worry about pruning the terms that have > 10 000 hits but haven't had many in a long while and stuff like that.
case i)
Maintain a hashtable for all the search terms, as well as a sorted top-ten list separate from the hashtable. Whenever a search occurs, increment the appropriate item in the hashtable and check to see if that item should now be switched with the 10th item in the top-ten list.
O(1) lookup for the top-ten list, and max O(log(n)) insertion into the hashtable (assuming collisions managed by a self-balancing binary tree).
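A rough sketch of case i) under those assumptions (the names are mine; the top-ten structure is only touched when a counter overtakes the current 10th place):

    counts = {}          # term -> total count
    top10 = {}           # term -> count, at most 10 entries

    def record_search(term):
        counts[term] = counts.get(term, 0) + 1
        c = counts[term]
        if term in top10 or len(top10) < 10:
            top10[term] = c                      # already tracked, or room left
            return
        weakest = min(top10, key=top10.get)      # current 10th place
        if c > top10[weakest]:                   # the new count overtakes it
            del top10[weakest]
            top10[term] = c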
case ii)
Instead of maintaining a huge hashtable and a small list, we maintain a hashtable and a sorted list of all items. Whenever a search is made, that term is incremented in the hashtable, and in the sorted list the term can be checked to see if it should switch with the term after it. A self-balancing binary tree could work well for this, as we also need to be able to query it quickly (more on this later).
In addition we also maintain a list of 'hours' in the form of a FIFO list (queue). Each 'hour' element would contain a list of all searches done within that particular hour. So for example, our list of hours might look like this:
Time: 0 hours
-Search Terms:
-free stuff: 56
-funny pics: 321
-stackoverflow: 1234
Time: 1 hour
-Search Terms:
-ebay: 12
-funny pics: 1
-stackoverflow: 522
-BP sucks: 92
Then, every hour: if the list is at least 720 hours long (that's the number of hours in 30 days), look at the first element in the list and, for each search term in it, decrement that term's count in the hashtable by the appropriate amount. Afterwards, delete that first hour element from the list.
So let's say we're at hour 721, and we're ready to look at the first hour in our list (above). We'd decrement free stuff by 56 in the hashtable, funny pics by 321, etc., and would then remove hour 0 from the list completely since we will never need to look at it again.
The reason we maintain a sorted list of all terms that allows for fast queries is that every hour, as we go through the search terms from 720 hours ago, we need to ensure the top-ten list remains sorted. So as we decrement 'free stuff' by 56 in the hashtable, for example, we'd check to see where it now belongs in the list. Because it's a self-balancing binary tree, all of that can be accomplished nicely in O(log(n)) time.
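A condensed sketch of case ii) (names are mine; for brevity the top ten is recomputed from the running totals rather than kept in a self-balancing tree as described above):

    from collections import Counter, deque

    totals = Counter()            # term -> count over the trailing 720 hours
    hours = deque()               # FIFO of per-hour Counters
    current_hour = Counter()

    def record_search(term):
        current_hour[term] += 1
        totals[term] += 1

    def roll_hour():
        global current_hour
        hours.append(current_hour)
        current_hour = Counter()
        if len(hours) > 720:                  # drop the hour from 30 days ago
            expired = hours.popleft()
            totals.subtract(expired)
            for term in expired:
                if totals[term] <= 0:
                    del totals[term]

    def top10():
        return totals.most_common(10)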
Edit: Sacrificing accuracy for space...
It might be useful to maintain the big list in the first case too, as in the second one. Then we could apply the following space optimization to both cases: run a cron job to remove all but the top x items in the list. This would keep the space requirement down (and as a result make queries on the list faster). Of course, it would result in an approximate result, but this is allowed. x could be calculated before deploying the application based on available memory, and adjusted dynamically if more memory becomes available.
Rough thinking...
For top 10 all time
Using a hash collection where a count for each term is stored (sanitize terms, etc.)
A sorted array which contains the ongoing top 10; a term/count is added to this array whenever the count of a term becomes equal to or greater than the smallest count in the array
For monthly top 10 updated hourly:
Use an array indexed by the number of hours elapsed since start modulo 744 (the number of hours in a 31-day month), whose entries each consist of a hash collection storing a count for each term encountered during that hour-slot. An entry is reset whenever the hour-slot counter changes.
The stats in the hour-slot array need to be collected whenever the current hour-slot counter changes (at most once an hour), by copying and flattening the contents of the array of hour-slots.
Errr... make sense? I didn't think this through as I would in real life
Ah yes, forgot to mention, the hourly "copying/flattening" required for the monthly stats can actually reuse the same code used for the top 10 of all time, a nice side effect.
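A tiny sketch of the modulo-744 slot idea (identifiers are mine; start_of_hour must be called whenever the hour-slot counter changes, before new searches are recorded):

    from collections import Counter

    slots = [Counter() for _ in range(744)]      # one hash collection per hour slot

    def start_of_hour(hours_since_start):
        slots[hours_since_start % 744].clear()   # reset the slot being reused

    def record_search(term, hours_since_start):
        slots[hours_since_start % 744][term] += 1

    def monthly_top10():
        flattened = sum(slots, Counter())        # the "copying/flattening" step
        return flattened.most_common(10)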
Exact solution
First, a solution that guarantees correct results, but requires a lot of memory (a big map).
"All-time" variant
Maintain a hash map with queries as keys and their counts as values. Additionally, keep a list of the 10 most frequent queries so far and the count of the 10th most frequent query (a threshold).
Constantly update the map as the stream of queries is read. Every time a count exceeds the current threshold, do the following: remove the 10th query from the "Top 10" list, replace it with the query you've just updated, and update the threshold as well.
"Past month" variant
Keep the same "Top 10" list and update it the same way as above. Also, keep a similar map, but this time store vectors of 30*24 = 720 count (one for each hour) as values. Every hour do the following for every key: remove the oldest counter from the vector add a new one (initialized to 0) at the end. Remove the key from the map if the vector is all-zero. Also, every hour you have to calculate the "Top 10" list from scratch.
Note: Yes, this time we're storing 720 integers instead of one, but there are much less keys (the all-time variant has a really long tail).
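A sketch of that per-key bookkeeping (identifiers are mine; a deque with maxlen performs the remove-oldest/append-newest step in one call):

    from collections import deque

    hourly_counts = {}        # query -> deque of up to 720 hourly counts

    def record_query(query):
        vec = hourly_counts.setdefault(query, deque([0], maxlen=720))
        vec[-1] += 1                          # count into the current hour's slot

    def roll_hour():
        dead = []
        for query, vec in hourly_counts.items():
            vec.append(0)                     # the oldest slot falls off automatically
            if not any(vec):
                dead.append(query)            # all-zero vector: drop the key
        for query in dead:
            del hourly_counts[query]

    def monthly_top10():
        return sorted(hourly_counts, key=lambda q: sum(hourly_counts[q]),
                      reverse=True)[:10]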
Approximations
These approximations do not guarantee the correct solution, but are less memory-consuming.
Process every N-th query, skipping the rest.
(For all-time variant only) Keep at most M key-value pairs in the map (M should be as big as you can afford). It's a kind of an LRU cache: every time you read a query that is not in the map, remove the least recently used query with count 1 and replace it with the currently processed query.
Top 10 search terms for the past month
Using a memory-efficient index/data structure, such as tightly packed tries (see the Wikipedia entry on tries), approximately defines some relation between the memory requirement and n, the number of terms.
If the required memory is available (assumption 1), you can keep exact monthly statistics and aggregate them every month into the all-time statistics.
There is also an assumption here that interprets the 'last month' as a fixed window.
But even if the monthly window is sliding the above procedure shows the principle (sliding can be approximated with fixed windows of given size).
This reminds me of a round-robin database, with the exception that some stats are calculated on 'all time' (in the sense that not all data is retained; rrd consolidates time periods, disregarding details, by averaging, summing up or choosing max/min values; in the given task the detail that is lost is information on low-frequency items, which can introduce errors).
Assumption 1
If we cannot hold perfect stats for the whole month, then we should be able to find a certain period P for which we can hold perfect stats.
For example, assume we have perfect statistics for some time period P, which fits into the month n times.
Perfect stats define a function f(search_term) -> search_term_occurrence.
If we can keep all n perfect stat tables in memory then sliding monthly stats can be calculated like this:
add stats for the newest period
remove stats for the oldest period (so we have to keep n perfect stat tables)
However, if we keep only the top 10 at the aggregated (monthly) level, then we will be able to discard a lot of data from the full stats of the fixed periods. This already gives a working procedure with fixed memory requirements (assuming an upper bound on the size of the perfect stat table for period P).
The problem with the above procedure is that if we keep info on only the top 10 terms for a sliding window (and similarly for all time), then the stats are going to be correct for search terms that peak within a period, but might miss search terms that trickle in constantly over time.
This can be offset by keeping info on more than top 10 terms, for example top 100 terms, hoping that top 10 will be correct.
I think that further analysis could relate the minimum number of occurrences required for an entry to become a part of the stats (which is related to maximum error).
(In deciding which entries should become part of the stats one could also monitor and track the trends; for example if a linear extrapolation of the occurrences in each period P for each term tells you that the term will become significant in a month or two you might already start tracking it. Similar principle applies for removing the search term from the tracked pool.)
The worst case for the above is when you have a lot of almost equally frequent terms and they change all the time (for example, if tracking only 100 terms, and the top 150 terms occur roughly equally frequently, but the top 50 are more frequent in the first month and less frequent some time later, then the statistics would not be kept correctly).
Also there could be another approach which is not fixed in memory size (well strictly speaking neither is the above), which would define minimum significance in terms of occurrences/period (day, month, year, all-time) for which to keep the stats. This could guarantee max error in each of the stats during aggregation (see round robin again).
What about an adaptation of the "clock page replacement algorithm" (also known as "second chance")? I can imagine it working very well if the search requests are distributed evenly (that means most searched terms appear regularly rather than 5 million times in a row and then never again).
(Diagram of the clock page replacement algorithm omitted.)
The problem is not universally solvable when you have a fixed amount of memory and an 'infinite' (think very very large) stream of tokens.
A rough explanation...
To see why, consider a token stream that has a particular token (i.e., word) T every N tokens in the input stream.
Also, assume that the memory can hold references (word id and counts) to at most M tokens.
With these conditions, it is possible to construct an input stream where the token T will never be detected, if N is large enough that the stream contains M different tokens between consecutive T's.
This is independent of the top-N algorithm details. It only depends on the limit M.
To see why this is true, consider an incoming stream made up of repeating groups, each consisting of one T followed by M distinct tokens:
T a1 a2 a3 ... a-M T b1 b2 b3 ... b-M ...
where the a's, and b's are all valid tokens not equal to T.
Notice that in this stream, T occurs more often than any individual a-i or b-i. Yet locally it appears rarely enough to be flushed from the system.
Starting with empty memory, the first token (T) takes up a slot in memory (bounded by M). Then a1 consumes a slot, and so on up to a-(M-1), at which point the M slots are exhausted.
When a-M arrives, the algorithm has to drop one symbol, so let it be the T.
The next symbol will be b-1 which will cause a-1 to be flushed, etc.
So, the T will not stay memory-resident long enough to build up a real count. In short, any algorithm will miss a token of low enough local frequency but high global frequency (over the length of the stream).
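To make this concrete, here is the construction fed to the frequent_items sketch shown earlier in this thread (my illustration, not part of this answer): T is the most frequent token overall, yet no counter for it ever survives a round.

    def adversarial_stream(M, groups):
        for g in range(groups):
            yield "T"
            for i in range(M):
                yield f"filler-{g}-{i}"       # M distinct tokens between the T's

    # With M = k - 1 = 4 counters, every group ends by wiping the whole map, so the
    # result is empty even though T occurred 100 times and each filler only once.
    print(frequent_items(adversarial_stream(M=4, groups=100), k=5))

Note that T's overall frequency here is exactly n / 5, not more, so the more-than-n/k guarantee is not violated; the point is that low local frequency is what a fixed-memory scheme cannot see.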
Store the count of search terms in a giant hash table, where each new search causes a particular element to be incremented by one. Keep track of the top 20 or so search terms; when the element in 11th place is incremented, check if it needs to swap positions with #10* (it's not necessary to keep the top 10 sorted; all you care about is drawing the distinction between 10th and 11th).
*Similar checks need to be made to see if a new search term is in 11th place, so this algorithm bubbles down to other search terms too -- so I'm simplifying a bit.
Sometimes the best answer is "I don't know".
I'll take a deeper stab. My first instinct would be to feed the results into a queue. A process would continually consume items coming off the queue. The process would maintain a map of
term -> count
Each time a queue item is processed, you simply look up the search term and increment the count.
At the same time, I would maintain a list of references to the top 10 entries in the map.
For the entry that was just incremented, see if its count is greater than the count of the smallest entry in the top 10 (if it's not in the list already). If it is, replace the smallest with this entry.
I think that would work. No operation is time intensive. You would have to find a way to manage the size of the count map, but that should be good enough for an interview answer.
They are not expecting a solution; they want to see if you can think. You don't have to write the solution then and there...
One way is that for every search, you store that search term and its time stamp. That way, finding the top ten for any period of time is simply a matter of comparing all search terms within the given time period.
The algorithm is simple, but the drawback would be greater memory and time consumption.
What about using a Splay Tree with 10 nodes? Each time you try to access a value (search term) that is not contained in the tree, throw out any leaf, insert the value instead and access it.
The idea behind this is the same as in my other answer. Under the assumption that the search terms are accessed evenly/regularly this solution should perform very well.
edit
One could also store some more search terms in the tree (the same goes for the solution I suggest in my other answer) in order to not delete a node that might be accessed very soon again. The more values one stores in it, the better the results.
Dunno if I understand it right or not.
My solution uses a heap.
Because we want the top 10 search items, I build a heap of size 10.
Then update this heap with each new search. If a new search's frequency is greater than the smallest frequency at the top of the heap (keeping it as a min-heap of the current top 10), replace that entry, abandoning the one with the smallest frequency.
But how to calculate the frequency of a specific search would depend on something else.
Maybe, as everyone stated, a data stream algorithm...
Use a count-min sketch (cm-sketch) to store the counts of all searches since the beginning, and keep a min-heap of size 10 alongside it for the top 10.
For the monthly result, keep 30 cm-sketches/hash tables, each with its own min-heap, counting and updating over the last 30, 29, ..., 1 days. As a day passes, clear the oldest one and reuse it as day 1.
Same for the hourly result: keep 60 hash tables with min-heaps, counting over the last 60, 59, ..., 1 minutes. As a minute passes, clear the oldest one and reuse it as minute 1.
The monthly result is accurate to within 1 day; the hourly result is accurate to within 1 minute.
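A compact illustration of the count-min sketch plus min-heap combination (the width, depth, and use of Python's built-in hash are arbitrary choices for the sketch; a real implementation would use proper pairwise-independent hash functions):

    import heapq

    class CountMinSketch:
        def __init__(self, width=2048, depth=4):
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]

        def add(self, item):
            for row in range(self.depth):
                self.table[row][hash((row, item)) % self.width] += 1

        def estimate(self, item):             # an over-estimate of the true count
            return min(self.table[row][hash((row, item)) % self.width]
                       for row in range(self.depth))

    sketch = CountMinSketch()
    top10 = []                                # min-heap of (estimated count, term)

    def record_search(term):
        sketch.add(term)
        est = sketch.estimate(term)
        tracked = {t: c for c, t in top10}
        if term in tracked:                   # refresh the term's estimate in place
            top10.remove((tracked[term], term))
            heapq.heapify(top10)
            heapq.heappush(top10, (est, term))
        elif len(top10) < 10:
            heapq.heappush(top10, (est, term))
        elif est > top10[0][0]:
            heapq.heapreplace(top10, (est, term))   # evict the current smallest

For the 30-day and 60-minute variants described above, you would keep an array of these sketch/heap pairs and clear the oldest one as each day or minute passes.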