Sync Algorithm Pattern

Let's say I have two sources, A and B. For example, both are disparate data stores holding TODO lists.
How do I build an algorithm for an operation that ensures both sources are synced?
Do I just copy A to B and then copy B to A, eliminating duplicates (assuming there is a primary key ID to detect them)?

For both lists you should store the time of the last sync.
During the next sync you work only with the sublists of items that appeared (or changed) after the last sync time.
Yes, for these sublists a simple two-sided, or n-sided, join is enough.
The n-sided sync is more interesting. A better way is to create a star system, where each sync is done between an end list and the core list. The core list could be the one on the server; the end lists are the ones populated and shown by the UI. A minimal sketch of the two-sided case follows.
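Here is a minimal sketch of the two-sided, timestamp-based idea described above. The record shape (a dict of id -> (updated_at, payload)) and the in-memory stores are assumptions for illustration; real sources would be databases or remote APIs.

from datetime import datetime, timezone

def sync_pair(store_a, store_b, last_sync):
    # Each store maps id -> (updated_at, payload); work only with items
    # modified since the last successful sync.
    recent_a = {k: v for k, v in store_a.items() if v[0] > last_sync}
    recent_b = {k: v for k, v in store_b.items() if v[0] > last_sync}

    for key, value in recent_a.items():
        # Copy to B if missing there, or if A's copy is newer.
        if key not in store_b or store_b[key][0] < value[0]:
            store_b[key] = value
    for key, value in recent_b.items():
        if key not in store_a or store_a[key][0] < value[0]:
            store_a[key] = value

    return datetime.now(timezone.utc)   # becomes the next last-sync time

For an n-sided star, each end list would run sync_pair against the core list only.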

Related

Efficient insertion in sorted collection

I have a collection of 10 messages sorted by the number of likes each message has. Periodically I update that collection, replacing some of the old messages with new ones that got more likes in the meantime, so that the collection again contains 10 messages and is sorted by number of likes.
I have an API to insert or remove a message from the collection relative to an existing member message: insert(message_id, relative_to, above_or_bellow) and remove(message_id). I want to minimize the number of API calls by optimizing the positions where I insert new messages, so that the collection is sorted and 10 long at the end (during the process, length and order are irrelevant; only the end state matters).
I know I can calculate the new collection and then replace just the messages that don't match their new position, but I believe it can be further optimized and that algorithms for this already exist.
Edit:
Note the word "periodically": messages do not come one by one. In each time interval I collect new messages, sort them, and build a new collection, which I then publish on the site via the API. So I do have two collections: one is a simple array in memory, the other is on the site.
The idea is to reuse the already-inserted messages that should be kept, and their order in the updated collection, to save HTTP API calls. I believe there are existing algorithms I could reuse to transform the existing collection into the already-known resulting collection with a minimal number of insert/remove operations.
First remove all messages that are no longer in the top 10 liked messages.
To get the most out of the existing list, we should then look for the longest subsequence of messages that is already ordered by their likes (we can use the algorithm from How to determine the longest increasing subsequence using dynamic programming?, using the number of likes as the value).
We would then remove all other messages (those not in the subsequence) and insert the missing ones at their proper positions. A sketch of this approach follows.
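A minimal sketch of this plan, assuming the current on-site collection and the desired new top list are given as lists of (message_id, likes) pairs in display order, and that the API calls are represented by ('remove', ...) / ('insert', ...) tuples rather than real HTTP requests:

def lis_indices(seq):
    # Indices of a longest strictly increasing subsequence (O(n^2) DP; n is tiny here).
    n = len(seq)
    if n == 0:
        return []
    length, prev = [1] * n, [-1] * n
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i] and length[j] + 1 > length[i]:
                length[i], prev[i] = length[j] + 1, j
    i = max(range(n), key=lambda k: length[k])
    out = []
    while i != -1:
        out.append(i)
        i = prev[i]
    return out[::-1]

def plan_calls(current, new_top):
    # current, new_top: lists of (message_id, likes) in the order they should appear.
    new_ids = [mid for mid, _ in new_top]
    target_pos = {mid: i for i, mid in enumerate(new_ids)}
    calls = []

    # 1) Remove messages that dropped out of the new top list.
    kept = []
    for mid, _ in current:
        if mid in target_pos:
            kept.append(mid)
        else:
            calls.append(('remove', mid))

    # 2) Keep the longest subsequence already in the right relative order;
    #    everything else is removed and re-inserted at its proper spot.
    keep = {kept[i] for i in lis_indices([target_pos[m] for m in kept])}
    for mid in kept:
        if mid not in keep:
            calls.append(('remove', mid))

    # 3) Insert the missing messages, each anchored to an already-placed neighbour.
    for i, mid in enumerate(new_ids):
        if mid in keep:
            continue
        anchor = next((m for m in reversed(new_ids[:i]) if m in keep), None)
        if anchor is not None:
            calls.append(('insert', mid, anchor, 'below'))
        else:
            # Nothing kept above this message: insert above the first kept one below it.
            # (If nothing is kept at all, a real implementation needs a plain append call.)
            below = next((m for m in new_ids[i + 1:] if m in keep), None)
            calls.append(('insert', mid, below, 'above'))
        keep.add(mid)   # inserted messages can anchor later insertions
    return calls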
I think you only need to keep one list/vector of messages, keep it sorted at all times, and keep it up to date with every new message.
Since this collection will always be sorted, and assuming it has random access, you could use binary search to find the insertion point, i.e. O(log M) where M is your maximum list size, e.g. 10. But insertion then still requires O(M) to shift the elements. Therefore, I would just use a linked list and iterate it while the message to insert (or update) has fewer likes than the current one.
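For the array variant, Python's bisect module already does the binary-search insertion; a tiny sketch (the 10-item cap is taken from the question):

import bisect

def add_message(top, likes, message_id, cap=10):
    # Keep `top` sorted ascending by likes; drop the least-liked entry once over the cap.
    bisect.insort(top, (likes, message_id))   # O(log M) search + O(M) shift
    if len(top) > cap:
        top.pop(0)                            # evict the current minimum
    return top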

Synchronize two lists of objects

Problem
I have two lists of objects. Each object contains the following:
GUID (allows determining whether two objects are the same, from the business point of view)
Timestamp (updated to the current UTC time each time the object changes)
Version (positive integer; incremented each time the object changes)
Deleted (boolean flag; switched to "true" instead of actually deleting the object)
Data (some useful payload)
Any other fields if needed
Next, I need to sync the two lists according to these rules:
If an object with some GUID is present in only one list, it should be copied to the other list
If an object with some GUID is present in both lists, the instance with the lower Version should be replaced by the one with the greater Version (nothing to do if the versions are equal)
Real-world requirements:
Each list has 50k+ objects, each about 1 KB
The lists live on different machines connected via the Internet (e.g., a mobile app and a remote server), so the algorithm shouldn't waste much traffic or CPU
Most of the time (say, 96%) the lists are already in sync before the sync process runs, so the algorithm should detect that with minimal effort
If there are any differences, most of the time they are pretty small (3-5 objects changed/added)
It should work fine if one list is empty (and the other still has 50k+ items)
Solution #1 (currently implemented)
The client stores the time of the last successful sync (say T)
Both lists are asked for all objects having Timestamp > T (i.e. recently modified; in production it's ... > (T - day) for better robustness)
These lists of recently modified objects are synced naively:
items present only in the first list are saved to the second list
items present only in the second list are saved to the first list
other items have their Versions compared and are saved to the appropriate list (if needed)
Pros:
Works great with small changes
Almost fits the requirements
Cons:
Depends on T, which makes the algorithm fragile: it's easy to sync the latest updates, but hard to make sure the lists are completely in sync (using a minimal T like 1970-01-01 just hangs the sync process)
My questions:
Is there any common / best-practice / proven way to sync object lists?
Are there any better solutions [than #1] for my case?
P.S. Already viewed, not duplicates:
Compare Two List Of Objects For Synchronization
Two list synchronization
Summary
All answers have some worthy points. To summarize, here is the compiled answer I was looking for, based on the finally implemented, working sync system:
In general, use Merkle trees. They are dramatically efficient in comparing large amounts of data.
If you can, rebuild your hash tree from scratch every time you need it.
Check the time required to rebuild the hash tree. Most likely it's pretty fast (e.g., in my case, on a Nexus 4, rebuilding the tree for 20k items takes ~2 sec: 1.8 sec for fetching the data from the DB + 0.2 sec for building the tree; the server performs ~20x faster), so you don't need to store the tree in the DB and maintain it when data changes (my first try was rebuilding only the relevant branches; it's not too complicated to implement, but it is very fragile).
Nevertheless, it's OK to cache and reuse the tree if no data modifications were made at all. Once a modification happens, invalidate the whole cache.
Technical details
The GUID is 32 chars long, lowercase, without any hyphens/braces;
I use a 16-ary tree of height 4, where each branch corresponds to a character of the GUID prefix. It may be implemented as an actual tree or as a map:
0000 → (hash of items with GUID 0000*)
0001 → (hash of items with GUID 0001*)
...
ffff → (hash of items with GUID ffff*);
000 → (hash of hashes 000_)
...
00 → (hash of hashes 00_)
...
() → (root hash, i.e. hash of hashes _)
Thus, the tree has 65536 leaves and requires about 2 MB of memory; each leaf covers ~N/65536 data items. A binary tree would be 2x more efficient in terms of memory, but it's harder to implement.
I had to implement these methods:
getHash() — returns the root hash; used for the primary check (as mentioned, in 96% of cases that's all we need to test);
getHashChildren(x) — returns the list of hashes x_ (at most 16); used for efficiently discovering data differences with a single request;
findByIdPrefix(x) — returns items with GUID x* (x must contain exactly 4 chars); used for requesting leaf items;
count(x) — returns the number of items with GUID x*; when this is reasonably small, we can skip checking the tree branch by branch and transfer the whole bunch of items in a single request;
Since syncing is done per branch, transmitting small amounts of data, it's very responsive (you can check the progress at any time), very robust against unexpected termination (e.g., due to network failure), and easily restarts from the last point if needed.
IMPORTANT: sometimes you will get stuck in a conflicting state: {version_1 = version_2, but hash_1 != hash_2}. In this case you must make some decision (maybe with the user's help, or by comparing timestamps as a last resort) and overwrite one item with the other to resolve the conflict; otherwise you'll end up with unsynced and unsyncable hash trees.
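A minimal sketch of building such a prefix hash tree in memory, assuming the items arrive as a dict of GUID -> serialized payload bytes; the hashing and serialization choices here are illustrative, not the ones from the system described above:

import hashlib

def build_hash_tree(items, prefix_len=4):
    # items: {guid (32 lowercase hex chars): payload bytes}.
    # Returns {prefix: hex digest} for prefixes of length 0..prefix_len;
    # the 4-char prefixes are the leaves and '' maps to the root hash.
    def h(data):
        return hashlib.sha256(data).hexdigest()

    tree = {}
    # Leaves: hash over all items sharing a 4-char GUID prefix, in sorted GUID order.
    grouped = {}
    for guid in sorted(items):
        grouped.setdefault(guid[:prefix_len], []).append(guid.encode() + items[guid])
    for prefix, chunks in grouped.items():
        tree[prefix] = h(b"".join(chunks))

    # Inner levels: hash of the (at most 16) child hashes, again in sorted order.
    for level in range(prefix_len - 1, -1, -1):
        children = {}
        for prefix, digest in list(tree.items()):
            if len(prefix) == level + 1:
                children.setdefault(prefix[:level], []).append((prefix, digest))
        for prefix, kids in children.items():
            tree[prefix] = h("".join(d for _, d in sorted(kids)).encode())
    return tree

Two stores are in sync iff their root hashes (tree['']) match; otherwise compare child hashes level by level (getHashChildren) and fetch only the leaves that differ (findByIdPrefix).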
Possible improvements
Implement transmitting (GUID, Version) pairs without the payload to keep requests lightweight.
Two suggestions come to mind; the first one is possibly something you're doing already:
1) Don't send entire lists of items with timestamps > T. Instead, send a list of (UUID, Version) tuples of objects with timestamps > T. Then the other side can figure out which objects it needs to update from that. Send the UUIDs of those back to request the actual objects. This avoids sending full objects if they have timestamp > T, but are nonetheless newer already (or present already with the latest Version) on the other side.
2) Don't process the full list at once, but in chunks, i.e. first sync 10%, then the next 10%, etc., to avoid transferring too much data at once for big syncs (and to allow for restart points if a connection breaks). This can be done by e.g. starting with all UUIDs whose checksum is equivalent to 1 modulo 10, then 2 modulo 10, etc.
Another possibility would be proactive syncing, e.g. asynchronously posting changes, possibly via UDP (unreliable, as opposed to TCP). You would still need to sync when you need current information, but chances are most of it is already current.
You need to store not the time of the last synchronization, but the state of the objects (e.g. a hash of each object's data) at the time of the last synchronization. Then you compare each list with the stored list and find which objects have changed on each side.
This is much more reliable than relying on time, because time requires that both sides have synchronized clocks giving precise time (and this is not the case on most systems). For the same reason, your idea of detecting changes based on time + version can be more error-prone than it initially seems.
Also, you don't initially transfer object data, only GUIDs.
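A minimal sketch of that idea, assuming each side keeps a baseline snapshot {guid: hash} taken at the last successful sync (the names and data layout are illustrative):

import hashlib

def object_hash(payload):
    # Hash of the object's serialized data; any stable serialization works.
    return hashlib.sha256(payload).hexdigest()

def local_changes(current, baseline):
    # current: {guid: payload bytes}; baseline: {guid: hash} stored at the last sync.
    added    = [g for g in current if g not in baseline]
    modified = [g for g in current if g in baseline and object_hash(current[g]) != baseline[g]]
    deleted  = [g for g in baseline if g not in current]
    return added, modified, deleted

Each side computes its change sets against its own baseline, exchanges only the GUIDs (plus versions), and then transfers payloads just for the objects the other side actually needs.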
BTW we've made a framework (free with source) which addresses exactly your problems. I am not giving the link because some alternatively talented people would complain.

Parallel top ten algorithm for distributed data

This is an interview question. Suppose there are a few computers and each computer keeps a very large log file of visited URLs. Find the top ten most visited URLs.
For example: Suppose there are only 3 computers and we need the top two most visited URLs.
Computer A: url1, url2, url1, url3
Computer B: url4, url2, url1, url1
Computer C: url3, url4, url1, url3
url1 appears 5 times in all logs
url2 appears 2 times
url3 appears 3 times
url4 appears 2 times
So the answer is url1, url3
The log files are too large to fit in RAM or to copy over the network. As I understand it, it is also important to make the computation parallel and use all the given computers.
How would you solve it?
This is a pretty standard problem for which there is a well-known solution. You simply sort the log files on each computer by URL and then merge them through a priority queue of size k (the number of items you want) on the "master" computer. This technique has been around since the 1960s, and is still in use today (although slightly modified) in the form of MapReduce.
On each computer, extract the URL and the count from the log file, and sort by URL. Because the log files are larger than will fit into memory, you need to do an on-disk merge. That entails reading a chunk of the log file, sorting by URL, writing the chunk to disk. Reading the next chunk, sorting, writing to disk, etc. At some point, you have M log file chunks, each sorted. You can then do an M-way merge. But instead of writing items to disk, you present them, in sorted order (sorted by URL, that is), to the "master".
Each machine sorts its own log.
The "master" computer merges the data from the separate computers and does the top K selection. This is actually two problems, but can be combined into one.
The master creates two priority queues: one for the merge, and one for the top K selection. The first is of size N, where N is the number of computers it's merging data from. The second is of size K: the number of items you want to select. I use a min heap for this, as it's easy and reasonably fast.
To set up the merge queue, initialize the queue and get the first item from each of the "worker" computers. In the pseudo-code below, "get lowest item from merge queue" means getting the root item from the merge queue and then getting the next item from whichever working computer presented that item. So if the queue contains [1, 2, 3], and the items came from computers B, C, A (in that order), then taking the lowest item would mean getting the next item from computer B and adding it to the priority queue.
The master then does the following:
working = get lowest item from merge queue
while (items left to merge)
{
    temp = get lowest item from merge queue
    while (temp.url == working.url)
    {
        working.count += temp.count
        temp = get lowest item from merge queue
    }
    // Now have merged counts for one url.
    if (topK.Count < desired_count)
    {
        // topK queue doesn't have enough items yet, so add this one.
        topK.Add(working);
    }
    else if (topK.Peek().count < working.count)
    {
        // the count for this url is larger than the smallest item on the heap:
        // replace the smallest item on the heap with this one
        topK.RemoveRoot()
        topK.Add(working)
    }
    working = temp;
}
// Here you need to check the last item:
if (topK.Count < desired_count)
{
    topK.Add(working)
}
else if (topK.Peek().count < working.count)
{
    // same replacement as above, for the final merged url
    topK.RemoveRoot()
    topK.Add(working)
}
At this point, the topK queue has the K items with the highest counts.
So each computer has to do a merge sort, which is O(n log n), where n is the number of items in that computer's log. The merge on the master is O(n), where n is the sum of all the items from the individual computers. Picking the top k items is O(n log k), where n is the number of unique urls.
The sorts are done in parallel, of course, with each computer preparing its own sorted list. But the "merge" part of the sort is done at the same time the master computer is merging, so there is some coordination, and all machines are involved at that stage.
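A minimal in-memory sketch of the master's merge-and-select step, assuming each worker presents its log as a list of (url, count) pairs already sorted by URL (plain lists stand in here for the streamed, on-disk-merged data):

import heapq
from itertools import groupby

def top_k(sorted_worker_streams, k):
    # N-way merge of the per-worker streams, still ordered by url.
    merged = heapq.merge(*sorted_worker_streams)
    heap = []   # min-heap of (total_count, url), size at most k
    for url, group in groupby(merged, key=lambda pair: pair[0]):
        total = sum(count for _, count in group)    # merged count for this url
        if len(heap) < k:
            heapq.heappush(heap, (total, url))
        elif total > heap[0][0]:
            heapq.heapreplace(heap, (total, url))   # evict the current smallest
    return sorted(heap, reverse=True)

# The example from the question, with each log pre-reduced to sorted (url, count) pairs:
a = [("url1", 2), ("url2", 1), ("url3", 1)]
b = [("url1", 2), ("url2", 1), ("url4", 1)]
c = [("url1", 1), ("url3", 2), ("url4", 1)]
print(top_k([a, b, c], 2))   # [(5, 'url1'), (3, 'url3')]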
Given the scale of the log files and the generic nature of the question, this is quite a difficult problem to solve. I do not think that there is one best algorithm for all situations. It depends on the nature of the contents of the log files. For example, take the corner case that all URLs are all unique in all log files. In that case, basically any solution will take a long time to draw that conclusion (if it even gets that far...), and there is not even an answer to your question because there is no top-ten.
I do not have a watertight algorithm that I can present, but I would explore a solution that uses histograms of hash values of the URLs as opposed to the URLs themselves. These histograms can be calculated by means of one-pass file reads, so it can deal with arbitrary size log files. In pseudo-code, I would go for something like this:
Use a hash function with a limited target space (say 10,000; note that colliding hash values are expected) to calculate the hash value of each item in the log file and count how many times each hash value occurs. Communicate the resulting histogram to a server (although it is probably also possible to avoid a central server altogether by multicasting the result to every other node, but I will stick with the more obvious server approach here).
The server should merge the histograms and communicate the result back. Depending on the distribution of the URLs, there might be a number of clearly visible peaks already, containing the top-visited URLs.
Each of the nodes should then focus on the peaks in the histogram. It should go through its log file again, use an additional hash function (again with a limited target space) to calculate a new hash histogram for those URLs that have their first hash value in one of the peaks (the number of peaks to focus on would be a parameter to be tuned in the algorithm, depending on the distribution of the URLs), and calculate a second histogram with the new hash values. The result should be communicated to the server.
The server should merge the results again and analyse the new histogram versus the original histogram. Depending on clearly visible peaks, it might be able to draw conclusions about the two hash values of the top ten URLs already. Or it might have to instruct the machines to calculate more hash values with the second hash function, and probably after that go through a third pass of hash-calculations with yet another hash function. This has to continue until a conclusion can be drawn from the collective group of histograms what the hash values of the peak URLs are, and then the nodes can identify the different URLs from that.
Note that this mechanism will require tuning and optimization with regard to several aspects of the algorithm and hash-functions. It will also need orchestration by the server as to which calculations should be done at any time. It probably will also need to set some boundaries in order to conclude when no conclusion can be drawn, in other words when the "spectrum" of URL hash values is too flat to make it worth the effort to continue calculations.
This approach should work well if there is a clear distribution in the URLs though. I suspect that, practically speaking, the question only makes sense in that case anyway.
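A minimal single-node sketch of these histogram passes, assuming the log is an iterable of URLs; the 10,000-bucket space comes from the answer above, while the concrete hash (CRC32 with a salt) is only an illustrative stand-in for "a hash function with a limited target space":

import zlib
from collections import Counter

BUCKETS = 10_000   # limited target space; collisions are expected

def url_bucket(url, salt=b""):
    # Stable hash so every node maps the same URL to the same bucket;
    # a different salt plays the role of the "additional hash function" in later passes.
    return zlib.crc32(salt + url.encode()) % BUCKETS

def histogram_pass(log_urls, peak_buckets=None, salt=b""):
    # One pass over the log: count URLs per hash bucket.
    # If peak_buckets is given (buckets of interest from the previous, unsalted pass),
    # only URLs falling into those buckets are re-hashed with the new salt.
    hist = Counter()
    for url in log_urls:
        if peak_buckets is None or url_bucket(url) in peak_buckets:
            hist[url_bucket(url, salt)] += 1
    return hist

Each node sends its histogram to the server; the server sums them, looks for peaks, and asks the nodes for another pass with a fresh salt restricted to those peaks.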
Assuming the conditions below are true:
You need the top n urls of m hosts.
You can't store the files in RAM
There is a master node
I would take the approach below:
Each node reads a portion of the file (ie. MAX urls, where MAX can be, let's say, 1000 urls) and keeps an array arr[MAX]={url,hits}.
When a node has read MAX urls off the file, it sends the list to the master node, and restarts reads until MAX urls is reached again.
When a node reaches the EOF, it sends the remaining list of urls and an EOF flag to the master node.
When the master node receives a list of urls, it compares it with its last list of urls and generates a new, updated one.
When the master node receives the EOF flag from every node and finishes reading its own file, the top n urls of the latest version of its list are the ones we're looking for.
Or
A different approach that would release the master from doing all the job could be:
Every node reads its file and stores an array same as above, reading until EOF.
When EOF, the node will send the first url of the list and the number of hits to the master.
When the master has collected the first url and number of hits from each node, it generates a list. If the master node has fewer than n urls, it will ask the nodes to send the next one, and so on, until the master has the n urls sorted.
Pre-processing: Each computer system processes complete log file and prepares Unique URLs list with count against them.
Getting top URLs:
Calculate URL counts at each computer system
Collating process at a central system (virtual)
Send URLs with counts to the central processing unit one by one in descending order (i.e. from the top)
At the central system, collate the incoming URL details
Repeat until the sum of all the counts from the incoming URLs is less than the count of the tenth URL in the master list; this is a vital step to be absolutely certain
PS: You'll have the top ten URLs across systems, though not necessarily in that order. To get the actual order you can reverse the collation: for each URL in the top ten, get the individual counts from the distributed computers and form the final order.
On each node, count the number of occurrences of each URL.
Then use a sharding function to distribute each URL to the node which owns its key. Now each node will have a unique set of keys.
On each node, reduce again to get the number of occurrences of each URL, then find the top N URLs. Finally, send only the top N URLs to the master node, which will find the top N URLs among the K*N items, where K is the number of nodes.
Eg: K=3
N1 - > url1,url2,url3,url1,url2
N2 - > url2,url4,url1,url5,url2
N3 - > url1,url4,url3,url1,url3
Step 1: Count the occurrence per url in each node.
N1 -> (url1,2),(url2,2),(url3,1)
N2 -> (url4,1),(url2,2),(url5,1),(url1,1)
N3 -> (url1,2),(url3,2),(url4,1)
Step 2: Shard using a hash function (for simplicity, let it be url number % K)
N1 -> (url1,2),(url1,1),(url1,2),(url4,1),(url4,1)
N2 -> (url2,2),(url2,2),(url5,1)
N3 -> (url3,2),(url3,1)
Step 3: Find the number of occurrences of each key within the node again.
N1 -> (url1,5),(url4,2)
N2 -> (url2,4),(url5,1)
N3 -> (url3,3)
Step 4: Send only the top N to the master. Let N=1
Master -> (url1,5),(url2,4),(url3,3)
Sort the result and take the top 1 item, which is url1.
Step 1 is called a map-side reduce; it is done to avoid the huge shuffle that would otherwise occur in Step 2.
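A minimal single-process sketch of this flow, simulating the nodes as in-memory lists; sharding by a stable hash of the URL stands in for the url-number % K rule used in the walkthrough:

import heapq
import zlib
from collections import Counter

def sharded_top_n(node_logs, n):
    k = len(node_logs)

    # Step 1 (map-side reduce): each node counts its own URLs locally.
    local_counts = [Counter(log) for log in node_logs]

    # Step 2 (shuffle): route each (url, count) to the node that owns the url's shard.
    shards = [Counter() for _ in range(k)]
    for counts in local_counts:
        for url, cnt in counts.items():
            shards[zlib.crc32(url.encode()) % k][url] += cnt

    # Step 3: each shard now holds the complete count for its keys; take its local top n.
    candidates = []
    for shard in shards:
        candidates.extend(shard.most_common(n))

    # Step 4: the master picks the global top n from at most k*n candidates.
    return heapq.nlargest(n, candidates, key=lambda pair: pair[1])

logs = [
    ["url1", "url2", "url3", "url1", "url2"],
    ["url2", "url4", "url1", "url5", "url2"],
    ["url1", "url4", "url3", "url1", "url3"],
]
print(sharded_top_n(logs, 1))   # [('url1', 5)]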
The description below is the idea for the solution; it is not pseudocode.
Consider that you have a collection of systems.
1) For each computer A in the collection of systems:
1.1) Run a daemonA on each computer which watches the log file for changes.
1.2) When a change is noticed, wake up AnalyzerThreadA.
1.3) If AnalyzerThreadA finds a URL using some regex, update localHashMapA with count++
(key = URL, value = count).
2) Push the topTen entries of localHashMapA to the computer where the AnalyzeAll daemon is running.
The above step is the last step in each system: it pushes the topTen entries to a master system, say for example computerA.
3) AnalyzeAll running on computerA will resolve duplicates and update the counts in the masterHashMap of URLs.
4) Print the topTen from the masterHashMap.

Have/Want List Matching Algorithm

I am implementing an item trading system on a high-traffic site. I have a large number of users that each maintain a HAVE list and a WANT list for a number of specific items. I am looking for an algorithm that will allow me to efficiently suggest trading partners based on your HAVEs and WANTs matched with theirs. Ideally I want to find partners with the highest mutual trading potential (i.e. I have a ton of things you want, you have a ton of things I want). I don't need to find the global highest-potential pair (which sounds hard), just find the highest-potential pairs for a given user (or even just some high-potential pairs, not the global max).
Example:
User 1 HAS A,C WANTS B,D
User 2 HAS D WANTS A
User 3 HAS A,B,D WANTS C
User 1 goes to the site and clicks a button that says "Find Trading Partners", and the top-ranked result is User 3, followed by User 2.
An additional source of complexity is that the items have different values, and I want to match on the highest valued trade possible, rather than on the most number of matches between two traders. So in the example above, if all items are worth 1, but A and D are both worth 10, User 1 now gets matched with User 2 above User 3.
A naive way to do this would be to compute the max trade value between the user looking for partners and all other users in the database. I'm thinking that with some lookup tables on the right things I might be able to do better. I've tried googling around, since this seems like a classical problem, but I don't know the name for it.
Can anyone recommend a good approach to solving this problem? I've seen sites like the Magic Online Trading League that seem to solve it in realtime.
You could do this in O(n*k^2) (n is the number of people, k is the average number of items they have/want) by keeping hash tables (or, in a database, indexes) of all the people who have and want given items, then giving scores for all the people who have items the current user wants, and want items the current user has. Display the top 10 or 20 scores.
[Edit] Example of how this would be implemented in SQL:
-- Get score for #userid wants
SELECT UserHas.UserID, SUM(Items.Weight) AS Score
FROM UserWants
INNER JOIN UserHas ON UserWants.ItemID = UserHas.ItemID
INNER JOIN Items ON Items.ItemID = UserWants.ItemID
WHERE UserWants.UserID = #userid
GROUP BY UserWants.UserID, UserHas.UserID
This gives you a list of other users and their score, based on what items they have that the current user wants. Do the same for items the current user has the others want, then combine them somehow (add the scores or whatever you want) and grab the top 10.
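For reference, a minimal in-memory sketch of the same scoring idea using plain dictionaries; the data layout (per-user HAVE/WANT sets plus an item value map) is an assumption for illustration:

from collections import defaultdict

def suggest_partners(user_id, haves, wants, item_value, top=10):
    # haves / wants: {user: set of items}; item_value: {item: value}.
    # Inverted index: who has each item (the hash tables / indexes from the answer).
    havers = defaultdict(set)
    for user, items in haves.items():
        for item in items:
            havers[item].add(user)

    they_give = defaultdict(int)   # value of their items that I want
    for item in wants[user_id]:
        for other in havers[item]:
            if other != user_id:
                they_give[other] += item_value[item]

    i_give = defaultdict(int)      # value of my items that they want
    for other, other_wants in wants.items():
        if other != user_id:
            for item in other_wants & haves[user_id]:
                i_give[other] += item_value[item]

    # Combine both directions (simple sum here; min() would favour balanced trades).
    scores = {u: they_give[u] + i_give[u] for u in set(they_give) | set(i_give)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

With the example from the question and A, D worth 10 (everything else 1), this additive score ranks User 2 (10 + 10) above User 3 (11 + 1) for User 1, as described.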
This problem looks pretty similar to the stable roommates problem. I don't see anything wrong with the SQL implementation that got the highest votes, but as someone else suggested this is like a dating/matchmaking problem along the lines of the stable marriage problem, except that here all the participants are in one pool.
The second Wikipedia entry also has a link to a practical solution in JavaScript, which could be useful.
You could maintain a per-item list (as a complement to the per-user list). Item search is then spot on. Now you can allow yourself a brute-force search for the most valuable pair by checking the most valuable items first. If you want a more complex (arguably faster) search, you could introduce sets of items that often come together as meta-items, and look for them first.
Okay, what about this:
There are basically giant "Pools"
Each "pool" contains "sections." Each "Pool" is dedicated to people who own a specific item. Each section is for people who own that item, and want another.
What I mean:
Pool A (For those requesting A)
--Section B (For those requesting A that have B)
--Section C (For those requesting A that have C, even if they also have B)
Pool B
--Section A
--Section B
Pool C
--Section A
--Section C
Each section is filled with people.
"Deals" would consist of one "Requested" item, and a "Pack," you're willing to give any or all of the items up to get the item you requested.
Every "Deal" is calculated per-pool.... if you want a given item, you go to the pools of the items you'd be willing to give, and it find the Section which belongs to the item you are requesting.
Likewise, your deal is placed in the pools. So you can immediately find all of the applicable people, because you know EXACTLY which pools, and EXACTLY which sections to search in, no sorting necessary once they've entered the system.
And, then, age would have priority, older deals would be picked, rather than new ones.
Let's assume you can hash your items, or at least sort them. Assume your goal is to find the best result for a given user, on request, as in your original example. (Optimizing trading partners to maximize overall trade value is a different question.)
This would be fast. O(log n) for each insertion operation. Worst case O(n) for suggesting trading partners, but you bound this by processing time.
You're already maintaining a list of items per user.
Give each user a score equal to the sum of the values of the items they have.
Maintain a list of user-HAVES and user-WANTS per item (#Dialecticus), sorted by user score. (You can sort on demand, or keep the lists sorted dynamically every time a user changes their HAVE list.)
When a user user1 requests suggested trade partners
Iterate over their items item in order by value.
Iterate over the user-HAVES user2 for each item, in order by user score.
Compute trade value for user1 trades-with user2.
Remember best trade so far.
Keep hash of users processed so far to avoid recomputing value for a user multiple times.
Terminate when you run out of processing time (your real-time guarantee).
Sorting by item value and user score is the approximation that makes this fast. I'm not sure how sub-optimal it would be, though. There are certainly easy examples where this would fail to find the best trade if you don't run it to completion. In practice, it seems like it might be good enough. In the limit, you can make it optimal by letting it run until it exhausts the lists in step 4.1 and 4.2. There's extra memory cost associated with the inverted lists, but you didn't say you were memory constrained. And generally, if you want speed, it's not uncommon to trade-off space to get it.
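A minimal sketch of this greedy search, under the assumptions that the per-item HAVE lists can be pre-sorted by user score, that the trade value between two users is the value both sides can actually back (other definitions work too), and that the real-time budget is modelled as a simple cap on candidates examined:

def best_trade_partner(user1, haves, wants, item_value, max_candidates=1000):
    # haves / wants: {user: set of items}; returns (best_user, best_value).
    user_score = {u: sum(item_value[i] for i in items) for u, items in haves.items()}

    # Inverted index per item, holders sorted by user score (highest first).
    havers = {}
    for user, items in haves.items():
        for item in items:
            havers.setdefault(item, []).append(user)
    for item in havers:
        havers[item].sort(key=lambda u: user_score[u], reverse=True)

    def trade_value(a, b):
        give = sum(item_value[i] for i in haves[a] & wants[b])
        get  = sum(item_value[i] for i in haves[b] & wants[a])
        return min(give, get)   # only mutually backed value counts

    seen, best_user, best_value, checked = set(), None, 0, 0
    # Walk my wanted items from most to least valuable, then their holders by score.
    for item in sorted(wants[user1], key=lambda i: item_value[i], reverse=True):
        for user2 in havers.get(item, []):
            if user2 == user1 or user2 in seen:
                continue
            seen.add(user2)
            value = trade_value(user1, user2)
            if value > best_value:
                best_user, best_value = user2, value
            checked += 1
            if checked >= max_candidates:   # stand-in for the processing-time bound
                return best_user, best_value
    return best_user, best_value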
I mark items by letters and users by numbers.
m - number of items in all have/want lists (have or want, not have and want)
x - number of users.
For each user you have a list of their wants and haves. The left column is the want list, the right is the have list (both will be sorted so we can use binary search).
1 - ABBCDE FFFGH
2 - CFGGH BE
3 - AEEGH BBDF
For each pair of users you generate two values and store them somewhere; you only generate them once and then keep them up to date. Sorting the first table and generating the second is O(m*x*log(m/x)) + O(log(m)) and will require O(x^2) extra memory. These values are: how many items the first user would get and how many the other would get (if you want, you can weight these values by multiplying them by the value of each particular item).
1-2 : 1 - 3 (user 1 gets 1) - (user 2 gets 3)
1-3 : 3 - 2
2-3 : 1 - 1
You also compute and store the best trading partner for each user. After you've generated this helper data you can query it quickly.
Adding/removing an item - O(m*log(m/x)) (you loop through the user's have/want list, do a binary search on the have/want list of every other user, and update the stored data)
Finding the best connection - O(1) or O(x) (depends on whether the result stored in the cache is still correct or needs to be updated; you loop through the user's pairs and do whatever you want with the data to return the best connection to the user)
By m/x I estimate the number of items in a single user's want/have list.
In this algorithm I'm assuming that the data isn't stored in a database (I don't know whether binary search is possible with databases) and that inserting/removing an item in a list is O(1).
PS: Sorry for the bad English; I hope I've computed it all correctly and that it works, because I also need it.
Of course, you could always separate the system into three categories: "Wants," "Haves," and "Open Offers." So let's say User1 has Item A, User2 has Items B & C and is trading those for Item A, but User1 still wants Item D, and User2 wants Item E. So User1 (assuming he's the trade "owner") puts in a request, or want, for Items D and E; thus the offer stands and goes on the "Open Offers" list. If it isn't accepted or edited within two or so days, it's automatically cancelled. So User3 is looking for Items F and G, and searches the "Have" list for Items F & G, which are split between User1 & User2. He realizes that User1 and User2's open offer includes requests for Items D & E, which he has. So he chooses to "join" the operation, and it's accepted on their terms, trading and swapping the items among them.
Let's say User1 now wants Item H. He simply searches the "Have" list for the item, and among the results, he finds that User4 will trade Item H for Item I, which User1 happens to have. They trade, and all is well.
Just make it BC only. That solves all problems.

How to determine differences in two lists of data

This is an exercise for the CS guys to shine with the theory.
Imagine you have 2 containers with elements. Folders, URLs, Files, Strings, it really doesn't matter.
What is AN algorithm to calculate the added and the removed?
Notice: If there are many ways to solve this problem, please post one per answer so it can be analysed and voted up.
Edit: All the answers solve the matter with 4 containers. Is it possible to use only the initial 2?
Assuming you have two lists of unique items, and the ordering doesn't matter, you can think of them both as sets rather than lists.
If you think of a Venn diagram, with list A as one circle and list B as the other, then the intersection of these two is the constant pool.
Remove all the elements in this intersection from both A and B, and anything left in A has been deleted, whilst anything left in B has been added.
So, iterate through A looking for each item in B. If you find it, remove it from both A and B
Then A is a list of things that were deleted, and B is a list of things that were added
I think...
[edit] Ok, with the new "only 2 container" restriction, the same still holds:
foreach( A ) {
    if( eleA NOT IN B ) {
        DELETED
    }
}
foreach( B ) {
    if( eleB NOT IN A ) {
        ADDED
    }
}
Then you aren't constructing a new list or destroying your old ones, but it will take longer: in the previous example you could just loop over the shorter list and remove the elements from the longer one, whereas here you need to loop over both lists.
And I'd argue my first solution didn't use 4 containers, it just destroyed two ;-)
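In Python the same set-based reasoning collapses to two set differences (assuming unique, hashable items and that ordering doesn't matter, as stated above):

def diff(old, new):
    # Returns (added, removed) between two collections of unique, hashable items.
    old_set, new_set = set(old), set(new)
    added   = new_set - old_set    # in new but not in old
    removed = old_set - new_set    # in old but not in new
    return added, removed

print(diff(["a", "b", "c"], ["b", "c", "d"]))   # ({'d'}, {'a'})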
I have not done this in a while but I believe the algorithm goes like this...
sort left-list and right-list
adds = {}
deletes = {}
get first right-item from right-list
get first left-item from left-list
while (either list has items)
    if left-item < right-item or right-list is empty
        add left-item to deletes
        get new left-item from left-list
    else if left-item > right-item or left-list is empty
        add right-item to adds
        get new right-item from right-list
    else
        get new right-item from right-list
        get new left-item from left-list
With regard to the right list's relation to the left list, deletes now contains the removed items and adds contains the new items.
What Joe said. And, if the lists are too large to fit into memory, use an external file sorting utility or a Merge sort.
Missing information: How do you define added/removed? E.g. if the lists (A and B) show the same directory on Server A and Server B, that is in sync. If I now wait for 10 days, generate the lists again and compare them, how can I tell if something has been removed? I cannot. I can only tell there are files on Server A not found on Server B and/or the other way round. Whether that is because a file has been added to Server A (thus the file is not found on B) or a file has been deleted on Server B (thus the file is not found on B anymore) is something I cannot determine by just having a list of file names.
For the solution I suggest, I will just assume that you have one list named OLD and one list named NEW. Everything found on OLD but not on NEW has been removed. Everything found on NEW, but not on OLD has been added (e.g. the content of the same directory on the same server, however lists have been created at different dates).
Further I will assume there are no duplicates. That means every item on either list is unique in the sense of: If I compare this item to any other item on the list (no matter how this compare works), I can always say the item is either smaller or bigger than the one I'm comparing it to, but never equal. E.g. when dealing with strings, I can compare them lexicographically and the same string is never twice in the list.
In that case the simplest (not necessarily best solution, though) is:
Sort the OLD list. E.g. if the list consists of strings, sort them alphabetically. Sorting is necessary because it means I can use binary search to quickly find an object in the list, assuming it exists there (or to quickly determine that it does not exist in the list at all). If the list is unsorted, finding the object has a complexity of O(n) (I need to look at every single item on the list). If the list is sorted, the complexity is only O(log n), as after every attempt to match an item on the list I can always exclude 50% of the remaining items as not being a match. Even if the list has 100 items, finding an item (or detecting that the item is not on the list) takes at most 7 tests (or is it 8? Anyway, far fewer than 100). The NEW list doesn't have to be sorted.
Now we perform list elimination. For every item on the NEW list, try to find this item on the OLD list (using binary search). If the item is found, remove this item from the OLD list and also remove it from the NEW list. This also means the lists get smaller the further the elimination progresses, and thus the lookups become faster and faster. Since removing an item from a list has no effect on the correct sort order of the lists, there is no need to ever re-sort the OLD list during the elimination phase.
At the end of elimination, both lists might be empty, in which case they were equal. If they are not empty, all items still on the OLD list are items missing on the NEW list (otherwise we had removed them), hence these are the removed items. All items still on the NEW list are items that were not on the OLD list (again, we had removed them otherwise), hence these are the added items.
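A minimal sketch of this elimination approach, using Python's bisect for the binary search; rather than shrinking both lists in place, unmatched NEW items are simply collected, which has the same effect:

import bisect

def added_removed(old, new):
    # Sort OLD once, then try to eliminate every NEW item from it via binary search.
    # Whatever survives in OLD was removed; whatever never matched in NEW was added.
    old = sorted(old)
    added = []
    for item in new:
        i = bisect.bisect_left(old, item)
        if i < len(old) and old[i] == item:
            del old[i]             # eliminate the match; OLD stays sorted
        else:
            added.append(item)     # not found in OLD, so it was added
    return added, old              # `old` now holds only the removed items

print(added_removed(["a.txt", "b.txt", "c.txt"], ["b.txt", "c.txt", "d.txt"]))
# (['d.txt'], ['a.txt'])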
Are the objects in the list "unique"? In this case I would first build two maps (hashmaps), then scan the lists and look up every object in the maps.
map1
map2
removedElements
addedElements
list1.each |item|
{
map1.add(item)
}
list2.each |item|
{
map2.add(item)
}
list1.each |item|
{
removedElements.add(item) unless map2.contains?(item)
}
list2.each |item|
{
addedElements.add(item) unless map1.contains?(item)
}
Sorry for the horrible meta-language mixing Ruby and Java :-P
In the end removedElements will contain the elements belonging to list1 but not to list2, and addedElements will contain the elements belonging to list2 but not to list1.
The cost of the whole operation is O(4*N), since a lookup in the map/dictionary may be considered constant time. On the other hand, linearly searching for each element in the other list would make it O(N^2) (and even binary search would still be O(N log N)).
EDIT: on second thought, by moving the last check into the second loop you may remove one of the loops... but that's ugly... :)
list1.each |item|
{
map1.add(item)
}
list2.each |item|
{
map2.add(item)
addedElements.add(item) unless map1.contains?(item)
}
list1.each |item|
{
removedElements.add(item) unless map2.contains?(item)
}

Resources