I am working on a small project which progressively grows a list of links and then processes them through a queue. A link may end up in the queue twice, and I would like to track my progress so I can skip anything that has already been processed. I'm estimating around 10k unique links at most.
For larger projects I would use a database, but that seems like overkill for the amount of data I am working with; I would prefer some form of in-memory solution that can potentially be serialized if I want to save progress across runs.
What data structure would best fit this need?
Update: I am already using a hash to track which links I have completed processing. Is this the most efficient way of doing it?
def process_link(link)
  return if @processed_links[link]
  # ... processing logic
  @processed_links[link] = Time.now # or other state
end
If you aren't concerned about memory, then just use a Hash to check inclusion; insert and lookup times are O(1) average case. Serialization is straightforward (Ruby's Marshal class should take care of that for you, or you could use a format like JSON). Ruby's Set is backed by a Hash under the hood, so you could just use that if you're so inclined.
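For example, here's a minimal sketch of the Set-plus-Marshal approach (the snapshot filename is just a placeholder):

require 'set'

# Load previous progress if a snapshot exists.
processed = File.exist?('processed_links.dump') ? Marshal.load(File.binread('processed_links.dump')) : Set.new

def process_link(link, processed)
  return if processed.include?(link)
  # ... processing logic ...
  processed.add(link)
end

# Persist progress whenever you want to save across runs.
File.binwrite('processed_links.dump', Marshal.dump(processed))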
However, if memory is a concern, then this is a great problem for a Bloom filter! You can achieve set inclusion testing in constant time, and the filter uses substantially less memory than a hash would. The tradeoff is that Bloom filters are probabilistic - you can get false positives on inclusion. You can drive the false-positive probability very low with the right filter parameters, but if duplicates are the exception rather than the rule, you could implement something like:
Check for set inclusion in the Bloom filter [O(1)]
If the Bloom filter reports that the entry is found, perform an O(n) check of the input data to see whether this item actually appeared earlier in the input.
That would get you very fast and memory-efficient lookups for the common case, and you could either choose to simply trust the filter and accept that you'll occasionally skip a link that was never actually processed (to keep the whole thing small and fast), or verify set inclusion whenever a duplicate is reported (so the expensive work only happens when you absolutely have to).
https://github.com/igrigorik/bloomfilter-rb is a Bloom filter implementation I've used in the past; it works nicely. There are also redis-backed Bloom filters, if you need something that can perform set membership tracking and testing across multiple app instances.
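For instance, a rough sketch of the two-step check described above using that gem (the constructor options are assumptions; check the gem's README for the exact parameters):

require 'bloomfilter-rb'

# Sized generously for ~10k links; the :size and :hashes values here are illustrative guesses.
filter = BloomFilter::Native.new(size: 100_000, hashes: 7)
seen_links = [] # the raw input data, kept for verification

def duplicate?(link, filter, seen_links)
  return false unless filter.include?(link) # definite miss: O(1)
  seen_links.include?(link)                 # possible hit: verify with the O(n) scan
end

def remember(link, filter, seen_links)
  filter.insert(link)
  seen_links << link
end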
How about a Set, converting your links into value objects (rather than reference objects) such as Structs? By creating a value object, the Set will be able to detect its uniqueness. Alternatively, you could use a hash and store links by their PK.
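A small sketch of that idea (the Struct fields are made up for illustration):

require 'set'

# Struct instances compare by value, so a Set can detect duplicates.
Link = Struct.new(:url, :source)

links = Set.new
links << Link.new('http://example.com/a', 'crawler')
links << Link.new('http://example.com/a', 'crawler') # ignored: equal by value

puts links.size # => 1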
The data structure could be a hash:
@current_status = { to_process: [link3, link4, link5], processed: [link1, link2, link3] }
To track your progress (in percent):
links_count = @current_status[:to_process].length + @current_status[:processed].length
progress = (@current_status[:processed].length * 100) / links_count # Will give you percent of progress
To process your links:
Push any new link you need to process onto @current_status[:to_process].
Use shift to take the next link to be processed from @current_status[:to_process].
After processing a link, push it onto @current_status[:processed].
EDIT
As I see it (and understand your question), the logic to process your links would be:
# Add any new link to the queue unless it has already been processed
def add_link_to_queue(link)
  @current_status[:to_process].push(link) unless @current_status[:processed].include?(link)
end

# Process the next link on the queue
def process_next_link
  link = @current_status[:to_process].shift # take the first link off the queue
  # ... logic to process the link
  @current_status[:processed].push(link)
end
# Note: shift not only returns the first link but also removes it from the array, so a link can't be processed twice
Related
I have some module-level state I want to write once and then never modify.
Specifically, I have a set of strings I want to use to look things up in later. What is an efficient and ordinary way of doing this?
I could make a function that always returns the same set:
my_set() -> sets:from_list(["a", "b", "c"]).
Would the VM optimize this, or would the code for constructing the set be re-run every time? I suspect the set would just get GCd.
In that case, should I cache the set in the process dictionary, keyed on something unique like the module md5?
Key = proplists:get_value(md5, module_info()), put(Key, my_set())
Another solution would be to make the caller call an init function to get back an opaque chunk of state, and then pass that state into each function in the module.
A compile-time constant, like your example list ["a", "b", "c"], will be stored in a constant pool on the side when the module is loaded, and not rebuilt each time you run the expression. (In the old days, the list would have been reconstructed from its elements for each new call.) This goes for all constants no matter how complicated (like lists of lists of tuples). But when you call out to a function like sets:from_list/1, the compiler cannot assume anything about the representation used by the sets module, and the set will be constructed dynamically from that constant list.
While an ETS table would work, it is less efficient for larger constants (like, say, a set or map containing many entries), because an ETS table has the same memory model as a process - data is written and read by copying, as if by sending messages. If the constants are small, the difference between copying them and recreating them locally would be negligible, and if the constants are large, you waste time copying them.
What you want instead is a fairly new feature called Persistent Term Storage: https://erlang.org/doc/man/persistent_term.html (since Erlang/OTP 21). It is similar to the way compile time constants are handled, so there is no copying when looking up a value. (The key could be the name of your module.) Persistent Term is intended as pretty much a write-once-read-many storage - you can update the stored entry, but that's a more expensive operation which may trigger a global GC.
I'm using SortedSetScan to filter some data; my code is below:
db.SortedSetScan("SR.Cache.APP:Termial1", "A*", 50, CommandFlags.None);
but the pageSize never seems to have any effect; the result count is always the full set. What's wrong with my code, or is it a bug?
Can anybody help me? Thanks!
Yes, the result of that is always "all". The page size simply impacts the number of round-trips, versus the amount of data each call, when issuing the underlying ZSCAN command. The IEnumerable<T>, however, is lazy, etc, so if you only want the first 50 items, use:
db.SortedSetScan(...).Take(50)
instead, which will perform whatever operations it needs in order to get 50 items. Tweaking the page size simply changes how many operations are needed. It would be incorrect to think "I'll make the page size 50 so it only takes one operation" - it doesn't work like that; redis *SCAN commands can return empty pages, or pages with one or two items on, regardless of the page size. The page size is more a "how many things to look at before giving up for this iteration" guidance for redis. This is described more fully in the redis SCAN documentation - in particular, read what it says about "The COUNT option".
Note that the sequence obtained from all of the SE.Redis scanning operations can be resumed at a later point by casting the IEnumerable<T> or IEnumerator<T> to an IScanningCursor, and obtaining the cursor details to supply as parameters.
You might also want to think whether the "range" methods are more appropriate (note: they don't allow pattern filters).
I have a web app that uses Guids as the PK in the DB for an Employee object and an Association object.
One page in my app returns a large amount of data showing all Associations all Employees may be a part of.
So right now, I am sending to the client essentially a bunch of objects that look like:
{association_id: guid, employees: [guid1, guid2, ..., guidN]}
It turns out that many employees belong to many associations, so I am sending down the same Guids for those employees over and over again in these different objects. For example, it is possible that I am sending down 30,000 total guids across all associations in some cases, of which there are only 500 unique employees.
I am wondering if it is worth me building some kind of lookup index that I also send to the client like
{ 1: Guid1, 2: Guid2 ... }
and replacing all of the Guids in the objects I send down with those ints,
or if simply gzipping the response will compress it enough that this extra effort is not worth it?
Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).
You wrote the following at the end of your question:
Note: please don't get caught up in the details of if I should be
sending down 30,000 pieces of data or not -- this is not my choice and
there is nothing I can do about it (and I also can't change Guids to
ints or longs in the DB).
I think this is your main problem. If you don't solve it, you might reduce the size of the transferred data by a factor of 10, for example, but the underlying problem remains. Let us think about the question: why does so much data have to be sent to the client (the web browser)?
The data on the client side is needed to display information to the user. No monitor is large enough to show 30,000 items on one page, and no user can grasp that much information at once. So I am sure you display only a small part of the information, and in that case you should send only the small part that you actually display.
You don't describe how the guids will be used on the client side. If you need the information during row editing, for example, you can transfer the data only when the user starts editing; in that case you need to transfer the data for only one association.
If you need to display the guids directly, you still can't show all the information at once, so you can send the information for one page only. If the user scrolls or clicks the "next page" button, you send the next portion of data. In this way you can dramatically reduce the size of the transferred data.
If you have no possibility to redesign that part of the application, you can implement your original suggestion: by replacing a GUID such as "{7EDBB957-5255-4b83-A4C4-0DF664905735}" or "7EDBB95752554b83A4C40DF664905735" with a number like 123, you reduce its size from 34 characters to 3. If you additionally send an array of "guid mapping" elements like
123:"7EDBB95752554b83A4C40DF664905735",
you can reduce the original size of the data from 30000*34 = 1,020,000 characters (about 1 MB) to 500*39 + 30000*3 = 19,500 + 90,000 = 109,500 characters (about 110 KB), so roughly a factor of 10. Enabling compression of dynamic data on the web server can reduce the size further.
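A minimal sketch of building such a mapping before serialization (Ruby is used purely for illustration, and the field names are hypothetical):

require 'json'

# associations: [{ association_id: guid, employees: [guid, ...] }, ...]
def compact_payload(associations)
  index = {} # guid => small integer
  remapped = associations.map do |assoc|
    {
      association_id: assoc[:association_id],
      employees: assoc[:employees].map { |guid| index[guid] ||= index.size + 1 }
    }
  end
  { employee_index: index.invert, associations: remapped }.to_json
end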
In any case, you should examine why your page is so slow. If the program runs on a LAN, then transferring even 1 MB of data can be quick enough. The page is probably slow while placing the data on the web page: every time you modify an element on the page, the positions of all existing elements have to be recalculated. If you work with disconnected DOM objects first and then place the whole portion of data on the page at once, you can improve performance dramatically. You didn't post which technology you use in your web application, so I don't include any examples; if you use jQuery, for example, I could give an example that makes clearer what I mean.
The lookup index you propose is nothing more than a "custom" compression scheme. As amdmax stated, this will improve performance if you have a lot of identical GUIDs, but so will gzip.
IMHO, the extra effort of writing the custom encoding will not be worth it.
Oleg correctly states that it might be worth fetching the data only when the user needs it, but this of course depends on your specific requirements.
if simply gzipping the response will compress it enough that this extra effort is not worth it?
The answer is: Yes, it will.
Compressing the data will remove the redundant parts as well as possible (depending on the algorithm) until decompression.
To be sure, just generate the data both uncompressed and compressed and compare the results. You can count the duplicate GUIDs to estimate how big your data block would be with the dictionary-compression method, but I suspect gzip will do better because it also compresses syntactic elements like braces, colons, etc. inside your data object.
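A quick way to test that empirically, sketched in Ruby with the standard Zlib library (the payload shape and numbers just mirror the question's example):

require 'json'
require 'zlib'
require 'securerandom'

# Simulate 500 unique employee GUIDs spread across 30,000 references (300 associations x 100 employees).
employee_guids = Array.new(500) { SecureRandom.uuid }
payload = Array.new(300) { { association_id: SecureRandom.uuid, employees: employee_guids.sample(100) } }.to_json

compressed = Zlib::Deflate.deflate(payload) # deflate is the algorithm gzip uses under the hood
puts "raw:        #{payload.bytesize} bytes"
puts "compressed: #{compressed.bytesize} bytes"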
So what you are trying to accomplish is Dictionary compression, right?
http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression
What you will get instead of Guids, which are 16 bytes long, is ints, which are 4 bytes long, plus a dictionary of key-value pairs that associates each guid with some int value, right?
It will decrease your transfer time when many objects share the same id, but you will spend CPU time compressing before the transfer and decompressing after it. So how much data do you transfer: MB, GB, TB? And is there any good reason to compress it before sending?
I do not know how dynamic your data is, but I would:
on a first call send two directories/dictionaries mapping short ids to long GUIDs, one for your associations and one for your employees, e.g. {1: AssoGUID1, 2: AssoGUID2, ...} and {1: EmpGUID1, 2: EmpGUID2, ...}. These directories may also contain additional information on the Association and Employee instances; I suspect you do not simply display GUIDs.
on subsequent calls just send the index of Employees per Association, { 1: [2,4,5], 3: [2,4], ... }, the key being the association's short id and the values in the array being the short ids of its employees. Given your description, building the reverse index (Employee to Associations) may give a better result size-wise (but more processing).
Then it's all down to associative array manipulation, which is straightforward in JS.
Again, if your data is (very) dynamic server side, the two directories will soon be obsolete and maintaining synchronization may cost you a lot.
I would start by answering the following questions:
What are the performance requirements? Are there size requirements? Speed requirements? What is the minimum performance that is truly needed?
What are the current performance metrics? How far are you from the requirements?
You characterized the data as possibly being mostly repeats. Is that the normal case? If not, what is?
The 2 options you listed above sound reasonable and trivial to implement. Try creating a look-up table and see what performance gains you get on actual queries. Try zipping the results (with look-ups and without), and see what gains you get.
In my experience, if you're not TOO far from the goal, meeting performance requirements is often a matter of trial and error.
If those options don't get you close to the requirements, I would take a step back and see if the requirements are reasonable in the time you have to solve the problem.
What you do next depends on which performance goals are lacking. If it is size, you're starting to be limited if you're required to send the entire association list every time. Is that truly a requirement? Can you send the entire list once, and then just updates?
I understand what makes Bloom filters an attractive data structure; however, I'm finding it difficult to understand when you can actually use them, since you still have to perform the expensive operation you're trying to avoid in order to be certain you haven't hit a false positive. Because of this, wouldn't they generally just add overhead? For example, the Wikipedia article on Bloom filters suggests they can be used for data synchronization. I see how that would be great the first time around, when the Bloom filter is empty, but say you haven't changed anything and you go to synchronize your data again. Now every lookup in the Bloom filter will report that the file has already been copied, but wouldn't we still have to perform the slower lookup we were trying to avoid to make sure that's correct?
Basically, you use Bloom filters to avoid the long and arduous task of proving an item doesn't exist in the data structure. It's almost always harder to determine if something is missing than if it exists, so the filter helps to shore up losses searching for things you won't find anyway. It doesn't always work, but when it does you reap a huge benefit.
Bloom filters are very efficient in the case of membership queries, i.e., to find out whether an element belongs to the set. The number of elements in the set does not affect the query performance.
A common example is adding an email address to a mailing list: your application should check whether it's already in the contacts list, and if it's not, a popup should appear asking whether you want to add the new recipient. To implement this, you would normally follow these steps in the front-end application:
Get the list of contacts from a server
Create a local copy for fast lookup
Allow looking up a contact
Provide the option to add a new contact if the lookup is unsuccessful
Sync with the server when a new contact is added or an existing email is updated.
A Bloom filter will handle all those steps in a fast and memory-efficient way. You could use a dictionary for fast lookups, but that would require saving the entire contact list as key-value pairs, and for such a large contact list you might not have enough storage space in the browser.
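As a rough sketch of the lookup step (reusing the bloomfilter-rb gem mentioned in an earlier answer; the sizes and data are placeholders):

require 'bloomfilter-rb'

contact_list_from_server = ['alice@example.com', 'bob@example.com'] # stand-in for the fetched list
contacts = BloomFilter::Native.new(size: 1_000_000, hashes: 7)
contact_list_from_server.each { |email| contacts.insert(email) }

def definitely_new_contact?(email, contacts)
  # false from include? means "definitely not in the list", so it's safe to offer the add-recipient popup
  !contacts.include?(email)
end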
I'm trying to optimize a piece of software which is basically running millions of tests. These tests are generated in such a way that there can be some repetitions. Of course, I don't want to spend time running tests which I already ran if I can avoid it efficiently.
So, I'm thinking about using a Bloom filter to store the tests which have been already ran. However, the Bloom filter errs on the unsafe side for me. It gives false positives. That is, it may report that I've ran a test which I haven't. Although this could be acceptable in the scenario I'm working on, I was wondering if there's an equivalent to a Bloom filter, but erring on the opposite side, that is, only giving false negatives.
I've skimmed through the literature without any luck.
Yes, a lossy hash table or an LRUCache is a data structure with fast O(1) lookup that will only give false negatives: if you ask "Have I run test X?", it will tell you either "Yes, you definitely have" or "I can't remember".
Forgive the extremely crude pseudocode:
setup_test_table():
    create test_table( some large number of entries )
    clear each entry( test_table, NEVER )
    return test_table

has_test_been_run_before( new_test_details, test_table ):
    index = hash( new_test_details, test_table.length )
    old_details = test_table[index].details
    // unconditionally overwrite old details with new details, LRU fashion.
    // perhaps some other collision resolution technique might be better.
    test_table[index].details = new_test_details
    if ( old_details === new_test_details ) return YES
    else if ( old_details === NEVER ) return NEVER
    else return PERHAPS

main()
    test_table = setup_test_table();
    loop
        test_details = generate_random_test()
        status = has_test_been_run_before( test_details, test_table )
        case status of
            YES: do nothing;
            NEVER: run test (test_details);
            PERHAPS: if( rand()&1 ) run test (test_details);
        next loop
    end.
The exact data structure that accomplishes this task is a Direct-mapped cache, which is commonly used in CPUs.

function set_member(set, item)
    set[hash(item) % set.length] = item

function is_member(set, item)
    return set[hash(item) % set.length] == item
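Here is a minimal Ruby version of that idea (the slot count and naming are arbitrary choices for illustration):

class LossySet
  def initialize(slots = 1 << 16)
    @slots = Array.new(slots)
  end

  # Remember the item, possibly evicting whatever previously hashed to the same slot.
  def add(item)
    @slots[item.hash % @slots.length] = item
  end

  # true  => definitely seen before
  # false => maybe seen but evicted (false negatives are possible, false positives are not)
  def member?(item)
    @slots[item.hash % @slots.length] == item
  end
end

seen = LossySet.new
seen.member?('test-42') # => false
seen.add('test-42')
seen.member?('test-42') # => true (until another item evicts it)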
Is it possible to store the tests that you did not run? This should invert the filter's behavior.
How about an LRUCache?
I think you're leaving out part of the solution; to avoid false positives entirely you will still have to track which tests have run, and essentially use the Bloom filter as a shortcut to determine that a test definitely has not been run.
That said, since you know the number of tests in advance, you can size the filter in such a way as to provide an acceptable error rate using some well-known formulae; for a 1% probability of returning a false positive you need ~9.5 bits/entry, so for one million entries 1.2 megabytes is sufficient. If you reduce the acceptable error rate to 0.1%, this only increases to 1.8 MB.
The Wikipedia article Bloom Filters gives a great analysis, and an interesting overview of the maths involved.
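For reference, those figures come from the standard sizing formula, bits per entry = -ln(p) / (ln 2)^2; a quick Ruby check of the numbers above:

n = 1_000_000
[0.01, 0.001].each do |p|
  bits_per_entry = -Math.log(p) / Math.log(2)**2
  puts format('p=%.3f -> %.1f bits/entry, %.1f MB total', p, bits_per_entry, bits_per_entry * n / 8 / 1_000_000)
end
# p=0.010 -> 9.6 bits/entry, 1.2 MB total
# p=0.001 -> 14.4 bits/entry, 1.8 MB total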
Use a bit set, as mentioned above. If you know the number of tests you are going to run beforehand, you will always get correct results (present, not present) from the data structure.
Do you know what keys you will be hashing? If so, you should run an experiment to see the distribution of the keys in the Bloom filter so you can fine-tune it to reduce false positives, or what have you.
You might want to check out HyperLogLog as well.
I'm sorry I'm not much help - I don't think it's possible. If test execution can't be ordered, maybe use a packed format (8 tests per byte!) or a good sparse array library for storing the outcomes in memory.
The data structure you expect does not exist, because such a data structure must be a many-to-one mapping, or in other words, have a limited set of states. There must be at least two different inputs mapping to the same internal state, so you can't tell whether both (or more) of them are in the set; you only know that at least one such input exists.
EDIT: This statement is true only when you are looking for a memory-efficient data structure. If memory is unlimited, you can always get a data structure that gives 100% accurate results, by storing every member item.
No, and if you think about it, it wouldn't be very useful. In your case you couldn't be sure that your test run would ever stop, because if there are always 'false negatives' there will always be tests that need to be run...
I would say you just have to use a hash.