What is the most efficient data structure to store a list of integers that needs to be looked up a lot in .NET? - data-structures

I need an efficient data structure to store a list of integers. The number of items in the list could range from 1 to probably never more than 1,000. The list will be queried about 20 times per request. What would be the most efficient collection type to store these in?
UPDATE
To give a little more insight, we'll take www.wikipediamaze.com (a little game I wrote) as an example (not the real scenario but close enough for conversation). For the list of puzzles on any given page, I am currently returning a list from the puzzles table joined to the table that stores which puzzles the current user has played. Instead I want to cache the list of puzzles agnostic to the user. So what I am doing is first loading and caching the list of puzzles from the database. Then I load and cache the list of puzzles the user has played. Then when I am iterating over the puzzles to display them, I want to do this:
protected BestDataStructure<long> PlayedPuzzles { get; set; } // Loaded from session

protected bool HasBeenPlayed(long puzzleId)
{
    return PlayedPuzzles.Contains(puzzleId);
}
Whenever they play a new puzzle I will save the record to the database and append it to the list stored in the session.
Thanks!

It depends on how you need to query them, but a simple array or a HashSet<int> springs to mind.
Indexing into an array is O(1), and HashSet<int>.Contains is also O(1).
In response to your comment on the question: use HashSet<int>, since you need to check whether a specified integer exists. Calling Contains() on the HashSet will give you the best performance. If you need to store some other value associated with each integer, use a Dictionary<int, TValue> instead.
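For example, a minimal sketch of the caching pattern from the question, assuming the played-puzzle IDs arrive from session as an IEnumerable<long> (the class and parameter names here are illustrative, not from the original code):

using System.Collections.Generic;

public class PuzzleListModel
{
    // Built once per request from the IDs stored in session.
    protected HashSet<long> PlayedPuzzles { get; set; }

    public PuzzleListModel(IEnumerable<long> playedPuzzleIds)
    {
        PlayedPuzzles = new HashSet<long>(playedPuzzleIds);
    }

    // O(1) on average, even when called 20+ times per request.
    protected bool HasBeenPlayed(long puzzleId)
    {
        return PlayedPuzzles.Contains(puzzleId);
    }
}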

If you're looking for a structure that efficiently does an operation such as Contains(), then HashSet<int> is what you're looking for. HashSet<int> is O(1) (constant time) for Contains(), which is as good a complexity as you're going to get.

HashSet<int>.

If you only need to check for the presence of an element, you could probably use a bool[] of 1000 elements and set the entry to true for each integer that is present.
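A rough sketch of that idea. Note this only works if the values are non-negative and bounded by a known small range; the question's puzzle IDs are longs, so this is just an illustration of the technique:

// Assumes the integers fall in the range [0, 1000).
bool[] present = new bool[1000];
foreach (int value in new[] { 3, 42, 999 })   // sample values
{
    present[value] = true;
}
bool hasFortyTwo = present[42];   // O(1) lookup, no hashing involved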

Related

Data structure to Filter Data Quickly

I am doing a bit of research into making an efficient filtering algorithm when it comes to many properties of specific data. This is kind of a fun project for me to learn new data structures.
For example, say I wanted all RPGs on PlayStation which had English releases.
Now I want to allow for much more complex queries.
Is there a good data structure to handle filtering on attributes like this without needing to supply all of the attributes, so that I can give only a few and still find the correct games?
I currently plan to have "buckets", each describing one attribute; for example, all game IDs for a given genre will be in one bucket, and so forth. Then I will use a hashing step to add 1 to each matching game's count, and only use games which have the correct count after the search.
But I want to try to find a faster or easier method, any suggestions when it comes to filtering many attributes to find sets of items?
Thanks,
What do you mean by "without the need to give all of the attributes"? Are you saying you have N attributes and you want to find the items that match l < N of the attributes, or are you saying that you don't want to compute an index for each attribute?
Hashing each attribute into buckets will give you O(1) time at the expense of O(n) space to store each index.
You could sort your list by one or two attributes to make some lookups O(log n), at the expense of having to do the sorting up front in O(n log n) time.
You could get kinda clever with Bloom filters for your attributes and let some attributes overlap. This would lead to some false positives, but you could filter those out after the fact. This gives you constant space with constant-time lookup in the average case (but O(n) time in the worst case).
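As a rough illustration of the hash-into-buckets idea above (the attribute keys and game IDs are made up for the sketch): keep one index entry per attribute value mapping to the set of matching game IDs, then intersect the sets for whichever attributes the query actually supplies.

using System.Collections.Generic;

// One "bucket" per attribute value, each holding the matching game IDs.
var index = new Dictionary<string, HashSet<int>>
{
    ["Genre=RPG"]        = new HashSet<int> { 1, 2, 5 },
    ["Platform=PS"]      = new HashSet<int> { 2, 3, 5 },
    ["Language=English"] = new HashSet<int> { 2, 5, 7 },
};

// Only the attributes the caller cares about need to be supplied.
var query = new[] { "Genre=RPG", "Platform=PS", "Language=English" };

HashSet<int> result = null;
foreach (var key in query)
{
    var bucket = index.TryGetValue(key, out var ids) ? ids : new HashSet<int>();
    if (result == null)
        result = new HashSet<int>(bucket);   // start from the first bucket
    else
        result.IntersectWith(bucket);        // narrow with each supplied attribute
}
// result is { 2, 5 }: all English-language PlayStation RPGs in this toy data.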

Suitable data structure for finding a person's phone number, given their name?

Suppose you want to write a program that implements a simple phone book. Given a particular name, you want to be able to retrieve that person's phone number as quickly as possible. What data structure would you use to store the phone book, and why?
The text below answers your question.
In computer science, a hash table or hash map is a data structure that uses a hash function to map identifying values, known as keys (e.g., a person's name), to their associated values (e.g., their telephone number). Thus, a hash table implements an associative array. The hash function is used to transform the key into the index (the hash) of an array element (the slot or bucket) where the corresponding value is to be sought.
The text is from the Wikipedia article on hash tables.
There are further discussions there, such as collisions and hash functions; check the wiki page for details.
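In .NET terms (matching the other code in this thread), a minimal sketch of that idea with made-up names and numbers:

using System;
using System.Collections.Generic;

var phoneBook = new Dictionary<string, string>
{
    ["Alice Example"] = "555-0100",
    ["Bob Example"]   = "555-0199",
};

// Average O(1) lookup by name.
if (phoneBook.TryGetValue("Alice Example", out var number))
{
    Console.WriteLine(number);   // prints 555-0100
}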
I respect & love hashtables :) but even a balanced binary tree would be fine for your phone book application, giving you logarithmic complexity in the worst case and sparing you from needing good hash functions, handling collisions, etc., which matter more for huge amounts of data.
When I talk about huge data, what I mean is something related to storage. Every time you fill all of the buckets in a hash table you will need to allocate new storage and re-hash everything. This can be avoided if you know the size of the data ahead of time. Balanced trees won't run into these problems. The domain needs to be considered too while designing data structures; for example, on small devices storage matters a lot.
I was wondering why tries didn't come up in any of the answers.
A trie is well suited to phone-book style data.
It also saves space compared to a hashtable at (almost) the same retrieval cost, assuming a constant-size alphabet and constant-length names.
Tries also facilitate the prefix matches sometimes required while searching.
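A minimal trie sketch in C# (the class names and the per-character structure are assumptions for illustration), showing both the exact lookup and the kind of prefix check mentioned above:

using System.Collections.Generic;

class TrieNode
{
    public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public string PhoneNumber;   // non-null only where a complete name ends
}

class PhoneTrie
{
    private readonly TrieNode _root = new TrieNode();

    public void Add(string name, string number)
    {
        var node = _root;
        foreach (var c in name)
        {
            if (!node.Children.TryGetValue(c, out var next))
                node.Children[c] = next = new TrieNode();
            node = next;
        }
        node.PhoneNumber = number;
    }

    // Exact lookup: O(length of name), independent of how many entries exist.
    public string Find(string name) => Walk(name)?.PhoneNumber;

    // Prefix match: does any stored name start with this prefix?
    public bool HasPrefix(string prefix) => Walk(prefix) != null;

    private TrieNode Walk(string s)
    {
        var node = _root;
        foreach (var c in s)
        {
            if (!node.Children.TryGetValue(c, out node))
                return null;
        }
        return node;
    }
}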
A dictionary is both dynamic and fast.
You want a dictionary, where you use the name as the key, and the number as the data stored. Check this out: http://en.wikipedia.org/wiki/Dictionary_%28data_structure%29
Why not use a singly linked list? Each node will have the name, number and link information.
One drawback is that your search might take some time since you'll have to traverse the entire list from link to link. You might order the list at the time of node insertion itself!
PS: To make the search a tad bit faster, maintain a link to the middle of the list. Search can continue to the left or right of the list based on the value of the "name" field at this node. Note that this requires a doubly linked list.

How to match against a large collection efficiently

I have a large collection of objects of type foo. Each object of type foo has say 100 properties (all strings) plus an id. An object of type bar also has these 100 properties.
I want to find the matching object of type foo from the collection where all these properties match with that of bar.
Aside from the brute force method, is there an elegant algorithm where we can calculate a signature for foo objects once and do the same for the bar object and match more efficiently?
The foos are in the thousands and the bars are in the millions.
Darth Vader has a point there... and I never thought that I'd be siding with the dark side!
I'll go over what I think are the best tools for the trade:
Embedded database: Google's LevelDB. It's faster than most database solutions out there.
Hashing function: Google's CityHash. It's fast and it offers excellent hashing!
JSON Serialization
The Embedded Database
The goal of using an embedded database is that you will get performance that will beat most database solutions that you're likely to encounter. We can talk about just how fast LevelDB is, but plenty of other people have already talked about it quite a bit so I won't waste time. The embedded database allows you to store key/value pairs and quickly find them in your database.
The Hashing Function
A good hashing function will be fast and it will provide a good distribution of non-repeatable hashes. CityHash is very fast and it has very good distribution, but again: I won't waste time since a lot of other people have already talked about the performance of CityHash. You would use the hashing function to hash your objects and then use the unique key to look them up in the database.
JSON Serialization
JSON Serialization is the antithesis of what I've shown above: it's very slow and it will diminish any performance gain you achieved with CityHash, but it gives you a very simple way to hash an entire object. You serialize the object to a JSON string, then you hash the string using CityHash. Despite the fact that you've lost the performance gains of CityHash because you spent so much time serializing the object to JSON, you will still reap the benefits of having a really good hashing function.
The Conclusion
You can store billions of records in LevelDB and you will be able to quickly retrieve the exact value you're looking for just by providing the hash for it.
In order to generate a key, you can use JSON serialization and CityHash to hash the JSON string.
Use the key to find the matching object!
Enjoy!
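A rough sketch of that serialize-then-hash flow in .NET terms. Here System.Text.Json and SHA-256 stand in for the JSON serializer and CityHash, and an in-memory dictionary stands in for LevelDB; all of these substitutions and names are mine, not the answer's:

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

public static class SignatureStore
{
    // Stand-in for LevelDB: hash of the serialized properties -> matching foo's id.
    private static readonly Dictionary<string, long> Store = new Dictionary<string, long>();

    public static string KeyFor(object properties)
    {
        // Serialize the shared properties (not foo's own id) and hash the string.
        string json = JsonSerializer.Serialize(properties);
        using var sha = SHA256.Create();
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(json));
        return BitConverter.ToString(hash);
    }

    public static void AddFoo(long fooId, object properties) => Store[KeyFor(properties)] = fooId;

    public static bool TryMatch(object barProperties, out long fooId) =>
        Store.TryGetValue(KeyFor(barProperties), out fooId);
}

One caveat to keep in mind: this only works if the serializer emits the properties of foo and bar in a stable, identical order, otherwise equal objects can produce different hashes.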
If you have ALL matching properties, that means they are actually the same objects. Is that correct?
In any case, you want to use a Map/Dictionary/Table with a good hashing algorithm to find matching objects.
Whichever language you are using, you should override the GetHashCode and Equals methods (or their equivalents) to implement it.
If you have a good hashing algorithm, your access time will be O(1); otherwise it can be up to O(n).
Given your memory limitation, you want to store the foos in the map; storing the bars might require lots of space which you might not have.
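For example, in C#, with a hypothetical Foo showing just two of the hundred properties (a sketch, not the real class):

using System;
using System.Collections.Generic;

public class Foo
{
    public long Id { get; set; }      // not part of equality
    public string Prop1 { get; set; }
    public string Prop2 { get; set; }
    // ...the remaining properties would be included the same way...

    public override bool Equals(object obj) =>
        obj is Foo other && Prop1 == other.Prop1 && Prop2 == other.Prop2;

    public override int GetHashCode() => HashCode.Combine(Prop1, Prop2);
}

With these overrides in place, a Dictionary<Foo, long> keyed by the property values gives average O(1) matching of each bar against the stored foos.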
Hashing is very nice and simple to implement, but I want to suggest this algorithm:
Map your 100 string properties to one big string (for example, concatenate them with a fixed length for each property); that should be a unique ID for the object. So we have 1000 strings in the first set and 1 million strings in the second.
The problem reduces to finding, for each string in the second set, whether the first set contains it.
Build a trie data structure on the first set.
The complexity of checking whether a string S is in the trie is O(|S|), where |S| is the length of S.
So the complexity of the algorithm is O(Sum(|Ai|) + Sum(|Bi|)) = O(max(Sum(|Ai|), Sum(|Bi|))) = O(Sum(|Bi|)) for your problem, where Ai is the unique string ID in the first set and Bi is the unique string ID in the second set.
UPDATE:
In the worst case the trie takes O(Sum(|Ai|) * |Alphabet|) space.
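A sketch of the concatenation step in C#. For brevity this uses a HashSet<string> where the answer builds a trie; that swap changes the constants and memory layout but not the overall idea of reducing the problem to set membership on the concatenated keys. The fixed width of 64 and the sample property values are assumptions:

using System.Collections.Generic;
using System.Linq;

// Pad each property to a fixed width so the concatenation is unambiguous.
static string KeyOf(IEnumerable<string> properties) =>
    string.Concat(properties.Select(p => (p ?? "").PadRight(64, '\0')));

// "First set": one key per foo (the answer builds a trie over these;
// a HashSet<string> keeps the sketch short).
var fooKeys = new HashSet<string>
{
    KeyOf(new[] { "red", "large", "cotton" }),
    KeyOf(new[] { "blue", "small", "wool" }),
};

// For each bar, membership of its key in the first set answers the question.
bool matches = fooKeys.Contains(KeyOf(new[] { "red", "large", "cotton" }));   // true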

Best data structure for a given set of operations - Add, Retrieve Min/Max and Retrieve a specific object

I am looking for the optimal (in both time and space) data structure to support the following operations:
Add Persons (name, age) to a global data store of persons
Fetch Person with minimum and maximum age
Search for Person's age given the name
Here's what I could think of:
Keep an array of Persons, and keep adding to end of array when a new Person is to be added
Keep a hash of Person name vs. age, to assist in fetching person's age with given name
Maintain two objects minPerson and maxPerson for Person with min and max age. Update this if needed, when a new Person is added.
Now, although I keep a hash for better performance of (3), I think it may not be the best way if there are many collisions in the hash. Also, addition of a Person would mean an overhead of adding to the hash.
Is there anything that can be further optimized here?
Note: I am looking for the best (balanced) approach to support all these operations in minimum time and space.
You can get rid of the array as it doesn't provide anything that the other two structures can't do.
Otherwise, a hashtable + min/max is likely to perform well for your use case. In fact, this is precisely what I would use.
As to getting rid of the hashtable because a poor hash function might lead to collisions: well, don't use a poor hash function. I bet that the default hash function for strings that's provided by your programming language of choice is going to do pretty well out of the box.
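A sketch of that combination, assuming a simple Person class and leaving ties and deletions out of scope:

using System.Collections.Generic;

public class Person
{
    public string Name;
    public int Age;
}

public class PersonStore
{
    private readonly Dictionary<string, Person> _byName = new Dictionary<string, Person>();
    public Person MinAge { get; private set; }
    public Person MaxAge { get; private set; }

    public void Add(Person p)                      // amortized O(1)
    {
        _byName[p.Name] = p;
        if (MinAge == null || p.Age < MinAge.Age) MinAge = p;
        if (MaxAge == null || p.Age > MaxAge.Age) MaxAge = p;
    }

    public int? AgeOf(string name) =>              // average O(1)
        _byName.TryGetValue(name, out var p) ? p.Age : (int?)null;
}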
It looks like you need a data structure with fast inserts that also supports fast queries on two different keys (name and age).
I would suggest keeping two data structures: a sorted data structure (e.g. a balanced binary search tree) where the key is the age and the value is a pointer to the Person object, and a hashtable where the key is the name and the value is a pointer to the Person object. Notice we don't keep two copies of the same object.
A balanced binary search tree would provide O(log(n)) inserts and max/min queries, while the hashtable would give us O(1) (amortized) inserts and lookups.
When we add a new Person, we just add a pointer to it to both data structures. For a min/max age query, we can retrieve the Object by querying the BST. For a name query we can just query the hashtable.
Your question does not ask for updates/deletes, but those are also doable by suitably updating both data structures.
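A sketch of the two-index idea in .NET terms, using SortedSet (a red-black tree) in place of a hand-rolled BST and a Dictionary for the name lookup; both hold references to the same Person objects, and names are assumed unique:

using System.Collections.Generic;

public class Person
{
    public string Name;
    public int Age;
}

public class PersonIndex
{
    // Red-black tree ordered by age (name breaks ties, since names are unique).
    private readonly SortedSet<Person> _byAge = new SortedSet<Person>(
        Comparer<Person>.Create((a, b) =>
            a.Age != b.Age ? a.Age.CompareTo(b.Age)
                           : string.CompareOrdinal(a.Name, b.Name)));

    // Hashtable keyed by name for the age lookup.
    private readonly Dictionary<string, Person> _byName = new Dictionary<string, Person>();

    public void Add(Person p)            // O(log n) for the tree, O(1) for the hashtable
    {
        _byAge.Add(p);
        _byName[p.Name] = p;
    }

    public Person Youngest => _byAge.Min;   // min/max come straight off the tree
    public Person Oldest   => _byAge.Max;
    public int AgeOf(string name) => _byName[name].Age;
}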
It sounds like you're expecting the name to be the unique identifier; otherwise your operation 3 is ambiguous (what is the correct return result if you have two entries for John Smith?).
Assuming that the uniqueness of a name is guaranteed, I would go with a plain hashtable keyed by names. Operations 1 and 3 are trivial to execute. Operation 2 could be done in O(n) time if you want to search through the data structure manually, or you can do as you suggest and keep track of the min/max and update it as you add/delete entries in the hash table.

The algorithm used to generate recommendations in Google News?

I'm studying recommendation engines, and I went through the paper that defines how Google News generates recommendations to users for news items which might be of interest to them, based on collaborative filtering.
One interesting technique that they mention is MinHashing. I went through what it does, but I'm pretty sure that what I have is a fuzzy idea and there is a strong chance that I'm wrong. The following is what I could make out of it:
Collect a set of all news items.
Define a hash function for a user. This hash function returns the index of the first item from the news items which this user viewed, in the list of all news items.
Collect, say "n" number of such values, and represent a user with this list of values.
Based on the similarity count between these lists, we can calculate the similarity between users as the number of common items. This reduces the number of comparisons a lot.
Based on these similarity measures, group users into different clusters.
This is just what I think it might be. In Step 2, instead of defining a constant hash function, it might be possible that we vary the hash function so that it returns the index of a different element. So one hash function could return the index of the first element from the user's list, another hash function could return the index of the second element from the user's list, and so on. Given that the hash functions would then satisfy the min-wise independent permutations condition, this does sound like a possible approach.
Could anyone please confirm whether what I think is correct, or does the MinHashing portion of the Google News recommendations work in some other way? I'm new to the internal implementations of recommendation systems. Any help is appreciated a lot.
Thanks!
I think you're close.
First of all, the hash function first randomly permutes all the news items, and then for any given person looks at the first item. Since everyone had the same permutation, two people have a decent chance of having the same first item.
Then, to get a new hash function, rather than choosing the second element (which would have some confusing dependencies on the first element), they choose a whole new permutation and take the first element again.
People who happen to have the same hash value 2-4 times (that is, the same first element in 2-4 permutations) are put together in a cluster. This algorithm is repeated 10-20 times, so that each person gets put into 10-20 clusters. Finally, recommendations are given based on the (small number of) other people in the 10-20 clusters. Since all this work is done by hashing, people are put directly into buckets for their clusters, and large numbers of comparisons aren't needed.
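A toy sketch of that permutation-based hashing in C#. The item universe size, the number of permutations, and the "whole signature must match" grouping rule are simplified assumptions for illustration, not the actual Google News parameters:

using System;
using System.Collections.Generic;
using System.Linq;

const int itemCount = 1000;          // all news items, identified 0..itemCount-1
const int numHashes = 4;             // one random permutation per hash function
var rng = new Random(42);

// Each "hash function" is a random permutation of the item ids.
int[][] permutations = Enumerable.Range(0, numHashes)
    .Select(h => Enumerable.Range(0, itemCount).OrderBy(x => rng.Next()).ToArray())
    .ToArray();

// MinHash signature: for each permutation, the smallest permuted position
// among the items the user has viewed.
int[] Signature(HashSet<int> viewedItems) =>
    permutations.Select(p => viewedItems.Min(item => p[item])).ToArray();

// Users whose signatures agree land in the same cluster bucket.
var userA = new HashSet<int> { 3, 17, 256 };
var userB = new HashSet<int> { 3, 17, 900 };
bool sameCluster = Signature(userA).SequenceEqual(Signature(userB));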
