Best data structure to retrieve by max values and ID? - algorithm

I have a large number of fixed-size records. Each record has many fields; ID and Value are among them. I am wondering what kind of data structure would be best so that I can
locate a record by ID (unique) very fast, and
list the 100 records with the biggest values.
A max-heap seems to work, but it is far from perfect; do you have a smarter solution?
Thank you.

A hybrid data structure will most likely be best. For efficient lookup by ID, the obvious choice is a hash table. To support top-100 iteration, a max-heap or a binary tree is a good fit. When inserting and deleting, you just perform the operation on both structures. If the 100 in the iteration case is fixed, iteration happens often, and insertions/deletions aren't heavily skewed towards the top 100, you can keep the top 100 in a sorted array that overflows into a max-heap. That won't change the big-O complexity of the structure, but it gives a really good constant-factor speed-up for the iteration case.
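The hybrid described above can be sketched in Java; class and method names (RecordStore, topN, and so on) are illustrative, not from the question. A HashMap gives expected O(1) lookup by ID, and a TreeSet ordered by value stands in for the heap, since it makes re-ordering after an update straightforward:

```java
import java.util.*;

public class RecordStore {
    static final class Record {
        final int id;
        int value;
        Record(int id, int value) { this.id = id; this.value = value; }
    }

    private final Map<Integer, Record> byId = new HashMap<>();
    // Ordered by value (descending), ties broken by ID so records are unique.
    private final TreeSet<Record> byValue = new TreeSet<>(
            Comparator.comparingInt((Record r) -> r.value).reversed()
                      .thenComparingInt(r -> r.id));

    public void add(Record r) {
        byId.put(r.id, r);
        byValue.add(r);
    }

    public Record get(int id) {            // expected O(1)
        return byId.get(id);
    }

    public void updateValue(int id, int newValue) {
        Record r = byId.get(id);
        byValue.remove(r);                 // remove before mutating the sort key
        r.value = newValue;
        byValue.add(r);                    // O(log n) re-insert
    }

    public List<Record> topN(int n) {      // first n records in value order
        List<Record> out = new ArrayList<>(n);
        for (Record r : byValue) {
            if (out.size() == n) break;
            out.add(r);
        }
        return out;
    }
}
```

Note that updateValue removes the record before mutating its value: a sorted structure (or a heap) cannot detect a key change made behind its back.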

I know you want a pseudo-code algorithm, but in Java, for example, I would use a TreeSet and add all the records as (ID, value) pairs.
The tree keeps them sorted by value, so querying the first 100 gives you the top 100. Retrieving by ID is straightforward (though with a TreeSet sorted by value, an ID lookup is a linear scan unless you keep a separate map for it).
The underlying structure is a balanced binary search tree (a red-black tree in Java's TreeSet).

Max heap would match the second requirement, but hash maps or balanced search trees would be better for the first one. Make the choice based on frequency of these operations. How often would you need to locate a single item by ID and how often would you need to retrieve top 100 items?
Pseudo code:
add(Item t)
{
    // Add the same object instance to both data structures
    heap.add(t);
    hash.add(t);
}
remove(int id)
{
    heap.removeItemWithId(id); // this is going to be slow, O(n)
    hash.remove(id);
}
getTopN(int n)
{
    return heap.topNitems(n);
}
getItemById(int id)
{
    return hash.getItemById(id);
}
updateValue(int id, String value)
{
    Item t = hash.getItemById(id);
    // t is the same object referred to by both the heap and the hash,
    // so both see the new value
    t.value = value;
    // note: the heap must be re-ordered after the value changes,
    // e.g. by removing and re-inserting t
}

Related

An effective way to perform a prefix search on ranked (sorted) list?

I have a large list of some elements sorted by their probabilities:
data class Element(val value: String, val probability: Float)

val sortedElements = listOf(
    Element("dddcccdd", 0.7f),
    Element("aaaabb", 0.2f),
    Element("bbddee", 0.1f)
)
Now I need to perform prefix searches on this list to find items that start with one prefix, then with the next prefix, and so on (the results still need to be sorted by probability):
val filteredElements1 = sortedElements
    .filter { it.value.startsWith("aa") }
val filteredElements2 = sortedElements
    .filter { it.value.startsWith("bb") }
Each "request" of elements filtered by some prefix takes O(n) time, which is too slow in case of a large list.
If I didn't care about the order of the elements (their probabilities), I could sort the elements lexicographically and perform a binary search: sorting takes O(n*log n) time and each request -- O(log n) time.
Is there any way to speed up the execution of these operations without losing the sorting (probability) of elements at the same time? Maybe there is some kind of special data structure that is suitable for this task?
You can read more about the Trie data structure here: https://en.wikipedia.org/wiki/Trie
It could be really useful for your use case.
LeetCode has another very detailed explanation of it, which you can find here: https://leetcode.com/articles/implement-trie-prefix-tree/
Hope this helps.
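For illustration, a minimal trie along those lines might look like this in Java (a sketch, not LeetCode's implementation; all class and method names here are made up). Each word is inserted with its probability; a prefix query walks down the trie and collects every word below that node, returned sorted by descending probability:

```java
import java.util.*;

class Trie {
    private static final class Node {
        final Map<Character, Node> children = new HashMap<>();
        double probability = -1;           // >= 0 marks the end of a word
    }

    private final Node root = new Node();

    void insert(String word, double probability) {
        Node n = root;
        for (char c : word.toCharArray()) {
            n = n.children.computeIfAbsent(c, k -> new Node());
        }
        n.probability = probability;
    }

    // All words starting with prefix, highest probability first.
    List<String> byPrefix(String prefix) {
        Node n = root;
        for (char c : prefix.toCharArray()) {
            n = n.children.get(c);
            if (n == null) return List.of();   // no word has this prefix
        }
        List<Map.Entry<String, Double>> found = new ArrayList<>();
        collect(n, new StringBuilder(prefix), found);
        found.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
        List<String> out = new ArrayList<>();
        for (var e : found) out.add(e.getKey());
        return out;
    }

    private void collect(Node n, StringBuilder sb,
                         List<Map.Entry<String, Double>> out) {
        if (n.probability >= 0) out.add(Map.entry(sb.toString(), n.probability));
        for (var e : n.children.entrySet()) {
            sb.append(e.getKey());
            collect(e.getValue(), sb, out);
            sb.setLength(sb.length() - 1);     // backtrack
        }
    }
}
```

A query then costs roughly O(|prefix| + k log k) for k matches, instead of scanning the whole list.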
If your list does not change often, you could create a HashMap where each existing prefix is a key referring to a collection (sorted by probability) of all entries it is a prefix of.
Getting all entries for a given prefix then takes ~O(1).
Be careful: the map gets really big, and creating it takes quite some time.
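A sketch of that precomputed-prefix map, in Java for illustration (buildPrefixIndex is a made-up name); it assumes the input list is already sorted by descending probability, so each per-prefix list stays sorted as it is built:

```java
import java.util.*;

class PrefixIndex {
    // Register each word under every one of its prefixes.
    static Map<String, List<String>> build(List<String> wordsByProbability) {
        Map<String, List<String>> index = new HashMap<>();
        for (String word : wordsByProbability) {   // best-first insertion
            for (int len = 1; len <= word.length(); len++) {
                index.computeIfAbsent(word.substring(0, len),
                                      k -> new ArrayList<>())
                     .add(word);                   // lists stay probability-sorted
            }
        }
        return index;
    }
}
```

The memory cost is the catch: the index holds one list entry per character of input, i.e. O(total characters) across all words.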

Hash Tables and Separate Chaining: How do you know which value to return from the bucket's list?

We're learning about hash tables in my data structures and algorithms class, and I'm having trouble understanding separate chaining.
I know the basic premise: each bucket has a pointer to a Node that contains a key-value pair, and each Node contains a pointer to the next (potential) Node in the current bucket's mini linked list. This is mainly used to handle collisions.
Now, suppose for simplicity that the hash table has 5 buckets. Suppose I wrote the following lines of code in my main after creating an appropriate hash table instance.
myHashTable["rick"] = "Rick Sanchez";
myHashTable["morty"] = "Morty Smith";
Let's imagine whatever hashing function we're using just so happens to produce the same bucket index for both string keys rick and morty. Let's say that bucket index is index 0, for simplicity.
So at index 0 in our hash table, we have two nodes with values of Rick Sanchez and Morty Smith, in whatever order we decide to put them in (the first pointing to the second).
When I want to display the corresponding value for rick, which is Rick Sanchez per our code here, the hashing function will produce the bucket index of 0.
How do I decide which node needs to be returned? Do I loop through the nodes until I find the one whose key matches rick?
To resolve hash table conflicts, that is, to put or get an item whose hash value collides with another one's, you end up falling back to the data structure that backs the hash table's buckets; this is generally a linked list. A collision is the worst case for the hash table, and you end up with an O(n) operation to reach the correct item in the linked list. So yes: a loop, as you said, that searches for the item with the matching key. But in the cases where the bucket is backed by a structure like a balanced tree, the search can take O(log n) time, as in the Java 8 implementation.
As JEP 180: Handle Frequent HashMap Collisions with Balanced Trees says:
The principal idea is that once the number of items in a hash bucket
grows beyond a certain threshold, that bucket will switch from using a
linked list of entries to a balanced tree. In the case of high hash
collisions, this will improve worst-case performance from O(n) to
O(log n).
This technique has already been implemented in the latest version of
the java.util.concurrent.ConcurrentHashMap class, which is also slated
for inclusion in JDK 8 as part of JEP 155. Portions of that code will
be re-used to implement the same idea in the HashMap and LinkedHashMap
classes.
I strongly suggest always looking at an existing implementation. To name one, you could look at the Java 7 implementation. That will improve your code-reading skills, which you exercise at least as often as writing code, if not more. I know it is more effort, but it will pay off.
For example, take a look at the Hashtable.get method from Java 7:
public synchronized V get(Object key) {
    Entry<?,?> tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length;
    for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            return (V)e.value;
        }
    }
    return null;
}
Here we see that the check if ((e.hash == hash) && e.key.equals(key)) is what finds the correct item with the matching key.
And here is the full source code: Hashtable.java

Data structure for occurrence counting in long tail distribution

I have a big list of elements (tens of millions).
I am trying to count the number of occurrence of several subset of these elements.
The occurrence distribution is long-tailed.
The data structure currently looks like this (in an OCaml-ish flavor):
type element_key
type element_aggr_key

type raw_data = element_key list

type element_stat = {
  occurrence : (element_key, int) Hashtbl.t;
}

type stat = {
  element_stat_hashtable : (element_aggr_key, element_stat) Hashtbl.t;
}
element_stat currently uses a hashtable where the key is an element and the value is an integer count. However, this is inefficient, because when many elements have a single occurrence, the occurrence hashtable is resized many times.
I cannot avoid resizing the occurrence hashtable by setting a big initial size because there actually are many element_stat instances (the size of hashtable in stat is big).
I would like to know if there is a more efficient (memory-wise and/or insertion-wise) data structure for this use case. I found a lot of existing data structures, like the trie, radix tree, and Judy array, but I have trouble understanding their differences and whether they fit my problem.
What you have here is a table mapping element_aggr_key to tables that in turn map element_key to int. For all practical purposes, this is equivalent to a single table that maps element_aggr_key * element_key to int, so you could do:
type stat = (element_aggr_key * element_key, int) Hashtbl.t
Then you have a single hash table, and you can give it a huge initial size.
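For readers more comfortable with Java, a rough analogue of the same flattened, composite-key idea might look like this (class and method names are made up; the record syntax needs Java 16+):

```java
import java.util.*;

class OccurrenceCounter {
    // Composite key replacing the table-of-tables; records give us
    // equals/hashCode for free.
    record Key(String aggrKey, String elementKey) {}

    // One big table, sized up front so it is not resized many times.
    private final Map<Key, Integer> counts = new HashMap<>(1 << 20);

    void increment(String aggrKey, String elementKey) {
        counts.merge(new Key(aggrKey, elementKey), 1, Integer::sum);
    }

    int count(String aggrKey, String elementKey) {
        return counts.getOrDefault(new Key(aggrKey, elementKey), 0);
    }
}
```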

Count frequency of items in array - without two for loops

Is there a way to count the frequency of items in an array without using two loops, and without knowing the size of the array? If I knew the size of the array, I could use a switch without looping, but I need something more versatile than that. I think modifying quicksort may give better results.
Array[n];
TwoDArray[n][2];
The first loop goes over Array[], while the second loop finds the element and increases its count in the two-dimensional array.
max = 0;
for (int i = 0; i < Array.length; i++) {
    found = false;
    for (int j = 0; j < max; j++) {
        if (TwoDArray[j][0] == Array[i]) {
            TwoDArray[j][1] += 1;
            found = true;
            break;
        }
    }
    if (found == false) {
        TwoDArray[max][0] = Array[i];
        TwoDArray[max][1] = 1;
        max += 1;
    }
}
If you can comment or provide a better solution, that would be very helpful.
Use a map or hash table to implement this: insert the array item as the key and its frequency as the value.
Alternatively, you can use an array if the range of the array elements is not too large: increase the count at the index corresponding to each array element.
I would build a map keyed by the item in the array, with a value that is the count of that item. One pass over the array builds the map that contains the counts: for each item, look its count up in the map, increment the count, and put the new count back into the map.
The map put and get operations can be constant time (e.g., if you use a hash map implementation with a good hash function and properly sized backing store). This means you can compute the frequencies in time proportional to the number of elements in your array.
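That single-pass counting could be sketched in Java like this (Frequency.count is a made-up name); HashMap.merge performs the get-increment-put described above in one call:

```java
import java.util.*;

class Frequency {
    static Map<Integer, Integer> count(int[] array) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int x : array) {
            freq.merge(x, 1, Integer::sum);   // expected O(1) per element
        }
        return freq;
    }
}
```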
I'm not saying this is better than using a map or hash table (especially not when there are lots of duplicates, though in that case you can get close to O(n) sorting with certain techniques, so this is not too bad either), it's just an alternative.
Sort the array
Use a (single) for-loop to iterate through the sorted array
If you find the same element as the previous one, increment the current count
If you find a different element, store the previous element and its count and set the count to 1
At the end of the loop, store the previous element and its count
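The sort-then-scan alternative in those steps might be sketched in Java as follows (SortedCount is a made-up name): O(n log n) for the sort, then one pass in which each run of equal elements is counted:

```java
import java.util.*;

class SortedCount {
    static Map<Integer, Integer> count(int[] array) {
        int[] a = array.clone();              // don't mutate the caller's array
        Arrays.sort(a);
        Map<Integer, Integer> freq = new LinkedHashMap<>();
        int i = 0;
        while (i < a.length) {
            int j = i;
            while (j < a.length && a[j] == a[i]) j++;  // end of the current run
            freq.put(a[i], j - i);                     // store element and count
            i = j;
        }
        return freq;
    }
}
```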

Dictionary Lookup (O(1)) vs Linq where

What is faster and should I sacrifice the Linq standard to achieve speed (assuming Dictionary lookup is truly faster)? So let me elaborate:
I have the following:
List<Product> products = GetProductList();
I have a need to search for a product based on some attribute, for example, the serial number. I could first create a dictionary, and then populate it as follow:
Dictionary<string, Product> dict = new Dictionary<string, Product>();
foreach(Product p in products)
{
dict.Add(p.serial, p);
}
When it's time to find a product, take advantage of O(1) offered by the Dictionary look-up:
string some_serial = ...;
try { Product p = dict[some_serial]; } catch(KeyNotFoundException) { }
Alternatively, using Linq:
Product p = products.Where(p => p.serial.Equals(some_serial)).FirstOrDefault();
The drawback with the Dictionary approach is of course that it requires more space in memory, more code to write, is less elegant, etc. (though most of this is debatable). Assume that's a non-factor. Should I take the first approach?
To conclude, I would like to confirm if the complexity of the Linq approach above is indeed O(n) and I don't see how it can be better than that.
Assuming you are starting with an enumeration of objects and are only doing this once ...
It will be faster to use the Where method, as opposed to adding to a Dictionary<TKey,TValue> and then looking it back up. The reason is that the dictionary approach is not O(1) overall in this scenario: you are adding the items to the dictionary and then looking one up. The adding part is O(N), which is just as expensive as the Where method, with additional memory overhead.
Another minor point to be aware of is that Dictionary<TKey,TValue> is not truly O(1). It approaches O(1) but can degrade to lesser performance in certain circumstances (lots of colliding keys, for instance).
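To make the trade-off concrete, here is a rough Java analogue of the two approaches from the question (class and method names are illustrative): the scan costs O(n) on every call, while the index costs one O(n) build and then expected O(1) per lookup, so it only pays off when amortized over many lookups.

```java
import java.util.*;
import java.util.stream.*;

class LookupComparison {
    record Product(String serial, String name) {}

    // One-off search: linear scan, O(n) per call, no setup cost.
    static Product findBySerialScan(List<Product> products, String serial) {
        return products.stream()
                .filter(p -> p.serial().equals(serial))
                .findFirst().orElse(null);
    }

    // Repeated searches: build the index once, O(n).
    static Map<String, Product> indexBySerial(List<Product> products) {
        return products.stream()
                .collect(Collectors.toMap(Product::serial, p -> p));
    }
}
```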
