Hash and reduce to bucket algorithm

The problem
We have a set of symbol sequences that should be mapped to a pre-defined number of bucket indexes.
Prerequisites
The symbol sequences are restricted in length (64 characters/bytes), and the hash algorithm used is the Delphi implementation of the Bob Jenkins hash, producing a 32-bit hash value.
To distribute these hash values over a certain number of buckets we use the formula:
bucket_number := (hashvalue mod (num_buckets - 2)) + 2;
(We don't want {0, 1} to be in the result set.)
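As an illustration only, here is a minimal sketch of this mapping in Python; zlib.crc32 stands in for the Delphi Bob Jenkins hash (both yield a 32-bit value), and NUM_BUCKETS is a made-up value:

    import zlib

    NUM_BUCKETS = 1000   # hypothetical value, not from the original setup

    def bucket_number(symbol_sequence: bytes) -> int:
        # zlib.crc32 is only a stand-in for the Bob Jenkins hash; it also yields 32 bits
        hashvalue = zlib.crc32(symbol_sequence) & 0xFFFFFFFF
        # same reduction as the formula above: bucket numbers 0 and 1 stay unused
        return (hashvalue % (NUM_BUCKETS - 2)) + 2

    print(bucket_number(b"EXAMPLE_SYMBOL_SEQUENCE"))   # some value in 2..NUM_BUCKETS-1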
The question
A colleague raised the concern that we need to choose a prime number for num_buckets to achieve an optimal¹ distribution when mapping the symbol sequences to bucket numbers.
The majority of the team believes this is an unproven assumption, though our teammate claims it is mathematically intrinsic (without giving a more in-depth explanation).
I can imagine that certain symbol-sequence patterns we use (only a very limited subset of what is actually allowed) might favor certain hash values, but generally I don't believe this is significant for a large number of symbol sequences.
The hash algorithm should already distribute the hash values optimally, and I doubt that a prime modulus would make a significant difference (I couldn't measure one empirically either), especially since the Bob Jenkins hash calculation doesn't involve any prime numbers either, as far as I can see.
[TL;DR]
Does a prime modulus matter in this case, or not?
1) Optimal here simply means a stable average number of sequences per bucket, one that doesn't change (much) with the total number of sequences.

Your colleague is simply wrong.
If a hash works well, all hash values should be equally likely, with a relationship that is not obvious from the input data.
When you take the hash mod some value, you map equally likely hash values onto a reduced number of output buckets. The result is no longer perfectly even, because some buckets can be produced by more hash values than others. As long as the number of buckets is small relative to the range of hash values, this discrepancy is small: it is on the order of (number of buckets) / (number of hash values). With the number of buckets typically under 10^6 and a 32-bit hash providing about 4 * 10^9 values, that is very small indeed. And if the number of buckets divides the range of hash values exactly, there is no discrepancy at all.
Primality doesn't enter into it, except insofar as you get the best distribution when the number of buckets divides the range of the hash function. Since the range of the hash function is usually a power of 2, a prime number of buckets is unlikely to do anything for you.
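One way to sanity-check this empirically is to push uniformly random 32-bit values through the same reduction and compare the bucket loads for a prime and a non-prime bucket count; a rough sketch (the bucket counts 1009 and 1024 are arbitrary choices for the experiment):

    import random
    from collections import Counter

    def load_spread(num_buckets, samples=1_000_000):
        # uniformly random 32-bit values stand in for the output of a good hash
        counts = Counter((random.getrandbits(32) % (num_buckets - 2)) + 2
                         for _ in range(samples))
        return min(counts.values()), max(counts.values())

    # 1009 is prime, 1024 is a power of two; with uniform hash values the
    # min/max bucket loads come out essentially the same either way
    print("prime buckets:", load_spread(1009))
    print("2^k buckets  :", load_spread(1024))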

Related

Random number generator with freely chosen period

I want a simple (non-cryptographic) random number generation algorithm where I can freely choose the period.
One candidate would be a special instance of LCG:
X(n+1) = (aX(n)+c) mod m (m,c relatively prime; (a-1) divisible by all prime factors of m and also divisible by 4 if m is).
This has period m and does not restrict possible values of m.
I intend to use this RNG to create a permutation of an array by generating indices into it. I tried the LCG and it might be OK. However, it may not be "random enough", in that the distances between adjacent outputs take very few possible values (i.e., plotting x(n) vs. n gives a wrapped line). The arrays I want to index into have some structure related to this distance, and I want to avoid potential issues with it.
Of course, I could use any good PRNG to shuffle (using e.g. Fisher–Yates) an array [1,..., m]. But I don't want to have to store this array of indices. Is there some way to capture the permuted indices directly in an algorithm?
I don't really mind the method ending up biased w.r.t. the choice of RNG seed. Only the period matters, and the permuted sequence (for a given seed) should be reasonably random.
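For concreteness, here is a minimal sketch of the LCG candidate described above; m = 100, a = 21 and c = 17 are illustrative parameters that satisfy the stated conditions, so the generator visits every value in 0..m-1 exactly once per period:

    def lcg_sequence(m, a, c, seed=0):
        # full-period LCG: gcd(c, m) == 1, (a - 1) divisible by every prime
        # factor of m, and (a - 1) divisible by 4 whenever m is
        x = seed
        for _ in range(m):
            yield x
            x = (a * x + c) % m

    # m = 100 = 2^2 * 5^2, so (a - 1) must be divisible by 2, 5 and 4; a = 21 works,
    # and c = 17 is coprime to 100, so the period is exactly m
    seq = list(lcg_sequence(100, a=21, c=17))
    assert sorted(seq) == list(range(100))   # each index 0..99 appears exactly once

As noted above, the output can still look quite regular, which is what the encryption-based answer below addresses.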
Encryption is a one-to-one operation. If you encrypt a range of numbers, you will get the same count of apparently random numbers back. In this case the period will be the size of the chosen range. So for a period of 20, encrypt the numbers 0..19.
If you want the output numbers to be in a specific range, then pick a block cipher with an appropriately sized block and use Format Preserving Encryption if needed, as David Eisenstat suggests.
It is not difficult to set up a cipher with almost any reasonable block size, so long as it is an even number of bits, using the Feistel structure. If you don't require cryptographic security then four or six Feistel rounds should give you enough randomness.
Changing the encryption key will give you a different ordering of the numbers.
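A rough sketch of that idea: a small balanced Feistel network over just enough bits to cover the range, with "cycle walking" to keep results inside 0..m-1. The round function below is an arbitrary keyed mixer built on BLAKE2b, chosen purely for illustration; it is not a vetted cipher.

    import hashlib

    def feistel_permute(i, key, m, rounds=4):
        # bijection on 0..m-1: run a balanced Feistel over the smallest even bit
        # width covering m, then re-encrypt ("cycle walk") until the result is < m
        half_bits = (max(m - 1, 1).bit_length() + 1) // 2
        mask = (1 << half_bits) - 1

        def round_fn(r, x):
            data = f"{key}:{r}:{x}".encode()
            digest = hashlib.blake2b(data, digest_size=8).digest()
            return int.from_bytes(digest, "big") & mask

        x = i
        while True:
            left, right = x >> half_bits, x & mask
            for r in range(rounds):
                left, right = right, left ^ round_fn(r, right)
            x = (left << half_bits) | right
            if x < m:
                return x

    # a different key gives a different ordering of 0..m-1
    print([feistel_permute(i, key="demo", m=20) for i in range(20)])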

What hash function produces the maximum number of collisions when hashing n keys?

My question has to do with collisions. What is the maximum number of collisions that may result from hashing n keys? I believe you would be able to find this by taking n - 1, but I am unsure if this is correct. I'm specifically trying to figure out a hash function that would produce that many collisions. I'm having a hard time understanding the concept of the question. Any help on the subject would be appreciated!
The maximum number of collisions occurs when every item hashes to the same bucket; if you count a collision each time an item lands in an already-occupied bucket, that is n - 1 collisions for n items.
Example:
hash function: h(x) = 3
All items will be hashed to key 3.
Notice that the number of keys, n in your case, doesn't change which hash function achieves this: no matter how many keys you have, every item is hashed to key 3 with the h(x) provided above.
Visualization:
Usually, hashing spreads the items across different buckets. [Figure omitted: five example names, each hashed to a different key.]
But to get the maximum number of collisions, using the h(x) provided above, all of the items (the five names from the figure) end up hashed to the very same key, key 3.
So in that case all five names land in one bucket, giving the maximum possible number of collisions: n - 1 = 4 for n = 5 names.
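A tiny sketch of that worst case, counting a collision each time an item lands in an already-occupied slot (the names are arbitrary examples):

    def constant_hash(x):
        return 3                     # every key maps to slot 3: the degenerate worst case

    items = ["Alice", "Bob", "Carol", "Dave", "Eve"]
    slots, collisions = {}, 0
    for item in items:
        h = constant_hash(item)
        if h in slots:
            collisions += 1          # the slot is already occupied, so this insert collides
        slots.setdefault(h, []).append(item)

    print(collisions)                # 4, i.e. n - 1 collisions for n = 5 items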
With uniform hashing of n keys into n buckets, the expected maximum number of keys landing in any single bucket is $$\Theta\left(\frac{\log n}{\log\log n}\right)$$

Generating non-colliding random numbers from the combination of 2 numbers in a set?

I have a set of 64-bit unsigned integers with length >= 2. I pick 2 random integers, a and b, from that set. I apply a deterministic operation to combine a and b into different 64-bit unsigned integers c_1, c_2, c_3, etc. I add those c_n to the set. I repeat that process.
What procedure can I use to guarantee that a new c will practically never collide with an existing value in the set, even after millions of steps?
Since you're generating multiple 64-bit values from a pair of 64-bit numbers, I would suggest that you select two numbers at random, and use them to initialize a 64 bit xorshift random number generator with 128 bits of state. See https://en.wikipedia.org/wiki/Xorshift#xorshift.2B for an example.
However, it's rather difficult to predict the collision probability when you're using multiple random number generators. With a single PRNG, the rule of thumb is that you'll have roughly a 50% chance of a collision after generating about the square root of the range. For example, if you were generating 32-bit random numbers, your collision probability reaches 50% after about 70,000 numbers generated; the square root of 2^32 is 65,536.
With a single 64-bit PRNG, you could generate more than a billion random numbers without too much worry about collisions. In your case, you're picking two numbers from a potentially small pool, then initializing a PRNG and generating a relatively small number of values that you add back to the pool. I don't know how to calculate the collision probability in that case.
Note, however, that whatever the probability of collision, the possibility of collision always exists. That "one in a billion" chance does in fact occur: on average once every billion times you run the program. You're much better off saving your output numbers in a hash set or other data structure that won't allow you to store duplicates.
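A sketch combining both suggestions: an xorshift128+ generator seeded from the two chosen numbers, plus a set that simply rejects the rare duplicate. The shift constants (23, 17, 26) are taken from one published xorshift128+ variant; treat them as illustrative and consult the linked article before relying on the statistical quality.

    MASK64 = (1 << 64) - 1

    def xorshift128plus(a, b):
        # 128 bits of state, initialised from the two numbers picked from the set;
        # the state must not be all zero
        s0, s1 = a & MASK64, (b & MASK64) or 1
        while True:
            x, y = s0, s1
            s0 = y
            x ^= (x << 23) & MASK64
            s1 = x ^ y ^ (x >> 17) ^ (y >> 26)
            yield (s1 + y) & MASK64

    pool = {0xDEADBEEF, 0x1234567890ABCDEF}        # hypothetical starting set
    gen = xorshift128plus(*sorted(pool)[:2])
    for _ in range(3):
        c = next(gen)
        if c not in pool:                          # reject duplicates outright
            pool.add(c)
    print([hex(v) for v in sorted(pool)])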
I think the best you can do, without any other given constraints, is to use a pseudo-random function that maps two 64-bit integers to a 64-bit integer. If the order of a and b matters for your problem (i.e. (3, 5) should map to something other than (5, 3)), don't sort them first; if it doesn't matter, sort them first.
The natural choice for a pseudo-random function that maps a larger input to a smaller output is a hash function. You can select any hash function that produces an output of at least 64 bits and truncate it. (My favorite in this case would be SipHash with an arbitrary fixed key; it is fast and has public-domain implementations in many languages, but you might just use whatever is available.)
The expected amount of numbers you can generate before you get a collision is determined by the birthday bound, as you are essentially selecting values at random. The linked article contains a table for the probabilities for 64-bit values. As an example, if you generate about 6 million entries, you have a collision probability of one in a million.
I don't think it is possible to beat this approach in the general case: you could encode an arbitrary amount of information in the sequence of elements you combine, while the amount of information in the output value is fixed at 64 bits. Thus you have to accept the possibility of collisions, and a random function spreads the collision probability evenly among all possible sequences.
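A minimal sketch of that approach, with Python's hashlib.blake2b truncated to 8 bytes standing in for keyed SipHash (which the standard library does not expose directly):

    import hashlib

    def combine(a: int, b: int) -> int:
        # sort the pair so that (a, b) and (b, a) map to the same value;
        # drop the sort if the order should matter
        lo, hi = sorted((a, b))
        data = lo.to_bytes(8, "big") + hi.to_bytes(8, "big")
        return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

    c = combine(0xDEADBEEF, 0xCAFEBABE)
    print(hex(c))   # a 64-bit value; collision risk follows the birthday bound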

Find medians in multiple sub-ranges of an unordered list

E.g. given an unordered list of N elements, find the medians for sub-ranges 0..100, 25..200, 400..1000, 10..500, ...
I don't see any better way than going through each sub range and run the standard median finding algorithms.
A simple example: [5 3 6 2 4]
The median for 0..3 is 5 (not 4, since we are asking for the median of the first three elements of the original, unsorted list).
INTEGER ELEMENTS:
If your elements are integers, then the best way is to have a bucket for each number that lies in any of your sub-ranges, where each bucket counts how many times its associated integer appears in your input (for example, bucket[100] stores how many 100s there are in your input sequence). Basically you can achieve it in the following steps:
create buckets for each number that lies in any of your sub-ranges.
iterate through all elements: for each number n, if bucket[n] exists, then bucket[n]++.
compute the medians based on the aggregated values stored in your buckets.
Put another way, suppose you have a sub-range [0, 10] and you would like to compute the median. The bucket approach counts how many 0s there are in your input, how many 1s, and so on. Suppose there are n numbers lying in the range [0, 10]; then the median is the (n/2)-th smallest of them, which can be identified by finding the i such that bucket[0] + bucket[1] + ... + bucket[i] is greater than or equal to n/2, while bucket[0] + ... + bucket[i - 1] is less than n/2.
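A small sketch of the bucket-counting idea for a single sub-range, interpreted as a value range as in this answer; the helper name and the choice of the lower median for even counts are mine:

    def median_in_value_range(elements, lo, hi):
        buckets = [0] * (hi - lo + 1)        # one bucket per integer value in [lo, hi]
        for n in elements:
            if lo <= n <= hi:
                buckets[n - lo] += 1

        total = sum(buckets)
        target = (total + 1) // 2            # lower median for even totals
        running = 0
        for i, count in enumerate(buckets):  # walk the prefix sums up to the target
            running += count
            if running >= target:
                return lo + i

    print(median_in_value_range([5, 3, 6, 2, 4], 0, 10))   # -> 4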
The nice thing about this is that even if your input elements are stored on multiple machines (i.e., the distributed case), each machine can maintain its own buckets, and only the aggregated values need to pass over the intranet.
You can also use hierarchical buckets, which involves multiple passes. In each pass, bucket[i] counts the number of input elements that lie in a specific range (for example, [i * 2^K, (i+1) * 2^K]); then narrow down the problem space by identifying which bucket the median lies in after each step, decrease K by 1 in the next step, and repeat until you can correctly identify the median.
FLOATING-POINT ELEMENTS
The entire set of elements fits into memory:
If your entire set of elements fits into memory, first sorting the N elements and then finding the medians for each sub-range is the best option. The linear-time heap solution also works well in this case if the number of your sub-ranges is less than log N.
The elements cannot fit into memory but are stored on a single machine:
Generally, an external sort typically requires three disk scans. Therefore, if the number of your sub-ranges is greater than or equal to 3, first sorting the N elements and then finding the medians for each sub-range by loading only the necessary elements from disk is the best choice. Otherwise, simply performing a scan for each sub-range and picking up the elements in that sub-range is better.
The elements are stored on multiple machines:
Since finding the median is a holistic operator, meaning you cannot derive the final median of the entire input from the medians of several parts of the input, it is a hard problem whose solution cannot be described in a few sentences, but there is research (see this paper as an example) focused on the problem.
I think that as the number of sub ranges increases you will very quickly find that it is quicker to sort and then retrieve the element numbers you want.
In practice, because there will be highly optimized sort routines you can call.
In theory, and perhaps in practice too, because since you are dealing with integers you need not pay n log n for a sort - see http://en.wikipedia.org/wiki/Integer_sorting.
If your data are in fact floating point and not NaNs then a little bit twiddling will in fact allow you to use integer sort on them - from - http://en.wikipedia.org/wiki/IEEE_754-1985#Comparing_floating-point_numbers - The binary representation has the special property that, excluding NaNs, any two numbers can be compared like sign and magnitude integers (although with modern computer processors this is no longer directly applicable): if the sign bit is different, the negative number precedes the positive number (except that negative zero and positive zero should be considered equal), otherwise, relative order is the same as lexicographical order but inverted for two negative numbers; endianness issues apply.
So you could check for NaNs and other funnies, pretend the floating point numbers are sign + magnitude integers, subtract when negative to correct the ordering for negative numbers, and then treat as normal 2s complement signed integers, sort, and then reverse the process.
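A sketch of that bit-twiddling step for 64-bit doubles; NaN handling is left out, as the answer assumes NaNs have been filtered first:

    import struct

    MASK64 = (1 << 64) - 1

    def sortable_key(f: float) -> int:
        # reinterpret the IEEE-754 bits of the double as an unsigned 64-bit integer
        bits = struct.unpack("<Q", struct.pack("<d", f))[0]
        # negatives: flip all bits; non-negatives: flip only the sign bit,
        # so that unsigned integer order matches floating-point order
        return bits ^ MASK64 if bits >> 63 else bits | (1 << 63)

    values = [3.5, -1.25, 0.0, -0.0, 42.0, -7.0]
    print(sorted(values, key=sortable_key))   # [-7.0, -1.25, -0.0, 0.0, 3.5, 42.0]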
My idea:
Sort the list into an array (using any appropriate sorting algorithm)
For each range, find the indices of the start and end of the range using binary search
Find the median by simply adding their indices and dividing by 2 (i.e. median of range [x,y] is arr[(x+y)/2])
Preprocessing time: O(n log n) for a generic sorting algorithm (like quick-sort) or the running time of the chosen sorting routine
Time per query: O(log n)
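A sketch of that procedure, interpreting each range as a value range [x, y] as this answer does, and returning the lower middle element of the matching slice:

    from bisect import bisect_left, bisect_right

    def preprocess(elements):
        return sorted(elements)                  # O(n log n), done once

    def range_median(arr, x, y):
        lo = bisect_left(arr, x)                 # first index with value >= x
        hi = bisect_right(arr, y) - 1            # last index with value <= y
        if lo > hi:
            return None                          # no elements fall in [x, y]
        return arr[(lo + hi) // 2]               # middle element of the slice

    arr = preprocess([5, 3, 6, 2, 4])
    print(range_median(arr, 2, 5))               # elements 2, 3, 4, 5 -> lower median 3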
Dynamic list:
The above assumes that the list is static. If elements can freely be added or removed between queries, a modified Binary Search Tree could work, with each node keeping a count of the number of descendants it has. This will allow the same running time as above with a dynamic list.
The answer is ultimately going to be "it depends". There are a variety of approaches, any one of which will probably be suitable for most of the cases you may encounter. The problem is that each performs differently for different inputs: where one performs better for one class of inputs, another will perform better for a different class.
As an example, the approach of sorting and then performing a binary search on the extremes of your ranges and then directly computing the median will be useful when the number of ranges you have to test is greater than log(N). On the other hand, if the number of ranges is smaller than log(N) it may be better to move elements of a given range to the beginning of the array and use a linear time selection algorithm to find the median.
All of this boils down to profiling to avoid premature optimization. If the approach you implement turns out to not be a bottleneck for your system's performance, figuring out how to improve it isn't going to be a useful exercise relative to streamlining those portions of your program which are bottlenecks.

Hash Functions and Tables of size of the form 2^p

While calculating the hash table bucket index from the hash code of a key, why do we avoid use of remainder after division (modulo) when the size of the array of buckets is a power of 2?
When calculating the hash, you want to cheaply munge as much information as possible into a value with a good distribution across the entire range of bits: e.g. 32-bit unsigned integers are usually good, unless you have a lot (> 3 billion) of items to store in the hash table.
It's converting the hash code into a bucket index that you're really interested in. When the number of buckets n is a power of two, all you need to do is do an AND operation between hash code h and (n-1), and the result is equal to h mod n.
A reason this may be bad is that the AND operation simply discards bits - the high-order bits - from the hash code. This may be good or bad, depending on other things. On one hand, it will be very fast, since AND is a lot faster than division (and this is the usual reason to choose a power-of-2 number of buckets); on the other hand, poor hash functions may have poor entropy in their lower bits: that is, the lower bits don't change much when the data being hashed changes.
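A quick sketch of the equivalence and of the low-bit problem; the hash values here are made up purely for illustration:

    n = 1024                         # power-of-two bucket count
    h = 0xDEADBEEF                   # some 32-bit hash code
    assert h % n == h & (n - 1)      # the AND shortcut only works when n is a power of two

    # a poor hash whose low bits never change: every key lands in the same bucket
    poor_hashes = [k << 12 for k in range(100)]
    print({ph & (n - 1) for ph in poor_hashes})   # {0}: all 100 keys share bucket 0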
Let us say that the table size is m = 2^p.
Let k be a key.
Then, whenever we do k mod m, we only get the last p bits of the binary representation of k. So if I put in several keys that have the same last p bits, the hash function will perform VERY badly, as all of those keys will be hashed to the same slot in the table. Hence: avoid powers of 2 for the table size.
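Concretely, with p = 10 (so m = 1024) and keys that deliberately share their last 10 bits:

    m = 1 << 10                                   # table size 2^p with p = 10
    keys = [7 + i * m for i in range(5)]          # 7, 1031, 2055, ... share the last 10 bits
    print([k % m for k in keys])                  # [7, 7, 7, 7, 7]: every key maps to slot 7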
