How to generate a function that will algebraically encode a sequence? - algorithm

Is there any way to generate a function F that, given a sequence, such as:
seq = [1 2 4 3 0 5 4 2 6]
Then F(seq) will return a function that generates that sequence? That is,
F(seq)(0) = 1
F(seq)(1) = 2
F(seq)(2) = 4
... and so on
Also, if it is, what is the function of lowest complexity that does so, and what is the complexity of the generated functions?
EDIT
It seems like I'm not clear, so I'll try to exemplify:
F(seq([1 3 5 7 9]))
# returns something like:
F(x) = 1 + 2*x
# limited to the domain x ∈ [0 1 2 3 4]
In other words, I want to compute a function that can be used to restore a sequence of integers algebraically, using mathematical operations such as +, *, etc., even after the sequence itself has been cleared from memory. I don't know if it is possible, but, as one could easily code an approximation of such a function for trivial cases, I'm wondering how far it goes and whether there is some actual research concerning that.
EDIT 2 Answering another question, I'm only interested in sequences of integers - if that is important.
Please let me know if it is still not clear!

Well, if you just want to know a function with "+ and *", that is to say, a polynomial, you can go and check Wikipedia for Lagrange Polynomial (https://en.wikipedia.org/wiki/Lagrange_polynomial).
It gives you the lowest degree polynomial that encodes your sequence.
Unfortunately, you probably won't be able to store less than before, as with random integers the probability of the polynomial having degree d = n-1, where n is the size of the array, is very high.
Furthermore, you will have to store rational numbers instead of integers.
And finally, accessing any element of the array will cost O(d) (using Horner's algorithm for polynomial evaluation), compared to O(1) with the plain array.
Nevertheless, if you know that your sequences may be very simple and very long, it might be an option.
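For illustration, here is a minimal sketch of Lagrange interpolation in Python (my own code, not from any particular library; the name lagrange_eval and the 0-based indexing are assumptions of the sketch). It uses Fraction so the rational coefficients stay exact:

from fractions import Fraction

def lagrange_eval(seq, x):
    # Evaluate the unique degree <= n-1 polynomial through (0, seq[0]), (1, seq[1]), ... at x.
    n = len(seq)
    total = Fraction(0)
    for i, y in enumerate(seq):
        term = Fraction(y)
        for j in range(n):
            if j != i:
                term *= Fraction(x - j, i - j)
        total += term
    return total

seq = [1, 2, 4, 3, 0, 5, 4, 2, 6]
print([lagrange_eval(seq, x) for x in range(len(seq))])   # reproduces the sequence

Note that evaluating it this way costs O(d²) per point; the O(d) access mentioned above assumes you first convert to a coefficient or Newton form that Horner's method can consume.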

If the sequence comes from a polynomial with a low degree, an easy way to find the unique polynomial that generates it is using Newton's series. Constructing the polynomial for n numbers has O(n²) time complexity, and evaluating it has O(n).
In Newton's series the polynomial is expressed in terms of x, x(x-1), x(x-1)(x-2) etc. instead of the more familiar x, x², x³. To get the coefficients, you basically compute the differences between subsequent items in the sequence, then the differences between the differences, and so on until only one number is left or you get a row of all zeros. The first number of each row of differences, divided by the factorial of the degree of the term, gives you the coefficient of that term. For example, with the first sequence you get these differences:
1 2 4 3 0 5 4 2 6
1 2 -1 -3 5 -1 -2 4
1 -3 -2 8 -6 -1 6
-4 1 10 -14 5 7
5 9 -24 19 2
4 -33 43 -17
-37 76 -60
113 -136
-249
The polynomial that generates this sequence is therefore:
f(x) = 1 + x(1 + (x-1)(1/2 + (x-2)(-4/6 + (x-3)(5/24 + (x-4)(4/120
+ (x-5)(-37/720 + (x-6)(113/5040 + (x-7)(-249/40320))))))))
It's the same polynomial you get using other techniques, like Lagrange interpolation; this is just the easiest way to generate it as you get the coefficients for a polynomial form that can be evaluated with Horner's method, unlike the Lagrange form for example.
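Here is a small sketch of that procedure in Python (my own code; newton_coeffs and newton_eval are made-up names): it builds the difference table, divides the leading entries by the factorials, and evaluates the Newton form Horner-style:

from fractions import Fraction

def newton_coeffs(seq):
    # First entry of each difference row, divided by k!, gives the k-th coefficient.
    coeffs, row, k, fact = [], [Fraction(v) for v in seq], 0, 1
    while row:
        coeffs.append(row[0] / fact)
        row = [b - a for a, b in zip(row, row[1:])]
        k += 1
        fact *= k
    return coeffs

def newton_eval(coeffs, x):
    # Horner-style evaluation of c0 + x*(c1 + (x-1)*(c2 + (x-2)*(...))).
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = coeffs[k] + (x - k) * result
    return result

c = newton_coeffs([1, 2, 4, 3, 0, 5, 4, 2, 6])
print([newton_eval(c, x) for x in range(9)])   # [1, 2, 4, 3, 0, 5, 4, 2, 6]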

There is no magic if the sequence can be completely random. Building such a function is always possible, but it won't save you memory: any interpolation method requires the same amount of memory in the worst case, because if it didn't, it would be possible to compress everything down to a single bit.
On the other hand, it is sometimes possible to use brute force, heuristics (like genetic algorithms), or numerical methods to recover some kind of mathematical expression of a specified type, but good luck with that :)
Just use some archiving tools instead in order to save memory usage.
I think it will be useful for you to read about this: http://en.wikipedia.org/wiki/Entropy_(information_theory)

Related

Double hashing using composite numbers in second hash function

I realize that the best practice is to use, in the mod of the second hash function, the largest prime number smaller than the size of the array.
But my question is regarding the use of numbers that are not prime numbers.
I'm not interested in pseudo-code, just the idea behind the concept.
Let's say I have an array m=20, and I have to choose between 6,9,12 and 15 as the values that will be entered in the second hash function. Which of them will give me the best 'spread'?
My first thought is to go for the same idea as choosing a prime number, only slightly modified, which means using the largest number that has the minimum amount of permutations:
6 -> 2,3
9 -> 3,3 = 3
12 -> 2,3,4,6
15 -> 3,5
Right off the bat I can rule out 6 (a larger number with the same amount of permutations exists) and 12 (too many permutations).
Now the question arises: should I use 9, which has the least amount of permutations, or should I choose 15, which has more permutations but is much larger than 9 and a lot closer to the size of the array (m=20)?
Am I correct in using this approach? or is there a better way of choosing a number, given I can only choose from the numbers stated above?
I have found the answer I was looking for, so I'm leaving the question here with the correct answer in case anyone else ever needs it.
If we are forced to choose a number that is not a prime number as the number to be used in the second hash function (in the mod of that function):
The correct approach is to use the GCD (Greatest Common Divisor) to find numbers that are "relatively prime to each other". This means that we are looking for any number whose gcd with 20 is 1.
In this case:
gcd(20,6)= 2
gcd(20,9)= 1
gcd(20,12)= 4
gcd(20,15)= 5
As we can see, the gcd between 20 and 9 is 1, which means that they have no common factors other than 1. Therefore, 9 is the correct answer.
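To see why the gcd matters, here is a tiny sketch (my own code, not from the answer) that probes a table of size 20 with each candidate step and counts the distinct slots hit:

from math import gcd

m = 20
for step in (6, 9, 12, 15):
    slots = {(i * step) % m for i in range(m)}
    print(step, "gcd:", gcd(m, step), "slots reachable:", len(slots))
# Only step 9 (gcd 1) reaches all 20 slots; the others only ever visit
# m / gcd(m, step) slots, leaving the rest of the table unprobed.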

Guidance on Algorithmic Thinking (4 fours equation)

I recently saw a logic/math problem called 4 Fours where you need to use 4 fours and a range of operators to create equations that equal all the integers from 0 to N.
How would you go about writing an elegant algorithm to come up with say the first 100...
I started by creating base calculations like 4-4, 4+4, 4x4, 4/4, 4!, Sqrt 4 and made these values integers.
However, I realized that this was going to be a brute force method testing the combinations to see if they equal, 0 then 1, then 2, then 3 etc...
I then thought of finding all possible combinations of the above values, checking that each result was less than 100, filling an array and then sorting it... again inefficient because it may find thousands of numbers over 100
Any help on how to approach a problem like this would be helpful...not actual code...but how to think through this problem
Thanks!!
This is an interesting problem. There are a couple of different things going on here. One issue is how to describe the sequence of operations and operands that go into an arithmetic expression. Using parentheses to establish order of operations is quite messy, so instead I suggest thinking of an expression as a stack of operations and operands, like - 4 4 for 4-4, + 4 * 4 4 for (4*4)+4, * 4 + 4 4 for (4+4)*4, etc. It's like Reverse Polish Notation on an HP calculator. Then you don't have to worry about parentheses, and having a data structure for expressions will help below when we build up larger and larger expressions.
Now we turn to the algorithm for building expressions. Dynamic programming doesn't work in this situation, in my opinion, because (for example) to construct some numbers in the range from 0 to 100 you might have to go outside of that range temporarily.
A better way to conceptualize the problem, I think, is as breadth first search (BFS) on a graph. Technically, the graph would be infinite (all positive integers, or all integers, or all rational numbers, depending on how elaborate you want to get) but at any time you'd only have a finite portion of the graph. A sparse graph data structure would be appropriate.
Each node (number) on the graph would have a weight associated with it, the minimum number of 4's needed to reach that node, and also the expression which achieves that result. Initially, you would start with just the node (4), with the number 1 associated with it (it takes one 4 to make 4) and the simple expression "4". You can also throw in (44) with weight 2, (444) with weight 3, and (4444) with weight 4.
To build up larger expressions, apply all the different operations you have to those initial nodes. For example, unary negation, factorial, square root; binary operations like * 4 at the bottom of your stack for multiply by 4, + 4, - 4, / 4, ^ 4 for exponentiation, and also + 44, etc. The weight of an operation is the number of 4s required for that operation; unary operations would have weight 0, + 4 would have weight 1, * 44 would have weight 2, etc. You would add the weight of the operation to the weight of the node on which it operates to get a new weight, so for example + 4 acting on node (44) with weight 2 and expression "44" would result in a new node (48) with weight 3 and expression "+ 4 44". If the result for 48 has better weight than the existing result for 48, substitute that new node for (48).
You will have to use some sense when applying functions. factorial(4444) would be a very large number; it would be wise to set a domain for your factorial function which would prevent the result from getting too big or going out of bounds. The same with functions like / 4; if you don't want to deal with fractions, say that non-multiples of 4 are outside of the domain of / 4 and don't apply the operator in that case.
The resulting algorithm is very much like Dijkstra's algorithm for calculating distance in a graph, though not exactly the same.
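Here is a rough sketch of that search in Python (my own code, not the answerer's; the operator set is deliberately small, values are bounded, and only the cost per value is tracked rather than the expression strings, so treat it as an illustration of the cheapest-first idea rather than a complete four-fours solver):

import heapq
from math import factorial, isqrt

MAX_FOURS = 4                              # the classic budget
SEEDS = {4: 1, 44: 2, 444: 3, 4444: 4}     # value -> number of 4s it consumes

def four_fours(target_max=100):
    # best[v] = fewest 4s found so far to reach the value v (Dijkstra-like).
    best, heap = {}, [(cost, val) for val, cost in SEEDS.items()]
    heapq.heapify(heap)
    while heap:
        cost, val = heapq.heappop(heap)
        if val in best and best[val] <= cost:
            continue
        best[val] = cost
        candidates = []
        # Unary operations cost no extra 4s; guard their domains.
        if 0 <= val <= 12:
            candidates.append((factorial(val), cost))
        if val > 0 and isqrt(val) ** 2 == val:
            candidates.append((isqrt(val), cost))
        # Binary operations against a block of 4s cost that block's weight.
        for s, s_cost in SEEDS.items():
            candidates += [(val + s, cost + s_cost), (val - s, cost + s_cost), (val * s, cost + s_cost)]
            if val % s == 0:
                candidates.append((val // s, cost + s_cost))
        for new_val, new_cost in candidates:
            if new_cost <= MAX_FOURS and abs(new_val) < 10**6:
                if new_val not in best or best[new_val] > new_cost:
                    heapq.heappush(heap, (new_cost, new_val))
    return {v: c for v, c in best.items() if 0 <= v <= target_max}

print(sorted(four_fours().keys()))   # targets in 0..100 reachable under these restricted operations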
I think that the brute force solution here is the only way to go.
The reasoning behind this is that each number has a different way to get to it, and getting to a certain x might have nothing to do with getting to x+1.
Having said that, you might be able to make the brute force solution a bit quicker by using obvious moves where possible.
For instance, if I got to 20 using "4" three times (4*4+4), it is obvious to get to 16, 24 and 80. Holding an array of 100 bits and marking the numbers reached as you go lets you skip targets that have already been covered.
Similar to subset sum problem, it can be solved using Dynamic Programming (DP) by following the recursive formulas:
D(0,0) = true
D(x,0) = false x!=0
D(x,i) = D(x-4,i-1) OR D(x+4,i-1) OR D(x*4,i-1) OR D(x/4,i-1)
By computing the above using DP technique, it is easy to find out which numbers can be produced using these 4's, and by walking back the solution, you can find out how each number was built.
The advantage of this method (when implemented with DP) is you do not recalculate multiple values more than once. I am not sure it will actually be effective for 4 4's, but I believe theoretically it could be a significant improvement for a less restricted generalization of this problem.
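A hedged sketch of that recurrence in Python (my own code): it runs the DP forward, keeping the set of integer values reachable with exactly i fours and bounding their magnitude so the state space stays finite:

def reachable_with_fours(num_fours=4, bound=100000):
    # states[i] = integer values expressible with exactly i fours using +4, -4, *4 and exact /4.
    states = [set() for _ in range(num_fours + 1)]
    states[1] = {4}
    for i in range(2, num_fours + 1):
        for x in states[i - 1]:
            candidates = [x + 4, x - 4, x * 4]
            if x % 4 == 0:
                candidates.append(x // 4)
            states[i].update(c for c in candidates if abs(c) <= bound)
    return states[num_fours]

print(sorted(v for v in reachable_with_fours() if 0 <= v <= 100))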
This answer is just an extension of Amit's.
Essentially, your operations are:
Apply a unary operator to an existing expression to get a new expression (this does not use any additional 4s)
Apply a binary operator to two existing expressions to get a new expression (the new expression has number of 4s equal to the sum of the two input expressions)
For each n from 1..4, calculate Expressions(n) - a List of (Expression, Value) pairs as follows:
(For a fixed n, only store 1 expression in the list that evaluates to any given value)
Initialise the list with the concatenation of n 4s (i.e. 4, 44, 444, 4444)
For i from 1 to n-1, and each permitted binary operator op, add an expression (and value) e1 op e2 where e1 is in Expressions(i) and e2 is in Expressions(n-i)
Repeatedly apply unary operators to the expressions/values calculated so far in steps 1-3. When to stop (applying 3 recursively) is a little vague, certainly stop if an iteration produces no new values. Potentially limit the magnitude of the values you allow, or the size of the expressions.
Example unary operators are !, Sqrt, -, etc. Example binary operators are +-*/^ etc. You can easily extend this approach to operators with more arguments if permitted.
You could do something a bit cleverer in terms of step 3 never ending for any given n. The simple way (described above) does not start calculating Expressions(i) until Expressions(j) is complete for all j < i. This requires that we know when to stop. The alternative is to build Expressions of a certain maximum length for each n, then if you need to (because you haven't found certain values), extend the maximum length in an outer loop.

Generating a mathematical model of a pattern

Does there exist some algorithm that allows for the creation of a mathematical model given an inclusive set?
I'm not sure I'm asking that correctly... Let me try again...
Given some input set...
int Set[] = { 1, 4, 9, 16, 25, 36 };
Does there exist an algorithm that would be able to deduce the pattern evident in the set? In this case being...
Set[x] = x^2
The only way I can think of doing something like this is some GA where the fitness is how closely the generated model matches the input set.
Edit:
I should add that my problem domain implies that the set is inclusive. Meaning, I am finding the closest possible function for the set and not using the function to extrapolate beyond the set...
The problem of curve fitting might be a reasonable place to start looking. I'm not sure if this is exactly what you're looking for - it won't really identify the pattern so much as just produce a function which follows the pattern as closely as possible.
As others have mentioned, for a simple set there can easily be infinitely many such functions, so something like this may be what you want, rather than exactly what you have described in your question.
Wikipedia seems to indicate that the Gauss-Newton algorithm or the Levenberg–Marquardt algorithm might be a good place to begin your research.
A mathematical argument explaining why, in general, this is impossible:
There are only countably many computer programs that can be written at all.
There are uncountably many infinite sequences of integers.
Therefore, there are uncountably many infinite sequences of integers that no possible computer program can generate.
Accordingly, this is impossible in the general case. Sorry!
Hope this helps!
If you want to know if the given data fits some polynomial function, you compute successive differences until you reach a constant. The number of differences to reach the constant is the degree of the polynomial.
x | 1 2 3 4
y | 1 4 9 16
y' | 3 5 7
y" | 2 2
Since y" is 2, y' is 2x + C1, and thus y is x2 + C1x + C2. C1 is 0, since 2×1.5 = 3. C2 is 0 because 12 = 1. So, we have y = x2.
So, the algorithm is:
Take successive differences.
If it does not converge to a constant, either resort to curve fitting, or report the data is insufficient to determine a polynomial.
If it does converge to a constant, iteratively integrate the polynomial expression and evaluate the trailing constant until the degree is achieved (a small sketch of the difference-taking step follows).
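As a minimal illustration (my own code, with hypothetical names), here is the difference-taking step, which reports the degree suggested by the data:

def polynomial_degree(samples):
    # Degree of the polynomial suggested by successive differences of samples at consecutive
    # x values; None if the data runs out before a constant row appears.
    row, degree = list(samples), 0
    while len(row) > 1:
        if len(set(row)) == 1:       # constant row reached
            return degree
        row = [b - a for a, b in zip(row, row[1:])]
        degree += 1
    return None

print(polynomial_degree([1, 4, 9, 16, 25, 36]))   # 2, matching y = x^2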

Create a random permutation of 1..N in constant space

I am looking to enumerate a random permutation of the numbers 1..N in fixed space. This means that I cannot store all numbers in a list. The reason for that is that N can be very large, more than available memory. I still want to be able to walk through such a permutation of numbers one at a time, visiting each number exactly once.
I know this can be done for certain N: many random number generators cycle through their whole state space in a random-looking order, but cover it entirely. A good random number generator with a state size of 32 bits will emit a permutation of the numbers 0..(2^32)-1, every number exactly once.
I want to be able to pick N to be any number at all, not constrained to powers of 2 for example. Is there an algorithm for this?
The easiest way is probably to just create a full-range PRNG for a larger range than you care about, and when it generates a number larger than you want, just throw it away and get the next one.
Another possibility that's pretty much a variation of the same would be to use a linear feedback shift register (LFSR) to generate the numbers in the first place. This has a couple of advantages: first of all, an LFSR is probably a bit faster than most PRNGs. Second, it is (I believe) a bit easier to engineer an LFSR that produces numbers close to the range you want, and still be sure it cycles through the numbers in its range in (pseudo)random order, without any repetitions.
Without spending a lot of time on the details, the math behind LFSRs has been studied quite thoroughly. Producing one that runs through all the numbers in its range without repetition simply requires choosing a set of "taps" that correspond to a primitive polynomial. If you don't want to search for that yourself, it's pretty easy to find tables of known ones for almost any reasonable size (e.g., doing a quick look, the wikipedia article lists them for size up to 19 bits).
If memory serves, there's at least one such polynomial of every possible bit size. That translates to the fact that in the worst case you can create a generator that has roughly twice the range you need, so on average you're throwing away (roughly) every other number you generate. Given the speed of an LFSR, I'd guess you can do that and still maintain quite acceptable speed.
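To illustrate, here's a toy 4-bit Galois LFSR in Python (my own sketch; the tap mask 0b1001 corresponds to the primitive polynomial x^4 + x + 1), used to walk a permutation of 0..9 by discarding out-of-range outputs:

def lfsr_cycle(mask=0b1001, start=1):
    # 4-bit Galois LFSR; with a primitive polynomial it visits every value 1..15 exactly once.
    state = start
    while True:
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= mask
        if state == start:
            return

def lfsr_permutation(n):
    # Pseudo-random order over 0..n-1 (here n <= 15): throw away values that are too large.
    return [s - 1 for s in lfsr_cycle() if s - 1 < n]

print(lfsr_permutation(10))   # a fixed permutation of 0..9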
One way to do it would be
Find a prime p larger than N, preferably not much larger.
Find a primitive root of unity g modulo p, that is, a number 1 < g < p such that g^k ≡ 1 (mod p) if and only if k is a multiple of p-1.
Go through g^k (mod p) for k = 1, 2, ..., ignoring the values that are larger than N.
For every prime p, there are φ(p-1) primitive roots of unity, so it works. However, it may take a while to find one. Finding a suitable prime is much easier in general.
For finding a primitive root, I know nothing substantially better than trial and error, but one can increase the probability of a fast find by choosing the prime p appropriately.
Since the number of primitive roots is φ(p-1), if one randomly chooses r in the range from 1 to p-1, the expected number of tries until one finds a primitive root is (p-1)/φ(p-1), hence one should choose p so that φ(p-1) is relatively large, which means that p-1 must have few distinct prime divisors (and preferably only large ones, except for the factor 2).
Instead of randomly choosing, one can also try in sequence whether 2, 3, 5, 6, 7, 10, ... is a primitive root, of course skipping perfect powers (or not, they are in general quickly eliminated), that should not affect the number of tries needed greatly.
So it boils down to checking whether a number x is a primitive root modulo p. If p-1 = q^a * r^b * s^c * ... with distinct primes q, r, s, ..., x is a primitive root if and only if
x^((p-1)/q) % p != 1
x^((p-1)/r) % p != 1
x^((p-1)/s) % p != 1
...
thus one needs a decent modular exponentiation (exponentiation by repeated squaring lends itself well for that, reducing by the modulus on each step). And a good method to find the prime factor decomposition of p-1. Note, however, that even naive trial division would be only O(√p), while the generation of the permutation is Θ(p), so it's not paramount that the factorisation is optimal.
Another way to do this is with a block cipher; see this blog post for details.
The blog post links to the paper Ciphers with Arbitrary Finite Domains, which contains a bunch of solutions.
Consider the prime 3. To fully express all possible outputs, think of it this way...
bias + step mod prime
The bias is just an offset. step is an accumulator (if it's 1, for example, it would just be 0, 1, 2 in sequence, while 2 would result in 0, 2, 4) and prime is the prime number we want to generate the permutations against.
For example. A simple sequence of 0, 1, 2 would be...
0 + 0 mod 3 = 0
0 + 1 mod 3 = 1
0 + 2 mod 3 = 2
Modifying a couple of those variables for a second, we'll take bias of 1 and step of 2 (just for illustration)...
1 + 2 mod 3 = 0
1 + 4 mod 3 = 2
1 + 6 mod 3 = 1
You'll note that we produced an entirely different sequence. No number within the set repeats itself and all numbers are represented (it's bijective). Each unique combination of bias and step will result in one of the prime! possible permutations of the set. In the case of a prime of 3 you'll see that there are 6 different possible permutations:
0,1,2
0,2,1
1,0,2
1,2,0
2,0,1
2,1,0
If you do the math on the variables above you'll note that it results in the same information requirements...
1/3! = 1/6 = 0.1666..
... vs ...
1/3 (bias) * 1/2 (step) => 1/6 = 0.1666..
Restrictions are simple: bias must be within 0..P-1 and step must be within 1..P-1 (I have functionally just been using 0..P-2 and adding 1 during the arithmetic in my own work). Other than that, it works with all prime numbers no matter how large and will permute all possible unique sets of them without the need for memory beyond a couple of integers (each technically requiring slightly less bits than the prime itself).
Note carefully that this generator is not meant to be used to generate sets that are not prime in number. It's entirely possible to do so, but not recommended for security sensitive purposes as it would introduce a timing attack.
That said, if you would like to use this method to generate a set sequence that is not a prime, you have two choices.
First (and the simplest/cheapest), pick the prime number just larger than the set size you're looking for and have your generator simply discard anything that doesn't belong. Once more, danger, this is a very bad idea if this is a security sensitive application.
Second (by far the most complicated and costly), you can recognize that all numbers are composed of prime numbers and create multiple generators that then produce a product for each element in the set. In other words, an n of 6 would involve all possible prime generators that could match 6 (in this case, 2 and 3), multiplied in sequence. This is both expensive (although mathematically more elegant) as well as also introducing a timing attack so it's even less recommended.
Lastly, if you need a generator for bias and/or step... why not use another of the same family :). Suddenly you're extremely close to creating true simple random samples (which is not usually easy).
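Here's a minimal sketch of that generator in Python (my own code; parameter names are mine). With a prime p, any bias in 0..p-1 and step in 1..p-1 walks every residue exactly once; values >= n are discarded so the same trick covers a non-prime set size n < p:

def prime_permutation(p, bias, step, n=None):
    # Yields (bias + k*step) % p for k = 1..p, skipping anything >= n.
    if n is None:
        n = p
    acc = bias % p
    for _ in range(p):
        acc = (acc + step) % p
        if acc < n:
            yield acc

print(list(prime_permutation(3, bias=1, step=2)))        # [0, 2, 1], the example above
print(list(prime_permutation(7, bias=3, step=5, n=6)))   # a permutation of 0..5, using p = 7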
The fundamental weakness of LCGs (x=(x*m+c)%b style generators) is useful here.
If the generator is properly formed then x%f is also a repeating sequence of all values lower than f (provided f is a factor of b).
Since b is usually a power of 2 this means that you can take a 32-bit generator and reduce it to an n-bit generator by masking off the top bits and it will have the same full-range property.
This means that you can reduce the number of discard values to be fewer than N by choosing an appropriate mask.
Unfortunately, an LCG is a poor generator for exactly the same reason as given above.
Also, this has exactly the same weakness as I noted in a comment on #JerryCoffin's answer. It will always produce the same sequence and the only thing the seed controls is where to start in that sequence.
Here's some SageMath code that should generate a random permutation the way Daniel Fischer suggested:
def random_safe_prime(lbound):
    # Pick a random safe prime p = 2q + 1 with q prime (using SageMath's random_prime).
    while True:
        q = random_prime(lbound, lbound=lbound // 2)
        p = 2 * q + 1
        if is_prime(p):
            return p, q

def random_permutation(n):
    # Walk r^1, r^2, ... mod p for a primitive root r, skipping values outside 0..n-1.
    p, q = random_safe_prime(n + 2)
    while True:
        r = randint(2, p - 1)
        # r is a primitive root iff r^2 != 1 and r^q != 1 (mod p), since p - 1 = 2q.
        if pow(r, 2, p) != 1 and pow(r, q, p) != 1:
            i = 1
            while True:
                x = pow(r, i, p)
                if x == 1:
                    return          # full cycle completed
                if 0 <= x - 2 < n:
                    yield x - 2     # map 2..n+1 onto 0..n-1
                i += 1

Why modulo 65521 in Adler-32 checksum algorithm?

The Adler-32 checksum algorithm does sums modulo 65521. I know that 65521 is the largest prime number that fits in 16 bits, but why is it important to use a prime number in this algorithm?
(I'm sure the answer will seem obvious once someone tells me, but the number-theory parts of my brain just aren't working. Even without expertise in checksum algorithms, a smart person who reads http://en.wikipedia.org/wiki/Fletcher%27s_checksum can probably explain it to me.)
Why was mod prime used for Adler32?
From Adler's own website http://zlib.net/zlib_tech.html
However, Adler-32 has been constructed to minimize the ways to make small changes in the data that result in the same check value, through the use of sums significantly larger than the bytes and by using a prime (65521) for the modulus. It is in this area that some analysis is deserved, but it has not yet been done.
The main reason for Adler-32 is, of course, speed in software implementations.
An alternative to Adler-32 is Fletcher-32, which replaces the modulo of 65521 with 65535. This paper shows that Fletcher-32 is superior for channels with low-rate random bit errors.
It was used because primes tend to have better mixing properties. Exactly how good it is remains to be discussed.
Other Explanations
Someone else in this thread makes a somewhat convincing argument that using a prime modulus is better for detecting bit-swapping. However, this is most likely not the case because bit-swapping is extremely rare. The two most prevalent errors are:
Random bit-flips (1 <-> 0) common anywhere.
Bit shifting (1 2 3 4 5 -> 2 3 4 5 or 1 1 2 3 4 5) common in networking
Most of the bit-swapping out there is caused by random bit-flips that happened to look like a bit swap.
Error correction codes are, in fact, designed to withstand n bits of deviation. From Adler's website:
A properly constructed CRC-n has the nice property that less than n bits in error is always detectable. This is not always true for Adler-32--it can detect all one- or two-byte errors but can miss some three-byte errors.
Effectiveness of using a prime modulus
I did a long writeup on essentially the same question. Why modulo a prime number?
http://www.codexon.com/posts/hash-functions-the-modulo-prime-myth
The short answer
We know much less about prime numbers than composite ones. Therefore people like Knuth started using them.
While it might be true that primes have less relationship to much of the data we hash, increasing the table/modulo size also decreases the probability of a collision (sometimes more than any benefit gained from rounding down to the nearest prime).
Here is a graph of collisions per bucket with 10 million cryptographically random integers comparing mod 65521 vs 65535.
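For reference, here is a minimal, unoptimized Adler-32 in Python (my own sketch of the standard definition: A starts at 1, B accumulates the running A, both reduced mod 65521); the last line checks it against the standard library's zlib.adler32:

import zlib

MOD_ADLER = 65521   # largest prime below 2**16

def adler32(data: bytes) -> int:
    # A = 1 + sum of bytes, B = sum of the successive values of A, both mod 65521.
    a, b = 1, 0
    for byte in data:
        a = (a + byte) % MOD_ADLER
        b = (b + a) % MOD_ADLER
    return (b << 16) | a

assert adler32(b"Wikipedia") == zlib.adler32(b"Wikipedia") == 0x11E60398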
The Adler-32 algorithm is to compute
A = 1 + b1 + b2 + b3 + ...
and
B = (1 + b1) + (1 + b1 + b2) + (1 + b1 + b2 + b3) + ... = n + n * b1 + (n-1) * b2 + (n-2) * b3 + ... + bn (where n is the number of bytes, so each byte is weighted by a distinct, position-dependent coefficient)
and report them modulo m. When m is prime, the numbers modulo m form what mathematicians call a field. Fields have the handy property that for any nonzero c, we have a = b if and only if c * a = c * b. Compare the times table modulo 6, which is not a prime, with the times table modulo 5, which is:
* 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 1 2 3 4 5
2 0 2 4 0 2 4
3 0 3 0 3 0 3
4 0 4 2 0 4 2
5 0 5 4 3 2 1
* 0 1 2 3 4
0 0 0 0 0 0
1 0 1 2 3 4
2 0 2 4 1 3
3 0 3 1 4 2
4 0 4 3 2 1
Now, the A part gets fooled whenever we interchange two bytes -- addition is commutative after all. The B part is supposed to detect this kind of error, but when m is not a prime, more locations are vulnerable. Consider a simplified Adler-style checksum mod 6 (dropping the leading 1s, so A = b1 + b2 + ... and B = 1*b1 + 2*b2 + ...) of
1 3 2 0 0 4
We have A = 10 mod 6 = 4 and B = 37 mod 6 = 1. Now consider swapping b2 and b4:
1 0 2 3 0 4
A and B are unchanged because 2 * 3 = 4 * 0 = 2 * 0 = 4 * 3 (modulo 6). One can also swap b3 and b6 (values 2 and 4) to the same effect. This is more likely when the times table is unbalanced -- modulo 5, these changes are detected. In fact, the only time a prime modulus fails to detect a single swap is when two indexes that are equal mod m are swapped (and if m is big, they must be far apart!).^ This logic can also be applied to interchanged substrings.
The disadvantage in using a smaller modulus is that it will fail slightly more often on random data; in the real world, however, corruption is rarely random.
^ Proof: suppose that we swap indexes i and j with values a and b. Then ai + bj = aj + bi, so ai - aj + bj - bi = 0 and (a - b)*(i - j) = 0. Since a field is an integral domain, it follows that a = b (values are congruent) or i = j (indexes are congruent).
EDIT: the website that Unknown linked to (http://www.zlib.net/zlib_tech.html) makes it clear that the design of Adler-32 was not at all principled. Because of the Huffman code in a DEFLATE stream, even small errors are likely to change the framing (because it's data-dependent) and cause large errors in the output. Consider this answer a slightly contrived example for why people ascribe certain properties to primes.
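To make the swap example concrete, here is a small sketch (my own code) using the simplified position-weighted sums from the example above, counting how many single swaps of unequal bytes go undetected modulo 6 versus modulo 5:

from itertools import combinations

def simple_check(data, m):
    # Simplified Adler-style pair: A = sum of bytes, B = position-weighted sum, both mod m.
    a = sum(data) % m
    b = sum(i * v for i, v in enumerate(data, start=1)) % m
    return a, b

def undetected_swaps(data, m):
    # Count swaps of two unequal bytes that leave both A and B unchanged.
    base, count = simple_check(data, m), 0
    for i, j in combinations(range(len(data)), 2):
        if data[i] == data[j]:
            continue
        swapped = list(data)
        swapped[i], swapped[j] = swapped[j], swapped[i]
        if simple_check(swapped, m) == base:
            count += 1
    return count

data = [1, 3, 2, 0, 0, 4]
print(simple_check(data, 6))       # (4, 1), as in the example
print(undetected_swaps(data, 6))   # 2: the b2/b4 swap and the b3/b6 swap slip through mod 6
print(undetected_swaps(data, 5))   # 1: only swapping positions 1 and 6 (equal mod 5) escapes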
Long story short:
The modulo of a prime has the best bit-shuffling properties, and that's exactly what we want for a hash value.
For perfectly random data, the more buckets the better.
Let's say the data is non-random in some way. Now, the only way that the non-randomness could affect the algorithm is by creating a situation where some buckets have a higher probability of being used than others.
If the modulo number is non-prime, then any pattern affecting one of the numbers making up the modulo could affect the hash. So if you're using 15, a pattern every 3 or 5 as well as every 15 could cause collisions, while if you're using 13 the pattern would have to be every 13 to cause collisions.
65535 = 3*5*17*257, so a pattern involving 3 or 5 could cause collisions using this modulo-- if multiples of 3 were much more common for some reason, for instance, then only the buckets which were multiples of 3 would be put to good use.
Now I'm not sure whether, realistically, this is likely to be an issue. It would be good to determine the collision rate empirically with actual data of the type one wants to hash, not random numbers. (For instance, would numerical data involving Benford's Law (http://en.wikipedia.org/wiki/Benford's_law) or some such irregularity cause patterns that would affect this algorithm? How about using ASCII codes for realistic text?)
Checksums are generally used with the intention of detecting that two things are different, especially in cases where both things are not available at the same time and place. They might be available at different places (e.g. a packet of information as sent, versus a packet of information as received), or different times (e.g. a block of information when it was stored, versus a block of information when it was read back). In some cases, it may be desirable to check whether two things that are stored independently in two different places are likely to match, without having to send the actual data from one device to the other (e.g. comparing loaded code images or configurations).
If the only reasons that the things being compared wouldn't match would be random corruption of one of them, then the use of a prime modulus for an Adler-32 checksum is probably not particularly helpful. If, however, it's possible that one of the things might have had some 'deliberate' changes made to it, use of a non-prime modulus may cause certain changes to go unnoticed. For example, the effects of changing a byte from 00 to FF, and changing another byte that's some multiple of 257 bytes earlier or later from FF to 00, would cancel out when using a Fletcher's checksum, but not when using Adler-32 checksum. It's not particularly likely that such a scenario would occur from random corruption, but such offsetting changes could occur when changing a program. It wouldn't be especially likely that they'd occur an exact multiple of 257 bytes apart, but it's a risk which can be avoided by using a prime modulus (provided, at least, that the number of bytes in the file is smaller than the modulus)
The answer lies in the field theory.
The set Z/nZ with the operations plus and times is a field when n is a prime (i.e. addition and multiplication modulo n).
In other words, the following equation:
m * x = 0 (in Z/nZ)
has only one solution for any nonzero m (namely x = 0).
Consider this example:
2 * x = 0 (mod 10)
This equation has two solutions, x = 0 AND x = 5. That is because 10 is not a prime and can be written as 2 * 5.
This property is responsible for better distribution of the hash values.
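A quick sketch (my own code) of that difference: count the solutions of m * x ≡ 0 for each m under a composite and a prime modulus:

def zero_product_solutions(n):
    # For each m in 1..n-1, how many x in 0..n-1 satisfy m*x ≡ 0 (mod n)?
    return {m: sum(1 for x in range(n) if (m * x) % n == 0) for m in range(1, n)}

print(zero_product_solutions(10))   # composite: m=2 has 2 solutions, m=5 has 5, ...
print(zero_product_solutions(7))    # prime: every m has exactly one solution, x = 0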
