Performance of two alternative functions that compute least divisor

In the book "The Haskell Road to Logic, Maths and Programming", the authors present two alternative ways of finding the least divisor k of a number n with k > 1, claiming that the second version is much faster than the first one. I have problems understanding why (I am a beginner).
Here is the first version (page 10):
ld :: Integer -> Integer  -- finds the smallest divisor of n which is > 1
ld n = ldf 2 n
ldf :: Integer -> Integer -> Integer
ldf k n | n `rem` k == 0 = k
        | k ^ 2 > n      = n
        | otherwise      = ldf (k + 1) n
If I understand this correctly, the ld function basically ends up iterating through all integers in the interval [2..sqrt(n)] and stops as soon as one of them divides n, returning it as the result.
The second version, which the authors claim to be much faster, goes like this (page 23):
ldp :: Integer -> Integer  -- finds the smallest divisor of n which is > 1
ldp n = ldpf allPrimes n
ldpf :: [Integer] -> Integer -> Integer
ldpf (p:ps) n | rem n p == 0 = p
              | p ^ 2 > n    = n
              | otherwise    = ldpf ps n
allPrimes :: [Integer]
allPrimes = 2 : filter isPrime [3..]
isPrime :: Integer -> Bool
isPrime n | n < 1     = error "Not a positive integer"
          | n == 1    = False
          | otherwise = ldp n == n
The authors claim that this version is faster because it iterates only through the list of primes within the interval 2..sqrt(n), instead of iterating through all numbers in that range.
However, this argument doesn't convince me: the recursive function ldpf consumes the numbers from the list of primes allPrimes one by one, and that list is generated by filtering the list of all integers.
So unless I am missing something, this second version ends up iterating through all numbers within the interval 2..sqrt(n) too, but for each number it first checks whether it is prime (a relatively expensive operation), and if so, it checks whether it divides n (a relatively cheap one).
I would say that just checking whether k divides n for each k should be faster. Where is the flaw in my reasoning?

The main advantage of the second solution is that you compute the list of primes allPrimes only once. Thanks to lazy evaluation, each call computes just the primes it needs, or reuses the ones that have already been computed. So the expensive part is computed only once and then just reused.
For computing the least divisor of just a single number, the first version is indeed more efficient. But try running ldp and ld for all numbers between let's say 1 and 100000 and you'll see the difference.
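To see the compute-once-and-reuse effect outside of Haskell, here is a rough Python analogue (a sketch of mine; the caching strategy and names are not from the book): the prime list lives in a module-level cache that is only extended when a query needs more primes, much like the lazily evaluated allPrimes is only forced as far as needed and is then shared by every later call.
from math import isqrt
_prime_cache = [2]                      # shared by every call, grows on demand
def _extend_primes_past(limit):
    """Append primes until the largest cached prime exceeds limit."""
    candidate = _prime_cache[-1] + 1
    while _prime_cache[-1] <= limit:
        if all(candidate % p for p in _prime_cache if p * p <= candidate):
            _prime_cache.append(candidate)
        candidate += 1
def least_divisor(n):
    """Smallest divisor of n that is > 1, trial-dividing by cached primes only."""
    _extend_primes_past(isqrt(n))
    for p in _prime_cache:
        if p * p > n:
            return n                    # no prime divisor up to sqrt(n): n is prime
        if n % p == 0:
            return p
    return n
# the first call pays for extending the cache; later calls with similar n reuse it
print([least_divisor(n) for n in (91, 97, 8051, 10007)])   # [7, 97, 83, 10007]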

Haskell is unknown to me, so without proper measurements of both versions I can only assume that the claims are right. In that case the reasons can be:
1. The primes are precomputed in some array; then isPrime is not time expensive.
2. The primes are computed on the run (with memoization). Starting from scratch, the first version will be faster, but continued use will make the second version faster (especially for bigger n).
3. The primes are computed on the run (without memoization). As DeadMG mentioned, compiler optimizations plus CPU caching can sometimes feel like a memoization effect up to a point; in that case version 2 will be faster overall, but if you keep mixing bigger and bigger n with small ones all the time, then after reaching the cache-invalidation point it will slow down even more than version 1.
4. This is just a guess: is it possible for your compiler to convert recursion of this kind (a single integer iterating through a list or range) into a loop? (Most recursions of this type can be converted to a loop anyway.) That could explain everything: no recursive-call overhead, no heap thrashing.
[Note]
As I wrote above, I am not a Haskell user, so treat this answer accordingly.

As I understand this, the division isn't as expensive as you might think when the divisor is 2: it amounts to inspecting the lowest bit, which is about as simple as an operation can be. That is how half of the candidates get filtered out of allPrimes almost for free, while the first algorithm performs a relatively expensive full division of the whole number by each candidate. Say the candidate divisor is 1956: it is thrown out of allPrimes by the very first test at almost no cost (its lowest bit shows it is divisible by 2), whereas dividing, say, 2^4253 - 1 by 1956 is pointless to begin with, since that number is not even divisible by 2. For really big numbers divisions take a lot of time, and at least half of them (or, say, 5/6, counting the divisors 2 and 3) are useless. Also, allPrimes is a cached list, so checking whether the next integer is a prime to be included in allPrimes uses only the already verified primes, which means the primality test isn't extremely expensive even when the candidate really is prime. Combined, this gives the second method its advantage.

Related

Is this shortcut for modulo by a constant (C) valid? IF (A mod 2^n) > C: { A -= C }

Looking to do a modulo operation, A mod K, where...
K is a uint32_t constant, is not a power of two, and I will be using it over and over again.
A is a uint32_t variable, possibly as much as ~2^13 times larger than K.
The ISA does not have single cycle modulo or division instructions. (8-bit micro)
The naive approach seems to coincide with the naive approach to division: repeated subtraction until underflow, then keep the remainder. This would obviously have fairly bad worst-case performance, but would work for any A and K.
A known fast approach, which works well when K is a power of two, is to logically AND with that power of two minus 1.
From Wikipedia...
A % 2^n == A & (2^n - 1)
My knee jerk reaction is to use these two things together, and I'm wondering if that is valid?
Specifically, I figure I can use the power of two mod trick to narrow the worst case for the above subtraction method. In other words, quickly mod to the nearest power of two above my constant, then subtract my constant if necessary. Here's the code that is in the actual question, fully expanded.
A = A AND (2^n - 1)   # MOD A to the next higher power of two
if A > K:             # See if we are still larger than our constant
    A -= K            # If so, subtract. We now must be lower.
##################
# A = A MOD K ???
##################
On inspection, this should always work, and should always be fast, since the next power of two greater than K should always be such that 2K will be larger. That is, K < 2^n < 2K meaning I should only ever need one extra test, and then possibly one subtraction.
...but this seems too simple. If it worked, I'd expect to have seen it before. But I can't find an example. I can't find a counter example either though. I have checked the usual places. What am I missing?
You can't combine the two approaches. First, understand why the equation below holds true.
A % p == A & (p - 1), where p = 2^n
p has exactly one set bit in its binary representation; say its position is x.
Every bit at a position greater than or equal to x contributes a multiple of p, so those bits cannot affect the remainder; performing AND with p - 1 keeps only the bits below position x, which is exactly the same as taking the mod.
But that isn't the case when p is not a power of 2.
If that didn't make sense, take an example:
A = 18 = 10010,
K = 6 = 110,
A % K = 0
According to your approach, you would AND A with 7 (= 2^3 - 1), getting 2; since 2 is not greater than K, nothing is subtracted, so the result is 2, which is not the correct value of A mod K (which is 0).
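A quick brute-force check (my own sketch, reusing the question's variable names) confirms that (18, 6) is far from the only counterexample:
def fast_mod_attempt(a, k):
    """The proposed shortcut: mask with the next power of two above k,
    then subtract k once if the result is still larger than k."""
    n = 1
    while (1 << n) < k:
        n += 1
    a &= (1 << n) - 1
    if a > k:
        a -= k
    return a
print(fast_mod_attempt(18, 6), 18 % 6)       # 2 versus the correct 0
mismatches = [(a, k)
              for k in range(3, 50) if k & (k - 1)      # skip powers of two
              for a in range(200)
              if fast_mod_attempt(a, k) != a % k]
print(len(mismatches), mismatches[:3])       # plenty of counterexamples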

Efficiently generate primes in Python and calculate complexity

Generating prime numbers from 1 to n in Python 3. How can I improve efficiency and what is the complexity?
Input: A number, max (a large number)
Output: All the primes from 1 to max
Output is in the form of a list and will be [2,3,5,7,11,13,.......]
The code attempts to perform this task in an efficient way (least time complexity).
from math import sqrt
max = (10**6)*3
print("\nThis code prints all primes till: " , max , "\n")
list_primes=[2]
def am_i_prime(num):
    """
    Input/Parameter the function takes: An integer number
    Output: returns True, if the number is prime and False if not
    """
    decision=True
    i=0
    while(list_primes[i] <= sqrt(num)): #Till sqrt(n) to save comparisons
        if(num%list_primes[i]==0):
            decision=False
            break
            #break is inserted so that we get out of comparisons faster
            #Eg. for 1568, we should break from the loop as soon as we know that 1568%2==0
        i+=1
    return decision
for i in range(3,max,2): #starts from 3 as our list contains 2 from the beginning
    if am_i_prime(i)==True:
        list_primes.append(i) #if a number is found to be prime, we append it to our list of primes
print(list_primes)
How can I make this faster? Where can I improve?
What is the time complexity of this code? Which steps are inefficient?
In what ways is the Sieve of Eratosthenes more efficient than this?
Working through the first few iterations:
We have a list_primes which contains prime numbers. It initially contains only 2.
We go to the next number, 3. Is 3 divisible by any of the numbers in list_primes? No! We append 3 to list_primes. Right now, list_primes=[2,3]
We go to the next number 4. Is 4 divisible by any of the numbers in list_primes? Yes (4 is divisible by 2). So, we don't do anything. Right now list_primes=[2,3]
We go to the next number, 5. Is 5 divisible by any of the numbers in list_primes? No! We append 5 to list_primes. Right now, list_primes=[2,3,5]
We go to the next number, 6. Is 6 divisible by any of the numbers in list_primes? Yes (6 is divisible by 2 and also divisible by 3). So, we don't do anything. Right now list_primes=[2,3,5]
And so on...
Interestingly, it takes a rather deep mathematical theorem to prove that your algorithm is correct at all. The theorem is: "For every n ≥ 2, there is a prime number between n and n^2". I know it has been proven, and much stricter bounds are proven, but I must admit I wouldn't know how to prove it myself. And if this theorem is not correct, then the loop in am_i_prime can go past the bounds of the array.
The number of primes ≤ k is O (k / log k) - this is again a very deep mathematical theorem. Again, beyond me to prove.
But anyway, there are about n / log n primes up to n, and for these primes the loop will iterate through all primes up to n^(1/2), and there are O (n^(1/2) / log n) of them.
For the primes alone, the runtime is therefore O(n^1.5 / log^2 n), so that is a lower bound. With some effort it should be possible to prove that for all numbers, the runtime is asymptotically the same.
O (n^1.5 / log n) is obviously an upper bound, but experimentally the number of divisions to find all primes ≤ n seems to be ≤ 2 n^1.5 / log^2 n, where log is the natural logarithm.
The following rearrangement and optimization of your code will reach your maximum in nearly 1/2 the time of your original code. It combines your top level loop and predicate function into a single function to eliminate overhead and manages squares (square roots) more efficiently:
def get_primes(maximum):
    primes = []
    if maximum > 1:
        primes.append(2)
        squares = [4]
        for number in range(3, maximum, 2):
            i = 0
            while squares[i] <= number:
                if number % primes[i] == 0:
                    break
                i += 1
            else:  # no break
                primes.append(number)
                squares.append(number * number)
    return primes
maximum = 10 ** 6 * 3
print(get_primes(maximum))
However, a sieve-based algorithm will easily beat this (as it avoids division and/or multiplication). Your code has a bug: setting max = 1 will create the list [2] instead of the correct answer of an empty list. Always test both ends of your limits.
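For reference, here is a minimal sieve sketch (my code, not the answerer's): it crosses off multiples instead of dividing, which is what makes it win asymptotically (roughly O(n log log n) operations).
def sieve_primes(limit):
    """Return all primes below limit using the Sieve of Eratosthenes."""
    if limit < 3:
        return []
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross off p*p, p*p + p, ...; smaller multiples were already crossed off
            is_prime[p * p::p] = bytes(len(range(p * p, limit, p)))
    return [i for i, flag in enumerate(is_prime) if flag]
print(len(sieve_primes(10 ** 6 * 3)))   # number of primes below 3,000,000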
O(N**2)
Approximately speaking, the first call to am_i_prime does 1 comparison, the second does 2, ..., so the total count is 1 + 2 + ... + N, which is N * (N + 1) / 2, which has order N-squared.

Determining whether a system of congruences has a solution

Having a system of linear congruences, I'd like to determine if it has a solution. Using simple algorithms that solve such systems is impossible, as the answer may grow exponentially.
One hypothesis I have is that if a system of congruences has no solution, then there are two of them that contradict each other. I have no idea if this holds; if it did, that would lead to an easy O(n^2 log n) algorithm, since checking whether a pair of congruences has a solution requires O(log n) time. Nevertheless, for this problem I'd rather see something closer to O(n).
We may assume that no modulus exceeds 10^6; in particular, we can quickly factor them all to begin with. We may even further assume that the sum of all moduli doesn't exceed 10^6 (but still, their product can be huge).
As you suspect, there's a fairly simple way to determine whether the set of congruences has a solution without actually needing to build that solution. You need to:
Reduce each congruence to the form x = a (mod n) if necessary; from the comments, it sounds as though you already have this.
Factorize each modulus n as a product of prime powers: n = p1^e1 * p2^e2 * ... * pk^ek.
Replace each congruence x = a (mod n) with a collection of congruences x = a (mod pi^ei), one for each of the k prime powers you found in step 2.
And now, by the Chinese Remainder Theorem it's enough to check compatibility for each prime independently: given any two congruences x = a (mod p^e) and x = b (mod p^f), they're compatible if and only if a = b (mod p^min(e, f)). Having determined compatibility, you can throw out the congruence with the smaller modulus without losing any information.
With the right data structures, you can do all this in a single pass through your congruences: for each prime p encountered, you'll need to keep track of the biggest exponent e found so far, together with the corresponding right-hand side (reduced modulo p^e for convenience). The running time will likely be dominated by the modulus factorizations, though if no modulus exceeds 10^6, then you can make that step very fast, too, by prebuilding a mapping from each integer in the range 1 .. 10^6 to its smallest prime factor.
EDIT: And since this is supposed to be a programming site, here's some (Python 3) code to illustrate the above. (For Python 2, replace the range call with xrange for better efficiency.)
def prime_power_factorisation(n):
    """Brain-dead factorisation routine, for illustration purposes only."""
    # DO NOT USE FOR LARGE n!
    while n > 1:
        p, pe = next(d for d in range(2, n+1) if n % d == 0), 1
        while n % p == 0:
            n, pe = n // p, pe*p
        yield p, pe
def compatible(old_ppc, new_ppc):
    """Determine whether two prime power congruences (with the same
    prime) are compatible."""
    m, a = old_ppc
    n, b = new_ppc
    return (a - b) % min(m, n) == 0
def are_congruences_solvable(moduli, right_hand_sides):
    """Determine whether the given congruences have a common solution."""
    # prime_power_congruences is a dictionary mapping each prime encountered
    # so far to a pair (prime power modulus, right-hand side).
    prime_power_congruences = {}
    for m, a in zip(moduli, right_hand_sides):
        for p, pe in prime_power_factorisation(m):
            # new prime-power congruence: modulus, rhs
            new_ppc = pe, a % pe
            if p in prime_power_congruences:
                old_ppc = prime_power_congruences[p]
                if not compatible(new_ppc, old_ppc):
                    return False
                # Keep the one with bigger exponent.
                prime_power_congruences[p] = max(new_ppc, old_ppc)
            else:
                prime_power_congruences[p] = new_ppc
    # If we got this far, there are no incompatibilities, and
    # the congruences have a mutual solution.
    return True
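For example (my own test values, not from the answer), x = 1 (mod 4) together with x = 3 (mod 6) is solvable (x = 9 works), while x = 1 (mod 6) together with x = 2 (mod 4) clashes modulo 2:
print(are_congruences_solvable([4, 6], [1, 3]))   # True  (e.g. x = 9)
print(are_congruences_solvable([6, 4], [1, 2]))   # False (x would need to be both odd and even)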
One final note: in the above, we made use of the fact that the moduli were small, so that computing prime power factorisations wasn't a big deal. But if you do need to do this for much larger moduli (hundreds or thousands of digits), it's still feasible. You can skip the factorisation step, and instead find a "coprime base" for the collection of moduli: that is, a collection of pairwise relatively prime positive integers such that each modulus appearing in your congruences can be expressed as a product (possibly with repetitions) of elements of that collection. Now proceed as above, but with reference to that coprime base instead of the set of primes and prime powers. See this article by Daniel Bernstein for an efficient way to compute a coprime base for a set of positive integers. You'd likely end up making two passes through your list: one to compute the coprime base, and a second to check the consistency.
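For completeness, the smallest-prime-factor table mentioned above could be prebuilt along these lines (a sketch of mine, not part of the answer); with it, factorising any modulus m up to 10^6 takes only O(log m) steps:
def build_spf_table(limit):
    """spf[k] = smallest prime factor of k, for every k in 2..limit."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                            # p is prime
            for multiple in range(p * p, limit + 1, p):
                if spf[multiple] == multiple:      # not yet claimed by a smaller prime
                    spf[multiple] = p
    return spf
def prime_power_factorisation_fast(n, spf):
    """Same output as prime_power_factorisation above, via the precomputed table."""
    while n > 1:
        p, pe = spf[n], 1
        while n % p == 0:
            n, pe = n // p, pe * p
        yield p, pe
spf = build_spf_table(10 ** 6)
print(list(prime_power_factorisation_fast(360, spf)))   # [(2, 8), (3, 9), (5, 5)]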

Analysing the runtime efficiency of a Haskell function

I have the following solution in Haskell to Problem 3:
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
  where n = 600851475143
However, it takes more than the minute limit given by Project Euler. So how do I analyze the time complexity of my code to determine where I need to make changes?
Note: Please do not post alternative algorithms. I want to figure those out on my own. For now I just want to analyse the code I have and look for ways to improve it. Thanks!
Two things:
Any time you see a list comprehension (as you have in divisors), or equivalently, some series of map and/or filter functions over a list (as you have in main), treat its complexity as Θ(n) just the same as you would treat a for-loop in an imperative language.
This is probably not quite the sort of advice you were expecting, but I hope it will be more helpful: Part of the purpose of Project Euler is to encourage you to think about the definitions of various mathematical concepts, and about the many different algorithms that might correctly satisfy those definitions.
Okay, that second suggestion was a bit too nebulous... What I mean is, for example, the way you've implemented isPrime is really a textbook definition:
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
-- p is prime if its only divisors are 1 and p.
Likewise, your implementation of divisors is straightforward:
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
-- the divisors of n are the numbers between 1 and n that divide evenly into n.
These definitions both read very nicely! Algorithmically, on the other hand, they are too naïve. Let's take a simple example: what are the divisors of the number 10? [1, 2, 5, 10]. On inspection, you probably notice a couple things:
1 and 10 are pairs, and 2 and 5 are pairs.
Aside from 10 itself, there can't be any divisors of 10 that are greater than 5.
You can probably exploit properties like these to optimize your algorithm, right? So, without looking at your code -- just using pencil and paper -- try sketching out a faster algorithm for divisors. If you've understood my hint, divisors n should run in sqrt n time. You'll find more opportunities along these lines as you continue. You might decide to redefine everything differently, in a way that doesn't use your divisors function at all...
Hope this helps give you the right mindset for tackling these problems!
Let's start from the top.
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
For now, let's assume that certain things are cheap: incrementing numbers is O(1), doing mod operations is O(1), and comparisons with 0 are O(1). (These are false assumptions, but what the heck.) The divisors function loops over all numbers from 1 to n, and does an O(1) operation on each number, so computing the complete output is O(n). Notice that here when we say O(n), n is the input number, not the size of the input! Since it takes m=log(n) bits to store n, this function takes O(2^m) time in the size of the input to produce a complete answer. I'll use n and m consistently to mean the input number and input size below.
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
In the worst case, p is prime, which forces divisors to produce its whole output. Comparison to a list of statically-known length is O(1), so this is dominated by the call to divisors. O(n), O(2^m)
Your main function does a bunch of things at once, so let's break down subexpressions a bit.
filter ((==0) . (n `mod`))
This loops over a list, and does an O(1) operation on each element. This is O(m), where here m is the length of the input list.
filter isPrime
Loops over a list, doing O(n) work on each element, where here n is the largest number in the list. If the list happens to be n elements long (as it is in your case), this means this is O(n*n) work, or O(2^m*2^m) = O(4^m) work (as above, this analysis is for the case where it produces its entire list).
print . head
Tiny bits of work. Let's call it O(m) for the printing part.
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
Considering all the subexpressions above, the filter isPrime bit is clearly the dominating factor. O(4^m), O(n^2)
Now, there's one final subtlety to consider: throughout the analysis above, I've consistently made the assumption that each function/subexpression was forced to produce its entire output. As we can see in main, this probably isn't true: we call head, which only forces a little bit of the list. However, if the input number itself isn't prime, we know for sure that we must look through at least half the list: there will certainly be no divisors between n/2 and n. So, at best, we cut our work in half -- which has no effect on the asymptotic cost.
Daniel Wagner's answer explains the general strategy of deriving bounds for the runtime complexity rather well. However, as is usually the case for general strategies, it yields too conservative bounds.
So, just for the heck of it, let's investigate this example in some more detail.
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
  where n = 600851475143
(Aside: if n were prime, this would cause a runtime error when checking n `mod` 0 == 0, thus I change the list to [n, n-1 .. 2] so that the algorithm works for all n > 1.)
Let's split up the expression into its parts, so we can see and analyse each part more easily
main = print answer
  where
    n             = 600851475143
    candidates    = [n, n-1 .. 2]
    divisorsOfN   = filter ((== 0) . (n `mod`)) candidates
    primeDivisors = filter isPrime divisorsOfN
    answer        = head primeDivisors
Like Daniel, I work with the assumption that arithmetic operations, comparisons etc. are O(1) - although not true, that's a good enough approximation for all remotely reasonable inputs.
So, of the list candidates, the elements from n down to answer have to be generated, n - answer + 1 elements, for a total cost of O(n - answer + 1). For composite n, we have answer <= n/2, so that's Θ(n).
Generating the list of divisors as far as needed is then Θ(n - answer + 1) too.
For the number d(n) of divisors of n, we can use the coarse estimate d(n) <= 2√n.
All divisors >= answer of n have to be checked for primality, that's at least half of all divisors.
Since the list of divisors is lazily generated, the complexity of
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
is O(smallest prime factor of p), because as soon as the first divisor > 1 is found, the equality test is determined. For composite p, the smallest prime factor is <= √p.
We have < 2√n primality checks of complexity at worst O(√n), and one check of complexity Θ(answer), so the combined work of all prime tests carried out is O(n).
Summing up, the total work needed is O(n), since the cost of each step is O(n) at worst.
In fact, the total work done in this algorithm is Θ(n). If n is prime, generating the list of divisors as far as needed is done in O(1), but the prime test is Θ(n). If n is composite, answer <= n/2, and generating the list of divisors as far as needed is Θ(n).
If we don't consider the arithmetic operations to be O(1), we have to multiply with the complexity of an arithmetic operation on numbers the size of n, that is O(log n) bits, which, depending on the algorithms used, usually gives a factor slightly above log n and below (log n)^2.

Finding even numbers in an array without using feedback

I saw this post: Finding even numbers in an array and I was thinking about how you could do it without feedback. Here's what I mean.
Given an array of length n containing at most e even numbers and a function isEven that returns true if the input is even and false otherwise, write a function that prints all the even numbers in the array using the fewest number of calls to isEven.
The answer on the post was to use a binary search, which is neat since it doesn't require the array to be in order. The number of times you have to check if a number is even is e log n instead of n, because you do a binary search (log n) to find one even number each time (e times).
But that idea means that you divide the array in half, test for evenness, then decide which half to keep based on the result.
My question is whether or not you can beat n calls on a fixed testing scheme where you check all the numbers you want for evenness without knowing the outcome, and then figure out where the even numbers are after you've done all the tests based on the results. So I guess it's no-feedback or blind or some term like that.
I was thinking about this for a while and couldn't come up with anything. The binary search idea doesn't work at all with this constraint, but maybe something else does? Even getting down to n/2 calls instead of n (yes, I know they are the same big-O) would be good.
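(For concreteness, here is a rough sketch of the adaptive scheme described above. It assumes isEven may be applied to the product of a sub-array, since a product is even exactly when at least one factor is even; the helper names are mine and not from the linked answer.)
from math import prod
def find_evens_adaptive(arr, is_even):
    """Repeated binary search: O(e log n) calls to is_even, all on products."""
    evens, remaining = [], list(arr)
    while remaining and is_even(prod(remaining)):   # is there any even number left?
        lo, hi = 0, len(remaining)                  # remaining[lo:hi] contains an even
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if is_even(prod(remaining[lo:mid])):    # narrow down to one position
                hi = mid
            else:
                lo = mid
        evens.append(remaining.pop(lo))
    return evens
print(find_evens_adaptive([3, 7, 4, 9, 12, 5], lambda x: x % 2 == 0))   # [4, 12]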
The technical term for "no-feedback or blind" is "non-adaptive". O(e log n) calls still suffice, but the algorithm is rather more involved.
Instead of testing the evenness of products, we're going to test the evenness of sums. Let E ≠ F be distinct subsets of {1, …, n}. If we have one array x1, …, xn with even numbers at positions E and another array y1, …, yn with even numbers at positions F, how many subsets J of {1, …, n} satisfy
(∑i in J xi) mod 2 ≠ (∑i in J yi) mod 2?
The answer is 2^(n-1). Let i be an index such that xi mod 2 ≠ yi mod 2. Let S be a subset of {1, …, i - 1, i + 1, …, n}. Either J = S is a solution or J = S union {i} is a solution, but not both.
For every possible outcome E, we need to make calls that eliminate every other possible outcome F. Suppose we make 2e log n calls at random. For each pair E ≠ F, the probability that we still cannot distinguish E from F is (2^(n-1)/2^n)^(2e log n) = n^(-2e), because there are 2^n possible calls and only 2^(n-1) of them fail to distinguish. There are at most n^e + 1 choices of E and thus at most (n^e + 1) n^e / 2 pairs. By a union bound, the probability that there exists some indistinguishable pair is at most n^(-2e) (n^e + 1) n^e / 2 < 1 (assuming we're looking at an interesting case where e ≥ 1 and n ≥ 2), so there exists a sequence of 2e log n calls that does the job.
Note that, while I've used randomness to show that a good sequence of calls exists, the resulting algorithm is deterministic (and, of course, non-adaptive, because we chose that sequence without knowledge of the outcomes).
You can use the Chinese Remainder Theorem to do this. I'm going to change your notation a bit.
Suppose you have N numbers of which at most E are even. Choose a sequence of distinct prime powers q1,q2,...,qk such that their product is at least N^E, i.e.
qi = pi^ei
where pi is prime and ei > 0 is an integer and
q1 * q2 * ... * qk >= N^E
Now make a bunch of 0-1 matrices. Let Mi be the qi x N matrix where the entry in row r and column c has a 1 if c = r mod qi and a 0 otherwise. For example, if qi = 3^2, then row 2 has ones in columns 2, 11, 20, ... 2 + 9j and 0 elsewhere.
Now stack these matrices vertically to get a Q x N matrix M, where Q = q1 + q2 + ... + qk. The rows of M tell you which numbers to multiply together (the nonzero positions). This gives a total of Q products that you need to test for evenness. Call each row a "trial", and say that a "trial involves j" if the jth column of that row is nonzero. The theorem you need is the following:
THEOREM: The number in position j is even if and only if all trials involving j are even.
So you do a total of Q trials and then look at the results. If you choose the prime powers intelligently, then Q should be significantly smaller than N. There are asymptotic results that show you can always get Q on the order of
(2E log N)^2 / 2log(2E log N)
This theorem is actually a corollary of the Chinese Remainder Theorem. The only place that I've seen this used is in Combinatorial Group Testing. Apparently the problem originally arose when testing soldiers coming back from WWII for syphilis.
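Here is a small sketch of that construction (the function names are mine; positions are 0-indexed). Decoding uses the theorem above: position j is declared even exactly when every trial involving j comes out even.
def build_test_matrix(prime_powers, n):
    """Stack the q x n 0-1 blocks described above: row r of the block for q
    selects the positions c with c % q == r."""
    return [[1 if c % q == r else 0 for c in range(n)]
            for q in prime_powers
            for r in range(q)]
def decode(rows, trial_is_even, n):
    """trial_is_even[t] is the evenness of the product selected by row t;
    position j is even iff all trials involving j are even."""
    return [j for j in range(n)
            if all(even for row, even in zip(rows, trial_is_even) if row[j])]
# Tiny demo: N = 12, E = 2, and 4 * 9 * 5 * 7 = 1260 >= 12^2, so Q = 25 trials.
# (Q > N here; the construction only pays off for much larger N, as noted above.)
arr = [5, 9, 3, 8, 7, 1, 13, 6, 11, 15, 9, 21]           # evens at positions 3 and 7
rows = build_test_matrix([4, 9, 5, 7], len(arr))
results = [any(row[j] and arr[j] % 2 == 0 for j in range(len(arr))) for row in rows]
print(decode(rows, results, len(arr)))                   # [3, 7]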
The problem you are facing is a form of group testing, a type of problem whose objective is to reduce the cost of identifying certain elements of a set (up to d elements of a set of N elements).
As you've already stated, there are two basic principles via which the testing may be carried out:
Non-adaptive Group Testing, where all the tests to be performed are decided a priori.
Adaptive Group Testing, where we perform several tests, basing each test on the outcome of previous tests. Obviously, adaptive testing has a potential to reduce the cost, compared to non-adaptive testing.
Theoretical bounds for both principles have been studied, and are available in this Wiki article, or this paper.
For adaptive testing, the upper bound is O(d*log(N)) (as already described in this answer).
For non-adaptive testing, it can be shown that the upper bound is O(d*d/log(d)*log(N)), which is obviously larger than the upper bound for adaptive testing by a factor of d/log(d).
This upper bound for non-adaptive testing comes from an algorithm which uses disjunct matrices: matrices of dimension T x N ("number of tests" x "number of elements"), where each entry is either true (if the element was included in that test) or false (if it wasn't), with the property that any subset of d columns must differ from every other column by at least a single row (test inclusion). This allows decoding in linear time (there are also "d-separable" matrices where fewer tests are needed, but the time complexity of their decoding is exponential and not computationally feasible).
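As a concrete illustration, here is a tiny checker for that property (my own sketch, using the standard formulation of d-disjunctness: for every column c and every set S of d other columns, some test contains c but none of the columns in S); it is only practical for small matrices, but this property is exactly what makes the simple decoder work.
from itertools import combinations
def is_d_disjunct(matrix, d):
    """matrix[t][j] == 1 means element j takes part in test t."""
    n_cols = len(matrix[0])
    for c in range(n_cols):
        others = [j for j in range(n_cols) if j != c]
        for s in combinations(others, d):
            # need a test that contains column c but none of the d columns in s
            if not any(row[c] and not any(row[j] for j in s) for row in matrix):
                return False
    return True
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(is_d_disjunct(identity, 2))    # True: individual testing is trivially disjunct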
Conclusion:
My question is whether or not you can beat n calls on a fixed testing scheme [...]
For such a scheme and a sufficiently large value of N, a disjunct matrix can be constructed which would have less than K * [d*d/log(d)*log(N)] rows. So, for large values of N, yes, you can beat it.
The underlying question (challenge) is kind of silly. If the binary search answer is acceptable (where it sums sub-arrays and sends them to IsEven), then I can think of a way to do it with E or fewer calls to IsEven (assuming the numbers are integers, of course).
JavaScript to demonstrate
// sort the array by only the first bit of the number
A.sort(function(x, y) { return (x & 1) - (y & 1); });
// all of the evens will be at the beginning
for (var i = 0; i < E && i < A.length; i++) {
    if (IsEven(A[i]))
        Print(A[i]);
    else
        break;
}
Not exactly a solution, just a few thoughts.
It is easy to see that if a solution exists for array length n that takes fewer than n tests, then for any array length m > n there is always a solution with fewer than m tests. So, if you have a solution for n = 2 or 3 or 4, then the problem is solved.
You can split the array into pairs of numbers and, for each pair: if the sum is odd, then exactly one of them is even; otherwise, if one of the numbers is even, then both of them are even. This way each pair takes either one or two tests. Best case: n/2 tests, worst case: n tests; if even and odd numbers are chosen with equal probability, then 3n/4 tests.
My hunch is there is no solution with less than n tests. Not sure how to prove it.
UPDATE: The second solution can be extended in the following way.
Check if the sum of two numbers is even. If odd, then exactly one of them is even. Otherwise label the set as a "homogeneous set of size 2". Take two homogeneous sets of the same size n. Pick one number from each set and check if their sum is even. If it is even, combine the two sets into a "homogeneous set of size 2n". Otherwise, it implies that one of the sets consists purely of even numbers and the other purely of odd numbers.
Best case: n/2 tests. Average case: 3*n/2. Worst case is still n. The worst case occurs only when all the numbers are even or all the numbers are odd.
If we can add and multiply array elements, then we can compute every Boolean function (up to complementation) on the low-order bits. Simulate a circuit that encodes the positions of the even numbers as a number from 0 to nC0 + nC1 + ... + nCe - 1 represented in binary and use calls to isEven to read off the bits.
Number of calls used: within 1 of the information-theoretic optimum.
See also fully homomorphic encryption.
