Given a sequence of n <= 10^6 integers, all not exceeding m <= 3*10^6, I'd like to count how many coprime pairs it contains. Two numbers are coprime if their greatest common divisor is 1.
It can be done trivially in O(n^2 log n), but this is obviously way too slow, as the limits suggest something closer to O(n log n). One thing that can be done quickly is factoring all the numbers, and also throwing out repeated occurrences of the same prime in each, but that doesn't lead to any significant improvement. I also thought of counting the opposite - pairs that have a common divisor. It could be done in groups - first counting all the pairs whose smallest common prime divisor is 2, then 3, then 5, and so on - but it seems to me like another dead end.
I've come up with a slightly faster alternative based on your answer. On my work PC my C++ implementation (bottom) takes about 350ms to solve any problem instance; on my old laptop, it takes just over 1s. This algorithm avoids all division and modulo operations, and uses only O(m) space.
As with your algorithm, the basic idea is to apply the Inclusion-Exclusion Principle by enumerating, exactly once each, every number 2 <= i <= m that contains no repeated prime factors, and for each such i, counting the number c of input values that are divisible by i and either adding or subtracting c*(c-1)/2 from the total. The key difference is that we can do the counting part "stupidly", simply by testing whether each possible multiple of i appears in the input, and this still takes just O(m log m) time.
How many times does the innermost line c += v[j].freq; in countCoprimes() repeat? The body of the outer loop is executed once for each number 2 <= i <= m that contains no repeated prime factors; this iteration count is trivially upper-bounded by m. The inner loop advances i steps at a time through the range [2..m], so the number of operations it performs during a single outer loop iteration is upper-bounded by m / i. Therefore the total number of iterations of the innermost line is upper-bounded by the sum from i=2 to m of m/i. The m factor can be moved outside the sum to get an upper bound of
m * sum{i=2..m}(1/i)
That sum is a partial sum of the harmonic series, which is upper-bounded by log(m), so the total number of innermost loop iterations is O(m log m).
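Written out, this is just the standard harmonic-sum estimate:
\sum_{i=2}^{m} \frac{m}{i} = m \sum_{i=2}^{m} \frac{1}{i} \le m \ln m = O(m \log m)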
extendedEratosthenes() is designed to reduce constant factors by avoiding all divisions and keeping to O(m) memory usage. All countCoprimes() actually needs to know for a number 2 <= i <= m is (a) whether it has repeated prime factors, and if it doesn't, (b) whether it has an even or odd number of prime factors.
To calculate (b) we can make use of the fact that the Sieve of Eratosthenes effectively "hits" any given i with its distinct prime factors in increasing order, so we can just flip a bit (the parity field in struct entry) to keep track of whether i has an even or odd number of factors. Each number starts with a prod field equal to 1; to record (a) we simply "knock out" any number that contains the square of a prime number as a factor by setting its prod field to 0. This field serves a dual purpose: if v[i].prod == 0, it indicates that i was discovered to have repeated factors; otherwise it contains the product of the (necessarily distinct) factors discovered so far.
The (fairly minor) utility of this is that it allows us to stop the main sieve loop at the square root of m, instead of going all the way up to m: by that point, for any given i that has no repeated factors, either v[i].prod == i, in which case we have found all the factors of i, or v[i].prod < i, in which case i must have exactly one factor > sqrt(m) that we have not yet accounted for. We can find all such remaining "large factors" with a second, non-nested loop.
#include <iostream>
#include <vector>

using namespace std;

struct entry {
    int freq;    // Frequency that this number occurs in the input list
    int parity;  // 0 for even number of factors, 1 for odd number
    int prod;    // Product of distinct prime factors
};

const int m = 3000000;  // Maximum input value
int n = 0;              // Will be number of input values
vector<entry> v;

void extendedEratosthenes() {
    int i;
    for (i = 2; i * i <= m; ++i) {
        if (v[i].prod == 1) {
            for (int j = i, k = i; j <= m; j += i) {
                if (--k) {
                    v[j].parity ^= 1;
                    v[j].prod *= i;
                } else {
                    // j has a repeated factor of i: knock it out.
                    v[j].prod = 0;
                    k = i;
                }
            }
        }
    }

    // Fix up numbers with a prime factor above their square root.
    for (; i <= m; ++i) {
        if (v[i].prod && v[i].prod != i) {
            v[i].parity ^= 1;
        }
    }
}

void readInput() {
    int i;
    while (cin >> i) {
        ++v[i].freq;
        ++n;
    }
}

void countCoprimes() {
    long long total = static_cast<long long>(n) * (n - 1) / 2;
    for (int i = 2; i <= m; ++i) {
        if (v[i].prod) {
            // i must have no repeated factors.
            int c = 0;
            for (int j = i; j <= m; j += i) {
                c += v[j].freq;
            }
            // Subtract C(c, 2) if i has an odd number of prime factors, add it back otherwise.
            total -= (v[i].parity * 2 - 1) * static_cast<long long>(c) * (c - 1) / 2;
        }
    }
    cerr << "Total number of coprime pairs: " << total << "\n";
}

int main(int argc, char **argv) {
    cerr << "Initialising array...\n";
    entry initialElem = { 0, 0, 1 };
    v.assign(m + 1, initialElem);
    cerr << "Performing extended Sieve of Eratosthenes...\n";
    extendedEratosthenes();
    cerr << "Reading input...\n";
    readInput();
    cerr << "Counting coprimes...\n";
    countCoprimes();
    return 0;
}
Further exploiting the ideas I mentioned in my question, I actually managed to come up with a solution myself. As some of you may be interested in it, I will describe it briefly. It works in O(m log m + n); I've already implemented it in C++ and tested it - it solves the biggest cases (10^6 integers) in less than 5 seconds.
We have n integers, all not greater than m. We start by running the Sieve of Eratosthenes, mapping each integer up to m to its smallest prime factor, which lets us factor any number not greater than m in O(log m) time. Then for each given number A[i], as long as there is some prime p that divides A[i] to a power greater than one, we divide A[i] by p, because when asking whether two numbers are coprime we can ignore the exponents. That leaves us with all A[i] being products of distinct primes.
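For concreteness, here is a minimal sketch of this preprocessing step (illustrative code; the names buildSpf and squarefreePart are just chosen for clarity): a smallest-prime-factor sieve, plus a helper that strips repeated prime factors so only the distinct primes remain.
#include <vector>

// Sketch: smallest-prime-factor (SPF) sieve up to m.
std::vector<int> buildSpf(int m) {
    std::vector<int> spf(m + 1, 0);
    for (int i = 2; i <= m; ++i) {
        if (spf[i] == 0) {                    // i is prime
            for (int j = i; j <= m; j += i)
                if (spf[j] == 0) spf[j] = i;  // record i as the smallest prime factor of j
        }
    }
    return spf;
}

// Sketch: reduce x to the product of its distinct prime factors, in O(log x).
int squarefreePart(int x, const std::vector<int>& spf) {
    int result = 1;
    while (x > 1) {
        int p = spf[x];
        while (x % p == 0) x /= p;  // drop the whole power of p
        result *= p;                // keep p exactly once
    }
    return result;
}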
Now, let us assume that we were able to construct, in a reasonable time, a table T such that T[i] is the number of entries A[j] such that i divides A[j]. This is somewhat similar to the approach #Brainless took in his second answer. Constructing the table T quickly was the technique I spoke about in the comments below my question.
From now on, we work by the Inclusion-Exclusion Principle. Having T, for each i we calculate P[i] - the number of pairs (j,k) such that A[j] and A[k] are both divisible by i. Then sum all P[i] for i >= 2, taking a minus sign before those P[i] for which i has an even number of prime divisors; this gives the number of pairs that are not coprime, and subtracting it from n(n-1)/2 yields the answer. Note that all prime divisors of every contributing i are distinct, because for all other indices i, P[i] equals 0. By Inclusion-Exclusion each non-coprime pair is counted exactly once. To see this differently, take a pair A[x] and A[y] that share exactly k common prime divisors. This pair is counted k times, then discounted kC2 times, counted kC3 times, discounted kC4 times, and so on (the nCk are binomial coefficients, also known as Newton's symbol). A little manipulation shows that the pair is counted k - kC2 + kC3 - ... = 1 - (1-1)^k = 1 times, which concludes the proof.
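In code, the final inclusion-exclusion pass could look roughly like this (a sketch only - it assumes T[], a squarefree[] flag and omega[] with the number of distinct prime factors have already been filled in, e.g. with the smallest-prime-factor sieve):
#include <vector>

long long countCoprimePairs(const std::vector<long long>& T,
                            const std::vector<bool>& squarefree,
                            const std::vector<int>& omega,
                            long long n, int m) {
    long long notCoprime = 0;
    for (int i = 2; i <= m; ++i) {
        if (!squarefree[i]) continue;                        // only squarefree i contribute
        long long pairs = T[i] * (T[i] - 1) / 2;             // P[i]
        notCoprime += (omega[i] % 2 == 1) ? pairs : -pairs;  // + for odd, - for even
    }
    return n * (n - 1) / 2 - notCoprime;                     // subtract from all pairs
}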
The steps made so far require O(m log log m) for the sieve and O(m) for computing the result. The last thing to do is to construct the array T. We could, for every A[i], just increment T[j] for all j dividing A[i]. As A[i] can have at most O(sqrt(A[i])) divisors (and in practice even fewer), we could construct T in O(n sqrt m) this way. But we can do better than that!
Take a two-dimensional array W. At each moment the following invariant holds: if, for each non-zero W[i][j], we incremented the counter T[d] by W[i][j] for every d that divides i and also contains the j smallest prime divisors of i with the same exponents as in i, then T would be constructed properly. As this may seem a little confusing, let's see it in action. At the start, to make the invariant true, for each A[i] we just increment W[A[i]][0]. Also note that a number not exceeding m can have at most O(log m) prime divisors, so the overall size of W is O(m log m).
Now the information stored in W[i][j] can be "pushed forward" in the following way: let p be the (j+1)-th smallest prime divisor of i, assuming it has one. Then a divisor of i can either contain p with the same exponent as in i, or with a lower one. The first case is W[i][j+1] - we add another prime that has to be "fully taken" by the divisor. The second case is W[i/p][j], as a divisor of i that doesn't contain p with its full exponent must also divide i/p. And that's it! We consider all i in descending order, then j in ascending order, and "push forward" the information from W[i][j].
Note that if i has exactly j prime divisors, the information cannot be pushed any further, but we don't need it to be: in that case W[i][j] simply says "increment index i in array T by W[i][j]". So once all the information has been pushed to the "last row" of each W[i], we pass through those rows and finish constructing T. As each cell W[i][j] is visited once, this algorithm takes O(m log m) time, plus O(n) at the beginning. That concludes the construction. Here's some C++ code from the actual implementation:
// FOR(j,a,b) / FORD(i,a,b) are the usual inclusive ascending/descending loop
// macros, and SIZE(x) is (int)x.size(). S[v] is the smallest prime divisor of v.
FORD(i,SIZE(W)-1,2)            // i in descending order
{
    int v = i, p;
    FOR(j,0,SIZE(W[i])-2)      // exclude the last row
    {
        p = S[v];              // the (j+1)-th smallest prime divisor of i
        while (v%p == 0) v /= p;
        W[i][j+1] += W[i][j];  // divisor takes p with its full exponent
        W[i/p][j] += W[i][j];  // divisor takes p with a lower exponent
    }
    T[i] = W[i].back();
}
Finally, I suspect the array T can be constructed faster and more simply than what I've shown here. If anyone has a neat idea about how it could be done, I would appreciate any feedback.
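For instance, T could also be built directly from counts of the reduced values at the same O(m log m) cost - simpler, though not asymptotically faster, and essentially the same multiple-enumeration loop used in the answer above. A sketch, with an illustrative cnt[] array holding how many reduced values equal each x:
#include <vector>

std::vector<long long> buildT(const std::vector<int>& cnt, int m) {
    std::vector<long long> T(m + 1, 0);
    for (int i = 1; i <= m; ++i)
        for (int j = i; j <= m; j += i)  // every multiple of i
            T[i] += cnt[j];
    return T;
}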
Here's an idea based on the formula for the complete sequence 1..n, found on http://oeis.org/A018805:
a(n) = 2*( Sum phi(j), j=1..n ) - 1, where phi is Euler's totient function
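That formula can be evaluated for the complete range 1..n with a totient sieve; here is a small sketch (it counts pairs (x,y) with 1 <= x,y <= n and gcd(x,y) = 1, as in the OEIS entry, and is separate from the per-sequence procedure below):
#include <cstdint>
#include <vector>

int64_t a018805(int n) {
    std::vector<int> phi(n + 1);
    for (int i = 0; i <= n; ++i) phi[i] = i;
    for (int i = 2; i <= n; ++i)
        if (phi[i] == i)                   // i is prime: no smaller prime has touched it
            for (int j = i; j <= n; j += i)
                phi[j] -= phi[j] / i;      // multiply phi[j] by (1 - 1/i)
    int64_t sum = 0;
    for (int i = 1; i <= n; ++i) sum += phi[i];
    return 2 * sum - 1;                    // a(n) = 2*( Sum phi(j), j=1..n ) - 1
}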
Iterate over the sequence, S. For each term, S_i:
for each of the prime factors, p, of S_i:
if a hash for p does not exist:
create a hash with index p that points to a set of all indexes of S except i,
and a counter set to 1, representing how many terms of S are divisible by p so far
else:
delete i in the existing set of indexes and increment the counter
Sort the hashes for S_i's prime factors by their counters in descending order. Starting with
the largest counter (which means the smallest set), make a list of indexes up to i that are also
members of the next smallest set, until the sets are exhausted. Add the remaining number of
indexes in the list to the cumulative total.
Example:
sum phi' [4,7,10,15,21]
S_0: 4
prime-hash [2:1-4], counters [2:1]
0 indexes up to i in the set for prime 2
total 0
S_1: 7
prime hash [2:1-4; 7:0,2-4], counters [2:1, 7:1]
1 index up to i in the set for prime 7
total 1
S_2: 10
prime hash [2:1,3-4; 5:0-1,3-4; 7:0,2-4], counters [2:2, 5:1, 7:1]
1 index up to i in the set for prime 2, which is also a member
of the set for prime 5
total 2
S_3: 15
prime hash [2:1,3-4; 5:0-1,4; 7:0,2-4; 3:0-2,4], counters [2:2, 5:2, 7:1, 3:1]
2 indexes up to i in the set for prime 5, which are also members
of the set for prime 3
total 4
S_4: 21
prime hash [2:1,3-4; 5:0-1,4; 7:0,2-3; 3:0-2], counters [2:2, 5:2, 7:2, 3:2]
2 indexes up to i in the set for prime 7, which are also members
of the set for prime 3
total 6
6 coprime pairs:
(4,7),(4,15),(4,21),(7,10),(7,15),(10,21)
I would suggest :
1) Use the Sieve of Eratosthenes to get a sorted list of the prime numbers under 10^6.
2) For each number n in the list, get its prime factors. Associate with it another number f(n) in the following way: let's say that the prime factors of n are 3, 7 and 17. Then the binary representation of f(n) is:
`0 1 0 1 0 0 1`
The first digit (0 here) is associated with the prime number 2, the second (1 here) with the prime number 3, and so on.
Therefore two numbers n and m are coprime iff f(n) & f(m) = 0.
3) It's easy to see that there is an N such that for each n: f(n) <= (2^N) - 1. This means that the biggest number f(n) is smaller than or equal to a number whose binary representation is:
`1 1 1 1 1 1 1 1 1 1 1 1 1 1 1`
Here N is the number of 1s in the above sequence. Get this N and sort the list of numbers f(n). Let's call this list L.
If you want to optimize: in this list, instead of storing duplicates, store a pair containing f(n) and the number of times f(n) is duplicated.
4) Iterate from 1 to N in this way: initialize i = 1 0 0 0 0, and at each iteration move the digit 1 to the right, with all other values kept at 0 (implement it using a bit shift).
At each iteration, iterate over L to get the number d(i) of elements l in L such that i & l != 0 (be careful if you use the above optimization). In other words, for each i, get the number of elements in L which are not coprime with i, and name this number d(i). Add up the total
D = d(1) + d(2) + ... + d(N)
5) This number D is the number of pairs which are not coprime in the original list. The number of coprime pairs is :
M*(M-1)/2 - D
where M is the number of elements in the original list. The complexity of this method is O(n log(n)).
Good luck !
My previous answer was wrong, apologies. I propose here a modification:
Once you have the prime divisors of each number in the list, associate with each prime number p the number l(p) of numbers in the list that have p as a divisor. For example, consider the prime number 5: if the numbers in the list divisible by 5 are 15, 100 and 255, then l(5) = 3.
To achieve this in O(n log n), iterate over the list and, for each number in the list, iterate over its prime factors; for each prime factor p, increment l(p).
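A sketch of that counting step, assuming a smallest-prime-factor sieve spf[] like the one described earlier in the thread (the array names here are only illustrative):
#include <vector>

std::vector<int> computeL(const std::vector<int>& values, const std::vector<int>& spf, int m) {
    std::vector<int> l(m + 1, 0);       // l[p] = how many values are divisible by p
    for (int x : values) {
        while (x > 1) {
            int p = spf[x];
            while (x % p == 0) x /= p;  // move on to the next distinct prime
            ++l[p];
        }
    }
    return l;
}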
Then the number of pairs which are not coprime and are both divisible by p is
l(p)*(l(p) - 1) / 2
Sum this number over all primes p, and you will get the number of pairs in the list which are not coprime (note that l(p) can be 0). Let's say this sum is D; then the answer is
M*(M-1)/2 - D
where M is the length of the list. Good luck !
I was having a discussion with a friend about the runtime cost of an algorithm that checks whether a number is an Armstrong number or not. Just for those who don't know what an Armstrong number is:
A number 𝑎 is an Armstrong number or narcissistic number if it is equal to the sum of its own digits, each raised to the power of the number of digits (for example, 153 = 1^3 + 5^3 + 3^3).
Here's the routine I wrote to check if the number is an Armstrong number or not:
#include <cmath>

bool check_armstrong_number(int num) {
    int number_copy = num; // Store the number in another variable to compare afterwards
    int digits = 0;        // Stores the number of digits in the number.

    // Calculate the number of digits in the number
    while (number_copy) {
        number_copy = number_copy / 10;
        digits++;
    }

    number_copy = num;
    int sum = 0; // To accumulate the sum of digits raised to the power number_of_digits.
    while (num) {
        int unit_digit = num % 10;                // Get the unit digit with the help of the modulus operator.
        sum = sum + std::pow(unit_digit, digits); // Calculate unit_digit^digits
        num = num / 10;
    }
    return (sum == number_copy); // Returns true only if the sum equals the original number
}
My friend says the algorithm is O(log N), while I believe it is O(log^2(N)). I think the iteration over the digits is an O(log N) operation (as the number of digits in an integer is of the order of log N), and each std::pow(unit_digit, digits) call is also around O(log N). So log N computations, each taking O(log N) time, should be around O(log^2 N). Can anyone clarify this a bit?
It's O(log(n) * log(log(n))).
Log(n) for the iteration over the digits.
If pow(a,b) is computed by exponentiation by squaring, its complexity is O(log b). Here b = log(n).
So each pow call costs O(log(log(n))).
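A minimal integer exponentiation-by-squaring routine, which is the kind of implementation the O(log b) bound assumes (whether the library's std::pow actually works this way is implementation-dependent):
long long int_pow(long long base, int exp) {
    long long result = 1;
    while (exp > 0) {
        if (exp & 1) result *= base;  // use the current bit of the exponent
        base *= base;                 // square for the next bit
        exp >>= 1;
    }
    return result;                    // O(log exp) multiplications
}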
Given a number N, how do we find the maximum P*Q < N, such that P and Q are prime numbers?
My (brute force) attempt:
find a list {P, N/P} for all primes P < √N
find a list of primes Q, where each Q is the largest prime not exceeding N/P in the list above
Determine the maximum product P*Q from above
While this brute force approach will work, is there a formal (more sensible) solution to this question?
Example: N=27
√N = 5.196
Candidate primes: 2,3,5 --> [{2,13.5},{3,9},{5,5.4}] ->[{2,13},{3,7},{5,5}]
Solution: Max([2*13, 3*7, 5*5]) = 2*13 = 26
Hence, the brute force solution works.
Taking this one step further, we see that Q_max <= N/2 and if indeed we agree that P < Q, then we have Q >= √N.
We can refine our solution set to only those values {P, N\P} where N\P >= √N.
I have opted for integer division "\", since we are only interested in integer values, and integer division is indeed much faster than regular division "/".
The problem reduces to:
Example: N=27
√N = 5.196
Candidate P: 2,3 --> [{2,13},{3,9}] -->[{2,13},{3,7}]
(we drop {5,5} since N\P < √N i.e. 5 < 5.196)
Solution set: max([2*13, 3*7]) = 2*13 = 26
It might look trivial, but it just eliminated 1/3 of the possible solution set.
Are there other clever procedures we can add to reduce the set further?
This is similar to what #RalphMRickenback describes, but with tighter complexity bounds.
The prime finding algorithm he describes is the Sieve of Eratosthenes, which needs O(n) space but has O(n log log n) time complexity; you may want to see the discussion on Wikipedia if you want to be more careful about this.
After finding a list of primes smaller than n // 2, you can scan it a single time, i.e. with O(n) complexity, by having a pointer start at the beginning and another at the end. If the product of those two primes is larger than your value, reduce the high pointer. If the product is smaller, compare it to a stored maximum product, and increase the low pointer.
EDIT As mentioned in the comments, the time complexity of the scan over the primes is better than linear in n, since it is only over the primes less than n, so O(n / log n).
Rather than pseudo-code, here's a full implementation in Python:
def prime_sieve(n):
    sieve = [False, False] + [True] * (n - 1)
    for num, is_prime in enumerate(sieve):
        if num * num > n:
            break
        if not is_prime:
            continue
        for not_a_prime in range(num * num, n + 1, num):
            sieve[not_a_prime] = False
    return [num for num, is_prime in enumerate(sieve) if is_prime]

def max_prime_product(n):
    primes = prime_sieve(n // 2)
    lo, hi = 0, len(primes) - 1
    max_prod = 0
    max_pair = None
    while lo <= hi:
        prod = primes[lo] * primes[hi]
        if prod < n:  # strictly less than N, as the problem statement requires
            if prod > max_prod:
                max_prod = prod
                max_pair = (primes[lo], primes[hi])
            lo += 1
        else:
            hi -= 1
    return max_prod, max_pair
With your example this produces:
>>> max_prime_product(27)
(26, (2, 13))
Another brute force attempt:
Find all prime numbers p <= N/2
Iterate over the array from the smallest p as long as p < √N and multiply it with the largest q < N/p, retaining the largest product.
If enough memory is available (N/2 bits), one could make a bit array of that size. Initialize it with all TRUE except the first position. Iterating over the bit array, calculate the multiples of the position you are at and set them all to false. If a position is already false, you do not need to mark its multiples; they were already set to false when its smaller prime factors were processed.
Finding all primes this way therefore takes far less than O(N^2) time (in fact O(N log log N)).
a[1] := false;
m := n \ 2; // sizeof(a)
for i := 2 to m do
    a[i] := true;
for i := 2 to m do
    if a[i] then
        for j := 2*i to m step i do
            a[j] := false;
Step 2) is below O(n^2) as well:
result := 0;
for i := 2 to √N do
    if not a[i] then continue; // next i
    for j := (n \ i) downto i do
        if not a[j] then continue; // next j
        if i * j < N then
        begin
            result := max(result, i * j);
            break; // next i
        end;
    if result = N - 1 then break; // cannot do better: you are finished
This can be optimized further, I guess. You can keep (i,j) to know the two prime numbers.