Count of divisors of numbers till N in O(N)? - algorithm

So, we can count the divisors of each number from 1 to N in O(N log N) with a sieve:
int n;
cin >> n;
for (int i = 1; i <= n; i++) {
    for (int j = i; j <= n; j += i) {
        cnt[j]++; // here cnt[x] means the count of divisors of x
    }
}
Is there a way to reduce it to O(N)?
Thanks in advance.

Here is a simple optimization on @גלעד ברקן's solution. Rather than use sets, use arrays. This is about 10x as fast as the set version.
n = 100
answer = [None for i in range(0, n+1)]
answer[1] = 1
small_factors = [1]
p = 1
while (p < n):
    p = p + 1
    if answer[p] is None:
        print("\n\nPrime: " + str(p))
        limit = n // p  # integer division
        new_small_factors = []
        for i in small_factors:
            j = i
            while j <= limit:
                new_small_factors.append(j)
                answer[j * p] = answer[j] + answer[i]
                j = j * p
        small_factors = new_small_factors
print("\n\nAnswer: " + str([(k,d) for k,d in enumerate(answer)]))
It is worth noting that this is also an O(n) algorithm for enumerating primes. However, with the use of a wheel generated from all of the primes below size log(n)/2, it can create a prime list in time O(n/log(log(n))).

How about this? Start with the prime 2 and keep a list of tuples, (k, d_k), where d_k is the number of divisors of k, starting with (1,1):
for each prime, p (ascending and lower than or equal to n / 2):
    for each tuple (k, d_k) in the list:
        if k * p > n:
            remove the tuple from the list
            continue
        power = 1
        while p * k <= n:
            add the tuple to the list if k * p^power <= n / p
            k = k * p
            output (k, (power + 1) * d_k)
            power = power + 1

The next number the output has skipped is the next prime (since clearly all numbers up to the next prime are either smaller primes or composites of smaller primes).
The method above also generates the primes, relying on O(n) memory to keep finding the next prime. Having a more efficient, independent stream of primes could allow us to avoid appending any tuples (k, d_k) to the list, where k * next_prime > n, as well as free up all memory holding output greater than n / next_prime.
Python code

Consider the total of those counts, sum(d(i) for i=1..n), where d(i) is the number of divisors of i. That sum is O(N log N), so any O(N) solution would have to bypass individual counting.
This suggests that any improvement would need to depend on prior results (dynamic programming). We already know that d(i) is the product of each prime exponent plus one. For instance, 12 = 2^2 * 3^1. The exponents are 2 and 1, respectively. (2+1)*(1+1) = 6. 12 has 6 divisors: 1, 2, 3, 4, 6, 12.
This "reduces" the question to whether you can leverage the prior knowledge to get an O(1) way to compute the number of divisors directly, without having to count them individually.
Think about the given case ... divisor counts so far include:
1 1
2 2
3 2
4 3
6 4
Is there an O(1) way to get d(12) = 6 from these figures?

Here is an algorithm that is theoretically better than O(n log(n)) but may be worse for reasonable n. I believe that its running time is O(n lg*(n)), where lg* is the iterated logarithm (https://en.wikipedia.org/wiki/Iterated_logarithm).
First of all you can find all primes up to n in time O(n) using the Sieve of Atkin. See https://en.wikipedia.org/wiki/Sieve_of_Atkin for details.
Now the idea is that we will build up our list of counts only inserting each count once. We'll go through the prime factors one by one, and insert values for everything with that as the maximum prime number. However in order to do that we need a data structure with the following properties:
We can store a value (specifically the count) at each value.
We can walk the list of inserted values forwards and backwards in O(1).
We can find the last inserted number below i "efficiently".
Insertion should be "efficient".
(Quotes are the parts that are hard to estimate.)
The first is trivial: each slot in our data structure needs a spot for the value. The second can be done with a doubly linked list. The third can be done with a clever variation on a skip-list. The fourth falls out from the first 3.
We can do this with an array of nodes (which do not start out initialized) with the following fields that look like a doubly linked list:
value The answer we are looking for.
prev The last previous value that we have an answer for.
next The next value that we have an answer for.
Now if i is in the list and j is the next value, the skip-list trick will be that we will also fill in prev for the first even after i, the first divisible by 4, divisible by 8 and so on until we reach j. So if i = 81 and j = 96 we would fill in prev for 82, 84, 88 and then 96.
Now suppose that we want to insert a value v at k between an existing i and j. How do we do it? I'll present pseudocode starting with only k known then fill it out for i = 81, j = 96 and k = 90.
k.value := v
for temp in searching down from k for increasing factors of 2:
    if temp has a value:
        our_prev := temp
        break
    else if temp has a prev:
        our_prev := temp.prev
        break
our_next := our_prev.next
our_prev.next := k
k.next := our_next
our_next.prev := k
for temp in searching up from k for increasing factors of 2:
    if j <= temp:
        break
    temp.prev := k
k.prev := our_prev
In our particular example we were willing to search downwards from 90 through 90, 88, 80, 64, 0. But we actually get told that prev is 81 when we get to 88. We would have been willing to search up through 90, 92, 96, 128, 256, ...; however, we just have to set 92.prev and 96.prev and we are done.
Now this is a complicated bit of code, but its performance is O(log(k-i) + log(j-k) + 1). Which means that it starts off as O(log(n)) but gets better as more values get filled in.
So how do we initialize this data structure? Well we initialize an array of uninitialized values then set 1.value := 0, 1.next := n+1, and 2.prev := 4.prev := 8.prev := 16.prev := ... := 1. And then we start processing our primes.
When we reach prime p we start by searching for the previous inserted value below n/p. Walking backwards from there we keep inserting values for x*p, x*p^2, ... until we hit our limit. (The reason for backwards is that we do not want to try to insert, say, 18 once for 3 and once for 9. Going backwards prevents that.)
Now what is our running time? Finding the primes is O(n). Finding the initial inserts is also easily O(n/log(n)) operations of time O(log(n)) for another O(n). Now what about the inserts of all of the values? That is trivially O(n log(n)) but can we do better?
Well first all of the inserts to density 1/log(n) filled in can be done in time O(n/log(n)) * O(log(n)) = O(n). And then all of the inserts to density 1/log(log(n)) can likewise be done in time O(n/log(log(n))) * O(log(log(n))) = O(n). And so on with increasing numbers of logs. The number of such factors that we get is O(lg*(n)) for the O(n lg*(n)) estimate that I gave.
I haven't shown that this estimate is as good as you can do, but I think that it is.
So, not O(n), but pretty darned close.

Related

How can we count the number of pairs of coprime integers in an array of integers? (CSES)

Having a sequence of n <= 10^6 integers, all not exceeding m <= 3*10^6, I'd like to count how many coprime pairs are in it. Two numbers are coprime if their greatest common divisor is 1.
It can be done trivially in O(n^2 log n), but this is obviously way too slow, as the limit suggests something closer to O(n log n). One thing that can be done quickly is factoring all the numbers, and also throwing out multiple occurrences of the same prime in each, but that doesn't lead to any significant improvement. I also thought of counting the opposite - pairs that have a common divisor. It could be done in groups - first counting all the pairs whose smallest common prime divisor is 2, then 3, 5, and so on, but it seems to me like another dead end.
I've come up with a slightly faster alternative based on your answer. On my work PC my C++ implementation (bottom) takes about 350ms to solve any problem instance; on my old laptop, it takes just over 1s. This algorithm avoids all division and modulo operations, and uses only O(m) space.
As with your algorithm, the basic idea is to apply the Inclusion-Exclusion Principle by enumerating every number 2 <= i <= m that contains no repeated factors exactly once, and for each such i, counting the number of numbers in the input that are divisible by i and either adding or subtracting this from the total. The key difference is that we can do the counting part "stupidly", simply by testing whether each possible multiple of i appears in the input, and this still takes just O(m log m) time.
How many times does the innermost line c += v[j].freq; in countCoprimes() repeat? The body of the outer loop is executed once for each number 2 <= i <= m that contains no repeated prime factors; this iteration count is trivially upper-bounded by m. The inner loop advances i steps at a time through the range [2..m], so the number of operations it performs during a single outer loop iteration is upper-bounded by m / i. Therefore the total number of iterations of the innermost line is upper-bounded by the sum from i=2 to m of m/i. The m factor can be moved outside the sum to get an upper bound of
m * sum{i=2..m}(1/i)
That sum is a partial sum in a harmonic series, and it is upper-bounded by log(m), so the total number of innermost loop iterations is O(m log m).
extendedEratosthenes() is designed to reduce constant factors by avoiding all divisions and keeping to O(m) memory usage. All countCoprimes() actually needs to know for a number 2 <= i <= m is (a) whether it has repeated prime factors, and if it doesn't, (b) whether it has an even or odd number of prime factors.
To calculate (b) we can make use of the fact that the Sieve of Eratosthenes effectively "hits" any given i with its distinct prime factors in increasing order, so we can just flip a bit (the parity field in struct entry) to keep track of whether i has an even or odd number of factors. Each number starts with a prod field equal to 1; to record (a) we simply "knock out" any number that contains the square of a prime number as a factor by setting its prod field to 0. This field serves a dual purpose: if v[i].prod == 0, it indicates that i was discovered to have repeated factors; otherwise it contains the product of the (necessarily distinct) factors discovered so far.
The (fairly minor) utility of this is that it allows us to stop the main sieve loop at the square root of m, instead of going all the way up to m: by then, for any given i that has no repeated factors, either v[i].prod == i, in which case we have found all the factors for i, or v[i].prod < i, in which case i must have exactly one factor > sqrt(3000000) that we have not yet accounted for. We can find all such remaining "large factors" with a second, non-nested loop.
#include <iostream>
#include <vector>

using namespace std;

struct entry {
    int freq;   // Frequency that this number occurs in the input list
    int parity; // 0 for even number of factors, 1 for odd number
    int prod;   // Product of distinct prime factors
};

const int m = 3000000; // Maximum input value
int n = 0;             // Will be number of input values
vector<entry> v;

void extendedEratosthenes() {
    int i;
    for (i = 2; i * i <= m; ++i) {
        if (v[i].prod == 1) {
            for (int j = i, k = i; j <= m; j += i) {
                if (--k) {
                    v[j].parity ^= 1;
                    v[j].prod *= i;
                } else {
                    // j has a repeated factor of i: knock it out.
                    v[j].prod = 0;
                    k = i;
                }
            }
        }
    }
    // Fix up numbers with a prime factor above their square root.
    for (; i <= m; ++i) {
        if (v[i].prod && v[i].prod != i) {
            v[i].parity ^= 1;
        }
    }
}

void readInput() {
    int i;
    while (cin >> i) {
        ++v[i].freq;
        ++n;
    }
}

void countCoprimes() {
    __int64 total = static_cast<__int64>(n) * (n - 1) / 2;
    for (int i = 2; i <= m; ++i) {
        if (v[i].prod) {
            // i must have no repeated factors.
            int c = 0;
            for (int j = i; j <= m; j += i) {
                c += v[j].freq;
            }
            total -= (v[i].parity * 2 - 1) * static_cast<__int64>(c) * (c - 1) / 2;
        }
    }
    cerr << "Total number of coprime pairs: " << total << "\n";
}

int main(int argc, char **argv) {
    cerr << "Initialising array...\n";
    entry initialElem = { 0, 0, 1 };
    v.assign(m + 1, initialElem);
    cerr << "Performing extended Sieve of Eratosthenes...\n";
    extendedEratosthenes();
    cerr << "Reading input...\n";
    readInput();
    cerr << "Counting coprimes...\n";
    countCoprimes();
    return 0;
}
Further exploiting the ideas I mentioned in my question, I actually managed to come up with a solution myself. As some of you may be interested in it, I will describe it briefly. It works in O(m log m + n); I've already implemented it in C++ and tested it - it solves the biggest cases (10^6 integers) in less than 5 seconds.
We have n integers, all not greater than m. We start by doing an Eratosthenes sieve mapping each integer up to m to its smallest prime factor, allowing us to factor any number not greater than m in O(log m) time. Then for each given number A[i], as long as there is some prime p that divides A[i] in a power greater than one, we divide A[i] by it, because when asking whether two numbers are coprime we can omit the exponents. That leaves us with all A[i] being products of distinct primes.
Now, let us assume that we were able to construct, in reasonable time, a table T such that T[i] is the number of entries A[j] such that i divides A[j]. This is somewhat similar to the approach @Brainless took in his second answer. Constructing the table T quickly was the technique I spoke about in the comments below my question.
From now on, we will work by the Inclusion-Exclusion Principle. Having T, for each i we calculate P[i] - the number of pairs (j,k) such that A[j] and A[k] are both divisible by i. Then to compute the answer, sum all P[i], taking a minus sign before those P[i] for which i has an even number of prime divisors. Note that all prime divisors of i are distinct, because for all other indices i, P[i] equals 0. By Inclusion-Exclusion each such pair will be counted exactly once. To see this differently, take a pair A[i] and A[j], and assume that they share exactly k common prime divisors. Then this pair will be counted C(k,1) times, then discounted C(k,2) times, counted C(k,3) times, discounted C(k,4) times, and so on (C(k,i) denotes the binomial coefficient, a.k.a. Newton's symbol). A little manipulation shows that the considered pair will be counted C(k,1) - C(k,2) + C(k,3) - ... = 1 - (1-1)^k = 1 times, which concludes the proof.
The steps made so far require O(m log log m) for the sieve and O(m) for computing the result. The last thing to do is to construct the array T. We could, for every A[i], just increment T[j] for all j dividing A[i]. As A[i] can have at most O(sqrt(A[i])) divisors (and in practice even fewer than that), we could construct T in O(n sqrt m). But we can do better than that!
Take a two-dimensional array W. At each moment the following invariant holds: if for each non-zero W[i][j] we were to increment the counter in table T by W[i][j] for all numbers that divide i and also share the exact exponents i has in its j smallest prime divisors, then T would be constructed properly. As this may seem a little confusing, let's see it in action. At the start, to make the invariant true, for each A[i] we just increment W[A[i]][0]. Also note that a number not exceeding m can have at most O(log m) prime divisors, so the overall size of W is O(m log m).
Now we see that the information stored in W[i][j] can be "pushed forward" in the following way: consider p to be the (j+1)-th prime divisor of i, assuming it has one. Then any divisor of i can either have p with the same exponent as in i, or a lower one. The first of these cases is W[i][j+1] - we add another prime that has to be "fully taken" by a divisor. The second case is W[i/p][j], as a divisor of i that doesn't have p with the highest exponent must also divide i/p. And that's it! We consider all i in descending order, then j in ascending order, and "push forward" the information from W[i][j].
See that if i has exactly j prime divisors, then the information from it cannot be pushed, but we don't really need that! If i has j prime divisors, then W[i][j] basically says: increment by W[i][j] only index i in array T. So when all the information has been pushed to the "last rows" in each W[i], we pass through those rows and finish constructing T. As each cell W[i][j] has been visited once, this algorithm takes O(m log m) time, plus O(n) at the beginning. That concludes the construction. Here's some C++ code from the actual implementation:
FORD(i,SIZE(W)-1,2) // i in descending order
{
    int v = i, p;
    FOR(j,0,SIZE(W[i])-2) // exclude last row
    {
        p = S[v]; // (j+1)-th prime divisor of i; S[v] - smallest prime divisor of v
        while (v % p == 0) v /= p;
        W[i][j+1] += W[i][j];
        W[i/p][j] += W[i][j];
    }
    T[i] = W[i].back();
}
At the end I'd say that I think array T can be constructed faster and simpler than what I've shown. If anyone has some neat idea about how it could be done, I would appreciate all feedback.
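For what it's worth, one simple way to build T in O(m log m) is a divisor-style sieve over a frequency table, which is essentially what the C++ answer above does with its v[j].freq counts. A rough Python sketch (my own, with a hypothetical freq array counting how many of the squarefree-reduced A[i] equal each value):

def build_T(freq, m):
    # T[i] = how many input values (after squarefree reduction) are divisible by i
    T = [0] * (m + 1)
    for i in range(1, m + 1):
        for j in range(i, m + 1, i):   # multiples of i
            T[i] += freq[j]
    return T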
Here's an idea based on the formula for the complete sequence 1..n, found on http://oeis.org/A018805:
a(n) = 2*( Sum phi(j), j=1..n ) - 1, where phi is Euler's totient function
Iterate over the sequence, S. For each term, S_i:
for each of the prime factors, p, of S_i:
    if a hash for p does not exist:
        create a hash with index p that points to a set of all indexes of S except i,
        and a counter set to 1, representing how many terms of S are divisible by p so far
    else:
        delete i in the existing set of indexes and increment the counter

Sort the hashes for S_i's prime factors by their counters in descending order. Starting with the largest counter (which means the smallest set), make a list of indexes up to i that are also members of the next smallest set, until the sets are exhausted. Add the remaining number of indexes in the list to the cumulative total.
Example:
sum phi' [4,7,10,15,21]
S_0: 4
prime-hash [2:1-4], counters [2:1]
0 indexes up to i in the set for prime 2
total 0
S_1: 7
prime hash [2:1-4; 7:0,2-4], counters [2:1, 7:1]
1 index up to i in the set for prime 7
total 1
S_2: 10
prime hash [2:1,3-4; 5:0-1,3-4; 7:0,2-4], counters [2:2, 5:1, 7:1]
1 index up to i in the set for prime 2, which is also a member
of the set for prime 5
total 2
S_3: 15
prime hash [2:1,3-4; 5:0-1,4; 7:0,2-4; 3:0-2,4], counters [2:2, 5:2, 7:1, 3:1]
2 indexes up to i in the set for prime 5, which are also members
of the set for prime 3
total 4
S_4: 21
prime hash [2:1,3-4; 5:0-1,4; 7:0,2-3; 3:0-2], counters [2:2, 5:2, 7:2, 3:2]
2 indexes up to i in the set for prime 7, which are also members
of the set for prime 3
total 6
6 coprime pairs:
(4,7),(4,15),(4,21),(7,10),(7,15),(10,21)
I would suggest:
1) Use Eratosthenes to get a list of sorted prime numbers under 10^6.
2) For each number n in the list, get its prime factors. Associate with it another number f(n) in the following way: let's say that the prime factors of n are 3, 7 and 17. Then the binary representation of f(n) is:
`0 1 0 1 0 0 1`
The first digit (0 here) is associated with the prime number 2, the second (1 here) with the prime number 3, and so on.
Therefore 2 numbers n and m are coprime iff f(n) & f(m) = 0.
3) It's easy to see that there is an N such that for each n: f(n) <= 2^N - 1. This means that the biggest number f(n) is smaller than or equal to a number whose binary representation is:
`1 1 1 1 1 1 1 1 1 1 1 1 1 1 1`
Here N is the number of 1s in the above sequence. Get this N and sort the list of numbers f(n). Let's call this list L.
If you want to optimize: in this list, instead of storing duplicates, store a pair containing f(n) and the number of times f(n) is duplicated.
4) Iterate from 1 to N in this way: initialize i = 1 0 0 0 0, and at each iteration, move the digit 1 to the right with all other values kept at 0 (implement it using bit shifts).
At each iteration, iterate over L to get the number d(i) of elements l in L such that i & l != 0 (be careful if you use the above optimization). In other words, for each i, get the number of elements in L which are not coprime with i, and call this number d(i). Add up the total
D = d(1) + d(2) + ... + d(N)
5) This number D is the number of pairs which are not coprime in the original list. The number of coprime pairs is :
M*(M-1)/2 - D
where M is the number of elements in the original list. The complexity of this method is O(n log(n)).
Good luck!
My previous answer was wrong, apologies. I propose here a modification:
Once you get the prime divisors of each number of the list, associate with each prime number p the number l(p) of numbers in the list which have p as a divisor. For example, consider the prime number 5, and suppose the list's numbers which can be divided by 5 are 15, 100 and 255. Then l(5)=3.
To achieve it in O(n log n), iterate over the list and for each number in this list, iterate over its prime factors; for each prime factor p, increment its l(p).
Then the number of pairs which are not coprime and can be divided by p is
l(p)*(l(p) - 1) / 2
Sum this number over all primes p, and you will get the number of pairs in the list which are not coprime (note that l(p) can be 0). Let's say this sum is D; then the answer is
M*(M-1)/2 - D
where M is the length of the list. Good luck!

Q: Count array pairs with bitwise AND > k ~ better than O(N^2) possible?

Given an array nums
Count no. of pairs (two elements) where bitwise AND is greater than K
Brute force
for i in range(0, n):
    for j in range(i+1, n):
        if a[i] & a[j] > k:
            res += 1
Better version:
preprocess to remove all elements ≤k
and then brute force
But i was wondering, what would be the limit in complexity here?
Can we do better with a trie, hashmap approach like two-sum?
( I did not find this problem on Leetcode so I thought of asking here )
Let size_of_input_array = N, and let the input array consist of B-bit numbers.
Here is an easy-to-understand and easy-to-implement solution.
Eliminate all values <= k.
As an example, consider 5 ten-bit numbers (the original answer showed them in an image).
Step 1: Adjacency Graph
For each bit position, store the list of input indices whose numbers have that bit set. In our example, the 7th bit is set for the numbers at indices 0, 1, 2, 3 in the input array.
Step 2: The challenge is to avoid counting the same pairs again.
To solve this challenge we take help of union-find data structure as shown in the code below.
// unordered_map<int, vector<int>> adjacency_graph;
// adjacency_graph has been filled up in step 1
vector<int> parent;
for (int i = 0; i < input_array.size(); i++)
    parent.push_back(i);

int result = 0;
for (int i = 0; i < adjacency_graph.size(); i++) { // loop 1
    auto v = adjacency_graph[i];
    if (v.size() > 1) {
        int different_parents = 1;
        for (int j = 1; j < v.size(); j++) { // loop 2
            int x = find(parent, v[j]);
            int y = find(parent, v[j - 1]);
            if (x != y) {
                different_parents++;
                union_sets(parent, x, y); // "union" is a C++ keyword, so the merge operation is renamed here
            }
        }
        result += (different_parents * (different_parents - 1)) / 2;
    }
}
return result;
In the above code, find and union_sets are the usual find and union operations of the union-find data structure.
Time Complexity:
Step 1:
Build Adjacency Graph: O(BN)
Step 2:
Loop 1: O(B)
Loop 2: O(N * Inverse of Ackermann’s function which is an extremely slow-growing function)
Overall Time Complexity
= O(BN)
Space Complexity
Overall space complexity = O(BN)
First, prune everything <= k. Also, sort the value list.
Going from the most significant bit to the least significant, we are going to keep track of the set of numbers we are working with (initially all of them, s=0, e=n).
Let p be the first position in the current set that contains a 1 at the current bit.
If the bit in k is 0, then everything that would yield a 1 would definitely be good, and we need to investigate the ones that get a 0. We have (end - p) * (end - p - 1) / 2 pairs in the current range and (end - p) * <total 1s in this bit position at indices larger than or equal to end> combinations with larger, previously good numbers that we can add to the solution. To continue we update end = p. We want to count 1s in all the numbers above, because we only counted them before in pairs with each other, not with the numbers this low in the set.
If the bit in k is 1, then we can't count any wins yet, but we need to eliminate everything below p, so we update start = p.
You can stop once you have gone through all the bits or start == end.
Details:
Since at each step we eliminate either everything that has a 0 or everything that has a 1, everything between start and end will have the same bit-prefix. Since the values are sorted, we can do a binary search to find p.
For <total 1s in this bit position at indices larger than p>: we already have the values sorted, so we can compute partial sums and store, for every position in the sorted list, the number of 1s in every bit position for all numbers above it.
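A small Python sketch of that partial-sum table (my own helper names, assuming the pruned values are already sorted):

def suffix_bit_counts(sorted_vals, num_bits):
    # ones[pos][b] = how many values at index >= pos have bit b set
    n = len(sorted_vals)
    ones = [[0] * num_bits for _ in range(n + 1)]
    for pos in range(n - 1, -1, -1):
        for b in range(num_bits):
            ones[pos][b] = ones[pos + 1][b] + ((sorted_vals[pos] >> b) & 1)
    return ones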
Complexity:
We got bit-by-bit so L (the bit length of the numbers), we do a binary search (logN), and lookup and updates O(1), so this is O(L logN).
We have to sort O(NlogN).
We have to compute partial bit-wise sums O(L*N).
Total O(L logN + NlogN + L*N).
Since N >> L, L log N is subsumed by N log N. Since L >> log N (probably, as in: you have 32-bit numbers but you don't have 4 billion of them), N log N is subsumed by L*N. So the complexity is O(L * N). Since we also need to keep the partial sums around, the memory complexity is also O(L * N).

Sample number with equal probability which is not part of a set

I have a number n and a set of numbers S ⊆ {1..n} with size s (which is substantially smaller than n). I want to sample a number k ∈ [1..n] with equal probability, but the number is not allowed to be in the set S.
I am trying to solve the problem in at worst O(log n + s). I am not sure whether it's possible.
A naive approach is creating an array of numbers from 1 to n excluding all numbers in S and then pick one array element. This will run in O(n) and is not an option.
Another approach may be just generating random numbers ∈[1..n] and rejecting them if they are contained in S. This has no theoretical bound as any number could be sampled multiple times even if it is in the set. But on average this might be a practical solution if s is substantially smaller than n.
Say s is sorted. Generate a random number between 1 and n-s, call it k. We've chosen the k'th element of {1,...,n} - s. Now we need to find it.
Use binary search on s to find the count of the elements of s <= k. This takes O(log |s|). Add this to k. In doing so, we may have passed or arrived at additional elements of s. We can adjust for this by incrementing our answer for each such element that we pass, which we find by checking the next larger element of s from the point we found in our binary search.
E.g., n = 100, s = {1,4,5,22}, and our random number is 3. So our approach should return the third element of [2,3,6,7,...,21,23,24,...,100], which is 6. Binary search finds that 1 element is at most 3, so we increment to 4. Now we compare to the next larger element of s, which is 4, so we increment to 5. Repeating this finds 5 in s, so we increment to 6. We check s once more, see that 6 isn't in it, so we stop.
E.g., n = 100, s = {1,4,5,22}, and our random number is 4. So our approach should return the fourth element of [2,3,6,7,...,21,23,24,...,100] which is 7. Binary search finds that 2 elements are at most 4, so we increment to 6. Now we compare to the next larger element of s which is 5 so increment to 7. We check s once more, see that the next number is > 7, so we stop.
If we assume that "s is substantially smaller than n" means |s| <= log(n), then we will increment at most log(n) times, and in any case at most s times.
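A short Python sketch of this sorted-set approach (my own code, following the description above):

import bisect
import random

def sample_not_in_sorted(n, s):
    # s: sorted list of forbidden values from 1..n; returns a uniform element of {1..n} \ s
    k = random.randint(1, n - len(s))       # pick the k-th allowed value
    idx = bisect.bisect_right(s, k)         # how many forbidden values are <= k
    k += idx
    while idx < len(s) and s[idx] <= k:     # account for forbidden values we pass or land on
        k += 1
        idx += 1
    return k

For n = 100 and s = [1, 4, 5, 22] this returns 6 for an initial draw of 3 and 7 for a draw of 4, matching the examples above.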
If s is not sorted then we can do the following. Create an array of bits of size s. Generate k. Parse s and do two things: 1) count the number of elements < k, call this r; 2) at the same time, set the i'th bit to 1 if k+i is in s (0-indexed, so if k is in s then the first bit is set).
Now, increment k a number of times equal to r plus the number of set bits in the array with an index <= the number of times incremented.
E.g., n = 100, s = {1,4,5,22}, and our random number is 4. So our approach should return the fourth element of [2,3,6,7,...,21,23,24,...,100] which is 7. We parse s and 1) note that 1 element is below 4 (r=1), and 2) set our array to [1, 1, 0, 0]. We increment once for r=1 and an additional two times for the two set bits, ending up at 7.
This is O(s) time, O(s) space.
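And a Python sketch of the unsorted variant (again my own code; the loop condition mirrors the "index <= the number of times incremented" rule above):

import random

def sample_not_in_unsorted(n, S):
    # S: an unsorted collection of distinct forbidden values from 1..n
    s = len(S)
    k = random.randint(1, n - s)
    bits = [0] * s
    r = 0
    for x in S:                  # single pass over S
        if x < k:
            r += 1
        elif x - k < s:          # k <= x < k + s
            bits[x - k] = 1
    inc = r
    i = 0
    while i <= inc and i < s:    # keep counting forbidden values we pass or land on
        inc += bits[i]
        i += 1
    return k + inc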
This is an O(1) solution with O(s) initial setup that works by mapping each non-allowed number > s to an allowed number <= s.
Let S be the set of non-allowed values, S(i), where i = [1 .. s] and s = |S|.
Here's a two part algorithm. The first part constructs a hash table based only on S in O(s) time, the second part finds the random value k ∈ {1..n}, k ∉ S in O(1) time, assuming we can generate a uniform random number in a contiguous range in constant time. The hash table can be reused for new random values and also for new n (assuming S ⊂ { 1 .. n } still holds of course).
To construct the hash H, first set j = 1. Then iterate over S(i), the elements of S; they do not need to be sorted. If S(i) > s, first increment j until j ∉ S, then add the key-value pair (S(i), j) to the hash table and increment j once more.
To find a random value k, first generate a uniform random value in the range s + 1 to n, inclusive. If k is a key in H, then set k = H(k). I.e., we do at most one hash lookup to ensure k is not in S.
Python code to generate the hash:
def substitute(S):
    H = dict()
    j = 1
    for s in S:
        if s > len(S):
            while j in S: j += 1
            H[s] = j
            j += 1
    return H
For the actual implementation to be O(s), one might need to convert S into something like a frozenset to ensure the test for membership is O(1), and also move the len(S) loop invariant out of the loop. Assuming the j in S test and the insertion into the hash (H[s] = j) are constant time, this should have complexity O(s).
The generation of a random value is simply:
import random

def myrand(n, s, H):
    k = random.randint(s + 1, n)
    return (H[k] if k in H else k)
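A quick usage check (my own example, assuming the substitute and myrand functions above): with n = 10 and S = {2, 3, 9, 10}, substitute(S) yields {9: 1, 10: 4}, so draws from 5..10 get remapped onto the allowed values 1 and 4.

S = frozenset({2, 3, 9, 10})
H = substitute(S)                                     # {9: 1, 10: 4}
samples = {myrand(10, len(S), H) for _ in range(1000)}
print(sorted(samples))                                # only values from {1, 4, 5, 6, 7, 8}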
If one is only interested in a single random value per S, then the algorithm can be optimized to improve the common case, while the worst case remains the same. This still requires S be in a hash table that allows for a constant time "element of" test.
def rand_not_in(n, S):
    k = random.randint(len(S) + 1, n)
    if k not in S: return k
    j = 1
    for s in S:
        if s > len(S):
            while j in S: j += 1
            if s == k: return j
            j += 1
Optimizations are: Only generate the mapping if the random value is in S. Don't save the mapping to a hash table. Short-circuit the mapping generation when the random value is found.
Actually, the rejection method seems like the practical approach.
Generate a number in 1...n and check whether it is forbidden; regenerate until the generated number is not forbidden.
The probability of a single rejection is p = s/n.
Thus the expected number of random number generations is 1 + p + p^2 + p^3 + ... which is 1/(1-p), which in turn is equal to n/(n-s).
Now, if s is much less than n, or even as large as n/2, this expected number is at most 2.
It would take s almost equal to n to make it infeasible in practice.
Multiply the expected time by log s if you use a tree-set to check whether the number is in the set, or by just 1 (expected value again) if it is a hash-set. So the average time is O(1) or O(log s) depending on the set implementation. There is also O(s) memory for storing the set, but unless the set is given in some special way, implicitly and concisely, I don't see how it can be avoided.
(Edit: As per comments, you do this only once for a given set.
If, additionally, we are out of luck, and the set is given as a plain array or list, not some fancier data structure, we get O(s) expected time with this approach, which still fits into the O(log n + s) requirement.)
If attacks against the unbounded algorithm are a concern (and only if they truly are), the method can include a fall-back algorithm for the cases when a certain fixed number of iterations didn't provide the answer.
Similarly to how IntroSort is QuickSort but falls back to HeapSort if the recursion depth gets too high (which is almost certainly a result of an attack resulting in quadratic QuickSort behavior).
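A minimal sketch of the plain rejection loop (without the fall-back), assuming the forbidden values sit in a hash set:

import random

def sample_rejection(n, forbidden):
    # forbidden: a set of values from 1..n with len(forbidden) < n
    while True:
        k = random.randint(1, n)
        if k not in forbidden:
            return k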
Find all numbers that are in the forbidden set and less than or equal to n-s. Call this array A.
Find all numbers that are not in the forbidden set and greater than n-s. Call this array B. This may be done in O(s) if the set is sorted.
Note that the lengths of A and B are equal, and create the mapping map[A[i]] = B[i].
Generate a number t up to n-s. If map[t] exists, return it; otherwise return t.
This will work with O(s) insertions into a map plus 1 lookup, which is either O(s) on average or O(s log s).
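A Python sketch of this mapping idea (my own code; it uses a hash set for the membership tests instead of a sorted set):

import random

def sample_with_swap_map(n, forbidden):
    # forbidden: a set of distinct values from 1..n, size s
    s = len(forbidden)
    A = [x for x in forbidden if x <= n - s]                        # forbidden values inside the sampling range
    B = [x for x in range(n - s + 1, n + 1) if x not in forbidden]  # allowed values above it; len(B) == len(A)
    remap = dict(zip(A, B))
    t = random.randint(1, n - s)
    return remap.get(t, t)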

Minimum sum that can't be obtained from a set

Given a set S of positive integers (whose elements need not be distinct), I need to find the minimal non-negative sum that can't be obtained from any subset of the given set.
Example: if S = {1, 1, 3, 7}, we can get 0 (S' = {}), 1 (S' = {1}), 2 (S' = {1, 1}), 3 (S' = {3}), 4 (S' = {1, 3}), 5 (S' = {1, 1, 3}), but we can't get 6.
Now we are given an array A consisting of N positive integers. There are M queries, each consisting of two integers Li and Ri describing the i'th query: we need to find this sum that can't be obtained from the array elements {A[Li], A[Li+1], ..., A[Ri-1], A[Ri]}.
I know how to find it by a brute force approach in O(2^n), but given 1 ≤ N, M ≤ 100,000 this can't be done.
So is there any efficient approach to do it?
Concept
Suppose we had an array of bool representing which numbers have been found so far (by way of summing).
For each number n we encounter in the ordered (increasing values) subset of S, we do the following:
For each existing True value at position i in numbers, we set numbers[i + n] to True
We set numbers[n] to True
With this sort of a sieve, we would mark all the found numbers as True, and iterating through the array when the algorithm finishes would find us the minimum unobtainable sum.
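A small Python sketch of this concept (my own illustration, bounded by the total sum instead of an infinite array; the refinement below removes the array entirely):

def min_unobtainable_sieve(S):
    limit = sum(S) + 1
    reachable = [False] * (limit + 1)
    reachable[0] = True                        # the empty subset gives 0
    for x in S:
        for i in range(limit - x, -1, -1):     # go downwards so each element is used at most once
            if reachable[i]:
                reachable[i + x] = True
    # the first gap is the minimum unobtainable sum
    return next(v for v in range(limit + 1) if not reachable[v])

For S = [1, 1, 3, 7] this returns 6, as in the question.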
Refinement
Obviously, we can't have a solution like this because the array would have to be infinite in order to work for all sets of numbers.
The concept could be improved by making a few observations. With an input of 1, 1, 3, the array becomes (in sequence):
(numbers represent true values)

after 1:        1
after 1, 1:     1 2
after 1, 1, 3:  1 2 3 4 5
An important observation can be made:
(3) For each next number, if the previous numbers had already been found it will be added to all those numbers. This implies that if there were no gaps before a number, there will be no gaps after that number has been processed.
For the next input of 7 we can assert that:
(4) Since the input set is ordered, there will be no number less than 7
(5) If there is no number less than 7, then 6 cannot be obtained
We can come to a conclusion that:
(6) the first gap represents the minimum unobtainable number.
Algorithm
Because of (3) and (6), we don't actually need the numbers array, we only need a single value, max to represent the maximum number found so far.
This way, if the next number n is greater than max + 1, then a gap would have been made, and max + 1 is the minimum unobtainable number.
Otherwise, max becomes max + n. If we've run through the entire S, the result is max + 1.
Actual code (C#, easily converted to C):
static int Calculate(int[] S)
{
    int max = 0;
    for (int i = 0; i < S.Length; i++)
    {
        if (S[i] <= max + 1)
            max = max + S[i];
        else
            return max + 1;
    }
    return max + 1;
}
It should run pretty fast, since it's obviously linear time (O(n)). Since the input to the function should be sorted, with quicksort this becomes O(n log n). I've managed to get results for M = N = 100000 on 8 cores in just under 5 minutes.
With an upper limit of 10^9 on the numbers, a radix sort could be used to approximate O(n) time for the sorting; however, this would still be way over 2 seconds because of the sheer number of sorts required.
But we can use the statistical probability of a 1 appearing to eliminate subsets before sorting. At the start, check if 1 exists in S; if not, then every query's result is 1 because it cannot be obtained.
Statistically, if we draw 10^5 random numbers from a range of 10^9 values, we have a 99.9% chance of not getting a single 1.
Before each sort, check if that subset contains 1; if not, then its result is 1.
With this modification, the code runs in 2 milliseconds on my machine. Here's that code on http://pastebin.com/rF6VddTx
This is a variation of the subset-sum problem, which is NP-Complete, but there is a pseudo-polynomial Dynamic Programming solution you can adopt here, based on the recursive formula:
f(S, i) = f(S - arr[i], i - 1) OR f(S, i - 1)
f(S, i) = false for S < 0
f(S, 0) = false for S > 0
f(0, i) = true
The recursive formula is basically an exhaustive search, each sum can be achieved if you can get it with element i OR without element i.
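A direct top-down transcription of that recurrence (my own Python sketch; the bottom-up table described next is the practical version):

from functools import lru_cache

def min_unobtainable(arr):
    arr = tuple(arr)

    @lru_cache(maxsize=None)
    def f(S, i):
        if S < 0:
            return False           # f(S, i) = false for S < 0
        if S == 0:
            return True            # f(0, i) = true
        if i == 0:
            return False           # no elements left and S > 0
        return f(S - arr[i - 1], i - 1) or f(S, i - 1)

    # smallest S that cannot be reached; fine for small inputs, the table below scales better
    S = 0
    while f(S, len(arr)):
        S += 1
    return S                        # e.g. min_unobtainable([1, 1, 3, 7]) == 6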
The dynamic programming is achieved by building a SUM+1 x n+1 table (where SUM is the sum of all elements, and n is the number of elements), and building it bottom-up.
Something like:
table <- (SUM+1) x (n+1) table
// init:
for each j from 0 to n:
    table[0][j] = true
for each i from 1 to SUM:
    table[i][0] = false
// fill the table:
for each i from 1 to SUM:
    for each j from 1 to n:
        if i < arr[j]:
            table[i][j] = table[i][j-1]
        else:
            table[i][j] = table[i-arr[j]][j-1] OR table[i][j-1]
Once you have the table, you need the smallest i such that for all j: table[i][j] = false
Complexity of solution is O(n*SUM), where SUM is the sum of all elements, but note that the algorithm can actually be trimmed after the required number was found, without the need to go on for the next rows, which are un-needed for the solution.

Algorithm to determine indices i..j of array A containing all the elements of another array B

I came across this question on an interview questions thread. Here is the question:
Given two integer arrays A [1..n] and
B[1..m], find the smallest window
in A that contains all elements of
B. In other words, find a pair < i , j >
such that A[i..j] contains B[1..m].
If A doesn't contain all the elements of
B, then i,j can be returned as -1.
The integers in A need not be in the same order as they are in B. If there is more than one smallest window (different windows of the same size), then it's enough to return one of them.
Example: A[1,2,5,11,2,6,8,24,101,17,8] and B[5,2,11,8,17]. The algorithm should return i = 2 (index of 5 in A) and j = 9 (index of 17 in A).
Now I can think of two variations.
Let's suppose that B has duplicates.
This variation doesn't consider the number of times each element occurs in B. It just checks for all the unique elements that occur in B and finds the smallest corresponding window in A that satisfies the above problem. For example, if A[1,2,4,5,7] and B[2,2,5], this variation doesn't bother about there being two 2's in B and just checks A for the unique integers in B namely 2 and 5 and hence returns i=1, j=3.
This variation accounts for duplicates in B. If there are two 2's in B, then it expects to see at least two 2's in A as well. If not, it returns -1,-1.
When you answer, please do let me know which variation you are answering. Pseudocode should do. Please mention space and time complexity if it is tricky to calculate it. Mention if your solution assumes array indices to start at 1 or 0 too.
Thanks in advance.
Complexity
Time: O((m+n)log m)
Space: O(m)
The following is provably optimal up to a logarithmic factor. (I believe the log factor cannot be got rid of, and so it's optimal.)
Variant 1 is just a special case of variant 2 with all the multiplicities being 1, after removing duplicates from B. So it's enough to handle the latter variant; if you want variant 1, just remove duplicates in O(m log m) time. In the following, let m denote the number of distinct elements in B. We assume m < n, because otherwise we can just return -1, in constant time.
For each index i in A, we will find the smallest index s[i] such that A[i..s[i]] contains B[1..m], with the right multiplicities. The crucial observation is that s[i] is non-decreasing, and this is what allows us to do it in amortised linear time.
Start with i=j=1. We will keep a tuple (c[1], c[2], ... c[m]) of the number of times each element of B occurs, in the current window A[i..j]. We will also keep a set S of indices (a subset of 1..m) for which the count is "right" (i.e., k for which c[k]=1 in variant 1, or c[k] = <the right number> in variant 2).
So, for i=1, starting with j=1, increment each c[A[j]] (if A[j] was an element of B), check if c[A[j]] is now "right", and add or remove j from S accordingly. Stop when S has size m. You've now found s[1], in at most O(n log m) time. (There are O(n) j's, and each set operation took O(log m) time.)
Now for computing successive s[i]s, do the following. Increment i, decrement c[A[i]], update S accordingly, and, if necessary, increment j until S has size m again. That gives you s[i] for each i. At the end, report the (i,s[i]) for which s[i]-i was smallest.
Note that although it seems that you might be performing up to O(n) steps (incrementing j) for each i, the second pointer j only moves to the right: so the total number of times you can increment j is at most n. (This is amortised analysis.) Each time you increment j, you might perform a set operation that takes O(log m) time, so the total time is O(n log m). The space required was for keeping the tuple of counts, the set of elements of B, the set S, and some constant number of other variables, so O(m) in all.
There is an obvious O(m+n) lower bound, because you need to examine all the elements. So the only question is whether we can prove the log factor is necessary; I believe it is.
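For concreteness, a short Python sketch of the two-pointer scheme just described (my own code; 0-indexed, handling variant 2 with multiplicities; for variant 1, deduplicate B first):

from collections import Counter

def smallest_window(A, B):
    need = Counter(B)
    missing = len(need)               # distinct values whose count is not yet "right"
    have = Counter()
    best = None
    j = 0
    for i in range(len(A)):
        # extend the right end until A[i..j-1] contains all of B (with multiplicities)
        while missing and j < len(A):
            x = A[j]
            if x in need:
                have[x] += 1
                if have[x] == need[x]:
                    missing -= 1
            j += 1
        if missing:
            break                      # no window starting at i (or later) can work
        if best is None or (j - 1 - i) < (best[1] - best[0]):
            best = (i, j - 1)
        # drop A[i] before advancing the left end
        x = A[i]
        if x in need:
            if have[x] == need[x]:
                missing += 1
            have[x] -= 1
    return best if best is not None else (-1, -1)

On the question's example, smallest_window([1,2,5,11,2,6,8,24,101,17,8], [5,2,11,8,17]) returns (2, 9), matching i = 2, j = 9 in the problem statement.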
Here is the solution I thought of (but it's not very neat).
I am going to illustrate it using the example in the question.
Let A[1,2,5,11,2,6,8,24,101,17,8] and B[5,2,11,8,17]
Sort B. (So B = [2,5,8,11,17].) This step takes O(m log m).
Allocate an array C of size n. Iterate through the elements of A, binary searching for each in the sorted B; if it is found, enter its "index in sorted B + 1" in C. If it is not found, enter -1. After this step,
A = [1 , 2, 5, 11, 2, 6, 8, 24, 101, 17, 8] (no changes, quoting for ease).
C = [-1, 1, 2, 4 , 1, -1, 3, -1, -1, 5, 3]
Time: O(n log m), Space: O(n).
Find the smallest window in C that has all the numbers from 1 to m. For finding the window, I can think of two general directions:
a. A bit oriented approach where in I set the bit corresponding to each position and finally check by some kind of ANDing.
b. Create another array D of size m, go through C and when I encounter p in C, increment D[p]. Use this for finding the window.
Please leave comments regarding the general approach as such, as well as for 3a and 3b.
My solution:
a. Create a hash table H with m keys, one for each value in B. Each key in H maps to a dynamic array of sorted indices: the indices in A whose values equal that key. This takes O(n) time: we go through each index j in A, and if the key A[j] exists in H (an O(1) check), we append j to the list of indices that H[A[j]] maps to.
At this point we have 'binned' n elements into m bins. However, total storage is just O(n).
b. The 2nd part of the algorithm involves maintaining a ‘left’ index and a ‘right’ index for each list in H. Let's create two arrays of size m called L and R that contain these values. Initially, L[i] points at the first index in list i and R[i] at the last (see the worked example at the end).
We also keep track of the “best” minimum window.
We then iterate over the following actions on L and R which are inherently greedy:
i. In each iteration, we compute the minimum and maximum values in L and R.
For L, Lmax - Lmin is the window and for R, Rmax - Rmin is the window. We update the best window if one of these windows is better than the current best window. We use a min heap to keep track of the minimum element in L and a max heap to keep track of the largest element in R. These take O(m*log(m)) time to build.
ii. From a ‘greedy’ perspective, we want to take the action that will minimize the window size in each L and R. For L it intuitively makes sense to increment the minimum index, and for R, it makes sense to decrement the maximum index.
We want to increment the array position for the minimum value until it is larger than the 2nd smallest element in L, and similarly, we want to decrement the array position for the largest value in R until it is smaller than the 2nd largest element in R.
Next, we make a key observation:
If L[i] is the minimum value in L and R[i] is less than the 2nd smallest element in L, ie, if R[i] were to still be the minimum value in L if L[i] were replaced with R[i], then we are done. We now have the “best” index in list i that can contribute to the minimum window. Also, all the other elements in R cannot contribute to the best window since their L values are all larger than L[i]. Similarly if R[j] is the maximum element in R and L[j] is greater than the 2nd largest value in R, we are also done by setting R[j] = L[j]. Any other index in array i to the left of L[j] has already been accounted for as have all indices to the right of R[j], and all indices between L[j] and R[j] will perform poorer than L[j].
Otherwise, we simply increment the array position L[i] until it is larger than the 2nd smallest element in L and decrement array position R[j] (where R[j] is the max in R) until it is smaller than the 2nd largest element in R. We compute the windows and update the best window if one of the L or R windows is smaller than the best window. We can do a Fibonacci search to optimally do the increment / decrement. We keep incrementing L[i] using Fibonacci increments until we are larger than the 2nd largest element in L. We can then perform binary search to get the smallest element L[i] that is larger than the 2nd largest element in L, similar for the set R. After the increment / decrement, we pop the largest element from the max heap for R and the minimum element for the min heap for L and insert the new values of L[i] and R[j] into the heaps. This is an O(log(m)) operation.
Step ii. would terminate when Lmin can’t move any more to the right or Rmax can’t move any more to the left (as the R/L values are the same). Note that we can have scenarios in which L[i] = R[i] but if it is not the minimum element in L or the maximum element in R, the algorithm would still continue.
Runtime analysis:
a. Creation of the hash table takes O(n) time and O(n) space.
b. Creation of heaps: O(m*log(m)) time and O(m) space.
c. The greedy iterative algorithm is a little harder to analyze. Its runtime is really bounded by the distribution of elements. Worst case, we cover all the elements in each array in the hash table. For each element, we perform an O(log(m)) heap update.
Worst case runtime is hence O(n*log(m)) for the iterative greedy algorithm. In the best case, we discover very fast that L[i] = R[i] for the minimum element in L or the maximum element in R…run time is O(1)*log(m) for the greedy algorithm!
Average case seems really hard to analyze. What is the average “convergence” of this algorithm to the minimum window. If we were to assume that the Fibonacci increments / binary search were to help, we could say we only look at m*log(n/m) elements (every list has n/m elements) in the average case. In that case, the running time of the greedy algorithm would be m*log(n/m)*log(m).
Total running time
Best case: O(n + m*log(m) + log(m)) time = O(n) assuming m << n
Average case: O(n + m*log(m) + m*log(n/m)*log(m)) time = O(n) assuming m << n.
Worst case: O(n + n*log(m) + m*log(m)) = O(n*log(m)) assuming m << n.
Space: O(n + m) (hashtable and heaps) always.
Edit: Here is a worked out example:
A[5, 1, 1, 5, 6, 1, 1, 5]
B[5, 6]
H:
{
5 => {1, 4, 8}
6 => {5}
}
Greedy Algorithm:
L => {1, 1}
R => {3, 1}
Iteration 1:
a. Lmin = 1 (since H{5}[1] < H{6}[1]), Lmax = 5. Window: 5 - 1 + 1 = 5
Increment Lmin pointer, it now becomes 2.
L => {2, 1}
Rmin = H{6}[1] = 5, Rmax = H{5}[3] = 8. Window = 8 - 5 + 1 = 4. Best window so far = 4 (less than 5 computed above).
We also note the indices in A (5, 8) for the best window.
Decrement Rmax, it now becomes 2 and the value is 4.
R => {2, 1}
b. Now, Lmin = 4 (H{5}[2]) and the index i in L is 1. Lmax = 5 (H{6}[1]) and the index in L is 2.
We can't increment Lmin since L[1] = R[1] = 2. Thus we just compute the window now.
The window = Lmax - Lmin + 1 = 2 which is the best window so far.
Thus, the best window in A = (4, 5).
#include <map>

struct Pair {
    int i;
    int j;
};

Pair
find_smallest_subarray_window(int *A, size_t n, int *B, size_t m)
{
    Pair p;
    p.i = -1;
    p.j = -1;
    // key is array value, value is array index
    std::map<int, int> map;
    size_t count = 0;
    int i;
    int j;
    for (i = 0; i < n; ++i) {
        for (j = 0; j < m; ++j) {
            if (A[i] == B[j]) {
                if (map.find(A[i]) == map.end()) {
                    map.insert(std::pair<int, int>(A[i], i));
                    ++count; // one more distinct element of B is now covered
                } else {
                    // findSmallestVal / findLargestVal are assumed to return the smallest /
                    // largest index stored in the map (the current window bounds)
                    int start = findSmallestVal(map);
                    int end = findLargestVal(map);
                    int oldLength = end - start;
                    int oldIndex = map[A[i]];
                    map[A[i]] = i;
                    int _start = findSmallestVal(map);
                    int _end = findLargestVal(map);
                    int newLength = _end - _start;
                    if (newLength > oldLength) {
                        // revert back
                        map[A[i]] = oldIndex;
                    }
                }
            }
        }
        if (count == m) {
            break;
        }
    }
    if (count == m) { // only report a window if every element of B was found
        p.i = findSmallestVal(map);
        p.j = findLargestVal(map);
    }
    return p;
}

Resources