I have been asked this question in an interview.
There are 3n+1 numbers: n distinct values each occur three times, and one value occurs only once. How do we find the unique number in linear time, i.e. O(n)? The numbers are not sorted.
Note that if there were 2n+1 numbers, n of which occur in pairs, we could just XOR all the numbers to find the unique one. The interviewer told me that it can be done with bit manipulation.
Count the number of times that each bit occurs in the set of 3n+1 numbers.
Reduce each bit count modulo 3.
What is left is the bit pattern of the single number.
Oh, dreamzor (above) has beaten me to it.
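For reference, a minimal sketch of that counting approach in C++ (assuming 32-bit, non-negative inputs; findUnique is an illustrative name):

#include <cstdio>
#include <vector>
using namespace std;

int findUnique(const vector<int>& a) {
    int count[32] = {0};
    for (int x : a)
        for (int b = 0; b < 32; b++)
            count[b] += (x >> b) & 1;       // count how often each bit occurs
    int result = 0;
    for (int b = 0; b < 32; b++)
        if (count[b] % 3)                   // reduce each bit count modulo 3
            result |= 1 << b;               // what is left is the unique number
    return result;
}

int main() {
    vector<int> a = {5, 7, 5, 5, 9, 7, 7};  // 9 appears once, the rest three times
    printf("%d\n", findUnique(a));          // prints 9
}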
You can invent a ternary XOR (call it XOR3) which operates in base 3 instead of base 2 and simply adds the digits in each position modulo 3 (where the usual XOR adds binary digits modulo 2).
Then, if you XOR3 all the numbers this way (converting them to base 3 first), you will be left with the unique number (in base 3, so you will need to convert it back).
The complexity is not exactly linear, though, because the conversions from/to base 3 require additional logarithmic time. However, if the range of the numbers is constant then the conversion time is also constant.
Code in C++ (intentionally verbose):
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

vector<int> to_base3(int num) {
    vector<int> base3;
    for (; num > 0; num /= 3) {
        base3.push_back(num % 3);
    }
    return base3;
}

int from_base3(const vector<int> &base3) {
    int num = 0;
    for (int i = 0, three = 1; i < (int)base3.size(); ++i, three *= 3) {
        num += base3[i] * three;
    }
    return num;
}

int find_unique(const vector<int> &a) {
    vector<int> unique_base3(20, 0); // up to 3^20
    for (int num : a) {
        vector<int> num_base3 = to_base3(num);
        for (int i = 0; i < (int)num_base3.size(); ++i) {
            unique_base3[i] = (unique_base3[i] + num_base3[i]) % 3;
        }
    }
    int unique_num = from_base3(unique_base3);
    return unique_num;
}

int main() {
    vector<int> rands { 1287318, 172381, 5144, 566546, 7123 };
    vector<int> a;
    for (int r : rands) {
        for (int i = 0; i < 3; ++i) {
            a.push_back(r);
        }
    }
    a.push_back(13371337); // unique number
    random_shuffle(a.begin(), a.end());
    int unique_num = find_unique(a);
    cout << unique_num << endl;
}
byte[] oneCount = new byte[32];
int[] test = {1,2,3,1,5,2,9,9,3,1,2,3,9};
// count how often each bit position is set, keeping every count modulo 3
for (int n : test) {
    for (int bit = 0; bit < 32; bit++) {
        if (((n >> bit) & 1) == 1) {
            oneCount[bit]++;
            oneCount[bit] = (byte) (oneCount[bit] % 3);
        }
    }
}
// the remaining bits form the unique number
int result = 0;
int x = 1;
for (int bit = 0; bit < 32; bit++) {
    result += oneCount[bit] * x;
    x = x << 1;
}
System.out.print(result);
Looks like while I was coding, others gave the main idea
I need to count the subsequences of length 4 of a string of length n which are divisible by 9.
For example, if the input string is 9999,
then cnt = 1.
My approach is essentially brute force and takes O(n^3). Is there a better approach?
If you want to check whether a number is divisible by 9, you'd better look here.
I will describe the method in short:
checkDividedByNine(String pNum):
    If pNum.length < 1
        return false
    If pNum.length == 1
        return toInt(pNum) == 9
    Sum = 0
    For c in pNum:
        Sum += toInt(c)
    return checkDividedByNine(toString(Sum))
So you can reduce the running time to less than O(n^3).
EDIT:
If you need a very fast algorithm, you can use pre-processing in order to store, for each possible 4-digit number, whether it is divisible by 9. (It will cost you 10000 entries in memory.)
EDIT 2:
Better approach: you can use dynamic programming:
For a string S of length N:
D[i,j,k] = the number of subsequences of length j in the string S[i..N] whose value modulo 9 equals k.
Where 0 <= k <= 8, 1 <= j <= 4, 1 <= i <= N.
D[i,1,k] = simply count the number of elements in S[i..N] that = k(mod 9).
D[N,j,k] = if j==1 and (S[N] modulo 9) == k, return 1. Otherwise, 0.
D[i,j,k] = D[i+1,j,k] + D[i+1,j-1,(k - S[i] + 9) modulo 9] (a sum rather than a max, since we are counting).
And you return D[1,4,0].
You get a table of size N x 4 x 9.
Thus, the overall running time, assuming calculating modulo takes O(1), is O(n).
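Here is a minimal sketch of this DP as a left-to-right (prefix) version of the same recurrence; count[j][k] is the number of subsequences of length j seen so far whose digit sum is congruent to k modulo 9, and the function name is an illustrative choice:

#include <cstdio>
#include <cstring>
#include <string>
using namespace std;

long long countDivisibleBy9(const string& s) {
    long long count[5][9];
    memset(count, 0, sizeof(count));
    count[0][0] = 1;                          // the empty subsequence
    for (char ch : s) {
        int d = ch - '0';
        for (int j = 4; j >= 1; j--)          // go downwards so each digit is used at most once
            for (int k = 0; k < 9; k++)
                count[j][(k + d) % 9] += count[j - 1][k];
    }
    return count[4][0];                       // length-4 subsequences with digit sum = 0 (mod 9)
}

int main() {
    printf("%lld\n", countDivisibleBy9("9999"));   // prints 1
    printf("%lld\n", countDivisibleBy9("99990"));  // prints 5
}

For long strings the counts should of course be kept modulo 10^9+7 (or whatever the problem requires), exactly as in the table-based code further down.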
Assuming that the subsequence has to consist of consecutive digits, you can scan from left to right, keeping track of what order the last 4 digits read are in. That way, you can do a linear scan and just apply divisibility rules.
If the digits are not necessarily consecutive, then you can do some finagling with lookup tables. The idea is that you can create a 3D array named table such that table[i][j][k] is the number of sums of i digits up to index j such that the sum leaves a remainder of k when divided by 9. The table itself has size 45n (i goes from 0 to 4, j goes from 0 to n-1, and k goes from 0 to 8).
For the recursion, each table[i][j][k] entry relies on table[i-1][j-1][x] and table[i][j-1][x] for all x from 0 to 8. Since each entry update takes constant time (at least relative to n), that should get you an O(n) runtime.
How about this one:
/* NOTE: the following holds true if the subsequences consist of digits in contiguous locations */
public int countOccurrences (String s) {
    int count = 0;
    int len = s.length();
    String subs = null;
    int sum;
    if (len < 4)
        return 0;
    else {
        for (int i = 0; i < len - 3; i++) {
            subs = s.substring(i, i + 4);
            sum = 0;
            for (int j = 0; j <= 3; j++) {
                sum += Integer.parseInt(String.valueOf(subs.charAt(j)));
            }
            if (sum % 9 == 0)
                count++;
        }
        return count;
    }
}
Here is the complete working code for the above problem, based on the lookup-table approach discussed above:
#include <stdio.h>
#include <string.h>

/* reduce a two-digit sum to a single digit (digital-root step) */
int fun(int h)
{
    return (h/10 + h%10);
}

int main()
{
    int t;
    scanf("%d", &t);
    int i, T;
    for (T = 0; T < t; T++)
    {
        char str[10001];
        scanf("%s", str);
        int len = strlen(str);
        int arr[len][5][10];
        memset(arr, 0, sizeof(int)*(10*5*len));
        int j, k;
        for (j = 0; j < len; j++)
        {
            int y;
            y = (str[j]-48)%10;
            arr[j][1][y]++;
        }
        for (i = len-2; i >= 0; i--)      /* starting index of the suffix */
        {
            int temp[5][10];
            /* combine: subsequences using only position i plus those inside suffix i+1 */
            int a, b, c, d;
            for (a = 0; a <= 4; a++)
                for (b = 0; b <= 9; b++)
                    temp[a][b] = (arr[i][a][b] + arr[i+1][a][b]) % 1000000007;
            for (j = 1; j <= 4; j++)      /* length of the part taken from the suffix */
            {
                for (k = 0; k <= 9; k++)  /* its reduced digit value */
                {
                    if (arr[i+1][j][k] != 0)
                    {
                        for (c = 1; c <= 4; c++)
                        {
                            for (d = 0; d <= 9; d++)
                            {
                                if (arr[i][c][d] != 0)
                                {
                                    int h, r;
                                    r = j + c;
                                    if (r > 4)
                                        continue;
                                    h = fun(k + d);
                                    temp[r][h] = (temp[r][h] + (long long)arr[i][c][d] * arr[i+1][j][k]) % 1000000007;
                                }
                            }
                        }
                    }
                }
            }
            /* copy back from the temp array */
            for (a = 0; a <= 4; a++)
                for (b = 0; b <= 9; b++)
                    arr[i][a][b] = temp[a][b];
        }
        /* length-4 subsequences whose reduced digit value is 9, i.e. divisible by 9 */
        printf("%d\n", arr[0][4][9] % 1000000007);
    }
    return 0;
}
Given a number N and an array of integers (all numbers less than 2^15; A, the size of the array, is 100000):
Find the maximum XOR value of N and an integer from the array.
Q is the number of queries (50000), and start, stop give the range in the array.
Input:
A Q
a1 a2 a3 ...
N start stop
Output:
Maximum XOR value of N and an integer in the array within the range specified.
E.g. Input
15 2 (2 is the number of queries)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
10 6 10 (Query 1)
10 6 10 (Query 2)
Output:
13
13
Code:
for (int i = start - 1; i < stop; i++) {
    int t = no[i] ^ a;
    if (maxxor < t)
        maxxor = t;
}
cout << maxxor << endl;
I need an algorithm 10-100 times faster than this. Sorting is too expensive. I have also tried binary trees and bit manipulation.
What about a 2x-3x improvement? Is that possible through optimization?
It is possible to develop a faster algorithm.
Let's call the bits of N: a[0], a[1], ..., a[15]; e.g. if N = 13 = 00000000 00001101 (in binary), then a[0] = a[1] = ... = a[11] = 0, a[12] = 1, a[13] = 1, a[14] = 0, a[15] = 1.
The main idea of the algorithm is the following: if a[0] == 1, then the best possible answer has this bit zeroed. If a[0] == 0, then the best possible answer has a one at this position.
So first you check whether you have some number with the desired first bit. If yes, you keep only the numbers with this bit. If not, you take its opposite.
Then you process the other bits in the same manner. E.g. if a[0] == 1, a[1] == 0, you first check whether there is a number beginning with zero; if yes, then you check whether there is a number beginning with 01. If nothing begins with zero, then you check whether there is a number beginning with 11. And so on...
So you need a fast algorithm to answer the following query: is there a number beginning with the bits ... in the range start, stop?
One possibility: construct a trie from the binary representations of the numbers. In each node store all positions where this prefix occurs in the array (sorted). Then answering this query is a simple walk down the trie. To check whether there is a suitable prefix in the start, stop range, do a binary search over the stored array in a node.
This leads to an algorithm with complexity O(lg^2 N) per query, which is faster.
Here is the code, it hasn't been tested much, may contain bugs:
#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;
class TrieNode {
public:
TrieNode* next[2];
vector<int> positions;
TrieNode() {
next[0] = next[1] = NULL;
}
bool HasNumberInRange(int start, int stop) {
vector<int>::iterator it = lower_bound(
positions.begin(), positions.end(), start);
if (it == positions.end()) return false;
return *it < stop;
}
};
void AddNumberToTrie(int number, int index, TrieNode* base) {
TrieNode* cur = base;
// Go through all binary digits from most significant
for (int i = 14; i >= 0; i--) {
int digit = 0;
if ((number & (1 << i)) != 0) digit = 1;
cur->positions.push_back(index);
if (cur->next[digit] == NULL) {
cur->next[digit] = new TrieNode;
}
cur = cur->next[digit];
}
cur->positions.push_back(index);
}
int FindBestNumber(int a, int start, int stop, TrieNode* base) {
int best_num = 0;
TrieNode* cur = base;
for (int i = 14; i >= 0; i--) {
int digit = 1;
if ((a & (1 << i)) != 0) digit = 0;
if (cur->next[digit] == NULL ||
!cur->next[digit]->HasNumberInRange(start, stop))
digit = 1 - digit;
best_num *= 2;
best_num += digit;
cur = cur->next[digit];
}
return best_num;
}
int main() {
int n; scanf("%d", &n);
int q; scanf("%d", &q);
TrieNode base;
for (int i = 0; i < n; i++) {
int x; scanf("%d", &x);
AddNumberToTrie(x, i, &base);
}
for (int i = 0; i < q; i++) {
int a, start, stop;
// Finds biggest i, such that start <= i < stop and XOR with a is as big as possible
// Base index is 0
scanf("%d %d %d", &a, &start, &stop);
printf("%d\n", FindBestNumber(a, start, stop, &base)^a);
}
}
Your algorithm runs in linear time (O(stop-start), or O(N) for the full range). If you can't assume that the input array already has a special ordering, you probably won't be able to get it any faster.
You can only try to optimize the overhead within the loop, but that surely won't give you a significant increase in speed.
edit:
It seems you have to search the same list multiple times, but with different start and end indexes.
That means that pre-sorting the array is also out of the question, because it would change the order of the elements, and start and stop would become meaningless.
What you could try to do is avoid processing the same range twice if one query fully contains an already scanned range.
Or maybe try to consider all queries simultaneously while iterating through the array.
If you have multiple queries with the same range, you can build a tree with the numbers in that range like this:
Use a binary tree of depth 15 where the numbers are at the leaves and a number corresponds to the path that leads to it (left is 0 and right is 1).
e.g. for 0 1 4 7:
      / \
     /  / \
    / \ |  |
    0 1 4  7
Then if your query is N = n_1 n_2 n_3 ... n_15, where n_1 is the first bit of N, n_2 the second, ...
Go from the root to a leaf, and when you have to make a choice: if n_i = 0 (where i is the depth of the current node) go to the right, else go to the left. When you reach a leaf, it is the leaf that maximizes the XOR with N.
Original Answer for one query:
Your algorithm is optimal, you need to check all numbers in the array.
There may be a way to have a slightly faster program by using programming tricks, but it has no link with the algorithm.
I just came up with a solution that requires O(A logM) time and space for preprocessing and O(log^2 M) time for each query. M is the range of the integers, 2^15 in this problem.
For the
1st..Ath numbers, (Tree Group 1)
1st..(A/2)th numbers, (A/2)th..Ath numbers, (Tree Group 2)
1st..(A/4)th numbers, (A/4)th..(A/2)th numbers, (A/2)th..(3A/4)th, (3A/4)th..Ath, (Tree Group 3)
......., (Tree Group 4)
.......,
......., (Tree Group logA)
construct a binary trie of the binary representations of all numbers in the range. There would be about 2A trees, but all trees aggregated will have no more than O(A logM) elements: a tree that includes x numbers has at most x*logM nodes, and each number is included in only one tree per Tree Group.
For each query, you can split the range into several ranges (no more than 2 logA) that we have already processed into a tree. For each such tree, we can find the maximum XOR value in O(logM) time (explained below). That is O(logA * logM) time in total.
How to find the maximum in a tree? Simply prefer the 1 child if the current digit of N is 0, otherwise prefer the 0 child. If the preferred child exists, continue to that child, otherwise go to the other.
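Below is a hedged sketch of this idea, realized as a segment tree in which every node stores a binary trie of the numbers in its range; the names (SegTrie, Node) and the 15-bit assumption are illustrative. Note that this simple version spends O(A logA logM) time and space on preprocessing rather than the O(A logM) described above, because every element is inserted into the trie of each of the O(logA) nodes covering it.

#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;

const int BITS = 15;                 // all numbers are < 2^15

struct Trie {
    int next[2] = {-1, -1};
};

struct Node {
    vector<Trie> trie;               // trie over the numbers in this node's range
    Node() { trie.push_back(Trie()); }
    void insert(int x) {
        int cur = 0;
        for (int b = BITS - 1; b >= 0; b--) {
            int d = (x >> b) & 1;
            if (trie[cur].next[d] < 0) {
                trie[cur].next[d] = (int)trie.size();
                trie.push_back(Trie());
            }
            cur = trie[cur].next[d];
        }
    }
    int maxXor(int a) const {        // greedy walk: prefer the opposite bit of a
        int cur = 0, best = 0;
        for (int b = BITS - 1; b >= 0; b--) {
            int want = 1 - ((a >> b) & 1);
            if (trie[cur].next[want] < 0) want = 1 - want;
            best = (best << 1) | want;
            cur = trie[cur].next[want];
        }
        return best ^ a;             // the XOR value itself
    }
};

struct SegTrie {
    int n;
    vector<Node> nodes;
    SegTrie(const vector<int>& a) : n((int)a.size()), nodes(4 * a.size()) { build(1, 0, n - 1, a); }
    void build(int v, int lo, int hi, const vector<int>& a) {
        for (int i = lo; i <= hi; i++) nodes[v].insert(a[i]);
        if (lo == hi) return;
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid, a);
        build(2 * v + 1, mid + 1, hi, a);
    }
    // maximum of (x ^ a[i]) over l <= i <= r (0-based, inclusive)
    int query(int v, int lo, int hi, int l, int r, int x) const {
        if (r < lo || hi < l) return 0;
        if (l <= lo && hi <= r) return nodes[v].maxXor(x);
        int mid = (lo + hi) / 2;
        return max(query(2 * v, lo, mid, l, r, x),
                   query(2 * v + 1, mid + 1, hi, l, r, x));
    }
    int query(int l, int r, int x) const { return query(1, 0, n - 1, l, r, x); }
};

int main() {
    vector<int> a = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
    SegTrie st(a);
    printf("%d\n", st.query(5, 9, 10));   // elements 6..10 (1-based), N = 10: prints 13
}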
Yeah, or you could just calculate it and not waste time thinking about how to do it better.
int maxXor(int l, int r) {
    int highest_xor = 0;
    int base = l;
    int tbase = l;
    int val = 0;
    int variance = 0;
    do
    {
        while (tbase + variance <= r)
        {
            val = base ^ (tbase + variance);
            if (val > highest_xor)
            {
                highest_xor = val;
            }
            variance += 1;
        }
        base += 1;
        variance = 0;
    } while (base <= r);
    return highest_xor;
}
It's easy enough to make a simple sieve:
for (int i = 2; i <= N; i++) {
    if (sieve[i] == 0) {
        cout << i << " is prime" << endl;
        for (int j = i; j <= N; j += i) {
            sieve[j]++;
        }
    }
    cout << i << " has " << sieve[i] << " distinct prime factors\n";
}
But what about when N is very large and I can't hold that kind of array in memory? I've looked up segmented sieve approaches and they seem to involve finding primes up until sqrt(N) but I don't understand how it works. What if N is very large (say 10^18)?
The basic idea of a segmented sieve is to choose the sieving primes less than the square root of n, choose a reasonably large segment size that nevertheless fits in memory, and then sieve each of the segments in turn, starting with the smallest. At the first segment, the smallest multiple of each sieving prime that is within the segment is calculated, then multiples of the sieving prime are marked as composite in the normal way; when all the sieving primes have been used, the remaining unmarked numbers in the segment are prime. Then, for the next segment, for each sieving prime you already know the first multiple in the current segment (it was the multiple that ended the sieving for that prime in the prior segment), so you sieve on each sieving prime, and so on until you are finished.
The size of n doesn't matter, except that a larger n will take longer to sieve than a smaller n; the size that matters is the size of the segment, which should be as large as convenient (say, the size of the primary memory cache on the machine).
You can see a simple implementation of a segmented sieve here. Note that a segmented sieve will be very much faster than O'Neill's priority-queue sieve mentioned in another answer; if you're interested, there's an implementation here.
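For reference, here is a minimal self-contained sketch of the scheme just described; the limit N and segment size SEG below are illustrative, not tuned:

#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    const long long N = 1000000;      // find the primes in [2, N]
    const long long SEG = 32768;      // segment size; pick something cache-friendly

    // 1. simple sieve up to sqrt(N) to collect the sieving primes
    long long root = (long long)sqrt((double)N) + 1;
    vector<bool> is_prime(root + 1, true);
    vector<long long> primes;
    for (long long i = 2; i <= root; i++) {
        if (!is_prime[i]) continue;
        primes.push_back(i);
        for (long long j = i * i; j <= root; j += i) is_prime[j] = false;
    }

    // 2. sieve each segment [lo, hi] in turn
    long long count = 0;
    for (long long lo = 2; lo <= N; lo += SEG) {
        long long hi = min(lo + SEG - 1, N);
        vector<bool> seg(hi - lo + 1, true);
        for (long long p : primes) {
            if (p * p > hi) break;
            // smallest multiple of p inside [lo, hi] that still needs marking
            long long start = max(p * p, ((lo + p - 1) / p) * p);
            for (long long j = start; j <= hi; j += p) seg[j - lo] = false;
        }
        for (long long i = lo; i <= hi; i++)
            if (seg[i - lo]) count++;  // i is prime; print or store it here
    }
    printf("%lld primes up to %lld\n", count, N);  // 78498 primes up to 1000000
}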
EDIT: I wrote this for a different purpose, but I'll show it here because it might be useful:
Though the Sieve of Eratosthenes is very fast, it requires O(n) space. That can be reduced to O(sqrt(n)) for the sieving primes plus O(1) for the bitarray by performing the sieving in successive segments. At the first segment, the smallest multiple of each sieving prime that is within the segment is calculated, then multiples of the sieving prime are marked composite in the normal way; when all the sieving primes have been used, the remaining unmarked numbers in the segment are prime. Then, for the next segment, the smallest multiple of each sieving prime is the multiple that ended the sieving in the prior segment, and so the sieving continues until finished.
Consider the example of sieve from 100 to 200 in segments of 20. The five sieving primes are 3, 5, 7, 11 and 13. In the first segment from 100 to 120, the bitarray has ten slots, with slot 0 corresponding to 101, slot k corresponding to 100+2k+1, and slot 9 corresponding to 119. The smallest multiple of 3 in the segment is 105, corresponding to slot 2; slots 2+3=5 and 5+3=8 are also multiples of 3. The smallest multiple of 5 is 105 at slot 2, and slot 2+5=7 is also a multiple of 5. The smallest multiple of 7 is 105 at slot 2, and slot 2+7=9 is also a multiple of 7. And so on.
Function primesRange takes arguments lo, hi and delta; lo and hi must be even, with lo < hi, and lo must be greater than sqrt(hi). The segment size is twice delta. Ps is a linked list containing the sieving primes less than sqrt(hi), with 2 removed since even numbers are ignored. Qs is a linked list containing the offset into the sieve bitarray of the smallest multiple in the current segment of the corresponding sieving prime. After each segment, lo advances by twice delta, so the number corresponding to an index i of the sieve bitarray is lo + 2i + 1.
function primesRange(lo, hi, delta)
    function qInit(p)
        return (-1/2 * (lo + p + 1)) % p
    function qReset(p, q)
        return (q - delta) % p
    sieve := makeArray(0..delta-1)
    ps := tail(primes(sqrt(hi)))
    qs := map(qInit, ps)
    while lo < hi
        for i from 0 to delta-1
            sieve[i] := True
        for p,q in ps,qs
            for i from q to delta step p
                sieve[i] := False
        qs := map(qReset, ps, qs)
        for i,t from 0,lo+1 to delta-1,hi step 1,2
            if sieve[i]
                output t
        lo := lo + 2 * delta
When called as primesRange(100, 200, 10), the sieving primes ps are [3, 5, 7, 11, 13]; qs is initially [2, 2, 2, 10, 8] corresponding to smallest multiples 105, 105, 105, 121 and 117, and is reset for the second segment to [1, 2, 6, 0, 11] corresponding to smallest multiples 123, 125, 133, 121 and 143.
You can see this program in action at http://ideone.com/iHYr1f. And in addition to the links shown above, if you are interested in programming with prime numbers I modestly recommend this essay at my blog.
It's just that we are doing the sieve we already have in segments.
The basic idea is, let's say we have to find the prime numbers between 85 and 100.
We have to apply the traditional sieve, but in the fashion described below:
So we take the first prime number 2, divide the starting number by 2 (85/2) and, rounding down, we get p=42; now multiply by 2 again and we get p=84. From here onwards start adding 2 till the last number. So what we have done is remove all the multiples of 2 (86, 88, 90, 92, 94, 96, 98, 100) in the range.
We take the next prime number 3, divide the starting number by 3 (85/3) and, rounding down, we get p=28; now multiply by 3 again and we get p=84. From here onwards start adding 3 till the last number. So what we have done is remove all the multiples of 3 (87, 90, 93, 96, 99) in the range.
Take the next prime number, 5, and so on.
Keep on doing the above steps. You can get the prime numbers (2, 3, 5, 7, ...) by using the traditional sieve up to sqrt(n), and then use them for the segmented sieve.
There's a version of the Sieve based on priority queues that yields as many primes as you request, rather than all of them up to an upper bound. It's discussed in the classic paper "The Genuine Sieve of Eratosthenes" and googling for "sieve of eratosthenes priority queue" turns up quite a few implementations in various programming languages.
If someone would like to see a C++ implementation, here is mine:
#include <cmath>
#include <iostream>
#include <memory>
#include <vector>

void sito_delta( int delta, std::vector<int> &res)
{
std::unique_ptr<int[]> results(new int[delta+1]);
for(int i = 0; i <= delta; ++i)
results[i] = 1;
int pierw = sqrt(delta);
for (int j = 2; j <= pierw; ++j)
{
if(results[j])
{
for (int k = 2*j; k <= delta; k+=j)
{
results[k]=0;
}
}
}
for (int m = 2; m <= delta; ++m)
if (results[m])
{
res.push_back(m);
std::cout<<","<<m;
}
};
void sito_segment(int n,std::vector<int> &fiPri)
{
int delta = sqrt(n);
if (delta>10)
{
sito_segment(delta,fiPri);
// COmpute using fiPri as primes
// n=n,prime = fiPri;
std::vector<int> prime=fiPri;
int offset = delta;
int low = offset;
int high = offset * 2;
while (low < n)
{
if (high >=n ) high = n;
int mark[offset+1];
for (int s=0;s<=offset;++s)
mark[s]=1;
for(int j = 0; j< prime.size(); ++j)
{
int lowMinimum = (low/prime[j]) * prime[j];
if(lowMinimum < low)
lowMinimum += prime[j];
for(int k = lowMinimum; k<=high;k+=prime[j])
mark[k-low]=0;
}
for(int i = low; i <= high; i++)
if(mark[i-low])
{
fiPri.push_back(i);
std::cout<<","<<i;
}
low=low+offset;
high=high+offset;
}
}
else
{
std::vector<int> prime;
sito_delta(delta, prime);
//
fiPri = prime;
//
int offset = delta;
int low = offset;
int high = offset * 2;
// Process segments one by one
while (low < n)
{
if (high >= n) high = n;
int mark[offset+1];
for (int s = 0; s <= offset; ++s)
mark[s] = 1;
for (int j = 0; j < prime.size(); ++j)
{
// find the minimum number in [low..high] that is
// multiple of prime[i] (divisible by prime[j])
int lowMinimum = (low/prime[j]) * prime[j];
if(lowMinimum < low)
lowMinimum += prime[j];
//Mark multiples of prime[i] in [low..high]
for (int k = lowMinimum; k <= high; k+=prime[j])
mark[k-low] = 0;
}
for (int i = low; i <= high; i++)
if(mark[i-low])
{
fiPri.push_back(i);
std::cout<<","<<i;
}
low = low + offset;
high = high + offset;
}
}
};
int main()
{
std::vector<int> fiPri;
sito_segment(1013,fiPri);
}
Based on Swapnil Kumar's answer I wrote the following algorithm in C. It was built with mingw32-make.exe.
#include<math.h>
#include<stdio.h>
#include<stdlib.h>
int main()
{
const int MAX_PRIME_NUMBERS = 5000000;//The number of prime numbers we are looking for
long long *prime_numbers = malloc(sizeof(long long) * MAX_PRIME_NUMBERS);
prime_numbers[0] = 2;
prime_numbers[1] = 3;
prime_numbers[2] = 5;
prime_numbers[3] = 7;
prime_numbers[4] = 11;
prime_numbers[5] = 13;
prime_numbers[6] = 17;
prime_numbers[7] = 19;
prime_numbers[8] = 23;
prime_numbers[9] = 29;
const int BUFFER_POSSIBLE_PRIMES = 29 * 29;//Because the greatest prime number we have is 29 in the 10th position so I started with a block of 841 numbers
int qt_calculated_primes = 10;//10 because we initialized the array with the ten first primes
int possible_primes[BUFFER_POSSIBLE_PRIMES];//Will store the booleans to check valid primes
long long iteration = 0;//Used as multiplier to the range of the buffer possible_primes
int i;//Simple counter for loops
while(qt_calculated_primes < MAX_PRIME_NUMBERS)
{
for (i = 0; i < BUFFER_POSSIBLE_PRIMES; i++)
possible_primes[i] = 1;//set the number as prime
int biggest_possible_prime = sqrt((iteration + 1) * BUFFER_POSSIBLE_PRIMES);
int k = 0;
long long prime = prime_numbers[k];//First prime to be used in the check
while (prime <= biggest_possible_prime)//We don't need to check primes bigger than the square root
{
for (i = 0; i < BUFFER_POSSIBLE_PRIMES; i++)
if ((iteration * BUFFER_POSSIBLE_PRIMES + i) % prime == 0)
possible_primes[i] = 0;
if (++k == qt_calculated_primes)
break;
prime = prime_numbers[k];
}
for (i = 0; i < BUFFER_POSSIBLE_PRIMES; i++)
if (possible_primes[i])
{
if ((qt_calculated_primes < MAX_PRIME_NUMBERS) && ((iteration * BUFFER_POSSIBLE_PRIMES + i) != 1))
{
prime_numbers[qt_calculated_primes] = iteration * BUFFER_POSSIBLE_PRIMES + i;
printf("%d\n", prime_numbers[qt_calculated_primes]);
qt_calculated_primes++;
} else if (!(qt_calculated_primes < MAX_PRIME_NUMBERS))
break;
}
iteration++;
}
return 0;
}
It sets a maximum count of prime numbers to be found, then an array is initialized with known prime numbers like 2, 3, 5...29. So we make a buffer that will store the segments of possible primes; this buffer can't be greater than the square of the greatest initial prime, which in this case is 29.
I'm sure there are plenty of optimizations that could improve the performance, like parallelizing the segment analysis and skipping numbers that are multiples of 2, 3 and 5, but it serves as an example of low memory consumption.
A number is prime if none of the smaller prime numbers divides it. Since we iterate over the prime numbers in order, we have already marked all numbers that are divisible by at least one of the prime numbers as divisible. Hence if we reach a cell and it is not marked, then it isn't divisible by any smaller prime number and therefore has to be prime.
Remember these points:
// Generate all prime numbers up to sqrt(R)
// Create an array of size (R-L+1), with all elements set to true (true: prime, false: composite)
#include<bits/stdc++.h>
using namespace std;
#define MAX 100001
vector<int>* sieve(){
bool isPrime[MAX];
for(int i=0;i<MAX;i++){
isPrime[i]=true;
}
for(int i=2;i*i<MAX;i++){
if(isPrime[i]){
for(int j=i*i;j<MAX;j+=i){
isPrime[j]=false;
}
}
}
vector<int>* primes = new vector<int>();
primes->push_back(2);
for(int i=3;i<MAX;i+=2){
if(isPrime[i]){
primes->push_back(i);
}
}
return primes;
}
void printPrimes(long long l, long long r, vector<int>*&primes){
bool isprimes[r-l+1];
for(int i=0;i<=r-l;i++){
isprimes[i]=true;
}
for(int i=0;primes->at(i)*(long long)primes->at(i)<=r;i++){
int currPrimes=primes->at(i);
//just smaller or equal value to l
long long base =(l/(currPrimes))*(currPrimes);
if(base<l){
base=base+currPrimes;
}
//mark all multiplies within L to R as false
for(long long j=base;j<=r;j+=currPrimes){
isprimes[j-l]=false;
}
//there may be a case where base is itself a prime number
if(base==currPrimes){
isprimes[base-l]= true;
}
}
for(int i=0;i<=r-l;i++){
if(isprimes[i]==true){
cout<<i+l<<endl;
}
}
}
int main(){
vector<int>* primes=sieve();
int t;
cin>>t;
while(t--){
long long l,r;
cin>>l>>r;
printPrimes(l,r,primes);
}
return 0;
}
I know of a couple of routines that work as follows:
Xn+1 = Routine(Xn, max)
For example, something like a LCG generator:
Xn+1 = (a*Xn + c) mod m
There isn't enough parameterization in this generator to generate every sequence.
Dream Function:
Xn+1 = Routine(Xn, max, permutation number)
This routine, parameterized by an index into the set of all permutations, would return the next number in the sequence. The sequence may be arbitrarily large (so storing the array and using factoradic numbers is impractical).
Failing that, does anyone have pointers to similar functions that are either stateless or have a constant amount of state for arbitrary 'max', such that they will iterate a shuffled list?
There are n! permutations of n elements. Storing which one you're using requires at least log(n!) / log(2) bits. By Stirling's approximation, this takes roughly n log(n) / log (2) bits.
Explicitly storing one index takes log(n) / log(2) bits. Storing all n, as in an array of indices takes n times as many, or again n log(n) / log(2). Information-theoretically, there is no better way than explicitly storing the permutation.
In other words, the index you pass in of what permutation in the set you want takes the same asymptotic storage space as just writing out the permutation. If, for example, you limit the index of the permutation to 32 bit values, you can only handle permutations of up to 12 elements. 64 bit indices only get you up to 20 elements.
As the index takes the same space as the permutation would, either change your representation to just use the permutation directly, or accept unpacking into an array of size N.
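A quick back-of-the-envelope check of those limits (the program just sums log2(k) to approximate log2(n!)):

#include <cstdio>
#include <cmath>

int main() {
    double bits = 0.0;              // running value of log2(n!)
    for (int n = 2; n <= 21; n++) {
        bits += log2((double)n);    // log2(n!) = log2(2) + log2(3) + ... + log2(n)
        printf("n = %2d   log2(n!) ~ %6.1f bits\n", n, bits);
    }
    // log2(12!) ~ 28.8 < 32 but log2(13!) ~ 32.5 > 32, and
    // log2(20!) ~ 61.1 < 64 but log2(21!) ~ 65.5 > 64,
    // matching the 12-element and 20-element limits mentioned above.
}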
From my response to another question:
It is actually possible to do this in space proportional to the number of elements selected, rather than the size of the set you're selecting from, regardless of what proportion of the total set you're selecting. You do this by generating a random permutation, then selecting from it like this:

Pick a block cipher, such as TEA or XTEA. Use XOR folding to reduce the block size to the smallest power of two larger than the set you're selecting from. Use the random seed as the key to the cipher. To generate an element n in the permutation, encrypt n with the cipher. If the output number is not in your set, encrypt that. Repeat until the number is inside the set. On average you will have to do less than two encryptions per generated number. This has the added benefit that if your seed is cryptographically secure, so is your entire permutation.

I wrote about this in much more detail here.
Of course, there's no guarantee that every permutation can be generated (and depending on your block size and key size, that may not even be possible), but the permutations you can get are highly random (if they weren't, it wouldn't be a good cipher), and you can have as many of them as you want.
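Here is a hedged sketch of that cycle-walking idea. Instead of TEA plus XOR folding it uses a tiny Feistel network sized to the smallest power of two at or above the set size, which keeps the mapping a genuine bijection; the round function, the key schedule and all names (TinyPermutation, encryptOnce, element) are illustrative assumptions, not the exact construction from the quoted answer, and certainly not cryptographic advice.

#include <cstdint>
#include <cstdio>

struct TinyPermutation {
    uint64_t size;     // the set is {0, 1, ..., size-1}
    int half_bits;     // each Feistel half is this many bits
    uint32_t key[4];   // per-round keys derived from the seed

    TinyPermutation(uint64_t n, uint64_t seed) : size(n) {
        int bits = 1;
        while ((1ULL << bits) < n) bits++;
        if (bits % 2) bits++;                     // we need an even number of bits
        half_bits = bits / 2;
        for (int i = 0; i < 4; i++)               // crude key schedule from the seed
            key[i] = (uint32_t)(seed >> (16 * i)) * 2654435761u + i;
    }

    // Round function on a half-block; any deterministic mixing works here.
    uint32_t F(uint32_t x, uint32_t k) const {
        x = (x + k) * 2654435761u;
        x ^= x >> 15;
        return x & (uint32_t)((1ULL << half_bits) - 1);
    }

    // One pass of the 4-round Feistel network: a bijection on [0, 2^(2*half_bits)).
    uint64_t encryptOnce(uint64_t v) const {
        uint32_t mask = (uint32_t)((1ULL << half_bits) - 1);
        uint32_t L = (uint32_t)(v >> half_bits), R = (uint32_t)(v & mask);
        for (int r = 0; r < 4; r++) {
            uint32_t nl = R, nr = L ^ F(R, key[r]);
            L = nl;
            R = nr;
        }
        return ((uint64_t)L << half_bits) | R;
    }

    // Element i of the permutation: cycle-walk until we land inside the set.
    uint64_t element(uint64_t i) const {
        uint64_t v = encryptOnce(i);
        while (v >= size) v = encryptOnce(v);     // only a few encryptions on average
        return v;
    }
};

int main() {
    TinyPermutation p(10, 123456789ULL);          // a shuffled ordering of 0..9
    for (uint64_t i = 0; i < 10; i++)
        printf("%llu ", (unsigned long long)p.element(i));
    printf("\n");
}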
If you want a function that takes up less stack space, then you should look into using an iterative version rather than a recursive function. You can also use a data structure like a TreeMap, have it stored on disk, and read it on an as-needed basis.
X(n+1) = Routine(Xn, max, permutation number)
for (i = n; i > 0; i--)
{
    int temp = Map.lookup(i);
    otherfun(temp, max, perm);
}
Is it possible to index a set of permutations without previously computing and storing the whole thing in memory? I tried something like this before and didn't find a solution - I think it is impossible (in the mathematical sense).
Disclaimer: I may have misunderstood your question...
Code that uses an iterator interface. Time complexity is O(n^2). Space complexity has an overhead of: a copy of n (log n bits), an iteration variable (log n bits), keeping track of n-i (log n bits), a copy of the current value (log n bits), a copy of p (n log n bits), creation of the next value (log n bits), and a bit set of used values (n bits). You can't avoid an overhead of n log n bits. Timewise, this is also O(n^2), for setting the bits. This can be reduced a bit, but at the cost of using a decorated tree to store the used values.
This can be altered to use arbitrary-precision integers and bit sets by using calls to the appropriate libraries instead, and then the above bounds will actually start to kick in, rather than being capped at N=8, portably (an int can be the same as a short, and as small as 16 bits). 9! = 362880 > 65536 = 2^16
#include <math.h>
#include <stdio.h>
typedef signed char index_t;
typedef unsigned int permutation;
static index_t permutation_next(index_t n, permutation p, index_t value)
{
permutation used = 0;
for (index_t i = 0; i < n; ++i) {
index_t left = n - i;
index_t digit = p % left;
p /= left;
for (index_t j = 0; j <= digit; ++j) {
if (used & (1 << j)) {
digit++;
}
}
used |= (1 << digit);
if (value == -1) {
return digit;
}
if (value == digit) {
value = -1;
}
}
/* value not found */
return -1;
}
static void dump_permutation(index_t n, permutation p)
{
index_t value = -1;
fputs("[", stdout);
value = permutation_next(n, p, value);
while (value != -1) {
printf("%d", value);
value = permutation_next(n, p, value);
if (value != -1) {
fputs(", ", stdout);
}
}
puts("]");
}
static int factorial(int n)
{
int prod = 1;
for (int i = 1; i <= n; ++i) {
prod *= i;
}
return prod;
}
int main(int argc, char **argv)
{
const index_t n = 4;
const permutation max = factorial(n);
for (permutation p = 0; p < max; ++p) {
dump_permutation(n, p);
}
}
Code that unpacks a permutation index into an array, with a certain mapping from index to permutation. There are loads of others, but this one is convenient.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
typedef unsigned char index_t;
typedef unsigned int permutation;
static void permutation_to_array(index_t *indices, index_t n, permutation p)
{
index_t used = 0;
for (index_t i = 0; i < n; ++i) {
index_t left = n - i;
index_t digit = p % left;
for (index_t j = 0; j <= digit; ++j) {
if (used & (1 << j)) {
digit++;
}
}
used |= (1 << digit);
indices[i] = digit;
p /= left;
}
}
static void dump_array(index_t *indices, index_t n)
{
fputs("[", stdout);
for (index_t i = 0; i < n; ++i) {
printf("%d", indices[i]);
if (i != n - 1) {
fputs(", ", stdout);
}
}
puts("]");
}
static int factorial(int n)
{
int prod = 1;
for (int i = 1; i <= n; ++i) {
prod *= i;
}
return prod;
}
int main(int argc, char **argv)
{
const index_t n = 4;
const permutation max = factorial(n);
index_t *indices = malloc(n * sizeof (*indices));
for (permutation p = 0; p < max; ++p) {
permutation_to_array(indices, n, p);
dump_array(indices, n);
}
free(indices);
}