In an attempt to solve the 3rd problem on Project Euler (https://projecteuler.net/problem=3), I decided to implement Pollard's rho algorithm (at least part of it; I'm planning on adding cycle detection later). The odd thing is that it works for numbers such as 82123 (factor = 41) and 16843009 (factor = 257). However, when I try the Project Euler number 600851475143, I get 71, while the largest prime factor is 6857. Here's my implementation (sorry for the wall of code and lack of type casting):
#include <iostream>
#include <cmath>
#include <cstdlib>   // std::abs(long long), system
#include <vector>
using namespace std;

long long int gcd(long long int a, long long int b);
long long int f(long long int x);

int main()
{
    long long int i, x, y, N, factor, iterations = 0;
    vector<long long int> factors;
    factor = 1;
    x = 631;
    N = 600851475143;
    factors.push_back(x);
    while (factor == 1)
    {
        y = f(x);
        y = y % N;
        factors.push_back(y);
        cout << "\niteration " << iterations << ":\t";
        i = 0;
        while (factor == 1 && (i < factors.size() - 1))
        {
            factor = gcd(abs(factors.back() - factors[i]), N);
            cout << factor << " ";
            i++;
        }
        x = y;
        iterations++;
    }
    system("PAUSE");
    return 0;
}

long long int gcd(long long int a, long long int b)
{
    long long int remainder;
    do
    {
        remainder = a % b;
        a = b;
        b = remainder;
    } while (remainder != 0);
    return a;
}

long long int f(long long int x)
{
    //x = x*x * 1024 + 32767;
    // note: x*x overflows long long once x approaches N (about 6e11),
    // so the sequence is not exactly x^2+1 mod N for large x
    return x*x + 1;
}
Pollard's rho algorithm guarantees nothing. It doesn't guarantee to find the largest factor. It doesn't guarantee that any factor it finds is prime. It doesn't even guarantee to find a factor at all. The rho algorithm is probabilistic; it will probably find a factor, but not necessarily. Since your function returns a factor, it works.
That said, your implementation isn't very good. It's not necessary to store all previous values of the function, and compute the gcd to each every time through the loop. Here is pseudocode for a better version of the function:
function rho(n)
    for c from 1 to infinity
        h, t := 1, 1
        repeat
            h := (h*h+c) % n   # the hare runs ...
            h := (h*h+c) % n   # ... twice as fast
            t := (t*t+c) % n   # as the tortoise
            g := gcd(t-h, n)
        while g == 1
        if g < n then return g
This function returns a single factor of n, which may be either prime or composite. It stores only two values of the random sequence, and stops when it finds a cycle (when g == n), restarting with a different random sequence (by incrementing c). Otherwise it keeps going until it finds a factor, which shouldn't take too long as long as you limit the input to 64-bit integers. Find more factors by applying rho to the remaining cofactor, or if the factor that is found is composite, stopping when all the prime factors have been found.
By the way, you don't need Pollard's rho algorithm to solve Project Euler #3; simple trial division is sufficient. This algorithm finds all the prime factors of a number, from which you can extract the largest:
function factors(n)
    f := 2
    while f * f <= n
        while n % f == 0
            print f
            n := n / f
        f := f + 1
    if n > 1 then print n
Related
The Farey sequence of order n is the sequence of completely reduced fractions between 0 and 1 which, when in lowest terms, have denominators less than or equal to n, arranged in order of increasing size. Detailed explanation here.
Problem
The problem is: given n and k, where n is the order of the sequence and k is the element index, can we find that particular element of the sequence? For example, the answer for (n=5, k=6) is 1/2.
Lead
There are many less-than-optimal solutions available, but I am looking for a near-optimal one. One such algorithm is discussed here, but I am unable to understand its logic and hence unable to apply it to examples.
Question
Can someone please explain the solution in more detail, preferably with an example?
Thank you.
I've read the method provided in your link, and the accepted C++ solution to it. Let me post them, for reference:
Editorial Explanation
Several less-than-optimal solutions exist. Using a priority queue, one can iterate through the fractions (generating them one by one) in O(K log N) time. Using a fancier math relation, this can be reduced to O(K). However, neither of these solutions obtains many points, because the number of fractions (and thus K) is quadratic in N.
The "good" solution is based on meta-binary search. To construct this solution, we need the following subroutine: given a fraction A/B (which is not necessarily irreducible), find how many fractions from the Farey sequence are less than this fraction. Suppose we had this subroutine; then the algorithm works as follows:
1. Determine a number X such that the answer is between X/N and (X+1)/N; such a number can be determined by binary searching the range 1...N, thus calling the subroutine O(log N) times.
2. Make a list of all fractions A/B in the range X/N...(X+1)/N. For any given B, there is at most one A in this range, and it can be determined trivially in O(1).
3. Determine the appropriate order statistic in this list (doing this in O(N log N) by sorting is good enough).
It remains to show how we can construct the desired subroutine. We will show how it can be implemented in O(N log N), thus giving an O(N log^2 N) algorithm overall. Let us denote by C[j] the number of irreducible fractions i/j which are less than X/N. The algorithm is based on the following observation: C[j] = floor(X*j/N) - Sum(C[D], where D divides j, D < j). A direct implementation, which tests whether any D is a divisor, yields a quadratic algorithm. A better approach, inspired by Eratosthenes' sieve, is the following: at step j, we know C[j], and we subtract it from all multiples of j. The running time of the subroutine becomes O(N log N).
Relevant Code
#include <cassert>
#include <algorithm>
#include <fstream>
#include <iostream>
#include <vector>
using namespace std;
const int kMaxN = 2e5;
typedef int int32;
typedef long long int64_x;
// #define int __int128_t
// #define int64 __int128_t
typedef long long int64;
int64 count_less(int a, int n) {
  vector<int> counter(n + 1, 0);
  for (int i = 2; i <= n; i += 1) {
    counter[i] = min(1LL * (i - 1), 1LL * i * a / n);
  }
  int64 result = 0;
  for (int i = 2; i <= n; i += 1) {
    for (int j = 2 * i; j <= n; j += i) {
      counter[j] -= counter[i];
    }
    result += counter[i];
  }
  return result;
}
int32 main() {
  // ifstream cin("farey.in");
  // ofstream cout("farey.out");
  int64_x n, k; cin >> n >> k;
  assert(1 <= n);
  assert(n <= kMaxN);
  assert(1 <= k);
  assert(k <= count_less(n, n));
  int up = 0;
  for (int p = 29; p >= 0; p -= 1) {
    if ((1 << p) + up > n)
      continue;
    if (count_less((1 << p) + up, n) < k) {
      up += (1 << p);
    }
  }
  k -= count_less(up, n);
  vector<pair<int, int>> elements;
  for (int i = 1; i <= n; i += 1) {
    int b = i;
    // find a such that up/n < a/b and a/b <= (up+1)/n
    int a = 1LL * (up + 1) * b / n;
    if (!(1LL * up * b < 1LL * a * n))
      continue;
    if (!(1LL * a * n <= 1LL * (up + 1) * b))
      continue;
    if (__gcd(a, b) != 1)
      continue;
    elements.push_back({a, b});
  }
  sort(elements.begin(), elements.end(),
       [](const pair<int, int>& lhs, const pair<int, int>& rhs) -> bool {
         return 1LL * lhs.first * rhs.second < 1LL * rhs.first * lhs.second;
       });
  cout << (int64_x)elements[k - 1].first << ' ' << (int64_x)elements[k - 1].second << '\n';
  return 0;
}
Basic Methodology
The above editorial explanation results in the following simplified version. Let me start with an example.
Let's say, we want to find 7th element of Farey Sequence with N = 5.
We start by writing a subroutine, as described in the explanation, that gives us the "k" value: how many reduced fractions of the Farey sequence come before a given fraction (the given fraction itself may or may not be reduced).
So, take your F5 sequence:
k = 0, 0/1
k = 1, 1/5
k = 2, 1/4
k = 3, 1/3
k = 4, 2/5
k = 5, 1/2
k = 6, 3/5
k = 7, 2/3
k = 8, 3/4
k = 9, 4/5
k = 10, 1/1
If we can find a function that finds the count of the previous reduced fractions in Farey Sequence, we can do the following:
int64 k_count_2 = count_less(2, 5); // result = 4
int64 k_count_3 = count_less(3, 5); // result = 6
int64 k_count_4 = count_less(4, 5); // result = 9
This function is written in the accepted solution. It uses the exact methodology explained in the last paragraph of the editorial.
As you can see, the count_less() function generates the same k values as in our hand written list.
We know the values of the reduced fractions for k = 4, 6, 9 using that function. What about k = 7? As explained in the editorial, we will list all the reduced fractions in the range X/N to (X+1)/N, here with X = 3 and N = 5.
Using the function in the accepted solution (it's near the bottom), we list and sort the reduced fractions.
After that we will rearrange our k values, as in to fit in our new array as such:
k = -, 0/1
k = -, 1/5
k = -, 1/4
k = -, 1/3
k = -, 2/5
k = -, 1/2
k = -, 3/5 <-|
k = 0, 2/3 | We list and sort the possible reduced fractions
k = 1, 3/4 | in between these numbers
k = -, 4/5 <-|
k = -, 1/1
(That's why there is the line k -= count_less(up, n); in the code: it remaps the k values.)
(And we also subtract one more during indexing, i.e. elements[k - 1] in the final cout statement. This just selects the right position in the generated array.)
So, for our new re-mapped k values, for N = 5 and k = 7 (original k), our result is 2/3.
(We select the value k = 0, in our new map)
If you compile and run the accepted solution, it will give you this:
Input: 5 7 (Enter)
Output: 2 3
I believe this is the basic point of the editorial and accepted solution.
Given a number 1 <= n <= 10^18, how can I factorise it in least time complexity?
There are many posts on the internet addressing how you can find prime factors but none of them (at least from what I've seen) state their benefits, say in a particular situation.
I use Pollard's rho algorithm in addition to the sieve of Eratosthenes:
Using the sieve, find all prime numbers up to 10^7, and then divide n by these primes as many times as possible.
Now use Pollard's rho algorithm to find the remaining prime factors until n is equal to 1.
My Implementation:
#include <iostream>
#include <vector>
#include <cstdio>
#include <ctime>
#include <cmath>
#include <cstdlib>
#include <algorithm>
#include <string>
using namespace std;
typedef unsigned long long ull;
typedef long double ld;
typedef pair <ull, int> pui;
#define x first
#define y second
#define mp make_pair
bool prime[10000005];
vector <ull> p;
void initprime(){
    prime[2] = 1;
    for(int i = 3 ; i < 10000005 ; i += 2){
        prime[i] = 1;
    }
    for(int i = 3 ; i * i < 10000005 ; i += 2){
        if(prime[i]){
            for(int j = i * i ; j < 10000005 ; j += 2 * i){
                prime[j] = 0;
            }
        }
    }
    for(int i = 0 ; i < 10000005 ; ++i){
        if(prime[i]){
            p.push_back((ull)i);
        }
    }
}
ull modularpow(ull base, ull exp, ull mod){
    ull ret = 1;
    base %= mod;
    while(exp){
        if(exp & 1){
            // 128-bit intermediate: a plain 64-bit product overflows for mod near 10^18
            ret = (__uint128_t)ret * base % mod;
        }
        exp >>= 1;
        base = (__uint128_t)base * base % mod;
    }
    return ret;
}

ull gcd(ull x, ull y){
    while(y){
        ull temp = y;
        y = x % y;
        x = temp;
    }
    return x;
}

ull pollardrho(ull n){
    // rand() must be seeded once by the caller; reseeding with time(NULL)
    // here would make the recursive retries below pick the same x and c
    // within the same second
    if(n == 1)
        return n;
    ull x = (rand() % (n - 2)) + 2;
    ull y = x;
    ull c = (rand() % (n - 1)) + 1;
    ull d = 1;
    while(d == 1){
        x = (modularpow(x, 2, n) + c) % n;
        y = (modularpow(y, 2, n) + c) % n;
        y = (modularpow(y, 2, n) + c) % n;
        d = gcd(x > y ? x - y : y - x, n); // abs() is wrong on unsigned values: x - y wraps around
        if(d == n){
            return pollardrho(n); // cycle detected: retry with a fresh x and c
        }
    }
    return d;
}
int main ()
{
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    srand(time(NULL)); // seed the generator once here, not inside pollardrho
    initprime();
    ull n;
    cin >> n;
    vector <pui> o;
    for(vector <ull>::iterator i = p.begin() ; i != p.end() ; ++i){
        ull t = *i;
        if(!(n % t)){
            o.push_back(mp(t, 0));
        }
        while(!(n % t)){
            n /= t;
            o[o.size() - 1].y++;
        }
    }
    while(n > 1){
        if(n < 10000005 && prime[n]){ // leftover cofactor is a small prime: record it and stop
            o.push_back(mp(n, 1));
            break;
        }
        ull u = pollardrho(n);
        o.push_back(mp(u, 0));
        while(!(n % u)){
            n /= u;
            o[o.size() - 1].y++;
        }
        // caveat: if the leftover cofactor is a prime above 10^7, pollardrho
        // cannot split it and will not terminate; a primality test such as
        // Miller-Rabin is needed to detect that case first
    }
    for(vector <pui>::iterator i = o.begin() ; i != o.end() ; ++i){
        cout << i->x << "^" << i->y << "  ";
    }
    cout << endl;
    return 0;
}
Is there any faster way to factor such numbers? If possible, please explain why along with the source code.
Approach
Let's say you have a number n up to 10^18 that you want to prime factorise. Since this number can be as small as 1 and as big as 10^18, and can be prime as well as composite, this would be my approach:
Using Miller-Rabin primality testing, make sure that the number is composite.
Factorise n using primes up to 10^6, which can be found using the sieve of Eratosthenes.
Now the updated value of n has prime factors only above 10^6; since n can still be as big as 10^18, it is either 1, a prime, or the product of exactly two primes (not necessarily distinct).
Run Miller-Rabin again to ensure the number isn't prime.
Use Pollard rho algorithm to get one prime factor.
You have the complete factorisation now.
Let's look at the time complexity of the above approach:
Miller-Rabin takes O(log n)
The sieve of Eratosthenes takes O(n log log n)
The implementation of Pollard's rho I shared takes O(n^0.25)
Time Complexity
Step 2 takes maximum time which is equal to O(10^7), which is in turn the complexity of the above algorithm. This means you can find the factorisation within a second for almost all programming languages.
Space Complexity
Space is used only in the step 2 where sieve is implemented and is equal to O(10^6). Again, very practical for the purpose.
Implementation
Complete Code implemented in C++14. The code has a hidden bug. You can either reveal it in the next section, or skip towards the challenge ;)
Bug in the code
In line 105, iterate till i<=np. Otherwise, you may miss the cases where prime[np]=999983 is a prime factor
Challenge
Give me a value of n, if any, where the shared code results in wrong prime factorisation.
Bonus
How many such values of n exist?
Hint
For such value of n, assertion in Line 119 may fail.
Solution
Let's call P = 999983. All numbers of the form n = p*q*r, where p, q, r are primes >= P such that at least one of them is equal to P, will result in a wrong prime factorisation.
Bonus Solution
There are exactly four such numbers: {P0^3, P0^2*P1, P0^2*P2, P0*P1^2}, where P0 = P = 999983, P1 = next_prime(P0) = 1000003, and P2 = next_prime(P1) = 1000033.
The fastest solution for 64-bit inputs on modern processors is a small amount of trial division (the amount will differ, but something under 100 is common) followed by Pollard's Rho. You will need a good deterministic primality test using Miller-Rabin or BPSW, and an infrastructure to handle multiple factors (e.g. if a composite is split into more composites). For 32-bit you can optimize each of these things even more.
You will want a fast mulmod, as it is the core of Pollard's Rho, Miller-Rabin, and the Lucas test. Ideally this is done as a tiny assembler snippet.
Times should be under 1 millisecond to factor any 64-bit input. Significantly faster under 50 bits.
As shown by Ben Buhrow's spBrent implementation, algorithm P2'' from Brent's 1980 paper seems to be as fast as the other implementations I'm aware of. It uses Brent's improved cycle finding as well as the useful trick of delaying GCDs with the necessary added backtracking.
See this thread on Mersenneforum for some messy details and benchmarking of various solutions. I have a number of benchmarks of these and other implementations at different sizes, but haven't published anything (partly because there are so many ways to look at the data).
One of the really interesting things to come out of this was that SQUFOF, for many years believed to be the better solution for the high end of the 64-bit range, no longer is competitive. SQUFOF does have the advantage of only needing a fast perfect-square detector for best speed, which doesn't have to be in asm to be really fast.
I am trying to solve for the closest value of n when I am given the sum of the first n numbers.
Meaning: if the sum is 60, my n should be 10, as the sum of the first 10 numbers is 55; if I include 11, the sum becomes 66, exceeding the required sum.
int num=1, mysum = 0;
int givensum=60;
while (mysum < givensum) {
    mysum += num;
    num++;
}
cout<<num-1;
return 0;
The other way I can think of solving this is by solving for quadratic equation
n(n+1) / 2 = givensum and get n from it.
Is there any other way to solve this problem?
I don't think there is a better way than solving the quadratic equation. It is pretty straightforward:
n*(n+1)/2 = sum
n^2 + n - 2*sum = 0
Assuming ax^2 + bx + c = 0, we have a = 1, b = 1, c = -2*sum. Since we don't need the negative answer:
n = ( -b + sqrt(b^2 - 4ac) ) / 2a
This is the implementation:
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    int sum = 60;
    int a = 1;
    int b = 1;
    int c = -sum*2;
    double delta = b*b - 4*a*c;
    if ( delta >= 0 ){
        double x1 = -b + sqrt(delta);
        //double x2 = -b - sqrt(delta); // we don't need the negative answer
        x1 /= 2*a;
        //x2 /= 2*a;
        cout << x1 << endl;
    }
    else {
        cout << "no result";
    }
}
The result is a floating-point number; if you want the sum of n elements to be less than or equal to the input sum, you should round it down with the floor function.
Consider the function f(n) = n*(n+1)/2 which yields the sum of first n integers. This function is strictly increasing. So you can use binary search to find n when the value for f(n) is given to you:
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    int sum = 61;
    int low = 1, high = sum, mid;
    while ( low < high ){
        mid = ceil ( (low+high)/2.0 );
        int s = mid*(mid+1)/2;
        if ( s > sum ){
            high = mid-1;
        } else if ( s < sum ) {
            low = mid;
        } else {
            low = mid;
            break;
        }
    }
    cout << low << endl;
}
So we want to find an integer n such that (n+1)*n/2 <= y < (n+2)*(n+1)/2
Solving the quadratic equation f(x)=y, where f(x)=(x+1)*x/2 can be done with floating point arithmetic, then taking n as integer part of x.
But we don't really need floating point because we just want the integer part of the result, we can also do it with a Newton-Raphson iteration https://en.wikipedia.org/wiki/Newton%27s_method
The derivative of f(x) is f'(x) = x + 1/2. Not a good integer, but we can multiply everything by 2 and write the loop like this (this is Smalltalk, but that does not really matter):
Integer>>invSumFloor
    "Return the integer n such that (1 to: n) sum <= self < (1 to: n+1) sum"
    | guess delta y2 |
    y2 := self * 2.
    guess := 1 bitShift: y2 highBit + 1 // 2.
    [
        delta := ((guess + 1) * guess - y2) // (guess * 2 + 1).
        delta = 0 ]
            whileFalse: [ guess := guess - delta ].
    ^guess - 1
So we are iterating like this:
x(n+1) = x(n) - (2*f(x(n))-2*y)/(2*f'(x(n)))
But instead of taking an exact division, we use // which is the quotient rounded down to nearest integer.
Normally, we should test whether the guess is overestimated or not in the final stage.
But here, we arrange for the initial guess to be an overestimate of the result, and not too much of one, so that the guess always stays an overestimate. That way we can simply subtract 1 in the final stage. The implementation above uses the position of the highest bit as a rough initial x value.
We can then check the implementation on first ten thousand natural integers with:
(1 to: 10000) allSatisfy: [:y |
    | n sum |
    n := y invSumFloor.
    sum := (1 to: n) sum.
    (sum <= y and: [y < (sum + n + 1)])].
which answers true
What's nice with Smalltalk is that you can then try things like this:
80 factorial invSumFloor.
And get something like:
378337037695924775539900121166451777161332730835021256527654
Here you see that Newton-Raphson converges rapidly (7 iterations in the above example). This is very different from the initial naive iteration.
After the code breaks out of the while loop, mysum is the closest sum that is greater than your givensum. For the example you have given, the loop keeps running while mysum is less than 60; in its last iteration mysum becomes 66 and num becomes 12, and since 66 is not less than 60 the loop then stops. Therefore, you should print num-2:
cout<<num-2;
I was trying to solve a problem involving large factorials modulo a prime, and found the following algorithm in another's solution:
long long factMod (long long n, long long p)
{
    long long ans = 1;
    while (n > 1)
    {
        long long cur = 1;
        for (long long i = 1; i < p; i++)
        {
            cur = (cur * i) % p;
        }
        ans = (ans * modPow(cur, n/p, p)) % p;
        for (long long i = 1; i <= n % p; i++)
        {
            ans = (ans * i) % p;
        }
        n /= p;
    }
    return (ans % p);
}

long long nChooseK(long long n, long long k, long long p)
{
    int num_degree = get_degree(n, p) - get_degree(n - k, p);
    int den_degree = get_degree(k, p);
    if (num_degree > den_degree) { return 0; }
    long long nFact = factMod(n, p);
    long long kFact = factMod(k, p);
    long long nMinusKFact = factMod(n-k, p);
    long long ans = (((nFact * modPow(kFact, p - 2, p)) % p) * modPow(nMinusKFact, p - 2, p)) % p;
    return ans;
}
I know the basics of number theory but can't seem to figure out how this works.
The nChooseK function appears to use the definition of combination [n!/((n-k)! k!)] with the modular inverse computed using Fermat's little theorem to replace the division. However, according to one of the answers, the factMod function does not actually compute the factorial. If this is the case, how does the nChooseK function work?
Yes, n! ≡ 0 mod p if and only if n ≥ p, but factMod isn't computing n! mod p; it's computing n!/p^k mod p, where k is the exponent of p in the prime factorization of n!, perhaps for the purpose of computing a binomial coefficient. Iteration i (counting from 0) of the loop counts the contribution of those factors 1…n whose prime factorization includes p^i. The statement n /= p; yields the subproblem on the multiples of p.
The function get_degree(n, p) presumably returns the exponent of p in the prime factorization of n!. If get_degree(n, p) == get_degree(k, p) + get_degree(n - k, p), then the factors of p in numerator and denominator exactly cancel, and we can use factMod to account for the other factors. Otherwise, the number of combinations is divisible by p, so we return 0.
Since (p-1)! ≡ -1 mod p by Wilson's theorem, the first inner loop is redundant.
I calculated the permutation count as:
nPr = n!/(n-r)!
where n and r are given, with 1 <= n, r <= 100.
I find p = (n-r)+1, and with
for(i=n;i>=p;i--)
I multiply digit by digit and store the result in an array.
But how do I calculate nCr = n!/[r! * (n-r)!] for the same range?
I did this using recursion as follows:
#include <stdio.h>

typedef unsigned long long i64;
i64 dp[101][101]; // 101 so that n = 100 is a valid index

i64 nCr(int n, int r)
{
    if(n==r) return dp[n][r] = 1;
    if(r==0) return dp[n][r] = 1;
    if(r==1) return dp[n][r] = (i64)n;
    if(dp[n][r]) return dp[n][r];
    return dp[n][r] = nCr(n-1,r) + nCr(n-1,r-1);
}

int main()
{
    int n, r;
    while(scanf("%d %d",&n,&r)==2)
    {
        r = (r<n-r)? r : n-r;
        printf("%llu\n",nCr(n,r));
    }
    return 0;
}
but the range is n <= 100, and this does not work for n > 60 (the values overflow unsigned long long).
Consider using a BigInteger type of class to represent your big numbers. BigInteger is available in Java and C# (version 4+ of the .NET Framework). From your question, it looks like you are using C++ (which you should always add as a tag), so try looking here and here for a usable C++ BigInteger class.
One of the best methods for calculating the binomial coefficient I have seen suggested is by Mark Dominus. It is much less likely to overflow with larger values for N and K than some other methods.
static long GetBinCoeff(long N, long K)
{
// This function gets the total number of unique combinations based upon N and K.
// N is the total number of items.
// K is the size of the group.
// Total number of unique combinations = N! / ( K! (N - K)! ).
// This function is less efficient, but is more likely to not overflow when N and K are large.
// Taken from: http://blog.plover.com/math/choose.html
//
if (K > N) return 0;
long r = 1;
long d;
for (d = 1; d <= K; d++)
{
r *= N--;
r /= d;
}
return r;
}
Just replace all the long definitions with BigInteger and you should be good to go.