Calculation of Euler's phi function - algorithm

int phi (int n) {
    int result = n;
    for (int i = 2; i * i <= n; ++i)
        if (n % i == 0) {
            while (n % i == 0)
                n /= i;
            result -= result / i;
        }
    if (n > 1)
        result -= result / n;
    return result;
}
I saw the above implementation of the Euler phi function, which is O(sqrt n). I don't understand why the for loop uses i*i<=n, or why n needs to be changed inside the loop. It is also said that it can be done in time much smaller than O(sqrt n). How? link (in Russian)

i*i<=n is the same as i <= sqrt(n), so the iteration only runs up to about sqrt(n).
Following the definition of the Euler totient function directly, you need to find the prime numbers that divide n.

The function is a straightforward implementation of integer factorization by trial division, except that instead of reporting the factors as it finds them, the function uses the factors to calculate phi. Calculation of phi can be done in time less than O(sqrt n) by using a better algorithm to find the factors; the best way to do that depends on the magnitude of n.

If the biggest number (N say) that you will want the totient of is small enough that you can have a table of size N in memory, then you can do a lot better, per evaluation, at the cost of having to build a table before any evaluations.
One approach would be to build a table of primes first, and then instead of using trial division by every integer at most sqrt(n), use trial division by every prime at most sqrt(n).
You could improve on this by building, instead of a table of primes, a table that gives (for each integer 2..N) the smallest prime that divides the number. A simple modification of the usual Sieve of Eratosthenes can be used to build such a table. Then to compute the totient of a number you use the table to find the smallest prime dividing the number (and accumulate that into your answer), then divide the number by the table entry, use the table to find the smallest prime that divides that, and so on.
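A minimal sketch of that smallest-prime-factor idea (the sieve and the helper names here are illustrative, not from the answer above). Preprocessing builds the table once; each subsequent phi query is then O(log n):

#include <vector>

// Sketch: build a smallest-prime-factor table with a sieve, then compute
// phi(n) by repeatedly dividing out spf[n].
std::vector<int> build_spf(int N) {
    std::vector<int> spf(N + 1, 0);
    for (int i = 2; i <= N; ++i)
        if (spf[i] == 0)                          // i is prime
            for (long long j = i; j <= N; j += i)
                if (spf[j] == 0) spf[j] = i;      // record smallest prime factor
    return spf;
}

int phi_with_table(int n, const std::vector<int>& spf) {
    int result = n;
    while (n > 1) {
        int p = spf[n];                           // smallest prime dividing n
        result -= result / p;
        while (n % p == 0) n /= p;                // strip that prime completely
    }
    return result;
}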

Related

How can sieve of Eratosthenes be implemented in O(n) time complexity?

There is an implementation of this algorithm for finding the prime numbers up to n in O(n*log(log(n))) time complexity. How can we achieve it in O(n) time complexity?
You can perform the Sieve of Eratosthenes to determine which numbers are prime in the range [2, n] in O(n) time as follows:
For each number x in the interval [2, n], we compute the minimum prime factor of x. For implementation purposes, this can easily be done by keeping an array --- say MPF[] --- in which MPF[x] represents the minimum prime factor of x. Initially, you should set MPF[x] equal to zero for every integer x. As the algorithm progresses, this table will get filled.
Now we use a for-loop and iterate from i = 2 up to i = n (inclusive). If we encounter a number for which MPF[i] equals 0, we conclude immediately that i is prime, since no smaller prime divides it. At this point, we mark i as prime by inserting it into a list, and we set MPF[i] equal to i. Conversely, if MPF[i] does not equal 0, then we know that i is composite with minimum prime factor equal to MPF[i].
During each iteration, after we've checked MPF[i], we do the following: compute the number y_j = i * p_j for each prime number p_j less than or equal to MPF[i], and set MPF[y_j] equal to p_j.
This might seem counterintuitive --- why is the runtime O(n) if we have two nested loops? The key idea is that every entry MPF[y] is set exactly once, so the runtime is O(n). This website gives a C++ implementation, which I've provided below:
const int N = 10000000;
int lp[N+1];
vector<int> pr;

for (int i = 2; i <= N; ++i) {
    if (lp[i] == 0) {
        lp[i] = i;
        pr.push_back(i);
    }
    for (int j = 0; j < (int)pr.size() && pr[j] <= lp[i] && i*pr[j] <= N; ++j)
        lp[i * pr[j]] = pr[j];
}
The array lp[] in the implementation above is the same thing as MPF[] that I described in my explanation. Also, pr stores the list of prime numbers.
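As a usage note, once lp[] is filled, any x <= N can be factored in O(log x) by repeatedly reading off its smallest prime factor. A small sketch, assuming the lp[] array and the vector type from the snippet above:

// Using the lp[] table above: factor any 2 <= x <= N in O(log x).
vector<int> factorize(int x) {
    vector<int> factors;
    while (x > 1) {
        factors.push_back(lp[x]);   // smallest prime factor of x
        x /= lp[x];
    }
    return factors;                 // e.g. factorize(12) -> {2, 2, 3}
}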
Well, if the algorithm is O(n*log(log(n))) you generally can't do better without changing the algorithm.
The complexity is O(n*log(log(n))). But you can trade between time and resources: by making sure you have O(log(log(n))) computing nodes running in parallel, it would be possible to do it in O(n).
Hope I didn’t do your homework...

Fastest way to compute (n + 1)^j from (n^j)

I need to compute 0^j, 1^j, ..., k^j for some very large k and j (both in the order of a few millions). I am using GMP to handle the big integer numbers (yes, I need integer numbers as I need full precision). Now, I wonder, once I have gone through the effort of computing n^j, isn't there a way to speed up the computation of (n + 1)^j, instead of starting from scratch?
Here is the algorithm I am currently using to compute the power:
mpz_class pow(unsigned long int b, unsigned long int e)
{
    mpz_class res = 1;
    mpz_class m = b;
    while (e)
    {
        if (e & 1)
        {
            res *= m;
        }
        e >>= 1;
        m *= m;
    }
    return res;
}
As you can see, every time I start from scratch, and it takes a lot of time.
To compute n^j, why not find at least one factor of n, say k, and use n^j = k^j * (n/k)^j? By the time n^j is being computed, both k^j and (n/k)^j should already be known.
However, finding a factor takes potentially O(sqrt(n)) time for n. Independently, n^j can be computed in O(log(j)) multiplications by exponentiation by squaring, as in the code above.
So you could have a mix of the above depending on which is larger:
If n is much smaller than log(j), compute n^j by factorization.
Whenever n^j is known, compute {(2n)^j, (3n)^j, ..., ((n-1)*n)^j, (n*n)^j} (each is a smaller known power times n^j) and keep them in a lookup table.
If n is larger than log(j) and a ready computation as above is not possible, use the logarithmic method and then compute the other related powers like above.
If n is a pure power of 2 (possible const time computation), compute the jth power by shifting and calculate the related sums.
If n is even (const time computation again), use the factorization method and compute associated products.
The above should make it quite fast. For example, identifying even numbers by itself converts half of the power computations into single multiplications. Many more rules of thumb regarding factorization could reduce the computation further (especially divisibility by 3, 7, etc.); a sketch of the table-driven version follows below.
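If you need the whole range 0^j, 1^j, ..., k^j (as in the question), one way to apply this observation systematically is to sieve smallest prime factors once and then obtain every composite's power with a single big-integer multiplication. A sketch using mpz_class; the sieve and the function names are mine, not from the answer above:

#include <gmpxx.h>
#include <vector>

// Sketch: compute 0^j, 1^j, ..., k^j.
// Primes use exponentiation by squaring (mpz_pow_ui); every composite
// x = p * m reuses pow[p] and pow[m], which were computed earlier.
std::vector<mpz_class> all_powers(unsigned long k, unsigned long j) {
    std::vector<unsigned long> spf(k + 1, 0);          // smallest prime factors
    for (unsigned long i = 2; i <= k; ++i)
        if (spf[i] == 0)
            for (unsigned long m = i; m <= k; m += i)
                if (spf[m] == 0) spf[m] = i;

    std::vector<mpz_class> pow(k + 1);
    pow[0] = 0;                                        // 0^j for j > 0
    if (k >= 1) pow[1] = 1;
    for (unsigned long x = 2; x <= k; ++x) {
        unsigned long p = spf[x];
        if (p == x)                                    // x is prime
            mpz_pow_ui(pow[x].get_mpz_t(), mpz_class(x).get_mpz_t(), j);
        else
            pow[x] = pow[p] * pow[x / p];              // one big multiplication
    }
    return pow;
}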
You may want to use the binomial expansion of (n+1)^j as n^j + j*n^(j-1) + j(j-1)/2 * n^(j-2) + ... + 1, memoize the lower powers of n already computed, and reuse them to compute (n+1)^j with O(j) additions and multiplications. If you compute the coefficients j, j*(j-1)/2, ... incrementally while adding each term, that can be done in O(j) as well.
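A sketch of that binomial idea with mpz_class, assuming you keep the whole row of lower powers n^0 .. n^j around (which is its main practical cost); the function name is mine:

#include <gmpxx.h>
#include <vector>

// Sketch: (n+1)^j = sum_{i=0..j} C(j,i) * n^(j-i), with the binomial
// coefficient updated incrementally. pow_n[e] must already hold n^e for e = 0..j.
mpz_class next_power(const std::vector<mpz_class>& pow_n, unsigned long j) {
    mpz_class coeff = 1;                   // C(j, 0)
    mpz_class sum = 0;
    for (unsigned long i = 0; i <= j; ++i) {
        sum += coeff * pow_n[j - i];
        coeff *= j - i;                    // C(j, i+1) = C(j, i) * (j-i) / (i+1)
        coeff /= i + 1;                    // this division is always exact
    }
    return sum;
}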

run time of this Prime Factor function?

I wrote this prime factorization function; can someone explain the runtime to me? It seems fast to me, as it continuously decomposes a number into primes without having to check whether the factors are prime, and runs from 2 up to the number in the worst case.
I know that no known algorithm can factor large integers in polynomial time. Also, how does the run time relate asymptotically to factoring large numbers?
function getPrimeFactors(num) {
    var factors = [];
    for (var i = 2; i <= num; i++) {
        if (num % i === 0) {
            num = num / i;
            factors.push(i);
            i--;
        }
    }
    return factors;
}
In your example, if num is prime then it would take exactly num - 1 steps. This would mean that the algorithm's runtime is O(num) (where O stands for the pessimistic case). But for algorithms that operate on numbers things get a little bit more tricky (thanks for noticing, thegreatcontini and Chris)! We always describe complexity as a function of input size. In this case the input is a number num, and it is represented with log(num) bits. So the input size is log(num). Because num = 2 ^ (log(num)), your algorithm is of complexity O(2^k), where k = log(num) is the size of your input.
This is what makes this problem hard - the input size is very, very small, and any runtime polynomial in num is exponential in the input size ...
On a side note @rici is right, you need to check only up to sqrt(num), thus easily reducing the runtime to O(sqrt(num)), or more correctly O(sqrt(2) ^ k).
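For reference, the sqrt(num) cutoff mentioned above can look like this (a C++ sketch rather than a change to the original JavaScript):

#include <vector>

// Trial division only up to sqrt(num); whatever remains above 1 is prime.
std::vector<long long> primeFactors(long long num) {
    std::vector<long long> factors;
    for (long long i = 2; i * i <= num; ++i)
        while (num % i == 0) {
            factors.push_back(i);
            num /= i;
        }
    if (num > 1)
        factors.push_back(num);   // leftover factor larger than sqrt of the original num
    return factors;
}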

Time complexity of Euclid's Algorithm

I am having difficulty deciding what the time complexity of Euclid's greatest common divisor algorithm is. This algorithm in pseudo-code is:
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
It seems to depend on a and b. My thinking is that the time complexity is O(a % b). Is that correct? Is there a better way to write that?
One trick for analyzing the time complexity of Euclid's algorithm is to follow what happens over two iterations:
a', b' := a % b, b % (a % b)
Now a and b will both decrease, instead of only one, which makes the analysis easier. You can divide it into cases:
Tiny A: 2a <= b
Tiny B: 2b <= a
Small A: 2a > b but a < b
Small B: 2b > a but b < a
Equal: a == b
Now we'll show that every single case decreases the total a+b by at least a quarter:
Tiny A: b % (a % b) < a and 2a <= b, so b is decreased by at least half, so a+b decreased by at least 25%
Tiny B: a % b < b and 2b <= a, so a is decreased by at least half, so a+b decreased by at least 25%
Small A: b will become b-a, which is less than b/2, decreasing a+b by at least 25%.
Small B: a will become a-b, which is less than a/2, decreasing a+b by at least 25%.
Equal: a+b drops to 0, which is obviously decreasing a+b by at least 25%.
Therefore, by case analysis, every double-step decreases a+b by at least 25%. There's a maximum number of times this can happen before a+b is forced to drop below 1. The total number of steps (S) until we hit 0 must satisfy (4/3)^S <= A+B. Now just work it:
(4/3)^S <= A+B
S <= lg[4/3](A+B)
S is O(lg[4/3](A+B))
S is O(lg(A+B))
S is O(lg(A*B)) //because A*B asymptotically greater than A+B
S is O(lg(A)+lg(B))
//Input size N is lg(A) + lg(B)
S is O(N)
So the number of iterations is linear in the number of input digits. For numbers that fit into cpu registers, it's reasonable to model the iterations as taking constant time and pretend that the total running time of the gcd is linear.
Of course, if you're dealing with big integers, you must account for the fact that the modulus operations within each iteration don't have a constant cost. Roughly speaking, the total asymptotic runtime is going to be n^2 times a polylogarithmic factor. Something like n^2 lg(n) 2^O(log* n). The polylogarithmic factor can be avoided by instead using a binary gcd.
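For reference, here is a sketch of the binary gcd mentioned above, for machine-word integers (the big-integer version uses the same shift-and-subtract structure; this sketch relies on the GCC/Clang __builtin_ctzll builtin):

#include <cstdint>

// Binary GCD: replaces the modulus with shifts and one subtraction per step.
uint64_t binary_gcd(uint64_t a, uint64_t b) {
    if (a == 0) return b;
    if (b == 0) return a;
    int shift = __builtin_ctzll(a | b);    // common power of two
    a >>= __builtin_ctzll(a);              // make a odd
    while (b != 0) {
        b >>= __builtin_ctzll(b);          // make b odd
        if (a > b) { uint64_t t = a; a = b; b = t; }
        b -= a;                            // b - a is even (or zero)
    }
    return a << shift;
}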
A suitable way to analyze an algorithm is to determine its worst-case scenario.
The Euclidean GCD's worst case occurs when Fibonacci pairs are involved:
void EGCD(fib[i], fib[i - 1]), where i > 0.
For instance, let's opt for the case where the dividend is 55 and the divisor is 34 (recall that we are still dealing with Fibonacci numbers).
As you may notice, this operation cost 8 iterations (or recursive calls).
Let's try larger Fibonacci numbers, namely 121393 and 75025. We can notice here as well that it took 24 iterations (or recursive calls).
You can also notice that each iteration yields a Fibonacci number. That's why we have so many operations; indeed, this behavior arises only with consecutive Fibonacci numbers.
Hence, the time complexity this time is going to be represented by small Oh (an upper bound). The lower bound is intuitively Omega(1): the case of 500 divided by 2, for instance.
Let's solve the recurrence relation T(x, y) = T(y, x mod y) + 1: in the Fibonacci worst case it steps down one Fibonacci index per call, giving roughly log_phi(x) calls.
We may say then that the Euclidean GCD can make log(xy) operations at most.
There's a great look at this on the wikipedia article.
It even has a nice plot of complexity for value pairs.
It is not O(a%b).
It is known (see article) that it will never take more steps than five times the number of digits in the smaller number. So the maximum number of steps grows as the number of digits (ln b). The cost of each step also grows as the number of digits, so the complexity is bounded by O(ln^2 b) where b is the smaller number. That's an upper limit, and the actual time is usually less.
See here.
In particular this part:
Lamé showed that the number of steps needed to arrive at the greatest common divisor for two numbers less than n is O(log n).
So O(log min(a, b)) is a good upper bound.
Here's intuitive understanding of runtime complexity of Euclid's algorithm. The formal proofs are covered in various texts such as Introduction to Algorithms and TAOCP Vol 2.
First think about what would happen if we tried to take the gcd of two Fibonacci numbers F(k+1) and F(k). You might quickly observe that Euclid's algorithm iterates on to F(k) and F(k-1). That is, with each iteration we move down one number in the Fibonacci series. As Fibonacci numbers are O(Phi^k), where Phi is the golden ratio, we can see that the runtime of GCD is O(log n), where n = max(a, b) and the log has base Phi. Next, we can prove that this is the worst case by observing that Fibonacci numbers consistently produce pairs where the remainder remains large enough in each iteration and never becomes zero until you have arrived at the start of the series.
We can make the O(log n) bound, where n = max(a, b), even tighter. Assume b >= a, so we can write the bound as O(log b). First, observe that GCD(ka, kb) = GCD(a, b). Since the biggest value of k is gcd(a, b), we can replace b with b/gcd(a, b) in our runtime, leading to the tighter bound O(log(b/gcd(a, b))).
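To see this concretely, here is a small self-contained check (not a proof, just an empirical illustration) that counts Euclid iterations on consecutive Fibonacci pairs:

#include <cstdio>

// Count Euclid iterations; consecutive Fibonacci inputs maximize this for
// their size, stepping down the whole sequence one index per iteration.
int gcd_steps(long long a, long long b) {
    int steps = 0;
    while (b != 0) {
        long long t = a % b;
        a = b;
        b = t;
        ++steps;
    }
    return steps;
}

int main() {
    long long f1 = 1, f2 = 1;                  // F(1), F(2)
    for (int k = 2; k < 40; ++k) {
        long long f3 = f1 + f2;                // F(k+1)
        printf("gcd(F(%d), F(%d)) takes %d steps\n", k + 1, k, gcd_steps(f3, f2));
        f1 = f2;
        f2 = f3;
    }
    return 0;
}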
Here is the analysis in the book Data Structures and Algorithm Analysis in C by Mark Allen Weiss (second edition, 2.4.4):
Euclid's algorithm works by continually computing remainders until 0 is reached. The last nonzero remainder is the answer.
Here is the code:
unsigned int Gcd(unsigned int M, unsigned int N)
{
    unsigned int Rem;
    while (N > 0) {
        Rem = M % N;
        M = N;
        N = Rem;
    }
    return M;
}
Here is a THEOREM that we are going to use:
If M > N, then M mod N < M/2.
PROOF: There are two cases. If N <= M/2, then since the remainder is smaller than N, the theorem is true for this case. The other case is N > M/2. But then N goes into M once with a remainder M - N < M/2, proving the theorem.
So, we can make the following inference:
Variables      M       N          Rem
initial        M       N          M%N
1 iteration    N       M%N        N%(M%N)
2 iterations   M%N     N%(M%N)    (M%N)%(N%(M%N)) < (M%N)/2
So, after two iterations, the remainder is at most half of its original value. This would show that the number of iterations is at most 2logN = O(logN).
Note that the algorithm computes Gcd(M, N), assuming M >= N. (If N > M, the first iteration of the loop swaps them.)
The worst case arises when n and m are consecutive Fibonacci numbers.
gcd(F_n, F_{n-1}) = gcd(F_{n-1}, F_{n-2}) = ... = gcd(F_1, F_0) = 1, and the nth Fibonacci number is about 1.618^n, where 1.618 is the golden ratio.
So, to find gcd(n, m), the number of recursive calls will be Θ(log n).
The worst case of Euclid's algorithm is when the remainders are the biggest possible at each step, i.e. for two consecutive terms of the Fibonacci sequence.
When n and m are the number of digits of a and b, assuming n >= m, the algorithm uses O(m) divisions.
Note that complexities are always given in terms of the sizes of inputs, in this case the number of digits.
Gabriel Lame's theorem bounds the number of steps by log(1/sqrt(5)*(a+1/2)) - 2, where the base of the log is (1+sqrt(5))/2. This is the worst-case scenario for the algorithm, and it occurs when the inputs are consecutive Fibonacci numbers.
A slightly more liberal bound, log a with the log taken to base sqrt(2), is implied by Koblitz.
For cryptographic purposes we usually consider the bitwise complexity of the algorithms, taking into account that the bit size is given approximately by k = log a.
Here is a detailed analysis of the bitwise complexity of Euclid's algorithm:
Although in most references the bitwise complexity of Euclid's algorithm is given as O((log a)^3), there exists a tighter bound, which is O((log a)^2).
Consider r_0 = a, r_1 = b, and the division steps
r_0 = q_1*r_1 + r_2, ..., r_{i-1} = q_i*r_i + r_{i+1}, ..., r_{m-2} = q_{m-1}*r_{m-1} + r_m, r_{m-1} = q_m*r_m.
Observe that: a = r_0 >= b = r_1 > r_2 > r_3 > ... > r_{m-1} > r_m > 0 ..........(1)
and r_m is the greatest common divisor of a and b.
By a claim in Koblitz's book (A Course in Number Theory and Cryptography) it can be proven that: r_{i+1} < r_{i-1}/2 .................(2)
Again in Koblitz, the number of bit operations required to divide a k-bit positive integer by an l-bit positive integer (assuming k >= l) is given as: (k - l + 1)*l ...................(3)
By (1) and (2) the number of divisions is O(log a), and so by (3) the total complexity is O((log a)^3).
Now this may be reduced to O((log a)^2) by a remark in Koblitz.
Consider k_i = log(r_i) + 1.
By (1) and (2) we have: k_{i+1} <= k_i for i = 0, 1, ..., m-2, m-1 and k_{i+2} <= k_i - 1 for i = 0, 1, ..., m-2,
and by (3) the total cost of the m divisions is bounded by: SUM over i = 1, ..., m of [k_{i-1} - (k_i - 1)] * k_i.
Rearranging this: SUM [k_{i-1} - (k_i - 1)] * k_i <= 4*k_0^2.
So the bitwise complexity of Euclid's algorithm is O((log a)^2).
For the iterative algorithm, however, we have:
int iterativeEGCD(long long n, long long m) {
    long long a;
    int numberOfIterations = 0;
    while (n != 0) {
        a = m;
        m = n;
        n = a % n;
        numberOfIterations++;
    }
    printf("\nIterative GCD iterated %d times.", numberOfIterations);
    return m;
}
With Fibonacci pairs, there is no difference between iterativeEGCD() and iterativeEGCDForWorstCase() where the latter looks like the following:
int iterativeEGCDForWorstCase(long long n, long long m) {
    long long a;
    int numberOfIterations = 0;
    while (n != 0) {
        a = m;
        m = n;
        n = a - n;
        numberOfIterations++;
    }
    printf("\nIterative GCD iterated %d times.", numberOfIterations);
    return m;
}
Yes, with Fibonacci pairs, n = a % n and n = a - n are exactly the same thing.
We also know that, in an earlier response to the same question, there is a prevailing decreasing factor: factor = m / (n % m).
Therefore, to shape the iterative version of the Euclidean GCD in a defined form, we may depict it as a "simulator" like this:
void iterativeGCDSimulator(long long x, long long y) {
    long long i;
    double factor = x / (double)(x % y);
    int numberOfIterations = 0;
    for (i = x * y; i >= 1; i = i / factor) {
        numberOfIterations++;
    }
    printf("\nIterative GCD Simulator iterated %d times.", numberOfIterations);
}
Based on the work (last slide) of Dr. Jauhar Ali, the loop above is logarithmic.
Yes, small Oh, because the simulator reports the number of iterations at most; non-Fibonacci pairs take fewer iterations than Fibonacci pairs when run through the Euclidean GCD.
At every step, there are two cases:
b >= a / 2: then a, b = b, a % b makes the new b equal to a - b, which is at most half of the old a.
b < a / 2: then a, b = b, a % b makes the new a (the old b) less than half of the old a.
So at every step, the pair shrinks in such a way that a is at least halved every two steps.
In at most O(log a) + O(log b) steps, this is reduced to the simple cases, which yields an O(log n) algorithm, where n is the upper bound of a and b.
I have found it here

Reverse factorial

Well, we all know that if N is given it's easy to calculate N!. But what about the inverse?
N! is given and you have to find N - is that possible? I'm curious.
1. Set X=1.
2. Generate F=X!
3. Is F = the input? If yes, then X is N.
4. If not, then set X=X+1, then start again at #2.
You can optimize by using the previous result of F to compute the new F (new F = new X * old F).
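A minimal sketch of that running-factorial search, using GMP's mpz_class so it also works for factorials that do not fit in 64 bits (the function name is mine):

#include <gmpxx.h>

// Return N if input == N! for some N >= 1, otherwise -1.
// Keeps a running factorial, as suggested above, so each step is one multiplication.
long reverse_factorial(const mpz_class& input) {
    mpz_class f = 1;                 // f = x! as x grows
    for (long x = 1; f <= input; ++x) {
        f *= x;
        if (f == input) return x;
    }
    return -1;                       // the input is not a factorial
}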
It's just as fast as going the opposite direction, if not faster, given that division generally takes longer than multiplication. A given factorial A! is guaranteed to have all integers less than A as factors in addition to A, so you'd spend just as much time factoring those out as you would just computing a running factorial.
If you have Q=N! in binary, count the trailing zeros. Call this number J.
If N is 2K or 2K+1, then J is equal to 2K minus the number of 1's in the binary representation of 2K, so add 1 over and over until the number of 1's you have added is equal to the number of 1's in the result.
Now you know 2K, and N is either 2K or 2K+1. To tell which one it is, count the factors of the biggest prime (or any prime, really) in 2K+1, and use that to test Q=(2K+1)!.
For example, suppose Q (in binary) is
1111001110111010100100110000101011001111100000110110000000000000000000
(Sorry it's so small, but I don't have tools handy to manipulate larger numbers.)
There are 19 trailing zeros, which is
10011
Now increment:
1: 10100
2: 10101
3: 10110 bingo!
So N is 22 or 23. I need a prime factor of 23, and, well, I have to pick 23 (it happens that 2K+1 is prime, but I didn't plan that and it isn't needed). So 23^1 should divide 23!, it doesn't divide Q, so
N=22
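A tiny sketch of the first half of that trick, recovering 2K from the trailing-zero count J via Legendre's formula (N! has N - popcount(N) trailing zero bits); the final choice between 2K and 2K+1 still needs the prime-factor test described above. It uses the GCC/Clang __builtin_popcount builtin:

// Scan even candidates M = 2K until M - popcount(M) equals the observed count J.
int two_k_from_trailing_zeros(int j) {
    for (int m = (j % 2 == 0 ? j : j + 1); m <= j + 32; m += 2)  // popcount of an int is <= 31
        if (m - __builtin_popcount(m) == j)
            return m;
    return -1;   // no even M matches: J did not come from a factorial
}

For the example above, two_k_from_trailing_zeros(19) returns 22.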
int inverse_factorial(int factorial) {
    int current = 1;
    while (factorial > current) {
        if (factorial % current) {
            return -1; // not divisible
        }
        factorial /= current;
        ++current;
    }
    if (current == factorial) {
        return current;
    }
    return -1;
}
Yes. Let's call your input x. For small values of x, you can just try all values of n and see if n! = x. For larger x, you can binary-search over n to find the right n (if one exists). Note that we have n! ≈ e^(n ln n - n) (this is Stirling's approximation), so you know approximately where to look.
The problem of course, is that very few numbers are factorials; so your question makes sense for only a small set of inputs. If your input is small (e.g. fits in a 32-bit or 64-bit integer) a lookup table would be the best solution.
(You could of course consider the more general problem of inverting the Gamma function. Again, binary search would probably be the best way, rather than something analytic. I'd be glad to be shown wrong here.)
Edit: Actually, in the case where you don't know for sure that x is a factorial number, you may not gain all that much (or anything) with binary search using Stirling's approximation or the Gamma function, over simple solutions. The inverse factorial grows slower than logarithmic (this is because the factorial is superexponential), and you have to do arbitrary-precision arithmetic to find factorials and multiply those numbers anyway.
For instance, see Draco Ater's answer for an idea that (when extended to arbitrary-precision arithmetic) will work for all x. Even simpler, and probably even faster because multiplication is faster than division, is Dav's answer which is the most natural algorithm... this problem is another triumph of simplicity, it appears. :-)
Well, if you know that M is really the factorial of some integer, then you can use
n! = Gamma(n+1) = sqrt(2*PI) * exp(-n) * n^(n+1/2) * (1 + O(1/n))
You can solve this (or, really, solve ln(n!) = ln Gamma(n+1)) and find the nearest integer.
It is still nonlinear, but you can get an approximate solution by iteration easily (in fact, I expect the n^(n+1/2) factor is enough).
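A sketch of that idea using the standard lgamma function from <cmath>, with a binary search rather than an analytic iteration (it works with ln(M), so for huge inputs you would feed it a logarithm estimated from M's digit count rather than M itself); the function name is mine:

#include <cmath>

// Find the integer n with lgamma(n+1) = ln(n!) closest to log_m = ln(M).
long invert_factorial_approx(double log_m) {
    long lo = 1, hi = 2;
    while (std::lgamma((double)hi + 1.0) < log_m) hi *= 2;     // bracket n
    while (lo < hi) {                                          // binary search
        long mid = lo + (hi - lo) / 2;
        if (std::lgamma((double)mid + 1.0) < log_m) lo = mid + 1;
        else hi = mid;
    }
    return lo;   // candidate n; verify n! == M exactly if you need certainty
}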
Multiple ways. Use lookup tables, use binary search, use a linear search...
Lookup tables is an obvious one:
for (i = 0; i < MAX; ++i)
Lookup[i!] = i; // you can calculate i! incrementally in O(1)
You could implement this using hash tables for example, or if you use C++/C#/Java, they have their own hash table-like containers.
This is useful if you have to do this a lot of times and each time it has to be fast, but you can afford to spend some time building this table.
Binary search: assume the candidate number is m = (1 + N!) / 2, the midpoint of the current range. Is m! larger than the input N!? If yes, reduce the search to the range between 1 and m, otherwise reduce it to the range between m + 1 and N!. Recursively apply this logic.
Of course, these numbers might be very big and you might end up doing a lot of unwanted operations. A better idea is to search between 1 and sqrt(N!) using binary search, or try to find even better approximations, though this might not be easy. Consider studying the gamma function.
Linear search: Probably the best in this case. Calculate 1*2*3*...*k until the product is equal to N! and output k.
If the input number really is N!, it's fairly simple to calculate N.
A naive approach computing factorials will be too slow, due to the overhead of big integer arithmetic. Instead we can notice that, when N ≥ 7, each factorial can be uniquely identified by its length (i.e. number of digits).
The length of an integer x can be computed as floor(log10(x)) + 1.
Product rule of logarithms: log(a*b) = log(a) + log(b)
By using the above two facts, we can say that the length of N! is:
floor(log10(1) + log10(2) + ... + log10(N)) + 1,
which can be computed by simply adding log10(i) until we reach the length of our input number, since log(1*2*3*...*n) = log(1) + log(2) + log(3) + ... + log(n).
This C++ code should do the trick:
double result = 0;
for (int i = 1; i <= 1000000; ++i) { // This should work for 1000000! (where inputNumber has millions of digits)
    result += log10(i);
    if ((int)result + 1 == inputNumber.size()) { // assuming inputNumber is a string holding N!
        std::cout << i << std::endl;
        break;
    }
}
(remember to check for cases where n<7 (basic factorial calculation should be fine here))
Complete code: https://pastebin.com/9EVP7uJM
Here is some clojure code:
(defn- reverse-fact-help [n div]
(cond (not (= 0 (rem n div))) nil
(= 1 (quot n div)) div
:else (reverse-fact-help (/ n div) (+ div 1))))
(defn reverse-fact [n] (reverse-fact-help n 2))
Suppose n=120, div=2. 120/2=60, 60/3=20, 20/4=5, 5/5=1, return 5
Suppose n=12, div=2. 12/2=6, 6/3=2, 2/4=.5, return 'nil'
int p = 1, i = 1;
// assume the variable fact_n holds the value n!
while (p < fact_n) p = p * ++i;
// i is the number you are looking for if p == fact_n, else fact_n is not a factorial
I know it isn't pseudocode, but it's pretty easy to understand.
inverse_factorial( X )
{
    X_LOCAL = X;
    ANSWER = 1;
    while (1) {
        if (X_LOCAL / ANSWER == 1)
            return ANSWER;
        X_LOCAL = X_LOCAL / ANSWER;
        ANSWER = ANSWER + 1;
    }
}
This function is based on successive approximations! I created it and implemented it in Advanced Trigonometry Calculator 1.7.0
double arcfact(double f){
    double result = 0, precision = 1000;
    int i = 0;
    if (f > 0) {
        while (precision > 1E-309) {
            while (f > fact(result + precision) && i < 10) {
                result = result + precision;
                i++;
            }
            precision = precision / 10;
            i = 0;
        }
    }
    else {
        result = 0;
    }
    return result;
}
If you do not know whether a number M is N! or not, a decent test is to check whether it is divisible by all the small primes until the Stirling approximation of that prime is larger than M. Alternatively, if you have a table of factorials but it doesn't go high enough, you can pick the largest factorial in your table and make sure M is divisible by it.
In C from my app Advanced Trigonometry Calculator v1.6.8
double arcfact(double f) {
    double i = 1, result = f;
    while ((result / (i + 1)) >= 1) {
        result = result / i;
        i++;
    }
    return result;
}
What do you think about that? It works correctly for integer factorials.
Simply divide by positive numbers, i.e: 5!=120 ->> 120/2 = 60 || 60/3 = 20 || 20/4 = 5 || 5/5 = 1
So the last number before result = 1 is your number.
In code you could do the following:
res = number
for x = 2; res > 1; x++ {
    res = res / x
}
// after the loop, x - 1 is your number
or something like that. This calculation needs improvement for non-exact numbers.
Most numbers are not in the range of outputs of the factorial function. If that is what you want to test, it's easy to get an approximation using Stirling's formula or the number of digits of the target number, as others have mentioned, then perform a binary search to determine factorials above and below the given number.
What is more interesting is constructing the inverse of the Gamma function, which extends the factorial function to positive real numbers (and to most complex numbers, too). It turns out construction of an inverse is a difficult problem. However, it was solved explicitly for most positive real numbers in 2012 in the following paper: http://www.ams.org/journals/proc/2012-140-04/S0002-9939-2011-11023-2/S0002-9939-2011-11023-2.pdf . The explicit formula is given in Corollary 6 at the end of the paper.
Note that it involves an integral on an infinite domain, but with a careful analysis I believe a reasonable implementation could be constructed. Whether that is better than a simple successive approximation scheme in practice, I don't know.
C/C++ code to invert the factorial (r is the factorial value):
int wtf(int r) {
    int f = 1;
    while (r > 1)
        r /= ++f;
    return f;
}
Sample tests:
Call: wtf(1)
Output: 1
Call: wtf(120)
Output: 5
Call: wtf(3628800)
Output: 10
Based on:
Full inverted factorial valid for x>1
Use the suggested calculation. If the factorial is expressible in full binary form, the algorithm is:
1. Suppose the input is the factorial x, x = n!
2. Return 1 for 1
3. Find the number of trailing 0's in the binary expansion of x; call it t
4. Calculate x/fact(t), x divided by the factorial of t, mathematically x/(t!)
5. Find how many times x/fact(t) can be divided by t+1 before dropping below 1 (i.e. the floor of the log of x/fact(t) to base t+1); call it m
6. Return m+t
__uint128_t factorial(int n);

int invert_factorial(__uint128_t fact)
{
    if (fact == 1) return 1;
    int t = __builtin_ffsll((unsigned long long)fact) - 1;   // count trailing zero bits
    int res = fact / factorial(t);
    return t + (int)(log(res) / log(t + 1));
}
128-bit is giving in on 34!

Resources