Factorial of a big number [duplicate] - algorithm

This question already has answers here:
Calculating factorial of large numbers in C
(16 answers)
Closed 2 years ago.
Consider the problem of calculating the factorial of a number. When the result is bigger than 2^32, we get an overflow error.
How can we design a program to calculate the factorial of big numbers?
EDIT: assume we are using the C++ language.
EDIT2: it is a duplicate of this one

Since the question is tagged with just algorithm, your 2^32 limit is not an issue: an algorithm can never have an overflow error. Implementations of an algorithm can and do have overflow errors. So what language are you using?
Most languages have a BigNumber or BigInteger type that can be used.
Here's a C++ BigInteger library: https://mattmccutchen.net/bigint/
I suggest that you google for: c++ biginteger

If you can live with approximate values, consider using the Stirling approximation and computing it in double precision.
If you want exact values, you'll need arbitrary-precision arithmetic and a lot of computation time...
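For illustration, here is a minimal Java sketch (mine, not from the thread) of Stirling's approximation; it returns ln(n!) rather than n! itself, so the double never overflows:

// A minimal sketch of Stirling's approximation, computed in double precision.
// Returns the natural log of n!, which stays small even when n! would overflow.
public class StirlingDemo {
    static double lnFactorial(double n) {
        // ln(n!) ~ n ln n - n + 0.5 ln(2 pi n)
        return n * Math.log(n) - n + 0.5 * Math.log(2 * Math.PI * n);
    }

    public static void main(String[] args) {
        // 100! has 158 digits; log10(100!) = ln(100!) / ln(10) ~ 157.97
        System.out.printf("log10(100!) ~ %.2f%n", lnFactorial(100) / Math.log(10));
    }
}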

Doing this requires you to take one of a few approaches, but it basically boils down to:
splitting your number across multiple variables (stored in an array) and
managing your operations across the array.
That way each int/element in the array has a positional magnitude, and the elements can be strung together in the end to make your whole number.
A good example here in C: http://www.go4expert.com/forums/c-program-calculate-factorial-t25352/
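For illustration, a minimal Java sketch of the same digit-array idea (my own code, not the linked C example): one decimal digit per array element, least significant first, with the carry propagated by hand.

// A minimal sketch of the digit-array approach: each element holds one
// base-10 digit, least significant first; multiplication propagates a carry.
public class ArrayFactorial {
    static String factorial(int n) {
        int[] d = new int[3000]; // enough digits for n up to about 1000
        d[0] = 1;
        int len = 1;             // digits currently in use
        for (int k = 2; k <= n; k++) {
            int carry = 0;
            for (int i = 0; i < len; i++) {
                int prod = d[i] * k + carry;
                d[i] = prod % 10;
                carry = prod / 10;
            }
            while (carry > 0) {  // the product grew: append the leftover carry
                d[len++] = carry % 10;
                carry /= 10;
            }
        }
        StringBuilder sb = new StringBuilder();
        for (int i = len - 1; i >= 0; i--) sb.append(d[i]);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(factorial(100)); // the 158-digit value of 100!
    }
}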

Test this script:
import gmpy as gm
print(gm.fac(3000))
For very big numbers it is difficult to store or print the result.

For some purposes, such as working out the number of combinations, it is sufficient to compute the logarithm of the factorial: because you will be dividing factorials by factorials, the final result is of a more reasonable size. You just subtract logarithms before taking the exponential of the result.
You can compute the logarithm of the factorial by adding logarithms, or by using the Gamma function (http://en.wikipedia.org/wiki/Gamma_function), which is often available in mathematical libraries (there are good ways to approximate it).
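A minimal Java sketch of the trick (names are mine): the binomial coefficient C(52, 5), computed entirely in log space.

// A minimal sketch: compute C(n, k) = n! / (k! (n-k)!) by subtracting
// log-factorials, then exponentiate only the reasonably-sized result.
public class LogCombinations {
    static double lnFact(int n) {
        double s = 0;
        for (int i = 2; i <= n; i++) s += Math.log(i);
        return s;
    }

    public static void main(String[] args) {
        double lnC = lnFact(52) - lnFact(5) - lnFact(47);
        System.out.println(Math.round(Math.exp(lnC))); // 2598960 poker hands
    }
}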

First invent a way to store and use big numbers. A common way is to interpret an array of integers as the digits of a big number. Then add basic operations to your system, such as multiplication. Then multiply.
Or use an already-made solution. Google for: c++ big integer library

You can use BigInteger to find the factorial of big numbers. A long already overflows at 21!, and from 66! onward the truncated 64-bit result is exactly 0 (66! contains 64 factors of two), which is why the primitive type appears to stop working around 65. Please refer to the Java code below. Hope it helps:
import java.math.BigInteger;

public class Factorial {
    public static void main(String[] args) {
        System.out.println(fact(100));
    }

    // Multiply num * (num - 1) * ... * 2 in arbitrary precision.
    public static BigInteger fact(int num) {
        BigInteger product = BigInteger.ONE;
        for (int i = num; i >= 2; i--) {
            product = product.multiply(BigInteger.valueOf(i));
        }
        return product;
    }
}

If you want to extend the range you can handle, you can use logarithms. Logarithms convert your multiplications into additions, making the value much smaller to store.
factorial(n) => n * factorial(n-1)
log(factorial(n)) => log(n) + log(factorial(n-1))
5! = 5*4*3*2*1 = 120
log(5!) = log(5) + log(4) + log(3) + log(2) + log(1) = 2.0791812460476247
In this example, I used base 10 logarithms, but any base works.
10^2.0791812460476247
Or 10^0.0791812460476247 * 10^2, which is about 1.2 * 10^2
Implementation example in javascript
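Since the linked JavaScript example isn't reproduced here, a minimal Java sketch of the same idea (names are mine): accumulate log10(n!), then split it into a mantissa and a decimal exponent.

// A minimal sketch: represent n! by its base-10 logarithm, then recover a
// mantissa and exponent for display, as in the 1.2 * 10^2 example above.
public class LogFactorial {
    static void printMagnitude(int n) {
        double log10 = 0;
        for (int i = 2; i <= n; i++) log10 += Math.log10(i);
        int exponent = (int) log10;
        double mantissa = Math.pow(10, log10 - exponent);
        System.out.printf("%d! ~ %.4f * 10^%d%n", n, mantissa, exponent);
    }

    public static void main(String[] args) {
        printMagnitude(5);   // 5! ~ 1.2000 * 10^2
        printMagnitude(100); // 100! ~ 9.3326 * 10^157
    }
}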

Related

First number that divides all numbers (1,2,...,100)

Given that the first number divisible by all of (1,2,...,10) is 2520,
and that the first number divisible by all of (1,2,...,20) is 232792560,
find the first number divisible by all of (1,2,...,100) (all consecutive numbers from 1 to 100).
The program should run in less than a minute.
I'm writing the solution in Java, and I'm facing two problems:
1. How can I compute this if the solution itself is a number so huge that it cannot be handled? I tried using BigInteger, but I'm doing many additions and divisions and I don't know if this increases my time complexity.
2. How can I calculate this in less than a minute? The solution I've tried so far hasn't even stopped.
This is my Java code (using big integers):
public static boolean solved(int start, int end, BigInteger answer) {
    for (int i = start; i <= end; i++) {
        // if i does not divide answer evenly, we are not done yet
        if (answer.mod(BigInteger.valueOf(i)).signum() != 0) {
            return false;
        }
    }
    return true;
}

public static void main(String[] args) {
    BigInteger answer = new BigInteger("232792560");
    BigInteger y = new BigInteger("232792560");
    while (!solved(21, 100, answer)) {
        answer = answer.add(y);
    }
    System.out.println(answer);
}
I take advantage of the fact that I already know the solution for (1,...,20).
Currently it simply does not stop.
I thought I could improve it by changing the solved function to check only the values we care about.
For example:
100 = 25,50,100
99 = 33,99
98 = 49,98
97 = 97
96 = 24,32,48,96
And so on. But this simple task of identifying the smallest set of numbers to check has become a problem in itself, one I haven't found a solution for. Of course, the running time should stay under a minute either way.
Any other ideas?
The first number that can be divided by all elements of some set (which is what you have there, despite the slightly different formulation) is also known as the Least Common Multiple of that set. LCM(a, b, c) = LCM(LCM(a, b), c) and so on, so in general, it can be computed by taking n - 1 pairwise LCMs where n is the number of items in the set. BigInteger does not have an lcm function, but the LCM can be computed via a * b / gcd(a, b) so in Java with BigInteger:
static BigInteger lcm(BigInteger a, BigInteger b) {
    return a.multiply(b).divide(a.gcd(b));
}
For 1 to 20, computing the LCM in that way indeed results in 232792560. It's easy to do it up to 100 too.
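For illustration, a minimal sketch of that fold over the whole range (class and variable names are mine):

// A minimal sketch: LCM(1..100) as a left fold of pairwise LCMs.
import java.math.BigInteger;

public class LcmRange {
    static BigInteger lcm(BigInteger a, BigInteger b) {
        return a.multiply(b).divide(a.gcd(b));
    }

    public static void main(String[] args) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= 100; i++) {
            result = lcm(result, BigInteger.valueOf(i));
        }
        System.out.println(result); // a number of about 41 digits, in well under a second
    }
}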
Find all max prime powers in your range and take their product.
E.g. 1-10: 2^3, 3^2, 5^1, 7^1: the product is 2520, which is the right answer. You could find the primes via the sieve of Eratosthenes or just download them from a list of primes.
As 100 is small, you can work this out by producing the prime factorization of all numbers from 2 to 100 and keeping the largest exponent of each prime among all factorizations. In fact, trial divisions by 2, 3, 5 and 7 are enough to check primality up to 100, and there are just 25 primes to consider. You can implement a simple sieve to find the primes and perform the factorizations.
After you have found all the exponents of the prime decomposition of the lcm, you can either leave that as the answer or perform the multiplications explicitly.
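A minimal Java sketch of this prime-power approach, with a tiny sieve supplying the primes (names are mine):

// A minimal sketch: LCM(1..n) is the product over primes p <= n of the
// largest power p^k <= n. A small sieve of Eratosthenes supplies the primes.
import java.math.BigInteger;

public class PrimePowerLcm {
    public static void main(String[] args) {
        int n = 100;
        boolean[] composite = new boolean[n + 1];
        BigInteger result = BigInteger.ONE;
        for (int p = 2; p <= n; p++) {
            if (composite[p]) continue;          // p is prime
            for (int m = p * p; m <= n; m += p) composite[m] = true;
            int power = p;
            while (power <= n / p) power *= p;   // largest p^k <= n
            result = result.multiply(BigInteger.valueOf(power));
        }
        System.out.println(result); // same answer as the pairwise-lcm fold
    }
}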

Square numbers given in a sequence

Given a positive integer sequence of numbers in an array with common difference 2,
e.g. 2 4 6 8,
replace each number by its square. Perform the computation efficiently.
I was asked this question in an interview and I gave an O(n) solution using bitwise operators, since the operation involves multiples of 2. If there is any better method, please suggest it.
I dunno if it's better, but it's recursive!!! :-)
(n+2)^2 = n^2 + 4*n + 4 // and you've already got n^2
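A minimal Java sketch of that recurrence (my own code), assuming, as the question states, that the array holds consecutive even numbers, so only the first square needs a multiplication:

// A minimal sketch: one multiply for the first square, then each next
// square via (k+2)^2 = k^2 + 4k + 4, where 4k is just k << 2.
public class IncrementalSquares {
    static int[] squares(int[] a) {
        int[] out = new int[a.length];
        int k = a[0];
        int sq = k * k;         // the only multiplication
        for (int i = 0; i < a.length; i++) {
            out[i] = sq;
            sq += (k << 2) + 4; // advance k^2 to (k+2)^2
            k += 2;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] r = squares(new int[]{2, 4, 6, 8});
        System.out.println(java.util.Arrays.toString(r)); // [4, 16, 36, 64]
    }
}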
class Square
{
    public static int[] sequence(int[] array)
    {
        int[] result = new int[array.length];
        for (int i = 0; i < array.length; i++)
        {
            result[i] = array[i] * array[i];
        }
        return result;
    }
}
// test cases:
// Square.sequence(new int[]{2, 4, 6, 8})
// output -> { 4, 16, 36, 64 }
It really depends on the interviewer and what they think is "the right thing". If it were me, I'd think the (n << 2) + 4 trick was neat, but on the other hand, I'd hate to see it in my code: it takes more thinking to maintain, and there's a fair chance a good optimizer would do just as good a job.
I think the phrase "perform the operation efficiently" is probably our clue that the interviewer was looking for a fast computation. It's still O(n), but let's not forget that when you are comparing two O(n) algorithms, the coefficients start to matter again.

Fast algorithm for finding prime numbers? [duplicate]

This question already has answers here:
Which is the fastest algorithm to find prime numbers?
(20 answers)
Closed 9 years ago.
First of all, I've checked a lot on this forum and I haven't found anything fast enough.
I'm trying to make a function that returns the prime numbers in a specified range.
For example, I wrote this function (in C#) using the sieve of Eratosthenes. I also tried Atkin's sieve, but the Eratosthenes one runs faster (in my implementation):
public static void SetPrimesSieve(int Range)
{
    Primes = new List<uint>();
    Primes.Add(2);
    int Half = (Range - 1) >> 1;           // index odd numbers >= 3 only
    BitArray Nums = new BitArray(Half, false);
    int Sqrt = (int)Math.Sqrt(Range);
    for (int i = 3, j; i <= Sqrt; )
    {
        for (j = ((i * i) >> 1) - 1; j < Half; j += i)
            Nums[j] = true;                // mark odd multiples of i as composite
        do
            i += 2;
        while (i <= Sqrt && Nums[(i >> 1) - 1]);
    }
    for (int i = 0; i < Half; ++i)
        if (!Nums[i])
            Primes.Add((uint)(i << 1) + 3);
}
It runs about twice as fast as the code and algorithms I found...
There should be a faster way to find prime numbers; could you help me?
When searching around for algorithms on this topic (for Project Euler), I don't remember finding anything faster. If speed is really the concern, have you thought about just storing the primes so you simply need to look them up?
EDIT: a quick google search found this, confirming that the fastest method would be just to page the results and look them up as needed.
One more edit - you may find more information here, essentially a duplicate of this topic. The top post there states that Atkin's sieve was faster than Eratosthenes' as far as generating on the fly.
The fastest algorithm in my experience so far is the Sieve of Eratosthenes with wheel factorization for 2, 3 and 5, where the primes among the remaining numbers are represented as bits in a byte array. In Java, on one core of my three-year-old laptop, it takes 23 seconds to compute the primes up to 1 billion.
With wheel factorization the Sieve of Atkin was about a factor of two slower, while with an ordinary BitSet it was about 30% faster.
See also this answer.
I wrote an algorithm in C that finds the prime numbers in the range 2 to 90,000,000 in 0.65 s on an i3-350M notebook. You have to use bitwise operations and have code for translating an index of your array into the index of the concrete bit you want. For example, if you mark the multiples of 2, the bits will look like ...10101000..., so reading from the left you get indices 4, 6, 8... that's it.
Several comments.
For speed, precompute the primes and then load them from disk. It's super fast. I did it in Java long ago.
Don't store them as an array; store them as a bit sequence for odd numbers. Way more efficient on memory (see the sketch after this answer).
If your speed question is that you want this particular computation to run fast (and you need to justify why you can't precompute and load from disk), you need to code a better Atkin's sieve. It is faster, but only slightly.
You haven't indicated the end use for these primes. We may be missing something completely because you've not told us the application. Give us a sketch of the application and the answers will be targeted better for your context.
Why on earth do you think something faster exists? You haven't justified your hunch. This is a very hard problem (that is, finding something faster).
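A minimal Java sketch of the "bit sequence for odd numbers" layout suggested above (my own code, not the poster's), using java.util.BitSet, where bit i stands for the odd number 2*i + 3:

// A minimal sketch of an odd-only sieve: composite odd numbers are marked
// in a BitSet, halving memory compared to storing every integer.
import java.util.BitSet;

public class OddSieve {
    static BitSet compositeOdds(int limit) {
        int half = (limit - 1) / 2;              // odd numbers >= 3 up to limit
        BitSet composite = new BitSet(half);
        for (int i = 3; (long) i * i <= limit; i += 2) {
            if (composite.get((i - 3) / 2)) continue;
            for (long j = (long) i * i; j <= limit; j += 2L * i) {
                composite.set((int) ((j - 3) / 2));
            }
        }
        return composite;
    }

    public static void main(String[] args) {
        int limit = 100;
        BitSet c = compositeOdds(limit);
        StringBuilder primes = new StringBuilder("2");
        for (int i = 0; 2 * i + 3 <= limit; i++) {
            if (!c.get(i)) primes.append(' ').append(2 * i + 3);
        }
        System.out.println(primes); // 2 3 5 7 11 ... 97
    }
}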
You can do better than that using the Sieve of Atkin, but it is quite tricky to implement it fast and correctly. A simple translation of the Wikipedia pseudo-code is probably not good enough.

random permutation

I would like to generate a random permutation as fast as possible.
The problem: the Knuth shuffle, which is O(n), involves generating n random numbers.
Since generating random numbers is quite expensive,
I would like to find an O(n) algorithm involving a fixed O(1) number of random numbers.
I realize that this question has been asked before, but I did not see any relevant answers.
Just to stress the point: I am not looking for anything less than O(n), just an algorithm involving less generation of random numbers.
Thanks
Create a 1-to-1 mapping of each permutation to a number from 1 to n! (n factorial). Generate a random number from 1 to n!, apply the mapping, and you get the permutation.
For the mapping, perhaps this will be useful: http://en.wikipedia.org/wiki/Permutation#Numbering_permutations
Of course, this would get out of hand quickly, as n! can become really large soon.
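A minimal Java sketch of such a numbering via the factorial number system (my own code); note the index only fits in a long for n <= 20, which is exactly how it gets out of hand:

// A minimal sketch: decode one index in [0, n!) into a permutation of 0..n-1
// using the factorial number system, so only one random number is needed.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NthPermutation {
    static int[] decode(long index, int n) {
        List<Integer> pool = new ArrayList<>();
        for (int i = 0; i < n; i++) pool.add(i);
        long fact = 1;
        for (int i = 2; i < n; i++) fact *= i;   // (n-1)!
        int[] perm = new int[n];
        for (int i = 0; i < n; i++) {
            perm[i] = pool.remove((int) (index / fact)); // pick the next element
            if (n - 1 - i > 0) {
                index %= fact;
                fact /= n - 1 - i;               // (n-1-i)! for the next position
            }
        }
        return perm;
    }

    public static void main(String[] args) {
        // index 0 is the identity, index n! - 1 is the full reversal
        System.out.println(Arrays.toString(decode(5, 4))); // [0, 3, 2, 1]
    }
}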
Generating a random number takes a long time, you say? The implementation of Java's Random.nextInt is roughly:
oldseed = seed;
nextseed = (oldseed * multiplier + addend) & mask;
return (int)(nextseed >>> (48 - bits));
Is that too much work to do for each element?
See https://doi.org/10.1145/3009909 for a careful analysis of the number of random bits required to generate a random permutation. (It's open-access, but it's not easy reading! Bottom line: if carefully implemented, all of the usual methods for generating random permutations are efficient in their use of random bits.)
And... if your goal is to generate a random permutation rapidly for large N, I'd suggest you try the MergeShuffle algorithm. An article published in 2015 claimed a factor-of-two speedup over Fisher-Yates in both parallel and sequential implementations, and a significant speedup in sequential computations over the other standard algorithm they tested (Rao-Sandelius).
An implementation of MergeShuffle (and of the usual Fisher-Yates and Rao-Sandelius algorithms) is available at https://github.com/axel-bacher/mergeshuffle. But caveat emptor! The authors are theoreticians, not software engineers. They have published their experimental code to github but aren't maintaining it. Someday, I imagine someone (perhaps you!) will add MergeShuffle to GSL. At present gsl_ran_shuffle() is an implementation of Fisher-Yates, see https://www.gnu.org/software/gsl/doc/html/randist.html?highlight=gsl_ran_shuffle.
Not exactly what you asked, but if the provided random number generator doesn't satisfy you, maybe you should try something different. Generally, pseudorandom number generation can be very simple.
Probably the best-known algorithm:
http://en.wikipedia.org/wiki/Linear_congruential_generator
More:
http://en.wikipedia.org/wiki/List_of_pseudorandom_number_generators
As other answers suggest, you can make a random integer in the range 0 to N! and use it to produce a shuffle. Although theoretically correct, this won't be faster in general since N! grows fast and you'll spend all your time doing bigint arithmetic.
If you want speed and you don't mind trading off some randomness, you will be much better off using a less good random number generator. A linear congruential generator (see http://en.wikipedia.org/wiki/Linear_congruential_generator) will give you a random number in a few cycles.
Usually there is no need for the full range of the next random value, so to use exactly the same amount of randomness you can use the following approach (which is almost like random(0, N!), I guess):
// ...
m = 1; // range of random buffer (single variant)
r = 0; // random buffer (number zero)
// ...
for (/* ... */) {
    while (m < n) {        // range of our buffer is too narrow for "n"
        // note: the base is RAND_MAX + 1, since random() yields 0..RAND_MAX
        r = r * (RAND_MAX + 1) + random(); // add another random to our random-buffer
        m *= RAND_MAX + 1;                 // update range of random-buffer
    }
    x = r % n; // pull out the next random value with range "n"
    r /= n;    // remove it from the random-buffer
    m /= n;    // fix the range of the random-buffer
    // ...
}
P.S. Of course there will be some bias related to dividing by values different from 2^n, but it will be distributed among the resulting samples.
Generate N numbers (N being less than the number of random numbers you need) before doing the computation, or store them in an array as data, with your slow but good random generator; then pick numbers by simply incrementing an index into the array inside your computing loop; if you need different seeds, create multiple tables.
Are you sure that your mathematical and algorithmic approach to the problem is correct?
I hit exactly the same problem, where the Fisher–Yates shuffle was the bottleneck in corner cases. But for me the real problem was a brute-force algorithm that didn't scale well to all problems. The following story explains the problem and the optimizations that I have come up with so far.
Dealing cards for 4 players
The number of possible deals is a 96-bit number. That puts quite a stress on the random number generator to avoid statistical anomalies when selecting a play plan from a generated sample set of deals. I chose to use two mt19937_64 generators seeded from /dev/random because of the long period and the wide claims on the web that it is good for scientific simulations.
The simple approach is to use the Fisher–Yates shuffle to generate deals and filter out those that don't match the information already collected. The Knuth shuffle takes ~1400 CPU cycles per deal, mostly because I have to generate 51 random numbers and swap entries in the table 51 times.
That doesn't matter for normal cases, where I would only need to generate 10000-100000 deals in 7 minutes. But there are extreme cases where the filters select only a very small subset of hands, requiring a huge number of deals to be generated.
Using a single number for multiple cards
When profiling with callgrind (valgrind), I noticed that the main slowdown was the C++ random number generator (after switching away from std::uniform_int_distribution, which was the first bottleneck).
Then I came up with the idea of using a single random number for multiple cards: use the least significant information from the number first, then erase that information.
int number = uniform_rng(0, 52 * 51 * 50 * 49);
int card1 = number % 52;
number /= 52;
int card2 = number % 51;
number /= 51;
......
Of course that is only a minor optimization, because generation is still O(N).
Generation using bit permutations
The next idea was exactly the solution asked about here, but I still ended up with O(N), and with a larger cost than the original shuffle. Let's look at the solution and why it fails so miserably.
I decided to use the idea from Dealing All the Deals by John Christman.
void Deal::generate()
{
    // 52:26 split, 52!/(26!)^2 = 495,918,532,948,104
    max = 495918532948104LU;
    partner = uniform_rng(eng1, max);
    // 2x 26:13 splits, (26!/(13!)^2)^2 = 10,400,600^2
    max = 10400600LU * 10400600LU;
    hands = uniform_rng(eng2, max);
    // Create 104-bit presentation of the deal (2 bits per card)
    select_deal(id, partner, hands);
}
So far so good, and pretty good looking, but the select_deal implementation is a pain.
void select_deal(Id &new_id, uint64_t partner, uint64_t hands)
{
    unsigned idx;
    unsigned e, n, ns = 26;
    e = n = 13;
    // Figure out which partnership owns which card
    for (idx = CARDS_IN_SUIT * NUM_SUITS; idx > 0; ) {
        uint64_t cut = ncr(idx - 1, ns);
        if (partner >= cut) {
            partner -= cut;
            // Figure out if N or S holds the card
            ns--;
            cut = ncr(ns, n) * 10400600LU;
            if (hands > cut) {
                hands -= cut;
                n--;
            } else
                new_id[idx % NUM_SUITS] |= 1 << (idx / NUM_SUITS);
        } else
            new_id[idx % NUM_SUITS + NUM_SUITS] |= 1 << (idx / NUM_SUITS);
        idx--;
    }
    unsigned ew = 26;
    // Figure out if E or W holds a card
    for (idx = CARDS_IN_SUIT * NUM_SUITS; idx-- > 0; ) {
        if (new_id[idx % NUM_SUITS + NUM_SUITS] & (1 << (idx / NUM_SUITS))) {
            uint64_t cut = ncr(--ew, e);
            if (hands >= cut) {
                hands -= cut;
                e--;
            } else
                new_id[idx % NUM_SUITS] |= 1 << (idx / NUM_SUITS);
        }
    }
}
Now that I had the O(N) permutation solution done, proving the algorithm could work, I started searching for an O(1) mapping from a random number to a bit permutation. Too bad it looks like the only solution would be huge lookup tables that would kill the CPU caches. That doesn't sound like a good idea for an AI that will be using a very large amount of cache for the double dummy analyzer.
Mathematical solution
After all the hard work figuring out how to generate random bit permutations, I decided to go back to the maths. It is entirely possible to apply the filters before dealing the cards: that requires splitting the deals into a manageable number of layered sets and selecting between the sets based on their relative probabilities after the impossible sets are filtered out.
I don't yet have the code ready to test how many cycles I waste in the common case, where the filter selects the major part of the deals. But I believe this approach gives the most stable generation performance, keeping the cost below 0.1%.
Generate a 32 bit integer. For each index i (maybe only up to half the number of elements in the array), if bit i % 32 is 1, swap i with n - i - 1.
Of course, this might not be random enough for your purposes. You could probably improve it by swapping not with n - i - 1 but with the result of some other function of n and i that gives a better distribution. You could even use two functions: one for when the bit is 0 and another for when it's 1.
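A minimal Java sketch of that single-integer idea (my own code); note one 32-bit value can only ever select among 2^32 of the n! permutations:

// A minimal sketch: one 32-bit random number drives all the swaps, so the
// whole permutation costs O(1) random numbers. Far from uniform over n!.
import java.util.Arrays;
import java.util.Random;

public class CheapPermutation {
    static int[] permute(int n, Random rng) {
        int bits = rng.nextInt();
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        for (int i = 0; i < n / 2; i++) {
            if (((bits >>> (i % 32)) & 1) == 1) { // bit i % 32 decides the swap
                int t = a[i];
                a[i] = a[n - 1 - i];
                a[n - 1 - i] = t;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(permute(8, new Random())));
    }
}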

Is there an efficient implementation of tetration?

After recently answering a question involving the Ackermann function, part of which involved a function for computing the tetration of a number, I began to ponder whether there was a more efficient way to do it. I did some testing on my own, but I'm limited mainly by the fact that even a number such as 5^^3 = 5^3125 is enormous: given that 5^3 is roughly 10^2, 5^3125 ~= 10^(3125*2/3), around 2000 digits.
The function doesn't lend itself to divide-and-conquer methods due to the nature of how the exponentiation is done, i.e.:
2^^5 = 2^(2^(2^(2^2))) = 2^(2^(2^4)) = 2^(2^16) = 2^65536 ~= 10^(65536*3/10), so around 20k digits...
The nature of the problem, since it begins at the top of the power tower and works its way down, strikes me as factorial-like. A fast power algorithm can obviously be used for the exponentiation operation, but I haven't been able to see a way to shrink the number of exponentiation operations.
In case anyone is unclear what I'm talking about, here's the wiki article; essentially, though, tetration is:
a^^b = a^a^a...^a, b times, starting the exponentiation at the top element of the power tower and working down.
The algorithm I'm currently using would be the following (although I use a Ruby version if I actually want values):
// FastPower(a, b) computes a^b, e.g. by squaring; assumed defined elsewhere
long int Tetration(int number, int tetrate)
{
    long int product = 1;
    if (tetrate == 0)
        return product; // a^^0 = 1
    product = number;
    while (tetrate > 1)
    {
        product = FastPower(number, product);
        tetrate--;
    }
    return product;
}
Any thoughts would be appreciated.
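For reference, the FastPower in the snippet above would be something like exponentiation by squaring; here is a minimal Java sketch over BigInteger (my own code; BigInteger.pow(int) does the same job internally):

// A minimal sketch of exponentiation by squaring: O(log exp) multiplications
// instead of exp - 1. This is the "fast power" the snippet above assumes.
import java.math.BigInteger;

public class FastPower {
    static BigInteger pow(BigInteger base, int exp) {
        BigInteger result = BigInteger.ONE;
        while (exp > 0) {
            if ((exp & 1) == 1) result = result.multiply(base); // odd bit: take a factor
            base = base.multiply(base);                         // square for the next bit
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // 2^^4 = 2^(2^(2^2)) = 2^16 = 65536
        System.out.println(pow(BigInteger.valueOf(2), 16));
    }
}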
With tetration, if the final answer is d digits, then all intermediate results are O(log d) digits, as opposed to O(d) digits with exponentiation. Because the intermediate results for tetration are so small compared to the final result, there's no savings to be had via divide and conquer. It's also unlikely that there is a useful way to save exponentiation operations in a unit-cost RAM, since exponentiation isn't associative.
I don't think there is a simple way to do tetration, so I did this:
<!DOCTYPE html>
<html>
<head>
<script>
var x = 1
// That is ▽ the number you are powering
var test = 3
var test2 = test
setInterval(function () {
    // That is ▽ how many times you power it, so this would be 3 tetra 3, which is 7625597484987
    if (x < 3) {
        document.getElementById('test').innerHTML = test = test2 ** test
        x++
    }
}, 1)
</script>
</head>
<body>
<p id="test">0</p>
</body>
</html>
