There is a loop which performs a brute-force algorithm to calculate 5 * 3 without using arithmetic operators.
I just need to add five 3 times, so it takes O(3), which is O(y) if the inputs are x * y.
However, a book says it takes O(2^n), where n is the number of bits in the input. I don't understand why it uses O(2^n) rather than O(y). Is that a better way to express the time complexity? Could you please explain?
I'm not asking for another algorithm to calculate this.
int result = 0;
for (int i = 0; i < 3; i++) {
    result += 5;
}
You’re claiming that the time complexity is O(y) in the input, and the book is claiming that the time complexity is O(2^n) in the number of bits in the input. Good news: you’re both right! If a number y can be represented by n bits, y is at most 2^n − 1.
I think that you're misreading the passage from the book.
When the book is talking about the algorithm for computing the product of two numbers, it uses the example of multiplying 3 × 5 as a concrete instance of the more general idea of computing x × y by adding y + y + ... + y, x total times. It's not claiming that the specific algorithm "add 5 + 5 + 5" runs in time O(2^n). Instead, think about this algorithm:
int total = 0;
for (int i = 0; i < x; i++) {
    total += y;
}
The runtime of this algorithm is O(x). If you measure the runtime as a function of the number of bits n in the number x, as the book suggests, then the runtime is O(2^n), since a number represented with n bits can be as large as 2^n − 1. This is the distinction between polynomial time and pseudopolynomial time, and the reason the book then goes on to describe a better algorithm for solving this problem is to make the runtime a polynomial in the number of bits used to represent the input rather than in the numeric value of the numbers. The exposition about grade-school multiplication and addition is there to help you get a better sense for the difference between these two quantities.
Do not think in terms of 3 and 5. Think about how to calculate 2 billion × 2 billion (roughly 2^31 multiplied by 2^31).
Each input is 31 bits (N = 31), and the loop will execute 2 billion times, i.e. 2^N times.
So the book is correct. For the 5 × 3 case, 3 takes 2 bits, so the complexity is O(2^2). Again correct.
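To see the two measures side by side, here is a minimal Python sketch (mine, not from the book or the answers above) that counts the loop's iterations:

```python
def multiply_by_addition(x, y):
    """Compute x * y by repeated addition, counting loop iterations."""
    total = 0
    steps = 0
    for _ in range(y):
        total += x
        steps += 1
    return total, steps

product, steps = multiply_by_addition(5, 3)
print(product, steps, (3).bit_length())  # → 15 3 2
```

The loop runs y = 3 times, and 3 needs n = 2 bits to write down, so the very same count is O(y) measured against the value and O(2^n) measured against the bit length.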
What is the best amortized time complexity to compute floor(log(x)) amongst the algorithms that find floor(log(x)) for base 2?
There are many different algorithms for computing logarithms, each of which represents a different tradeoff of some sort. This answer surveys a variety of approaches and some of the tradeoffs involved.
Approach 1: Iterated Multiplication
One simple approach for computing ⌊log_b n⌋ is to compute the sequence b^0, b^1, b^2, etc. until we find a value greater than n. At that point, we can stop, and return the exponent just before this. The code for this is fairly straightforward:
x = 0;   # Exponent
bX = 1;  # b^Exponent
while (bX <= n) {
    x++;
    bX *= b;
}
return x - 1;
How fast is this? Notice that the loop counts up x = 0, x = 1, x = 2, etc. until eventually we reach x = ⌊log_b n⌋, doing one multiplication per iteration. If we assume all the integers we're dealing with fit into individual machine words - say, we're working with int or long or something like that - then each multiply takes time O(1) and the overall runtime is O(log_b n), with a memory usage of O(1).
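For concreteness, here is a runnable Python version of this approach (my sketch; Python's arbitrary-precision integers stand in for the machine words discussed above):

```python
def floor_log_iter(b, n):
    """floor(log_b(n)) by iterated multiplication. Assumes b >= 2, n >= 1."""
    x = 0   # exponent
    bX = 1  # b**x
    while bX <= n:
        x += 1
        bX *= b
    return x - 1

print(floor_log_iter(2, 31))    # → 4, since 2**4 = 16 <= 31 < 32
print(floor_log_iter(10, 999))  # → 2
```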
Approach 2: Repeated Squaring
There's an old interview question that goes something like this. I have a number n, and I'm not going to tell you what it is. You can make queries of the form "is x equal to n, less than n, or greater than n?," and your goal is to use the fewest queries to figure out what n is. Assuming you have literally no idea what n can be, one reasonable approach works like this: guess the values 1, 2, 4, 8, 16, 32, ..., 2^k, ..., until you overshoot n. At that point, use a binary search on the range you just found to discover what n is. This runs in time O(log n), since after computing 2^(1 + log2 n) = 2n you'll overshoot n, and after that you're binary searching over a range of size n for a total runtime of O(log n).
Finding logarithms, in a sense, kinda sorta matches this problem. We have a number n written as b^x for some unknown x, and we want to find x. Using the above strategy as a starting point, we can therefore compute b^(2^0), b^(2^1), b^(2^2), etc. until we overshoot b^x. From there, we can run a secondary binary search to figure out the exact exponent required.
We can compute the series of values b^(2^k) by using the fact that
b^(2^(k+1)) = b^(2 · 2^k) = (b^(2^k))^2
and find a value that overshoots as follows:
x = 1;   # exponent
bX = b;  # b^x
while (bX <= n) {
    bX *= bX;  # bX = bX^2
    x *= 2;
}
# Overshot, now do the binary search
The problem is how to do that binary search to figure things out. Specifically, we know that the final b^x is too big, but we don't know by how much. And unlike the "guess the number" game, binary searching over the exponent is a bit tricky.
One cute solution to this is based on the idea that if x is the value that we're looking for, then we can write x as a series of bits in binary. For example, let's write x = a_{m-1} 2^{m-1} + a_{m-2} 2^{m-2} + ... + a_1 2^1 + a_0 2^0. Then
b^x = b^(a_{m-1} 2^{m-1} + a_{m-2} 2^{m-2} + ... + a_1 2^1 + a_0 2^0)
    = b^(a_{m-1} 2^{m-1}) · b^(a_{m-2} 2^{m-2}) · ... · b^(a_0 2^0)
In other words, we can try to discover what bx is by building up x one bit at a time. To do so, as we compute the values b1, b2, b4, b8, etc., we can write down the values we discover. Then, once we overshoot, we can try multiplying them in and see which ones should be included and which should be excluded. Here's what that looks like:
k = 0;          # Index: powers[k] = b^{2^k}
bX = b;         # b^{2^k}
powers = [b];   # powers[i] = b^{2^i}
exps = [1];     # exps[i] = 2^i
while (bX <= n) {
    bX *= bX;               # bX = bX^2
    k++;
    powers += bX;           # Append b^{2^k}
    exps += exps[k-1] * 2;  # Append 2^k
}
# Overshot, now recover the bits of the answer
resultExp = 1;  # b^result
result = 0;
while (k >= 0) {
    # If including this bit doesn't overshoot, it's part of the
    # representation of x.
    if (resultExp * powers[k] <= n) {
        resultExp *= powers[k];
        result += exps[k];
    }
    k--;
}
return result;
This is certainly a more involved approach, but it is faster. Since the value we're looking for is ⌊log_b n⌋ and we're essentially using the solution from the "guess the number" game to figure out that exponent, the number of multiplications is O(log log_b n), with a memory usage of O(log log_b n) to hold the intermediate powers. This is exponentially faster than the previous solution!
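Here is a runnable Python sketch of the same idea (names are mine; it stores the powers b^(2^i) with their exponents 2^i, then greedily recovers the bits of the answer):

```python
def floor_log_sq(b, n):
    """floor(log_b(n)) via repeated squaring + bit recovery. Assumes b >= 2, n >= 1."""
    powers = [b]  # powers[i] = b**(2**i)
    exps = [1]    # exps[i]   = 2**i
    while powers[-1] <= n:
        powers.append(powers[-1] * powers[-1])
        exps.append(exps[-1] * 2)
    # powers[-1] now overshoots n; recover the bits of the exponent greedily.
    result, result_exp = 0, 1  # invariant: result_exp == b**result
    for p, e in zip(reversed(powers), reversed(exps)):
        if result_exp * p <= n:
            result_exp *= p
            result += e
    return result

print(floor_log_sq(2, 31))  # → 4
```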
Approach 3: Zeckendorf Representations
There's a slight modification on the previous approach that keeps the runtime of O(log logb n) but drops the auxiliary space usage to O(1). The idea is that, instead of writing the exponent in binary using the regular system, we write the number out using Zeckendorf's theorem, which is a binary number system based on the Fibonacci sequence. The advantage is that instead of having to store the intermediate powers of two, we can use the fact that any two consecutive Fibonacci numbers are sufficient to let you compute the next or previous Fibonacci number, allowing us to regenerate the powers of b as needed. Here's an implementation of that idea in C++.
Approach 4: Bitwise Iterated Multiplication
In some cases, you need to find logarithms where the log base is two. In that case, you can take advantage of the fact that numbers on a computer are represented in binary and that multiplications and divisions by two correspond to bit shifts.
For example, let's take the iterated multiplication approach from before, where we computed larger and larger powers of b until we overshot. We can do that same technique using bitshifts, and it's much faster:
x = 0;  # Exponent
while ((1 << x) <= n) {
    x++;
}
return x - 1;
This approach still runs in time O(log n), as before, but is probably faster implemented this way than with multiplications because the CPU can do bit shifts much more quickly.
Approach 5: Bitwise Binary Search
The base-two logarithm of a number, written in binary, is equivalent to the position of the most-significant bit of that number. To find that bit, we can use a binary search technique somewhat reminiscent of Approach 2, though done faster because the machine can process multiple bits in parallel in a single instruction. Basically, as before, we generate the sequence 2^(2^0), 2^(2^1), etc. until we overshoot the number, giving an upper bound on how high the highest bit can be. From there, we do a binary search to find the highest 1 bit. Here's what that looks like:
x = 1;
while ((1 << x) <= n) {
    x *= 2;
}
# We've overshot the high-order bit. Do a binary search to find it.
low = 0;
high = x;
while (low < high) {
    mid = (low + high) / 2;
    # Form a bitmask with 1s strictly above bit number mid.
    mask = ~((1 << (mid + 1)) - 1);
    # If n has a set bit above mid, branch higher
    if (mask & n) {
        low = mid + 1;
    }
    # Otherwise, branch lower
    else {
        high = mid;
    }
}
return high;
This approach runs in time O(log log n), since the binary search takes time logarithmic in the quantity being searched for and the quantity we're searching for is O(log n). It uses O(1) auxiliary space.
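Here is a hedged Python sketch of this binary search (mine); Python's built-in n.bit_length() serves as an independent check of the answer:

```python
def floor_log2(n):
    """floor(log2(n)) for n >= 1: overshoot by doubling, then binary search."""
    x = 1
    while (1 << x) <= n:
        x *= 2
    low, high = 0, x        # the highest set bit's index lies in [low, high]
    while low < high:
        mid = (low + high) // 2
        if n >> (mid + 1):  # does n have a set bit strictly above mid?
            low = mid + 1
        else:
            high = mid
    return high

for n in (1, 2, 3, 5, 64, 1000, 12345):
    assert floor_log2(n) == n.bit_length() - 1
```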
Approach 6: Magical Word-Level Parallelism
The speedup in Approach 5 is largely due to the fact that we can test multiple bits in parallel using a single bitwise operation. By doing some truly amazing things with machine words, it's possible to harness this parallelism to find the most-significant bit in a number in time O(1) using only basic arithmetic operations and bit-shifts, and to do so in a way where the runtime is completely independent of the size of a machine word (e.g. the algorithm works equally quickly on a 16-bit, 32-bit, and 64-bit machine). The techniques involved are fairly complex, and I will confess that I had no idea this was possible until I recently learned the technique while teaching an advanced data structures course.
Summary
To summarize, here are the approaches listed, their time complexity, and their space complexity.
Approach Which Bases? Time Complexity Space Complexity
--------------------------------------------------------------------------
Iter. Multiplication Any O(log_b n) O(1)
Repeated Squaring Any O(log log_b n) O(log log_b n)
Zeckendorf Logarithm Any O(log log_b n) O(1)
Bitwise Multiplication 2 O(log n) O(1)
Bitwise Binary Search 2 O(log log n) O(1)
Word-Level Parallelism 2 O(1) O(1)
There are many other algorithms I haven't mentioned here that are worth exploring. Some algorithms work by segmenting machine words into blocks of some fixed size, precomputing the position of the first 1 bit in each block, then testing one block at a time. These approaches have runtimes that depend on the size of the machine word, and (to the best of my knowledge) none of them are asymptotically faster than the approaches I've outlined here. Other approaches work by using the fact that some processors have instructions that immediately output the position of the most significant bit in a number, or by using floating-point hardware. Those are fascinating too; be sure to check them out!
Another area worth exploring is when you have arbitrary-precision integers. There, the costs of multiplications, divisions, shifts, etc. are not O(1), and the relative costs of these algorithms change. This is definitely worth exploring in more depth if you're curious!
The code included here is written in pseudocode because it's designed mostly for exposition. In a real implementation, you'd need to worry about overflow, handling the case where the input is negative or zero, etc. Just FYI. :-)
Hope this helps!
for(i=1;i<=n;i=pow(2,i)) { print i }
What will be the time complexity of this?
The approximate kth value of i will be pow(2, pow(2, pow(2, ... ))), nested k times.
How can this kth value of i, for i < n, be solved for k?
What you have is similar to tetration(2,n), but it's not quite that, since the ending condition is slightly different.
The complexity greatly depends on the domain and implementation. From your sample code I infer the real domain and integers.
This function grows really fast, so after 5 iterations you need bigints, for which even +, -, *, /, <<, >> are not O(1). The implementations of pow and print also have a great impact.
In the case of small n < tetration(2,4) you can assume the complexity is O(1), as there is no asymptotic behavior to speak of for such small n.
Beware that pow is floating-point in most languages, and raising 2 to the power i can be translated into a simple bit shift, so let's assume this:
for (i=1;i<=n;i=1<<i) print(i);
We could use previous state of i to compute 1<<i like this:
i0=i; i<<=(i-i0);
but there is no speedup on such big numbers.
Now the complexity of decadic print(i) is one of the following:
O( log(i)) // power of 10 datawords (like 1000000000 for 32 bit)
O((log(i))^2) // power of 2 datawords naive print implementation
O( log(i).log(log(i))) // power of 2 datawords subdivision or FFT based print implementation
The complexity of bit shift 1<<i and comparison i<=n is:
O(log(i)) // power of 2 datawords
So choosing the best implementation of print in power-of-2 datawords leads to a per-iteration cost of:
O(log(i).log(log(i)) + log(i) + log(i)) -> O(log(i).log(log(i)))
At first glance one might think we need to know the number of iterations k as a function of n:
n = tetration(2,k)
k = slog2(n)
or Knuth's notation which is directly related to Ackermann function:
n = 2↑↑k
k = 2↓↓n
but the number of iterations is so small in comparison to the inner complexity of the stuff inside the loop, and each iteration grows so fast relative to the previous one, that the previous iteration is a negligible fraction of the next. So we can ignore all of them and consider only the last term/iteration...
After all these assumptions I got final complexity:
O(log(n).log(log(n)))
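As a sanity check, here is a small Python sketch (mine) counting the iterations of the 1<<i form of the loop; Python's bigints stand in for the bignum arithmetic discussed above:

```python
def count_iterations(n):
    """Count how many times the body of `for (i=1; i<=n; i=1<<i)` executes."""
    i, k = 1, 0
    while i <= n:
        i = 1 << i  # i = 2**i
        k += 1
    return k

print(count_iterations(10**100))  # → 5: the loop visits i = 1, 2, 4, 16, 65536
```

This is why the analysis above can ignore the iteration count: even for a 100-digit n, the body runs only 5 times, and the last iteration dominates all the others combined.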
When we do the sum of n numbers using a for loop for(i=1;i<=n;i++), the complexity of this is O(n), but if we do the same computation using the formula for an arithmetic progression series, n(n-1)/2, and compute the time complexity, it's O(n^2). How? Please resolve my doubt.
You are confused about what the numbers represent.
Basically, we are counting the number of steps when we talk about complexity.
n(n+1)/2 is the value of Summation(1..n), that's correct, but different ways of computing it take different numbers of steps, and we are counting those steps.
Compare the following:
int ans = 0;
for(int i=1; i<=n; i++) ans += i;
// this uses n steps

int ans2 = 0;
ans2 = n*(n+1)/2;
// this uses 1 step!!

int ans3 = 0;
for(int i=1, mx = n*(n+1)/2; i<=mx; i++) ans3++;
// this takes n*(n+1)/2 steps
// You were thinking the formula would look like this when translated into code!

All three answers give the same value!
So you can see that only the first method and the third method (which is of course not practical at all) are affected by n: different n will cause them to take different numbers of steps, while the second method, which uses the formula, always takes 1 step no matter what n is.
That being said, if you know the formula beforehand, it is always best to just compute the answer directly with the formula.
Your second formula has O(1) complexity, that is, it runs in constant time, independent of n.
There's no contradiction. The complexity is a measure of how long the algorithm takes to run. Different algorithms can compute the same result at different speeds.
[BTW the correct formula is n*(n+1)/2.]
Edit: Perhaps your confusion has to do with an algorithm that takes n*(n+1)/2 steps, which is (n^2 + n)/2 steps. We call that O(n^2) because it grows essentially (asymptotically) as n^2 when n gets large. That is, it grows on the order of n^2, the high order term of the polynomial.
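To make the step counts concrete, here is a small Python sketch (mine, mirroring the snippets above) showing that the loop and the formula compute the same value at very different costs:

```python
def sum_loop(n):
    """n additions: O(n) steps."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """One multiply, one add, one divide: O(1) steps."""
    return n * (n + 1) // 2

assert sum_loop(1000) == sum_formula(1000) == 500500
```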
I was trying to find the complexity of an algorithm via different approaches. Mathematically I came across one O(m+n) and another O(mn) approach. However, I am unable to grasp or, say, visualize this. It's not like I look at them and get the "Ahh! That's what's going on" feeling! Can someone explain this using their own examples or any other tool?
O(m+n) example:
for(int i = 0; i < m; i++)
    //code
for(int j = 0; j < n; j++)
    //code
m iterations of code happen. Then n iterations of code happen.
O(mn) example:
for(int i = 0; i < m; i++)
    for(int j = 0; j < n; j++)
        //code
For every iteration of m, we have n iterations of code. Imagine iterating over a non-square 2D array.
m and n are not necessarily equal. If they were equal, then for O(m+n):
O(m+n) => O(m+m) => O(2m) => O(m)
I'd recommend looking at this question/answer in order to understand that last transition.
And for O(mn):
O(mn) => O(mm) => O(m^2)
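A small Python sketch (mine) that literally counts the steps in both shapes of loop:

```python
def sequential(m, n):
    """Two loops one after the other: m + n steps."""
    steps = 0
    for _ in range(m):
        steps += 1
    for _ in range(n):
        steps += 1
    return steps

def nested(m, n):
    """One loop inside the other: m * n steps."""
    steps = 0
    for _ in range(m):
        for _ in range(n):
            steps += 1
    return steps

print(sequential(3, 5), nested(3, 5))  # → 8 15
```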
My recommendation for finding intuition is thought experiments as follows:
First, realize that m and n are two different measurements of the input. They might be the lengths of two input streams, the lengths of sides of a matrix, or the counts of two different attributes of the same data structure, such as edge and node count of the same graph, or any similar measures.
The intuition is that big-O expresses a bound on the true run time (or some other aspect such as comparison count or space needed) of an algorithm in terms of a simple function - call that R(m, n) - multiplied by some arbitrary constant. We ignore the constant factors and think of all algorithms bounded by the same R as a family by calling their run times O( R(m, n) ).
Consequently, big O(m + n) says that the true run time is bounded by some function R(m,n) = C(m + n) for suitably big m and n. For the graph example, this says that the actual run time of the algorithm will be bounded by a multiple of the sum of the number of vertices and edges.
You can think of the bounding function as a graph in 3d with axes m, n, and R(m,n). Or you can think of charts:
R(m,n) = m + n
--------------
m= 1 2 3 4
n=1   2   3   4   5
2 3 4 5 6
3 4 5 6 7
4 5 6 7 8
For R(m,n) = mn, you have
R(m,n) = mn
--------------
m= 1 2 3 4
n=1 1 2 3 4
2 2 4 6 8
3 3 6 9 12
4 4 8 12 16
As a 3d graph, the first function is a plane and the second is a much faster-growing function at almost all points. This means that if m and n grow large enough, an O(mn) bound will ultimately be larger (corresponding to a potentially slower program) than an O(m+n) because the constants become insignificant.
For an example of the cost of rapid growth, suppose an O(m+n) algorithm has an extra constant factor of 3 in its runtime bound (making it potentially very slow on small inputs compared to both algorithms above):
R(m,n) = 3(m + n)
--------------
m= 1 2 3 4
n=1   6   9  12  15
2 9 12 15 18
3 12 15 18 21
4 15 18 21 24
So the O(m + n) bound looks less constrained than the O(mn) one in the chart above. But look at the case m = n = 100. There the bound on the O(m + n) algorithm is 3(m + n) = 600, while the O(mn) algorithm with the small constant has bound mn = 10000. Clearly you want the first if m and n are large.
@Anonymous raised a fine point about the initial version of this answer, which confused big-O and big-Theta. Big-O only deals with bounds or upper limits on the quantity being measured. For example, this means that every O(n) algorithm is also O(n log n) and O(n^2). If the real run time is bounded by the slower-growing function, it is also bounded by all faster-growing ones.
Yet it is quite common for people to say "this algorithm is O(n)" while meaning that the bound is tight. That is, that Cn is an upper bound on the run time for some constant C and Dn is also a lower bound for some other constant D (and suitably large n). Such a tight bound is properly stated as Theta(n), not O(n). The run time of a Theta(R(m, n)) algorithm is (roughly speaking) proportional to R(m, n).
I'll add finally that there are many cases where you can't ignore constants. There exist lots of algorithms in the literature that are asymptotically "faster" than those in common use, but have constants so large that for practical problem sizes they are always too slow. Computational geometry has many examples. Radix 2 sort is another. It's Theta(n), but in practice a good quicksort (Theta(n log n) average case) will beat it on arrays of size up to at least 10^8 integers.
O(m+n) is asymptotically much faster than O(mn); the gap grows without bound as m and n grow.
The O(m+n) algorithm could be one that iterates 2 sets and does a constant time (O(1)) operation on each element.
The O(mn) algorithm could be one that iterates the first set and does a linear search (O(n)) for the matching element in the second set.
The O(mn) algorithm is probably what professors would call The Naive Approach
Link to the original problem
It's not a homework question. I just thought that someone might know a real solution to this problem.
I was on a programming contest back in 2004, and there was this problem:
Given n, find the sum of the digits of n!. n can be from 0 to 10000. Time limit: 1 second. I think there were up to 100 numbers in each test set.
My solution was pretty fast but not fast enough, so I just let it run for some time. It built an array of pre-calculated values which I could use in my code. It was a hack, but it worked.
But there was a guy, who solved this problem with about 10 lines of code and it would give an answer in no time. I believe it was some sort of dynamic programming, or something from number theory. We were 16 at that time so it should not be a "rocket science".
Does anyone know what kind of an algorithm he could use?
EDIT: I'm sorry if I didn't make the question clear. As mquander said, there should be a clever solution, without bignum, with just plain Pascal code, a couple of loops, O(n^2) or something like that. 1 second is not a constraint anymore.
I found here that if n > 5, then 9 divides the sum of the digits of a factorial. We can also find how many zeros there are at the end of the number. Can we use that?
OK, another problem from a programming contest in Russia: given 1 <= N <= 2 000 000 000, output N! mod (N+1). Is that somehow related?
I'm not sure who is still paying attention to this thread, but here goes anyway.
First, in the official-looking linked version, it only has to be 1000 factorial, not 10000 factorial. Also, when this problem was reused in another programming contest, the time limit was 3 seconds, not 1 second. This makes a huge difference in how hard you have to work to get a fast enough solution.
Second, for the actual parameters of the contest, Peter's solution is sound, but with one extra twist you can speed it up by a factor of 5 with 32-bit architecture. (Or even a factor of 6 if only 1000! is desired.) Namely, instead of working with individual digits, implement multiplication in base 100000. Then at the end, total the digits within each super-digit. I don't know how good a computer you were allowed in the contest, but I have a desktop at home that is roughly as old as the contest. The following sample code takes 16 milliseconds for 1000! and 2.15 seconds for 10000! The code also ignores trailing 0s as they show up, but that only saves about 7% of the work.
#include <stdio.h>
int main() {
    unsigned int dig[10000], first = 0, last = 0, carry, n, x, sum = 0;
    dig[0] = 1;
    for (n = 2; n <= 10000; n++) {
        carry = 0;
        for (x = first; x <= last; x++) {
            carry = dig[x] * n + carry;
            dig[x] = carry % 100000;
            if (x == first && !(carry % 100000)) first++;
            carry /= 100000;
        }
        if (carry) dig[++last] = carry;
    }
    for (x = first; x <= last; x++)
        sum += dig[x] % 10 + (dig[x] / 10) % 10 + (dig[x] / 100) % 10
             + (dig[x] / 1000) % 10 + (dig[x] / 10000) % 10;
    printf("Sum: %d\n", sum);
    return 0;
}
Third, there is an amazing and fairly simple way to speed up the computation by another sizable factor. With modern methods for multiplying large numbers, it does not take quadratic time to compute n!. Instead, you can do it in O-tilde(n) time, where the tilde means that you can throw in logarithmic factors. There is a simple acceleration due to Karatsuba that does not bring the time complexity down to that, but still improves it and could save another factor of 4 or so. In order to use it, you also need to divide the factorial itself into equal sized ranges. You make a recursive algorithm prod(k,n) that multiplies the numbers from k to n by the pseudocode formula
prod(k,n) = prod(k,floor((k+n)/2))*prod(floor((k+n)/2)+1,n)
Then you use Karatsuba to do the big multiplication that results.
Even better than Karatsuba is the Fourier-transform-based Schonhage-Strassen multiplication algorithm. As it happens, both algorithms are part of modern big number libraries. Computing huge factorials quickly could be important for certain pure mathematics applications. I think that Schonhage-Strassen is overkill for a programming contest. Karatsuba is really simple and you could imagine it in an A+ solution to the problem.
Part of the question posed is some speculation that there is a simple number theory trick that changes the contest problem entirely. For instance, if the question were to determine n! mod n+1, then Wilson's theorem says that the answer is -1 when n+1 is prime, and it's a really easy exercise to see that it's 2 when n=3 and otherwise 0 when n+1 is composite. There are variations of this too; for instance n! is also highly predictable mod 2n+1. There are also some connections between congruences and sums of digits. The sum of the digits of x mod 9 is also x mod 9, which is why the sum is 0 mod 9 when x = n! for n >= 6. The alternating sum of the digits of x mod 11 equals x mod 11.
The problem is that if you want the sum of the digits of a large number, not modulo anything, the tricks from number theory run out pretty quickly. Adding up the digits of a number doesn't mesh well with addition and multiplication with carries. It's often difficult to promise that the math does not exist for a fast algorithm, but in this case I don't think that there is any known formula. For instance, I bet that no one knows the sum of the digits of a googol factorial, even though it is just some number with roughly 100 digits.
This is A004152 in the Online Encyclopedia of Integer Sequences. Unfortunately, it doesn't have any useful tips about how to calculate it efficiently - its Maple and Mathematica recipes take the naive approach.
I'd attack the second problem, to compute N! mod (N+1), using Wilson's theorem. That reduces the problem to testing whether N is prime.
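As a quick sketch of that reduction (my Python; it brute-forces the factorial mod N+1 only to illustrate the pattern, while the whole point of Wilson's theorem is to skip this loop and just primality-test N+1):

```python
def fact_mod(n, m):
    """n! mod m by direct multiplication (fine for small n)."""
    r = 1
    for k in range(2, n + 1):
        r = r * k % m
    return r

# Wilson's theorem: N! ≡ -1 ≡ N (mod N+1) exactly when N+1 is prime.
for N in (4, 6, 10, 12):  # N+1 = 5, 7, 11, 13 are prime
    assert fact_mod(N, N + 1) == N
for N in (5, 7, 8):       # N+1 = 6, 8, 9 are composite, giving 0
    assert fact_mod(N, N + 1) == 0
assert fact_mod(3, 4) == 2  # the lone exception: 3! = 6 ≡ 2 (mod 4)
```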
Small, fast Python script, adapted from http://www.penjuinlabs.com/blog/?p=44. It's elegant but still brute force.
import sys
from functools import reduce

for arg in sys.argv[1:]:
    factorial = reduce(lambda x, y: x * y, range(1, int(arg) + 1))
    print(sum(int(digit) for digit in str(factorial)))
$ time python sumoffactorialdigits.py 432 951 5436 606 14 9520
real 0m1.252s
user 0m1.108s
sys 0m0.062s
Assume you have big numbers (this is the least of your problems, assuming that N is really big, and not 10000), and let's continue from there.
The trick below is to factor N! by factoring all n<=N, and then compute the powers of the factors.
Keep a vector of counters, one counter for each prime number up to N, all initialized to 0. For each n <= N, factor n and increase the counters of its prime factors accordingly (factor smartly: start with the small primes, construct the primes while factoring, and remember that division by 2 is a shift). Then subtract the counter of 5 from the counter of 2, and set the counter of 5 to zero (nobody cares about factors of 10 here).
Compute all the prime numbers up to N, then run the following loop:
for (j = 0; j < last_prime; ++j) {
    count[j] = 0;
    for (i = N / primes[j]; i; i /= primes[j])
        count[j] += i;
}
Note that in the previous block we only used (very) small numbers.
For each prime factor P you have to raise P to the power of the appropriate counter, which takes log(counter) time using iterative squaring; then you have to multiply all these prime powers together.
All in all you have about N log(N) operations on small numbers (about log N per prime factor), and log N · log(log N) operations on big numbers.
And after the improvement in the edit, only N operations on small numbers.
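A hedged Python sketch of the counting loop above, combined with a simple sieve (the function name and the sieve are my additions; the inner loop is the same count[j] computation, known as Legendre's formula):

```python
from math import prod

def prime_exponents_in_factorial(N):
    """Exponent of each prime p <= N in the factorization of N!."""
    sieve = [True] * (N + 1)
    counts = {}
    for p in range(2, N + 1):
        if sieve[p]:
            for multiple in range(p * p, N + 1, p):
                sieve[multiple] = False
            # Legendre's formula: e = N//p + N//p^2 + N//p^3 + ...
            e, i = 0, N // p
            while i:
                e += i
                i //= p
            counts[p] = e
    return counts

counts = prime_exponents_in_factorial(10)
assert counts == {2: 8, 3: 4, 5: 2, 7: 1}          # 10! = 2^8 * 3^4 * 5^2 * 7
assert prod(p**e for p, e in counts.items()) == 3628800
```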
HTH
1 second? Why can't you just compute n! and add up the digits? That's 10000 multiplications and no more than a few tens of thousands of additions, which should take approximately one zillionth of a second.
You have to compute the factorial.
1 * 2 * 3 * 4 * 5 = 120.
If you only want to calculate the sum of digits, you can ignore the ending zeroes.
For 6! you can do 12 x 6 = 72 instead of 120 * 6
For 7! you can use (72 * 7) MOD 10
EDIT.
I wrote a response too quickly...
10 is the product of the two prime numbers 2 and 5.
Each time you have these 2 factors, you can ignore them.
n:  1   2   3   4   5   6   7   8   9  10  11  12  13  14  15 ...
    1   2   3   2   5   2   7   2   3   2  11   2  13   2   3
                2       3       2   3   5       2       7   5
                                2               3
The factor 5 appears at 5, 10, 15...
Then an ending zero will appear after multiplying by 5, 10, 15...
We have a lot of 2s and 3s... We'll overflow soon :-(
Then, you still need a library for big numbers.
I deserve to be downvoted!
Let's see. We know that the calculation of n! for any reasonably large number will eventually lead to a number with lots of trailing zeroes, which don't contribute to the sum. How about lopping off the zeroes along the way? That'd keep the size of the number a bit smaller?
Hmm. Nope. I just checked, and integer overflow is still a big problem even then...
Even without arbitrary-precision integers, this should be brute-forceable. In the problem statement you linked to, the biggest factorial that would need to be computed would be 1000!. This is a number with about 2500 digits. So just do this:
Allocate an array of 3000 bytes, with each byte representing one digit in the factorial. Start with a value of 1.
Run grade-school multiplication on the array repeatedly, in order to calculate the factorial.
Sum the digits.
Doing the repeated multiplications is the only potentially slow step, but I feel certain that 1000 of the multiplications could be done in a second, which is the worst case. If not, you could compute a few "milestone" values in advance and just paste them into your program.
One potential optimization: Eliminate trailing zeros from the array when they appear. They will not affect the answer.
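Here is a Python sketch of the digit-array approach described above (Python used for brevity; a contest solution would be plain C or Pascal, and the trailing-zero optimization is omitted):

```python
def digit_sum_factorial(n):
    """Sum of digits of n! via grade-school multiplication on a digit array."""
    digits = [1]  # least-significant digit first; represents the value 1
    for k in range(2, n + 1):
        carry = 0
        for i in range(len(digits)):
            carry += digits[i] * k
            digits[i] = carry % 10
            carry //= 10
        while carry:  # the number grew; append new high-order digits
            digits.append(carry % 10)
            carry //= 10
    return sum(digits)

assert digit_sum_factorial(10) == 27  # 10! = 3628800 → 3+6+2+8+8+0+0 = 27
```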
OBVIOUS NOTE: I am taking a programming-competition approach here. You would probably never do this in professional work.
Another solution, using BigInteger:
static long q20() {
    long sum = 0;
    String factorial = factorial(new BigInteger("100")).toString();
    for (int i = 0; i < factorial.length(); i++) {
        sum += Character.getNumericValue(factorial.charAt(i));
    }
    return sum;
}

static BigInteger factorial(BigInteger n) {
    BigInteger one = BigInteger.ONE;
    if (n.equals(one)) return one;
    return n.multiply(factorial(n.subtract(one)));
}