I thought of the following problem recently, and I'm quite surprised that there doesn't seem to be anybody who asked this question yet:
Given a string, how many distinct permutations of it exist, modulo 10^9 + 7?
I know the formula n! / (c1! * c2! * ... * ck!), where n is the length of the string and c1, ..., ck are the counts of each character (considering an alphabet of size k). So, the string toffee would have 6! / (1! * 1! * 2! * 2!) = 180 different permutations.
But this doesn't quite work anymore when n can be really large, since computing n! would go out of the range of long long int, and using BigIntegers would be too slow. Is there any way to compute this in roughly linear time?
If I preprocessed the factorials from 0 to n, and my "strings" came in the form of an array of length k where each element contained the count of one letter, would it be possible to compute the answer in time that depends only on k?
Would appreciate any help on this :)
The trick is to note that p = 10^9 + 7 is a prime number. Therefore, we can use multiplicative inverses and Fermat's little theorem to turn the divisions in your formula into multiplications by the inverses:
n! / (a1! * ... * ak!) ≡ n! * (a1!)^(p-2) * ... * (ak!)^(p-2)  (mod p)
This gives your formula mod p with no divisions and an easy implementation (just use modular exponentiation by squaring).
Complexity will be O(k log p + n), since we have O(k) multiplications, and for each one, an O(log p) exponentiation, and we must also compute n! and the factorial of each count.
This is easier to implement than cancelling out factors in the fraction.
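For illustration, here is a minimal C++ sketch of this approach (the names power_mod and count_permutations are mine, and it assumes a lowercase a-z alphabet): precompute the factorials mod p, then multiply n! by the inverse of each count's factorial via Fermat's little theorem.

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    const int64_t MOD = 1000000007LL;  // p = 10^9 + 7, a prime

    // Modular exponentiation by squaring: computes b^e mod MOD in O(log e).
    int64_t power_mod(int64_t b, int64_t e) {
        int64_t result = 1;
        b %= MOD;
        while (e > 0) {
            if (e & 1) result = result * b % MOD;
            b = b * b % MOD;
            e >>= 1;
        }
        return result;
    }

    // Number of distinct permutations of s, modulo MOD.
    int64_t count_permutations(const std::string& s) {
        int n = s.size();
        std::vector<int64_t> fact(n + 1, 1);          // factorials 0..n mod MOD
        for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i % MOD;

        std::vector<int> cnt(26, 0);                  // letter counts (a-z assumed)
        for (char c : s) ++cnt[c - 'a'];

        // n! * (a1!)^(p-2) * ... * (ak!)^(p-2) mod p, by Fermat's little theorem.
        int64_t result = fact[n];
        for (int a : cnt)
            result = result * power_mod(fact[a], MOD - 2) % MOD;
        return result;
    }

    int main() {
        std::cout << count_permutations("toffee") << "\n";  // prints 180
    }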
The number of distinct permutations of a string is always an integer, despite being the result of a division. That's because the factors of the denominator essentially "knock out" some of the factors of the numerator. So you can eliminate the division as a post-factorial operation, instead dividing out the particular factors of the factorial which you've matched up with factors of the denominator.
Once you've got the division removed, you're just left with modular multiplication, which is simple.
Yes, a solution exists. You can read about the modular multiplicative inverse algorithm.
As the answer is taken modulo 1000000007 (which is also a prime), you can use Fermat's little theorem to solve this problem. If the modulus is mod, the complexity is O(N + K * log(mod)).
If N isn't gigantic (that is, it's small enough to sift it using something like Sieve of Eratosthenes), you can calculate the prime factorisation of N! with a modified version of the sieve.
Then you can use the prime factorisation to calculate the division, cancelling out the factors present on both sides of the division.
Though this doesn't take into account the fact that you want the result modulo a prime number (where better solutions exist), it's probably useful to know in the general case.
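As a rough sketch of that idea (the function name and the use of a smallest-prime-factor table are my own choices): sieve the smallest prime factor of every number up to N, then accumulate the exponent of each prime in N! by factoring 2, 3, ..., N. The exponents coming from the denominator factorials can be subtracted the same way before doing the now division-free multiplication.

    #include <cstdint>
    #include <map>
    #include <vector>

    // Exponent of every prime in the factorization of N!, via a
    // smallest-prime-factor sieve (Legendre's formula is an alternative).
    std::map<int, int64_t> factorial_factorization(int N) {
        std::vector<int> spf(N + 1, 0);            // smallest prime factor of each number
        for (int i = 2; i <= N; ++i)
            if (spf[i] == 0)                       // i is prime
                for (int j = i; j <= N; j += i)
                    if (spf[j] == 0) spf[j] = i;

        std::map<int, int64_t> exponent;           // prime -> exponent in N!
        for (int i = 2; i <= N; ++i)
            for (int x = i; x > 1; x /= spf[x])
                ++exponent[spf[x]];
        return exponent;
    }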
This is identical to the question "Check if one integer is an integer power of another", but I am wondering about the complexity of a method that I came up with to solve this problem.
Given an integer n and another integer m, is n = m^p for some integer p? Note that ^ here denotes exponentiation, not xor.
There is a simple O(log_m n) solution based on dividing n repeatedly by m until it's 1 or until there's a non-zero remainder.
I'm thinking of a method inspired by binary search, and it's not clear to me how complexity should be calculated in this case.
Essentially you start with m, then you go to m^2, then m^4, m^8, m^16, .....
When you find that m^{2^k} > n, you check the range bounded by m^{2^{k-1}} and m^{2^k}. Is this solution O(log_2 (log_m(n)))?
Somewhat related to this, if I do something like
m^2 * m^2
vs.
m * m * m * m
Do these 2 have the same complexity? If they do, then I think the algorithm I came up with is still O(log_m (n))
Not quite. First of all, let's assume that multiplication is O(1), and exponentiation a^b is O(log b) (using exponentiation by squaring).
Now, using your method of doubling the exponent p_candidate and then doing a binary search, you can find the real p in O(log p) steps (or observe that p does not exist). But each try within the binary search requires you to compute m^p_candidate, whose exponent is bounded by p, and by the assumption above each such exponentiation costs O(log p). So the overall time complexity is O(log^2(p)).
But we want to express the time complexity in terms of the inputs n and m. From the relationship n = m^p, we get p = log(n)/log(m), and hence log(p) = log(log(n)/log(m)). Hence the overall time complexity is
O(log^2(log(n)/log(m)))
If you want to get rid of the m, you can provide a looser upper bound by using
O(log^2(log(n)))
which is close to, but not quite O(log(log(n))).
(Note that you can always omit the logarithmic bases in the O-notation since all logarithmic functions differ only by a constant factor.)
Now, the interesting question is: is this algorithm better than one that is O(log(n))? I haven't proved it, but I'm pretty certain it is the case that O(log^2(log(n))) is in O(log(n)) but not vice versa. Anyone care to prove it?
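For what it's worth, here is a hedged C++ sketch of the doubling-plus-binary-search method discussed above (pow_clamped and is_power_of are my own names, and unsigned __int128 is a GCC/Clang extension used only to avoid overflow):

    #include <cstdint>
    #include <iostream>

    typedef unsigned __int128 u128;   // GCC/Clang extension

    // Computes m^e exactly if m^e <= cap, and returns cap + 1 otherwise.
    // Assumes m >= 2 and cap < UINT64_MAX (so cap + 1 does not wrap).
    uint64_t pow_clamped(uint64_t m, uint64_t e, uint64_t cap) {
        u128 result = 1, base = m;
        bool over = false;
        while (e > 0 && !over) {
            if (e & 1) {
                if (base > cap || result * base > cap) over = true;
                else result *= base;
            }
            e >>= 1;
            if (e > 0) {
                if (base > cap) over = true;      // any later use would overshoot too
                else base *= base;
            }
        }
        return over ? cap + 1 : (uint64_t)result;
    }

    // Is n = m^p for some integer p >= 0? (doubling, then binary search on p)
    bool is_power_of(uint64_t n, uint64_t m) {
        if (m < 2) return n == m;                  // degenerate m = 0 or 1
        uint64_t hi = 1;
        while (pow_clamped(m, hi, n) < n) hi *= 2; // 1. double p until m^p >= n
        if (pow_clamped(m, hi, n) == n) return true;
        uint64_t lo = hi / 2;                      // 2. binary search p in (hi/2, hi)
        while (lo <= hi) {
            uint64_t mid = lo + (hi - lo) / 2;
            uint64_t v = pow_clamped(m, mid, n);
            if (v == n) return true;
            if (v < n) lo = mid + 1; else hi = mid - 1;
        }
        return false;
    }

    int main() {
        std::cout << is_power_of(1024, 2) << " "   // 1
                  << is_power_of(243, 3)  << " "   // 1
                  << is_power_of(100, 3)  << "\n"; // 0
    }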
I asked myself whether one can compute the nth Fibonacci number in O(n) or even O(1) time, and why.
Can someone explain please?
Yes. It is called Binet's Formula, or sometimes, incorrectly, De Moivre's Formula (the real De Moivre's formula is another, but De Moivre did discover Binet's formula before Binet), and involves the golden ratio Phi. The mathematical reasoning behind this (see link) is a bit involved, but doable: F(n) = (φ^n − ψ^n) / √5, where φ = (1 + √5) / 2 and ψ = (1 − √5) / 2 = 1 − φ.
While it is an approximate formula, Fibonacci numbers are integers -- so, once you achieve a high enough precision (depends on n), you can just approximate the number from Binet's formula to the closest integer.
Precision, however, depends on constants, so you basically have two versions, one with float numbers and one with double precision numbers, with the second also running in constant time, but slightly slower. For large n you will need an arbitrary precision number library, and those have processing times that do depend on the numbers involved; as observed by @MattTimmermans, you'll then probably end up with an O(log^2 n) algorithm. This should happen for large enough values of n that you'd be stuck with a large-number library no matter what (but I'd need to test this to be sure).
Otherwise, the Binet formula is mainly made up of two exponentiations and one division (the three sums and divisions by 2 are probably negligible), while the recursive formula mainly employs function calls and the iterative formula uses a loop. While the first formula is O(1) and the other two are O(n), the actual times are more like a, b·n + c and d·n + e, with values for a, b, c, d and e that depend on the hardware, compiler, implementation etc. With a modern CPU it is very likely that a is not much larger than b or d, which means that the O(1) formula should be faster for almost every n. But most implementations of the iterative algorithm start with
if (n < 2) {
return n;
}
which is very likely to be faster for n = 0 and n = 1. I feel confident that Binet's formula is faster for any n beyond the single digits.
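A minimal sketch of Binet's formula in C++ with plain doubles (my own code; the rounding trick relies on the ψ^n term vanishing, and double precision only holds up to roughly F(70) before rounding errors creep in):

    #include <cmath>
    #include <cstdint>
    #include <iostream>

    // Binet's formula: F(n) = round(phi^n / sqrt(5)), since |psi|^n / sqrt(5) < 1/2.
    // Accurate only while double precision suffices (moderate n).
    uint64_t fib_binet(unsigned n) {
        const double sqrt5 = std::sqrt(5.0);
        const double phi = (1.0 + sqrt5) / 2.0;
        return (uint64_t)std::llround(std::pow(phi, n) / sqrt5);
    }

    int main() {
        for (unsigned n = 0; n <= 10; ++n) std::cout << fib_binet(n) << " ";
        std::cout << "\n";  // expected: 0 1 1 2 3 5 8 13 21 34 55
    }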
Instead of thinking about the recursive method, think of building the sequence from the bottom up, starting at 1+1.
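In code, that bottom-up idea is just a short loop (a sketch; O(n) additions, and the 64-bit result overflows past F(93)):

    #include <cstdint>

    // Iterative Fibonacci: build the sequence from the bottom up.
    uint64_t fib_iter(unsigned n) {
        uint64_t a = 0, b = 1;               // F(0), F(1)
        for (unsigned i = 0; i < n; ++i) {
            uint64_t next = a + b;
            a = b;
            b = next;
        }
        return a;                            // F(n)
    }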
You can also use a matrix m like this:
1 1
1 0
and calculate power n of it, then output m^n[0][0].
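A hedged sketch of that matrix method with exponentiation by squaring (my own code; with the convention F(0) = 0, F(1) = 1, the entry m^n[0][1] is F(n) and m^n[0][0] is F(n+1)), using O(log n) matrix products:

    #include <array>
    #include <cstdint>

    typedef std::array<std::array<uint64_t, 2>, 2> Mat2;

    // 2x2 matrix product. Plain 64-bit arithmetic, so it overflows for large n;
    // in practice one would reduce modulo something or switch to big integers.
    Mat2 mul(const Mat2& a, const Mat2& b) {
        Mat2 c{};
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                for (int k = 0; k < 2; ++k)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Computes [[1,1],[1,0]]^n with exponentiation by squaring.
    Mat2 mat_pow(Mat2 m, unsigned n) {
        Mat2 r{};
        r[0][0] = r[1][1] = 1;               // identity matrix
        while (n > 0) {
            if (n & 1) r = mul(r, m);
            m = mul(m, m);
            n >>= 1;
        }
        return r;
    }

    uint64_t fib_matrix(unsigned n) {
        Mat2 base{};
        base[0][0] = base[0][1] = base[1][0] = 1;   // [[1, 1], [1, 0]]
        return mat_pow(base, n)[0][1];       // F(n); entry [0][0] is F(n+1)
    }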
Everyone knows that factorization is hard. But what if I wanted to calculate the prime factorization of every number from 2 to N? If we have computed the prime factorization of every number in [2, n-1] and if a number n has a small prime factor, then computing the factorization of n is easy, because roughly 73% of numbers are divisible by 2, 3 or 5. Of course, some cases, such as when n is a product of two primes of similar size, are still difficult, but on average we might expect this problem to be reasonably easy, as we should only ever have to find one factor of a number to reduce our problem to two problems we've solved before (i.e. factoring d and n/d).
I ask because I'm interested in finding the sum of the sum-of-squares function r(n) (http://mathworld.wolfram.com/SumofSquaresFunction.html) as n ranges from 0 to N. This counts the number of integer points in a circle. As can be seen on the Wolfram MathWorld page, there is a formula for r(n) in terms of the prime factorization of n.
I've taken two approaches thus far:
1) Count the number of points satisfying x^2 + y^2 = n, with 0 < x < y, and then use some permutation argument to find r(n)
2) Compute the prime factorization of n (independently, each time), and compute r(n) with this information.
Experimentally, 2) seems to be faster, but it doesn't scale up well, in comparison to the first method, which is slower, but doesn't get THAT much slower. I'm interested in computing R(N) = sum from 1 to N of r(n) for a 40-digit N.
Another option would be to use something like the Sieve of Eratosthenes to generate all primes up to N, then combine them in various ways, to calculate the prime factorizations of all the numbers from 2 to N, and use the same formula as before.
Does anyone have any ideas about which of these options may work most effectively? 1) is the easiest to implement, starts off slow, but probably scales up quite well. 2) starts off quick but doesn't scale up well; quick factor-finding is certainly more difficult to implement, but it may do very well if modified to use memoization of previous factorizations, or to use some prime-generating technique as mentioned above.
Even if 1) is the quickest, I'd still be interested in learning the quickest way of generating all prime factorisations from 0 to N.
The Sieve of Eratosthenes can be modified to compute the factorizations of all numbers from 2 to N. Instead of just marking off multiples of primes, keep track of each multiple as it strikes a number from the list. I give a complete solution with code at my blog.
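A small sketch of such a modified sieve (my own code, not the blog's): each time a prime strikes one of its multiples, record it, dividing out repeated factors so every number ends up with its complete factorization.

    #include <vector>

    // factors[n] lists the prime factors of n with multiplicity for all n in [2, N]
    // (e.g. factors[12] = {2, 2, 3}). Memory is the limiting factor for large N.
    std::vector<std::vector<int>> factor_all(int N) {
        std::vector<std::vector<int>> factors(N + 1);
        std::vector<int> rest(N + 1);               // still-unfactored part of each number
        for (int n = 2; n <= N; ++n) rest[n] = n;
        for (int p = 2; p <= N; ++p) {
            if (!factors[p].empty()) continue;      // p was struck earlier, so it's composite
            for (int m = p; m <= N; m += p)         // p is prime: strike all its multiples
                while (rest[m] % p == 0) {
                    factors[m].push_back(p);
                    rest[m] /= p;
                }
        }
        return factors;
    }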
This is the pseudocode for integer factorisation taken from CLRS. But what is the point of calculating the GCD in Line 8, and why does k need to be doubled when i == k in Line 13? Help please.
That pseudocode is not Pollard-rho factorization despite the label. It is one trial of the related Brent's factorization method. In Pollard-rho factorization, in the ith step you compute x_i and x_(2i), and check the GCD of x_(2i)-x_i with n. In Brent's factorization method, you compute GCD(x_(2^a)-x_(2^a+b),n) for b=1,2, ..., 2^a. (I used the indices starting with 1 to agree with the pseudocode, but elsewhere the sequence is initialized with x_0.) In the code, k=2^a and i=2^a+b. When you detect that i has reached the next power of 2, you increase k to 2^(a+1).
GCDs can be computed very rapidly by Euclid's algorithm without knowing the factorizations of the numbers. Any time you find a nontrivial GCD with n, this helps you to factor n. In both Pollard-rho factorization and Brent's algorithm, one idea is that if you iterate a polynomial such as x^2-c, the differences between the values of the iterates mod n tend to be good candidates for numbers that share nontrivial factors with n. This is because (by the Chinese Remainder Theorem) iterating the polynomial mod n is the same as simultaneously iterating the polynomial mod each prime power in the prime factorization of n. If x_i=x_j mod p1^e1 but not mod p2^e2, then GCD(xi-xj,n) will have p1^e1 as a factor but not p2^e2, so it will be a nontrivial factor.
This is one trial because x_1 is initialized once. If you get unlucky, the value you choose for x_1 starts a preperiodic sequence that repeats at the same time mod each prime power in the prime factorization of n, even though n is not prime. For example, suppose n=1711=29*59, and x_1 = 4, x_2=15, x_3=224, x_4=556, x_5=1155, x_6=1155, ... This sequence does not help you to find a nontrivial factorization, since all of the GCDs of differences between distinct elements and 1711 are 1. If you start with x_1=5, then x_2=24, x_3=575, x_4=401, x_5=1677, x_6=1155, x_7=1155, ... In either factorization method, you would find that GCD(x_4-x_2,1711)=GCD(377,1711)=29, a nontrivial factor of 1711. Not only are some sequences not helpful, others might work, but it might be faster to give up and start with another initial value. So, normally you don't keep increasing i forever, normally there is a termination threshold where you might try a different initial value.
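For reference, here is a hedged C++ sketch of the classic Pollard-rho loop described here, with f(x) = x^2 + 1 and the x_i versus x_(2i) comparison (the name pollard_rho and the give-up-and-return-0 policy are mine; a real implementation would retry with a different x0 or a different constant in the polynomial):

    #include <cstdint>
    #include <numeric>   // std::gcd

    typedef unsigned __int128 u128;

    // One trial of Pollard's rho starting from x0.
    // Returns a nontrivial factor of the composite n, or 0 if this trial fails.
    uint64_t pollard_rho(uint64_t n, uint64_t x0) {
        auto f = [n](uint64_t x) { return (uint64_t)(((u128)x * x + 1) % n); };
        uint64_t x = x0, y = x0, d = 1;
        while (d == 1) {
            x = f(x);                        // x_i
            y = f(f(y));                     // x_(2i)
            d = std::gcd(x > y ? x - y : y - x, n);
        }
        return (d == n) ? 0 : d;             // d == n: the cycle closed without a factor
    }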
If you need to generate primes from 1 to N, the "dumb" way to do it would be to iterate through all the numbers from 2 to N and check if each number is divisible by any prime number found so far that is less than the square root of the number in question.
As I see it, the Sieve of Eratosthenes does the same, except the other way round - when it finds a prime N, it marks off all the numbers that are multiples of N.
But whether you mark off X when you find N, or you check if X is divisible by N, the fundamental complexity, the big-O, stays the same. You still do one constant-time operation per number-prime pair. In fact, the dumb algorithm breaks off as soon as it finds a prime, but the Sieve of Eratosthenes marks each number several times - once for every prime it is divisible by. That's a minimum of twice as many operations for every number except primes.
Am I misunderstanding something here?
In the trial division algorithm, the most work that may be needed to determine whether a number n is prime, is testing divisibility by the primes up to about sqrt(n).
That worst case is met when n is a prime or the product of two primes of nearly the same size (including squares of primes). If n has more than two prime factors, or two prime factors of very different size, at least one of them is much smaller than sqrt(n), so even the accumulated work needed for all these numbers (which form the vast majority of all numbers up to N, for sufficiently large N) is relatively insignificant. I shall ignore that and work with the fiction that composite numbers are determined without doing any work (the products of two approximately equal primes are few in number, so although individually they cost as much as a prime of similar size, altogether that's a negligible amount of work).
So, how much work does the testing of the primes up to N take?
By the prime number theorem, the number of primes <= n is (for n sufficiently large), about n/log n (it's n/log n + lower order terms). Conversely, that means the k-th prime is (for k not too small) about k*log k (+ lower order terms).
Hence, testing the k-th prime requires trial division by pi(sqrt(p_k)), approximately 2*sqrt(k/log k), primes. Summing that for k <= pi(N) ~ N/log N yields roughly 4/3*N^(3/2)/(log N)^2 divisions in total. So by ignoring the composites, we found that finding the primes up to N by trial division (using only primes), is Omega(N^1.5 / (log N)^2). Closer analysis of the composites reveals that it's Theta(N^1.5 / (log N)^2). Using a wheel reduces the constant factors, but doesn't change the complexity.
In the sieve, on the other hand, each composite is crossed off as a multiple of at least one prime. Depending on whether you start crossing off at 2*p or at p*p, a composite is crossed off as many times as it has distinct prime factors or distinct prime factors <= sqrt(n). Since any number has at most one prime factor exceeding sqrt(n), the difference isn't so large, it has no influence on complexity, but there are a lot of numbers with only two prime factors (or three with one larger than sqrt(n)), thus it makes a noticeable difference in running time. Anyhow, a number n > 0 has only few distinct prime factors, a trivial estimate shows that the number of distinct prime factors is bounded by lg n (base-2 logarithm), so an upper bound for the number of crossings-off the sieve does is N*lg N.
By counting not how often each composite gets crossed off, but how many multiples of each prime are crossed off, as IVlad already did, one easily finds that the number of crossings-off is in fact Theta(N*log log N). Again, using a wheel doesn't change the complexity but reduces the constant factors. However, here it has a larger influence than for the trial division, so at least skipping the evens should be done (apart from reducing the work, it also reduces storage size, so improves cache locality).
So, even disregarding that division is more expensive than addition and multiplication, we see that the number of operations the sieve requires is much smaller than the number of operations required by trial division (if the limit is not too small).
Summarising:
Trial division does futile work by dividing primes, the sieve does futile work by repeatedly crossing off composites. There are relatively few primes, but many composites, so one might be tempted to think trial division wastes less work.
But: Composites have only few distinct prime factors, while there are many primes below sqrt(p).
In the naive method, you do O(sqrt(num)) operations for each number num you check for primality. This is O(n*sqrt(n)) total.
In the sieve method, for each unmarked number from 1 to n you do n / 2 operations when marking multiples of 2, n / 3 when marking those of 3, n / 5 when marking those of 5 etc. This is n*(1/2 + 1/3 + 1/5 + 1/7 + ...), which is O(n log log n). See here for that result.
So the asymptotic complexity is not the same, like you said. Even a naive sieve will beat the naive prime-generation method pretty fast. Optimized versions of the sieve can get much faster, but the big-oh remains unchanged.
The two are not equivalent like you say. For each number, you will check divisibility by the same primes 2, 3, 5, 7, ... in the naive prime-generation algorithm. As you progress, you check divisibility by the same series of numbers (and you keep checking against more and more as you approach your n). For the sieve, you keep checking less and less as you approach n. First you check in increments of 2, then of 3, then 5 and so on. This will hit n and stop much faster.
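As a small illustration of that bound (my own sketch, not from the answer): a plain sieve that counts its crossings-off and prints n*log(log(n)) next to it for comparison; the two grow at the same rate.

    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<bool> is_prime(n + 1, true);
        is_prime[0] = is_prime[1] = false;
        uint64_t marks = 0;                           // crossing-off operations
        for (int p = 2; (int64_t)p * p <= n; ++p)
            if (is_prime[p])
                for (int64_t m = (int64_t)p * p; m <= n; m += p) {
                    is_prime[m] = false;
                    ++marks;
                }
        std::cout << "markings: " << marks
                  << "   n*log(log(n)): " << n * std::log(std::log((double)n)) << "\n";
    }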
Because with the sieve method, you stop marking multiples of the running primes when the running prime reaches the square root of N.
Say, you want to find all primes less than a million.
First you set an array
std::vector<bool> primetest(1000001);
for (int i = 2; i <= 1000000; ++i)
    primetest[i] = true;
Then you iterate
for (int j = 2; j <= 1000; ++j)          // 1000 is the square root of 1000000
    if (primetest[j])                    // if j is prime:
        // mark all multiples of j (starting from j*j) as "not a prime"
        for (int k = j * j; k <= 1000000; k += j)
            primetest[k] = false;
You don't have to check j after 1000, because j*j will be more than a million.
And you start from j*j (you don't have to mark multiples of j less than j^2 because they are already marked as multiples of previously found, smaller primes)
So, in the end you have done the loop 1000 times and the if part only for those j's that are primes.
The second reason is that with the sieve, you only do multiplication, not division. If you do it cleverly, you only do addition, not even multiplication.
And division has larger complexity than addition. The usual schoolbook way to do division has O(n^2) complexity, while addition has O(n), where n is the number of digits.
Explained in this paper: http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf
I think it's quite readable even without Haskell knowledge.
The first difference is that division is much more expensive than addition. Even if each number is 'marked' several times, it's trivial compared with the huge number of divisions needed for the 'dumb' algorithm.
A "naive" Sieve of Eratosthenes will mark non-prime numbers multiple times.
But, if you have your numbers in a linked list and remove numbers that are multiples (you will still need to walk the remainder of the list), the work left to do after finding a prime is always smaller than it was before finding the prime.
http://en.wikipedia.org/wiki/Prime_number#Number_of_prime_numbers_below_a_given_number
the "dumb" algorithm does i/log(i) ~= N/log(N) work for each prime number
the real algorithm does N/i ~= 1 work for each prime number
Multiply by roughly N/log(N) prime numbers.
There was a time when I was trying to find an efficient way of finding the sum of primes less than x.
There I decided to use an N by N square table and started checking only the numbers whose units digit is in [1, 3, 7, 9].
But Eratosthenes' method of finding primes made it a little easier. How?
Say you want to know whether N is prime or not.
You start looking for a factorization. You will notice that as you divide N by larger and larger factors, the quotient gets smaller.
So take a number K = int(sqrt(N)); if K divides N, the quotient is roughly the same size as, and close to, K.
Now let's say you divide N by some u < K. If u is not prime and one of the prime factors of u is v (a prime), then v will obviously be less than u (v < u), and v will also divide N.
So why not check whether N is prime by dividing N only by the primes less than or equal to K = int(sqrt(N))?
The number of times the loop keeps executing is then π(√N).
This is how the brilliant idea of Eratosthenes starts to take shape and gives you the intuition behind all of this.
Btw, using the Sieve of Eratosthenes one can find the sum of primes less than a multiple of 10, because for a given column you only need to check the units digits [1, 3, 7, 9] and count how many times a particular units digit repeats.
I'm new to the Stack Overflow community! I would appreciate suggestions if anything here is wrong.