Power of two to get an integer - algorithm

I need a (fairly) fast way to get the following for my code.
Background: I have to work with powers of numbers and their product, so I decided to use logs.
Now I need a way to convert the log back to an integer.
I can't just take 2^log_val (I'm working with log base 2) because the answer would be too large. In fact I need to give the answer mod M for a given M.
I tried doing this: I wrote log_val as p + q, where p is an integer and q is a float with q < 1.
Now I can calculate 2^p mod M very fast using O(log n) exponentiation along with the modulo, but I can't do anything with the 2^q part. What I thought of doing is finding the first integer x such that 2^(x+q) is very close to an integer, and then calculating 2^(p-x).
This is too slow for me because in the worst case it takes O(p) steps.
Is there a better way?

While working with large numbers as logs is usually a good approach, it won't work here. The issue is that working in log space throws away the least significant digits, so you have lost information and won't be able to go back. Working in mod space also throws away information (otherwise your number would get too big, as you say), but it throws away the most significant digits instead.
For your particular problem POWERMUL, what I would do is to calculate the prime factorizations of the numbers from 1 to N. You have to be careful how you do it, since your N is fairly large.
Now, if your number is k with the prime factorization {2: 3, 5: 2} you get the factorization of k^m by {2: m*3, 5:m*2}. Division similarly turns into subtraction.
Once you have the prime factorization representation of f(N)/(f(r)*f(N-r)) you can recreate the integer with a combination of modular multiplication and modular exponentiation. The latter is a cool technique to look up. (In fact, languages like Python have it built in: pow(3, 16, 7) == 4.)
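For illustration, here is a minimal sketch in Python of that reconstruction step, assuming f is the factorial, i.e. computing f(N)/(f(r)*f(N-r)) mod M from prime exponents (the helper names are mine, not part of the original problem):

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def factorial_prime_exponents(n):
    # Exponent of each prime p <= n in n!.
    exps = {}
    for p in primes_up_to(n):
        e, q = 0, n
        while q:
            q //= p
            e += q
        exps[p] = e
    return exps

def binomial_mod(n, r, m):
    # f(n) / (f(r) * f(n-r)) mod m, where f is the factorial.
    exps = factorial_prime_exponents(n)
    for part in (r, n - r):
        for p, e in factorial_prime_exponents(part).items():
            exps[p] -= e                       # division = subtracting exponents
    result = 1
    for p, e in exps.items():
        result = result * pow(p, e, m) % m     # modular exponentiation
    return result

print(binomial_mod(10, 3, 1000))               # 120, i.e. C(10, 3) mod 1000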
Have fun :)

If you need an answer mod N, you can often do each step of your whole calculation mod N. That way, you never exceed your system's integer size restrictions.
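For example (a sketch with made-up numbers), in Python that just means reducing after every multiplication and using three-argument pow for the exponentiation steps:

M = 10**9 + 7                        # example modulus
x = 1
for factor in (123456789, 987654321, 555555555):
    x = x * factor % M               # reduce at every step, so x never exceeds M*M
x = x * pow(2, 10**18, M) % M        # built-in modular exponentiation for huge exponents
print(x)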


finding power of a number

I have a very big number which is a product of several small primes. I know the number and also I know the prime factors but I don't know their powers. For example:
(2^a)x(3^b)x(5^c)x(7^d)x(11^e)x .. = 2310
Now I want to recover the exponents in a very fast and efficient manner. I want to implement it in an FPGA.
Regards,
The issue is that you are doing a linear search for the right power when you should be doing a binary search. Below is an example showing how to handle the case where the power of p is 10 (p^10). This method finds the power in O(log N) divisions rather than O(N).
First find an upper limit by increasing the power quickly until it's too high (which happens at step 5 below), then use a binary search to find the actual power.
Check divisibility by p. Works.
Check divisibility by p^2. Works.
Check divisibility by p^4. Works.
Check divisibility by p^8. Works.
Check divisibility by p^16. Doesn't work. Undo/ignore this one.
Check divisibility by p^((8+16)/2)=p^12. Doesn't work. Undo/ignore this one.
Check divisibility by p^((8+12)/2)=p^10. Works, but might be too low.
Check divisibility by p^((10+12)/2)=p^11. Doesn't work. Undo/ignore this one.
Since ((10+11)/2)=10.5 is not an integer, the power must be the low end, which is 10.
Note, there is a method where you actually divide by p, and at step 4, you've actually divided the number by p^(1+2+4+8)=p^15, but it's a bit more difficult to explain the binary search part. However, the size of the number being divided gets smaller, so division operations are faster.
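Here is a sketch of the basic (non-dividing) version in Python, following the numbered steps above; the function name and arguments are just illustrative:

def find_prime_power(n, p):
    # Largest e such that p**e divides n, using O(log e) divisibility tests.
    # Phase 1: exponential search for an upper bound (steps 1-5 above).
    hi = 1
    while n % (p ** hi) == 0:
        hi *= 2
    lo = hi // 2                     # p**lo divides n (or lo == 0); p**hi does not
    # Phase 2: binary search between lo and hi (steps 6-8 above).
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if n % (p ** mid) == 0:
            lo = mid
        else:
            hi = mid
    return lo

print(find_prime_power(3 ** 10 * 7, 3))   # 10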

Generate a random number using coin flips while guaranteeing termination

The usual method to generate a uniform random number in 0..n-1 using coin flips is to build an RNG for the smallest power of two that is at least n in the obvious way; then, whenever this algorithm generates a number larger than n-1, throw that number away and try again.
Unfortunately this has worst case runtime of infinity.
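For concreteness, the method looks roughly like this (taking the target range to be 0..n-1; the names are just illustrative):

import random

def uniform_via_coin_flips(n):
    # Uniform integer in 0..n-1 from fair coin flips, by rejection.
    k = (n - 1).bit_length()                     # smallest k with 2**k >= n
    while True:                                  # no deterministic bound on retries
        r = 0
        for _ in range(k):
            r = (r << 1) | random.randint(0, 1)  # one coin flip per bit
        if r < n:
            return r                             # otherwise reject and flip again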
Is there any way to solve this problem while guaranteeing termination?
Quote from this answer https://stackoverflow.com/a/137809/261217:
There is no (exactly correct) solution which will run in a constant
amount of time, since 1/7 is an infinite decimal in base 5.
Now ask Adam Rosenfield why it is true :)

Algorithm to find an arbitrarily large number

Here's something I've been thinking about: suppose you have a number, x, that can be infinitely large, and you have to find out what it is. All you know is if another number, y, is larger or smaller than x. What would be the fastest/best way to find x?
An evil adversary chooses a really large number somehow ... say:
int x = 9^9^9^9^9^9^9^9^9^9^9^9^9^9^9
and provides isX, isBiggerThanX, and isSmallerThanX functions. Example code might look something like this:
int c = 2
int y = 2
while(true)
    if isX(y) return true
    if(isBiggerThanX(y)) fn()
    else y = y^c
where fn() is a function that, once a number y has been found (that's bigger than x) does something to determine x (like divide the number in half and compare that, then repeat). The thing is, since x is arbitrarily large, it seems like a bad idea to me to use a constant to increase y.
This is just something that I've been wondering about for a while now, I'd like to hear what other people think
Use a binary search as in the usual "try to guess my number" game. But since there is no finite upper end point, we do a first phase to find a suitable one:
Initially set the upper end point arbitrarily (e.g. 1000000, though 1 or 10^100 would also work -- given the infinite space to work in, all finite values are equally disproportionate).
Compare the mystery number X with the upper end point.
If it's not big enough, double it, and try again.
Once the upper end point is bigger than the mystery number, proceed with a normal binary search.
The first phase is itself similar to a binary search. The difference is that instead of halving the search space with each step, it's doubling it! The cost of each phase is O(log X). A small improvement would be to set the lower end point at each doubling step: we know X is at least as high as the previous upper end point, so we can reuse it as the lower end point. The size of the search space still doubles at each step, but in the end it will be half as large as it would have been. The cost of the binary search is reduced by only 1 step, so its overall complexity remains the same.
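A minimal sketch of the two phases in Python, assuming the adversary supplies predicates like the question's isSmallerThanX / isBiggerThanX:

def find_mystery_number(is_smaller_than_x, is_bigger_than_x):
    # Phase 1: double the upper end point until it passes X,
    # reusing the previous upper end point as the new lower end point.
    lo, hi = 0, 1
    while is_smaller_than_x(hi):      # hi < X, keep doubling
        lo, hi = hi, hi * 2
    # Phase 2: ordinary binary search on [lo, hi].
    while lo < hi:
        mid = (lo + hi) // 2
        if is_smaller_than_x(mid):
            lo = mid + 1
        elif is_bigger_than_x(mid):
            hi = mid - 1
        else:
            return mid                # mid == X
    return lo

x = 9 ** 9
print(find_mystery_number(lambda y: y < x, lambda y: y > x))   # 387420489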
Some notes
A couple of notes in response to other comments:
It's an interesting question, and computer science is not just about what can be done on physical machines. As long as the question can be defined properly, it's worth asking and thinking about.
The range of numbers is infinite, but any possible mystery number is finite. So the above method will eventually find it. "Eventually" here means that, for any possible finite input, the algorithm will terminate within a finite number of steps. However, since the input is unbounded, the number of steps is also unbounded (it's just that, in every particular case, it will "eventually" terminate).
If I understand your question correctly (advise if I do not), you're asking about how to solve "pick a number from 1 to 10", except that instead of 10, the upper bound is infinity.
If your number space is truly infinite, the following are true:
The value will never be held in an int (or any other data type) on any physical hardware
You will NEVER find your number
If the space is immensely large but bound, I think the best you can do is a binary search. Start at the middle of the number range. If the desired number turns out to be higher or lower, divide that half of the number space, and repeat until the desired number is found.
In your suggested implementation you raise y to the power c. However, no matter how large c is chosen to be, it will not even move the needle in infinite space.
Infinity isn't a number. Thus you can't find it, even with a computer.
That's funny. I've wondered the same thing for years, though I've never heard anyone else ask the question.
As simple as your scenario is, it still seems to provide insufficient information to allow the choice of an optimal strategy. All one can choose is a suitable heuristic. My heuristic had been to double y, but I think that I like yours better. Yours doubles log(y).
The beauty of your heuristic is that, so long as the integer fits in the computer's memory, it finds a suitable y in logarithmic time.
Counter-question. Once you find y, how do you proceed?
I agree with using binary search, though I believe that a ONE-SIDED binary search would be more suitable, since here the complexity would NOT be O( log n ) [ Where n is the range of allowable numbers ], but O( log k ) - where k is the number selected by your adversary.
This would work as follows : ( Pseudocode )
k = 1;
while( isSmallerThanX( k ) )
{
    k = k*2;
}
// At this point, once the loop is exited, k is bigger than x
// Now do normal binary search for the range [ k/2, k ] to find your number :)
So even if the allowable range is infinity, as long as your number is finite, you should be able to find it :)
Your method of tetration is guaranteed to take longer than the age of the universe to find an answer, if the opponent merely uses a paradigm which is better (for example, pentation). This is how you should do it:
You can only do this with symbolic representations of numbers, because it is trivial to name a number your computer cannot store in floating-point representation, even if it used arbitrary-precision arithmetic and all its memory.
Required reading: http://www.scottaaronson.com/writings/bignumbers.html - that pretty much sums it up
How do you represent a number then? You represent it by a program which will, if run to completion, print out that number. Even then, your computer is incapable of computing BusyBeaver(10^100) (if you dictated a program 1 terabyte in size, this is well over the maximum number of finite clock cycles it could run without looping forever). You can see that we could easily have the computer print out 1 0 0... each clock cycle, so the maximum number it could say (if we waited nearly an eternity) would be 10^BusyBeaver(10^100). If you allowed it to say more complicated expressions like eval(someprogram), power-towers, Ackermann's function, whatever -- then I believe that would be no better than increasing the original 10^100 by some constant proportional to the complexity of what you described (plus some logarithmic interpreter factor, see Kolmogorov complexity).
So let's frame this another way:
Your opponent picks a finite computable number, and gives you a function that tells you whether the number is smaller/larger/equal by computing it. He also gives you a representation for the output (in a sane world this would be "you can only print numbers like 99999", but he can make it more complicated; it actually doesn't matter). Proceed to measure the size of this function in bits.
Now, answer with your own function, which is twice the size of his function (in bits), and prints out the largest number it can while keeping the code to less than 2N bits in length. (You use the same representation he chose: In a world where you can only print out numbers like "99999", that's what you do. If you can define functions, it gets slightly more complicated.)
I do not understand the purpose here, but this is what I thought of:
Reading your comments, I suppose you aren't looking for an infinitely large number, but a "super large number" instead. And whatever the number is, it will have a large number of digits. How you got them isn't the concern. Keeping this in mind:
No complex computation is required. Just type random keys on your numeric keyboard to have a super large number, and then have a program randomly add/remove/modify digits of that number. You get a list of very large numbers - select any one out of them.
e.g: 3672036025039629036790672927305060260103610831569252706723680972067397267209
and keep modifying/adding digits to get more numbers
PS: If you state the purpose in your question clearly, we might be able to give better answers.

Most efficient algorithm to compute a common numerator of a sum of fractions

I'm pretty sure that this is the right site for this question, but feel free to move it to some other stackexchange site if it fits there better.
Suppose you have a sum of fractions a1/d1 + a2/d2 + … + an/dn. You want to compute a common numerator and denominator, i.e., rewrite it as p/q. We have the formula
p = a1*d2*…*dn + d1*a2*d3*…*dn + … + d1*d2*…*d(n-1)*an
q = d1*d2*…*dn.
What is the most efficient way to compute these things, in particular, p? You can see that if you compute it naïvely, i.e., using the formula I gave above, you compute a lot of redundant things. For example, you will compute d1*d2 n-1 times.
My first thought was to iteratively compute d1*d2, d1*d2*d3, … and dn*d(n-1), dn*d(n-1)*d(n-2), … but even this is inefficient, because you will end up computing multiplications in the "middle" twice (e.g., if n is large enough, you will compute d3*d4 twice).
I'm sure this problem could be expressed somehow using maybe some graph theory or combinatorics, but I haven't studied enough of that stuff to have a good feel for it.
And one note: I don't care about cancelation, just the most efficient way to multiply things.
UPDATE:
I should have known that people on stackoverflow would be assuming that these were numbers, but I've been so used to my use case that I forgot to mention this.
We cannot just "divide" out an from each term. The use case here is a symbolic system. Actually, I am trying to fix a function called .as_numer_denom() in the SymPy computer algebra system which presently computes this the naïve way. See the corresponding SymPy issue.
Dividing out things has some problems, which I would like to avoid. First, there is no guarantee that things will cancel. This is because, mathematically, (a*b)**n != a**n*b**n in general (if a and b are positive it holds, but e.g., if a == b == -1 and n == 1/2, you get (a*b)**n == 1**(1/2) == 1 but (-1)**(1/2)*(-1)**(1/2) == I*I == -1). So I don't think it's a good idea to assume that dividing by an will cancel it in the expression (this may actually be unfounded; I'd need to check what the code does).
Second, I'd also like to apply this algorithm to computing the sum of rational functions. In this case, the terms would automatically be multiplied together into a single polynomial, and "dividing" out each an would involve applying the polynomial division algorithm. You can see that in this case you really do want to compute the most efficient multiplication in the first place.
UPDATE 2:
I think my fears for cancelation of symbolic terms may be unfounded. SymPy does not cancel things like x**n*x**(m - n) automatically, but I think that any exponents that would combine through multiplication would also combine through division, so powers should be canceling.
There is an issue with constants automatically distributing across additions, like:
In [13]: 2*(x + y)*z*(S(1)/2)
Out[13]:
z⋅(2⋅x + 2⋅y)
─────────────
2
But this is first a bug and second could never be a problem (I think) because 1/2 would be split into 1 and 2 by the algorithm that gets the numerator and denominator of each term.
Nonetheless, I still want to know how to do this without "dividing out" di from each term, so that I can have an efficient algorithm for summing rational functions.
Instead of adding up n quotients in one go I would use pairwise addition of quotients.
If things cancel out in partial sums then the numbers or polynomials stay smaller, which makes computation faster.
You avoid the problem of computing the same product multiple times.
You could try to order the additions in a certain way, to make canceling more likely (maybe add quotients with small denominators first?), but I don't know if this would be worthwhile.
If you start from scratch this is simpler to implement, though I'm not sure it fits as a replacement of the problematic routine in SymPy.
Edit: To make it more explicit, I propose to compute a1/d1 + a2/d2 + … + an/dn as (…(a1/d1 + a2/d2) + … ) + an/dn.
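For ordinary numbers, a minimal sketch of this pairwise scheme using Python's Fraction (a symbolic system would do the analogous thing, cancelling each partial sum with polynomial gcds):

from fractions import Fraction

def sum_of_quotients(terms):
    # terms is a sequence of (numerator, denominator) pairs.
    total = Fraction(0)
    for a, d in terms:
        total += Fraction(a, d)       # each partial sum is reduced by gcd immediately
    return total.numerator, total.denominator

print(sum_of_quotients([(1, 2), (1, 3), (1, 6)]))   # (1, 1)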
Compute two new arrays:
The first contains partial multiples to the left: l[0] = 1, l[i] = l[i-1] * d[i]
The second contains partial multiples to the right: r[n+1] = 1, r[i] = d[i] * r[i+1]
In both cases, 1 is the multiplicative identity of whatever ring you are working in.
Then each of your terms on the top, t[i] = l[i-1] * a[i] * r[i+1]
This assumes multiplication is associative, but it need not be commutative.
As a first optimization, you don't actually have to create r as an array: you can do a first pass to calculate all the l values, and accumulate the r values during a second (backward) pass to calculate the summands. No need to actually store the r values since you use each one once, in order.
In your question you say that this computes d3*d4 twice, but it doesn't. It does multiply two different values by d4 (one a right-multiplication and the other a left-multiplication), but that's not exactly a repeated operation. Anyway, the total number of multiplications is about 4*n, vs. 2*n multiplications and n divisions for the other approach that doesn't work in non-commutative multiplication or non-field rings.
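A sketch of the prefix/suffix products in Python, folding the right products into a single backward pass as suggested above (the function name is mine):

def numerator_terms_and_denominator(a, d):
    # Given fractions a[i]/d[i], return the n summands of p and the product q,
    # using only multiplications, in an order that also works when
    # multiplication is non-commutative.
    n = len(d)
    left = [1] * (n + 1)              # left[i] = d[0] * ... * d[i-1]
    for i in range(n):
        left[i + 1] = left[i] * d[i]
    q = left[n]                       # product of all denominators
    terms = [0] * n
    right = 1                         # running product d[i+1] * ... * d[n-1]
    for i in range(n - 1, -1, -1):    # backward pass, accumulating right on the fly
        terms[i] = left[i] * a[i] * right
        right = d[i] * right          # prepend d[i] to preserve the original order
    return terms, q

terms, q = numerator_terms_and_denominator([1, 1, 1], [2, 3, 6])
print(sum(terms), q)                  # 36 36, i.e. 1/2 + 1/3 + 1/6 = 36/36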
If you want to compute p in the above expression, one way to do this would be to multiply together all of the denominators (in O(n), where n is the number of fractions), letting this value be D. Then, iterate across all of the fractions and for each fraction with numerator ai and denominator di, compute ai * D / di. This last term is equal to the product of the numerator of the fraction and all of the denominators other than its own. Each of these terms can be computed in O(1) time (assuming you're using hardware multiplication, otherwise it might take longer), and you can sum them all up in O(n) time.
This gives an O(n)-time algorithm for computing the numerator and denominator of the new fraction.
It was also pointed out to me that you could manually sift out common denominators and combine those trivially without multiplication.
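A sketch of that approach for integer denominators (math.prod needs Python 3.8+); note that it relies on exact division by each di, which is why it doesn't carry over to the non-commutative or non-field settings discussed above:

from math import prod

def numerator_and_denominator(a, d):
    # a[i]/d[i] with integer entries: one big product D, then exact division.
    D = prod(d)
    p = sum(ai * (D // di) for ai, di in zip(a, d))
    return p, D

print(numerator_and_denominator([1, 1, 1], [2, 3, 6]))   # (36, 36)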

Counting permutations of a string

I need help with a problem: given an input string with repetitions, say "aab", how do I count the number of distinct permutations of that string?
One formula that could be used is n!/(n1! * n2! * … * nr!).
However, calculating these ni's takes time O(rn), or O(n) if we use a lookup table.
I need a solution without the use of such tables. Is any recursive or dynamic programming solution possible for this problem?
Thanks in advance.
The no. of distinct permutations will be n!/(c1! * c2! * … * ck!),
where n is the length of the string and
ck denotes the number of occurrences of each distinct character.
For example: for the string "aabb", n = 4, ca = 2, cb = 2,
solution = 4!/(2!*2!) = 6
If you want to do this for very large strings, consider using the gamma function (with gamma(n+1)=n!), which is faster for large n and still gives you floating-point accuracy even in cases where you would get an int overflow.
If you have arbitrary precision arithmetic, you could probably push the effort down to O(r+n) by exploiting the fact that you can, e.g., write 1*2*3 * 1*2*3*4 * 1*2*3*4*5*6*7 as (1*2*3)^3 * 4^2 * 5*6*7. The end result will still have O(rn) digits and you'll still have O(rn) time consumption, because multiplication cost increases with the size of the number.
I don't see the difference between lookup tables and dynamic programming - basically, dynamic programming uses a lookup table that you build on-the-fly. (i.e., use a lookup table, but only populate it on-demand).
Do you need approximate answers, or exact ones? Which part of this calculation do you think is slow?
If you need approximate answers, use the gamma function as @Yannick Versley suggested.
If you need exact answers, here is how I'd do it. I'd first figure out the prime factorization of the answer, then multiply those factors out. This avoids division. The hard part is figuring out the prime factorization of n!. For that you can use a trick. Suppose that p is a prime, and k is the integer part of n/p. Then the number of times that p divides n! is k plus the number of times that p divides k!. Proceed recursively and it is quick to see that, for instance, the number of times that 3 is a factor of 80! is 26 + 8 + 2 = 36. So after you find the primes up to n, it isn't hard to find the prime factorization of n!.
Once you know the prime factorization, you can multiply it out. You expect to be dealing with large numbers, so try to arrange to do lots of small multiplications first, and only a few big ones. Here is a simple way to do that.
Make an array of the prime factors. Scramble it (to mix up big and small factors). Then, as long as you have at least 2 factors in your array, grab the first two, multiply them, and push the product onto the end. When you have one number left, that is your answer.
This should be much, much faster for large strings than the naive approach of multiplying the numbers one at a time. However in the end you will have very large numbers, and nothing can make multiplying those fast.
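A sketch of this route in Python (the n/p trick for the factorization of n!, exponent subtraction for the division, then the scramble-and-multiply-in-pairs step); the function names are mine:

import random
from collections import Counter, deque

def factorial_exponents(n, primes):
    # Exponent of each prime in n!, via the recursive n/p trick described above.
    exps = Counter()
    for p in primes:
        q = n
        while q:
            q //= p
            exps[p] += q
    return exps

def count_distinct_permutations(s):
    n = len(s)
    primes = [p for p in range(2, n + 1)
              if all(p % q for q in range(2, int(p ** 0.5) + 1))]
    exps = factorial_exponents(n, primes)            # factorization of n!
    for c in Counter(s).values():
        exps -= factorial_exponents(c, primes)       # dividing = subtracting exponents
    factors = [p for p, e in exps.items() for _ in range(e)]
    random.shuffle(factors)                          # mix up big and small factors
    queue = deque(factors or [1])
    while len(queue) > 1:                            # multiply the first two, push the product
        queue.append(queue.popleft() * queue.popleft())
    return queue[0]

print(count_distinct_permutations("aab"))            # 3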
You can keep a running count for each character, and build the result up as you go along. It's impossible to do better than O(n), since without looking at every character in the string you can't know how many of each character there are.
I've written some code in Python, with some simple unit tests. The code carefully avoids large intermediate values when the result is going to be small (in fact, the variable result is never larger than len(s) times the final result). If you were going to code this up in another language, say C, then you might use an array of size 256 rather than the defaultdict.
If you want an exact result, then I don't think you can do better than this.
from collections import defaultdict

def permutations(s):
    # Count the occurrences of each character.
    seen = defaultdict(int)
    for c in s:
        seen[c] += 1
    # Build the multinomial coefficient incrementally; every division is exact,
    # so result never gets much larger than the final answer.
    result = 1
    n = 0
    for k, count in seen.items():
        for j in range(count):
            n += 1
            result *= n
            result //= j + 1
    return result

test_cases = [
    ('abc', 6),
    ('aab', 3),
    ('abcd', 24),
    ('aabb', 6),
    ('aaaaa', 1),
    ('a', 1)]

for s, want in test_cases:
    got = permutations(s)
    if got != want:
        print('permutations(%s) = %s want %s' % (s, got, want))
As @MRalwasser says, the number of permutations should be n!. You can generate those permutations fairly simply, but the run time is going to be exponential because you have to hit exponentially many output strings. (A quick way to see that n! grows even faster than 2^n is Stirling's formula.)
