Is SOD(A^b) % mod == SOD(bigmod(A,b,mod))? - c++11

I have to calculate A^b, where A^b can be a huge number, and then find the SOD (sum of divisors) of that number.
So, if I calculate SOD(bigmod(A,b,mod)), will this return the same value as SOD(A^b) % mod?
I am asking because I am getting WA and cannot figure out where my mistake is.
Link to the Problem
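For what it's worth, a quick counterexample (my own check, not from the thread) suggests the identity does not hold in general, since the divisors of A^b mod m have nothing to do with the divisors of A^b:

#include <iostream>

// Sum of divisors by trial division (fine for small n).
long long sod(long long n) {
    long long s = 0;
    for (long long d = 1; d * d <= n; ++d) {
        if (n % d == 0) {
            s += d;
            if (d != n / d) s += n / d;
        }
    }
    return s;
}

int main() {
    // A = 2, b = 3, mod = 5, so A^b = 8.
    long long mod = 5;
    std::cout << sod(8) % mod << "\n"; // SOD(8) % 5 = 15 % 5 = 0
    std::cout << sod(8 % mod) << "\n"; // SOD(3) = 1 + 3 = 4
}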

Related

Minimum number of steps required to convert A to B

I have recently participated in a coding contest where one of the problems was as follows:
Given two integers X and Y, find the minimum number of steps required to convert X to Y. You can perform the following operations any number of times in any order:
1) Divide X by any integer A. 2) Multiply X by any integer B.
Example: If X=15 and Y=10, first multiply X by 2, which gives 30, and then divide 30 by 3 to get Y (i.e., 10). So the minimum number of steps in this case is 2.
I have no idea how to solve it.
As stated, the minimum number of steps is no greater than two: you can always choose A=X and B=Y so that X/A*B = X/X*Y = Y.
The only times you can do better than this are the following:
if X % Y = 0, the minimum number of steps is 1 and the correct choice is A=(X/Y).
if Y % X = 0, the minimum number of steps is 1 and the correct choice is B=(Y/X).
if X = Y, the minimum number of steps is 0 as no multiplications or divisions are required at all.
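A sketch of that case analysis in C++ (my own code, ignoring edge cases such as X = 0 or Y = 0):

int minSteps(long long x, long long y) {
    if (x == y) return 0;     // no operation needed
    if (x % y == 0) return 1; // one division: A = X / Y
    if (y % x == 0) return 1; // one multiplication: B = Y / X
    return 2;                 // divide by A = X, then multiply by B = Y
}

For the example above, minSteps(15, 10) returns 2.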

How to solve SPOJ : SCALE using binary search?

http://www.spoj.com/problems/SCALE/
I am trying to do it using recursion but getting TLE.
The tags of the problem say BINARY SEARCH.
How can one do it using binary search ?
Thanks in advance.
The first thing to notice here is that if you had two weights of each size instead of one, the problem would be quite trivial: we would only need to represent X in its base-3 representation and take the corresponding number of each weight. For example, if X=21 we could take P_3 two times and P_2 one time, and put those onto the other scale.
Now let's try to make something similar using the fact that we can add to both scales (including the one where X is placed):
Assume that X <= P_1+P_2+...+P_n. Since this sum equals (3^n - 1)/2, that means X <= P_n + (P_n - 1)/2. Therefore, X + P_(n-1) + P_(n-2) + ... + P_1 < 2*P_n.
(*) What that means is that if we add some of the weights from 1 to n-1 to the same scale as X, the number on that scale still does not have a 2 in its n-th rightmost digit (it is either 0 or 1).
From now on, assume that "digit" means a digit of a number in its base-3 representation (but it can temporarily become larger than 2 :P ). Let's denote the total weight of the first scale (where X is placed) as A=X and of the other scale as B=0; our goal is to make them equal (both A and B will change as we make progress).
Let's iterate through all digits of A from the smallest to the largest (leftmost). If the current digit index is i and it:
Equals 0: just ignore it and proceed further.
Equals 1: place weight P_i = 3^(i-1) on scale B.
Equals 2: add P_i = 3^(i-1) to scale A. Note that this increases digit (i+1) by one.
Equals 3 (yes, this case is possible, if both the current and the previous digit were 2): add 1 to the digit at index i+1 and go further (no weights are added to any scale).
Due to (*), the procedure runs correctly: the carry never pushes the n-th digit of A past 1, each weight from the set is used at most once and placed on exactly one scale, and the numbers A and B are equal after the procedure is complete.
Now the second case: X > P_1+P_2+...+P_n. Obviously we cannot balance the scales even if we place all the weights on the second one.
This completes the proof: it shows when balancing is possible and how to place the weights on the two scales to equalise them.
EDIT:
C++ code which I successfully submitted on SPOJ just now https://ideone.com/tbB7Ve
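For reference, here is a compact sketch of the digit-by-digit procedure described in this answer (my own translation, not the linked submission):

#include <vector>

// onA collects indices of weights placed next to X; onB collects indices
// placed on the other scale. P_i = 3^(i-1).
bool placeWeights(long long X, int n, std::vector<int>& onA, std::vector<int>& onB) {
    long long x = X;
    int carry = 0;
    for (int i = 1; i <= n; ++i) {
        int d = int(x % 3) + carry; // current base-3 digit of A, plus carry
        x /= 3;
        carry = 0;
        if (d == 1) onB.push_back(i);                     // balance P_i on the other side
        else if (d == 2) { onA.push_back(i); carry = 1; } // add P_i next to X
        else if (d == 3) carry = 1;                       // digit wraps to 0, just carry
    }
    return x == 0 && carry == 0; // balanced iff nothing is left over
}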
The solution to this problem is quite simple. The idea is the same as in @Yerken's answer, but expressed in a slightly different way:
Only the first weight has a mass not divisible by 3, so the first weight is the only one that affects the mod-3 balance of the two scales:
If X mod 3 == 0, the first weight must not be used
If X mod 3 == 1, the first weight must be on scale B (the currently empty one)
If X mod 3 == 2, the first weight must be on scale A
Subtract weight(B) from both scales --> the solution doesn't change, and now weight(A) is divisible by 3 while weight(B) == 0
Set X' = weight(A)/3 and divide every weight P_i by 3 ==> the solution doesn't change, and now it is the same problem with N' = N-1 and X' = (X+1)/3 (integer division)
In C++ (a direct translation of the pseudo-code):
std::vector<int> listA, listB;    // indices of weights placed on each scale
long long x = X;
for (int i = 1; i <= N && x != 0; ++i) {
    if (x % 3 == 1) listB.push_back(i);
    else if (x % 3 == 2) listA.push_back(i);
    x = (x + 1) / 3;              // integer division
}
bool hasSolution = (x == 0);
C++ code: http://ideone.com/LXLGmE

Criteria for choosing mod value

I have an array Array = {}, of size n.
My constraints are:
n <= 100000
and Array[i] <= 100
I have to find the product of all the elements in the array; I will be given a mod value with which I have to mod the product. The mod value changes all the time, and it is always less than or equal to n.
My problem is that when I choose a global mod value, say R = 1000000000 (which is far bigger than the mod constraint), and mod the result whenever my product exceeds this value,
I don't know why the result I'm obtaining is zero.
My question is: how do I choose R in such situations?
I don't know your code, but it is likely that 0 is the correct result.
Pick R as a large prime and make sure that none of the elements is divisible by this number in order to get a result different from 0.
You haven't shown us your code, but presumably it looks something like the following pseudo-Python code:
limit = 1000000

def product_mod(array, m):
    product = 1
    for k in array:
        product = product * k
        if product > limit: product = product % m
    return product % m
This algorithm should work, provided that the limit is low enough that product * k can never overflow. If it doesn't, you probably have a bug in your code.
However, note that it's quite likely that the result of this function may often be legitimately zero: specifically, this will happen whenever the product of the numbers in your array evenly divides the modulus. Since the product will typically be a highly composite number (it will have at least as many factors as there are numbers in the array), this is pretty likely.
In particular, the output of the function will be zero whenever:
any one of the numbers in the array is zero,
any one of the numbers in the array is a multiple of the modulus, or
the product of any subset of the numbers in the array is a multiple of the modulus.
In all those cases, the product of all the numbers in the array will be either zero or a multiple of the modulus (and will thus reduce to zero).
It sounds like you are calculating the product of the values in the array modulo some given value, but are using another value to limit integer overflow in the intermediate calculations. Then you have a high risk of getting a wrong result.
E.g. 120 mod 9 = 3, while (120 mod 100) mod 9 = 20 mod 9 = 2
The correct procedure is to do all the calculations modulo the same number as you will use for the final result, since (a * b) mod n = ((a mod n) * (b mod n)) mod n
E.g. (24 * 5) mod 9 = ((24 mod 9) * (5 mod 9)) mod 9 = (6 * 5) mod 9 = 30 mod 9 = 3
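A minimal C++ sketch of that procedure (identifier names are mine):

#include <iostream>
#include <vector>

// Reduce with the final modulus m after every multiplication, using
// (a * b) mod n = ((a mod n) * (b mod n)) mod n. With Array[i] <= 100 and
// m <= 100000, the intermediate product always fits in 64 bits.
long long productMod(const std::vector<int>& a, long long m) {
    long long product = 1 % m; // 1 % m handles m == 1
    for (int k : a)
        product = (product * k) % m;
    return product;
}

int main() {
    std::vector<int> a = {24, 5};
    std::cout << productMod(a, 9) << "\n"; // 3, matching the worked example
}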

convert real number to radicals

Suppose I have a real number. I want to approximate it with something of the form a+sqrt(b) for integers a and b. But I don't know the values of a and b. Of course I would prefer to get a good approximation with small values of a and b. Let's leave it undefined for now what is meant by "good" and "small". Any sensible definitions of those terms will do.
Is there a sane way to find them? Something like the continued fraction algorithm for finding fractional approximations of decimals. For more on the fractions problem, see here.
EDIT: To clarify, it is an arbitrary real number. All I have are a bunch of its digits. So depending on how good an approximation we want, a and b might or might not exist. The best I can think of would be to start adding integers to my real, squaring the result, and seeing if I come close to an integer; that is pretty much brute force, and not a particularly good algorithm. But if nothing better exists, that would itself be interesting to know.
EDIT: Obviously b has to be zero or positive. But a could be any integer.
No need for continued fractions; just calculate the square-root of all "small" values of b (up to whatever value you feel is still "small" enough), remove everything before the decimal point, and sort/store them all (along with the b that generated it).
Then when you need to approximate a real number, find the radical whose decimal portion is closest to the real number's decimal portion. This gives you b; choosing the correct a is then a simple matter of subtraction.
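A C++ sketch of this approach, assuming "small" means b < 1000 (the cutoff and names are my own):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <iterator>
#include <utility>
#include <vector>

int main() {
    // Build (frac(sqrt(b)), b) pairs and sort them by fractional part.
    std::vector<std::pair<double, int>> table;
    for (int b = 2; b < 1000; ++b) {
        double s = std::sqrt((double)b);
        table.push_back(std::make_pair(s - std::floor(s), b));
    }
    std::sort(table.begin(), table.end());

    double r = 3.14159265;  // the number to approximate
    double frac = r - std::floor(r);

    // Binary-search for the closest stored fractional part,
    // checking the neighbour on each side of the insertion point.
    auto it = std::lower_bound(table.begin(), table.end(), std::make_pair(frac, 0));
    if (it == table.end() ||
        (it != table.begin() && frac - std::prev(it)->first < it->first - frac))
        --it;

    int b = it->second;
    long a = std::lround(r - std::sqrt((double)b));
    std::printf("%ld + sqrt(%d) = %.8f\n", a, b, a + std::sqrt((double)b));
}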
This is actually more of a math problem than a computer problem, but to answer the question, I think you are right that you can use continued fractions. What you do is first represent the target number as a continued fraction. For example, if you want to approximate pi (3.14159265) then the CF is:
3: 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14 ...
The next step is to create a table of CFs for square roots, then compare the values in the table to the fractional part of the target value (here: 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14...). For example, let's say your table had square roots for 1-99 only. Then you would find that the closest match is sqrt(51), which has a CF of 7: 7,14 repeating. The 7,14 is the closest to pi's 7,15. Thus your answer would be:
sqrt(51)-4
as the closest approximation for b < 100; it is off by 0.00016. If you allow larger values of b, you can get a better approximation.
The advantage of using CFs is that the comparison is faster than working in, say, doubles or other floating point: in the above case you only have to compare two integers (7 and 15), and you can also use indexing to make finding the closest entry in the table very fast.
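For illustration (not part of this answer), one way to pull CF terms out of a double; precision limits make terms beyond the first dozen or so unreliable:

#include <cmath>
#include <cstdio>
#include <vector>

std::vector<long long> cfTerms(double x, int maxTerms) {
    std::vector<long long> a;
    for (int i = 0; i < maxTerms; ++i) {
        double fl = std::floor(x);
        a.push_back((long long)fl);
        x -= fl;
        if (x < 1e-9) break; // effectively an integer at this precision
        x = 1.0 / x;
    }
    return a;
}

int main() {
    for (long long t : cfTerms(std::acos(-1.0), 6)) // pi
        std::printf("%lld ", t);                    // prints: 3 7 15 1 292 1
    std::printf("\n");
}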
This can be done using mixed integer quadratic programming very efficiently (though there are no run-time guarantees as MIQP is NP-complete.)
Define:
d := the real number you wish to approximate
b, a := two integers such that a + sqrt(b) is as "close" to d as possible
r := (d - a)^2 - b is the residual of the approximation
The goal is to minimize r. Set up your quadratic program as:
x := [ s b t ]
D := | 1 0 0 |
| 0 0 0 |
| 0 0 0 |
c := [0 -1 0]^T
with the constraint that s - t = f (where f is the fractional part of d)
and b,t are integers (s is not)
This is a convex (therefore optimally solvable) mixed integer quadratic program since D is positive semi-definite.
Once s, b, t are computed, simply recover the answer: b is read off directly, a = d - s, and t can be ignored.
Your problem may be NP-complete, it would be interesting to prove if so.
Some of the previous answers use methods that are of time or space complexity O(n), where n is the largest “small number” that will be accepted. By contrast, the following method is O(sqrt(n)) in time, and O(1) in space.
Suppose that the positive real number r = x + y, where x = floor(r) and 0 ≤ y < 1. We want to approximate r by a number of the form a + √b. If x + y ≈ a + √b then x + y - a ≈ √b, so √b ≈ h + y for some integer offset h, and b ≈ (h + y)^2. To make b an integer, we want to minimize the fractional part of (h + y)^2 over all eligible h. There are at most √n eligible values of h. See the following Python code and sample output.
import math, random

def findb(y, rhi):
    bestb = loerror = 1
    for r in range(2, rhi):
        v = (r + y)**2
        u = round(v)
        err = abs(v - u)
        if round(math.sqrt(u))**2 == u: continue
        if err < loerror:
            bestb, loerror = u, err
    return bestb

#random.seed(123456) # set a seed if testing repetitively
f = [math.pi - 3] + sorted([random.random() for i in range(24)])
print (' frac      sqrt(b)     error      b')
for frac in f:
    b = findb(frac, 12)
    r = math.sqrt(b)
    t = math.modf(r)[0] # Get fractional part of sqrt(b)
    print ('{:9.5f} {:9.5f} {:11.7f} {:5.0f}'.format(frac, r, t-frac, b))
(Note 1: This code is in demo form; the parameters to findb() are y, the fractional part of r, and rhi, the square root of the largest small number. You may wish to change how the parameters are used. Note 2: The
if round(math.sqrt(u))**2 == u: continue
line of code prevents findb() from returning perfect-square values of b, except for the value b=1, because no perfect square can improve upon the accuracy offered by b=1.)
Sample output follows. About a dozen lines have been elided in the middle. The first output line shows that this procedure yields b=51 to represent the fractional part of pi, which is the same value reported in some other answers.
frac sqrt(b) error b
0.14159 7.14143 -0.0001642 51
0.11975 4.12311 0.0033593 17
0.12230 4.12311 0.0008085 17
0.22150 9.21954 -0.0019586 85
0.22681 11.22497 -0.0018377 126
0.25946 2.23607 -0.0233893 5
0.30024 5.29150 -0.0087362 28
0.36772 8.36660 -0.0011170 70
0.42452 8.42615 0.0016309 71
...
0.93086 6.92820 -0.0026609 48
0.94677 8.94427 -0.0024960 80
0.96549 11.95826 -0.0072333 143
0.97693 11.95826 -0.0186723 143
With the following code added at the end of the program, the output shown below also appears. This shows closer approximations for the fractional part of pi.
frac, rhi = math.pi - 3, 16
print (' frac        sqrt(b)       error       b    bMax')
while rhi < 1000:
    b = findb(frac, rhi)
    r = math.sqrt(b)
    t = math.modf(r)[0] # Get fractional part of sqrt(b)
    print ('{:11.7f} {:11.7f} {:13.9f} {:7.0f} {:7.0f}'.format(frac, r, t-frac, b, rhi**2))
    rhi = 3*rhi//2      # integer division, so range(2, rhi) keeps working
frac sqrt(b) error b bMax
0.1415927 7.1414284 -0.000164225 51 256
0.1415927 7.1414284 -0.000164225 51 576
0.1415927 7.1414284 -0.000164225 51 1296
0.1415927 7.1414284 -0.000164225 51 2916
0.1415927 7.1414284 -0.000164225 51 6561
0.1415927 120.1415831 -0.000009511 14434 14641
0.1415927 120.1415831 -0.000009511 14434 32761
0.1415927 233.1415879 -0.000004772 54355 73441
0.1415927 346.1415895 -0.000003127 119814 164836
0.1415927 572.1415909 -0.000001786 327346 370881
0.1415927 911.1415916 -0.000001023 830179 833569
I do not know if there is any kind of standard algorithm for this kind of problem, but it does intrigue me, so here is my attempt at developing an algorithm that finds the needed approximation.
Call the real number in question r. First, I assume that a can be negative; in that case we can reduce the problem so that we only have to find a b such that the decimal part of sqrt(b) is a good approximation of the decimal part of r. Let us now write r as r = x.y, with x being the integer part and y the decimal part.
Now:
b = r^2
  = (x.y)^2
  = (x + .y)^2
  = x^2 + 2 * x * .y + .y^2
  = 2 * x * .y + .y^2 (mod 1)
We now only have to find an x such that 0 = .y^2 + 2 * x * .y (mod 1) (approximately).
Filling that x into the formulas above, we get b and can then calculate a as a = r - sqrt(b). (All of these calculations have to be carefully rounded, of course.)
Now, for the time being I am not sure if there is a way to find this x without brute-forcing it. But even then, one can simply use a loop to find an x that is good enough.
I am thinking of something like this (semi-pseudo-code):
max_diff_low = 0.01   // arbitrary accuracy
max_diff_high = 1 - max_diff_low
y = r % 1             // decimal part of r
v = y^2
addend = 2 * y
x = 0
while (v < max_diff_high && v > max_diff_low)
    x++
    v = (v + addend) % 1
c = (x + y)^2
b = round(c)
a = round(r - sqrt(c))   // subtract the radical itself, not c
Now, I think this algorithm is fairly efficient, while even allowing you to specify the desired accuracy of the approximation. One thing that would turn it into an O(1) algorithm is calculating all the x values in advance and putting them into a lookup table. If one only cares about the first three decimal digits of r (for example), the lookup table would only have 1000 values, which is only 4 kB of memory (assuming 32-bit integers are used).
Hope this is helpful at all. If anyone finds anything wrong with the algorithm, please let me know in a comment and I will fix it.
EDIT:
Upon reflection I retract my claim of efficiency. There is in fact, as far as I can tell, no guarantee that the algorithm as outlined above will ever terminate, and even if it does, it might take a long time to find a very large x that solves the equation adequately.
One could maybe keep track of the best x found so far and relax the accuracy bounds over time to make sure the algorithm terminates quickly, at the possible cost of accuracy.
These problems are of course non-existent, if one simply pre-calculates a lookup table.

Number of ways to add up to a sum S with N numbers

Say S = 5 and N = 3; the solutions would look like <0,0,5> <0,1,4> <0,2,3> <0,3,2> <5,0,0> <2,3,0> <3,2,0> <1,2,2> and so on.
In the general case, N nested loops can be used to solve the problem: run N nested loops and, inside them, check if the loop variables add up to S.
If we do not know N ahead of time, we can use a recursive solution. In each level, run a loop from 0 to S, and then call the function itself again. When we reach a depth of N, see if the numbers obtained add up to S.
Any other dynamic programming solution?
Try this recursive function:
f(s, n) = 1                                        if s = 0
        = 0                                        if s != 0 and n = 0
        = sum of f(s - i, n - 1) over i in [0, s]  otherwise
To use dynamic programming you can cache the value of f after evaluating it, and check if the value already exists in the cache before evaluating it.
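A minimal memoized sketch in C++ (my own code):

#include <cstdio>
#include <vector>

// f(s, n) with memoization; memo[s][n] == -1 means "not computed yet".
long long f(int s, int n, std::vector<std::vector<long long>>& memo) {
    if (s == 0) return 1;
    if (n == 0) return 0;
    long long& m = memo[s][n];
    if (m != -1) return m;
    m = 0;
    for (int i = 0; i <= s; ++i) // i is the value placed in one position
        m += f(s - i, n - 1, memo);
    return m;
}

int main() {
    int S = 5, N = 3;
    std::vector<std::vector<long long>> memo(S + 1, std::vector<long long>(N + 1, -1));
    std::printf("%lld\n", f(S, N, memo)); // 21 for S = 5, N = 3
}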
There is a closed-form formula: binomial(s + n - 1, s), or equivalently binomial(s + n - 1, n - 1).
Those numbers are the simplex numbers.
If you want to compute them, use the log gamma function or arbitrary precision arithmetic.
See https://math.stackexchange.com/questions/2455/geometric-proof-of-the-formula-for-simplex-numbers
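When the result fits in 64 bits, an exact evaluation needs neither log-gamma nor big integers (a sketch of my own):

#include <cstdio>

// Exact C(n, k): after step i the running product equals C(n-k+i, i),
// so the division by i is always exact.
unsigned long long binom(unsigned long long n, unsigned long long k) {
    if (k > n - k) k = n - k;
    unsigned long long res = 1;
    for (unsigned long long i = 1; i <= k; ++i)
        res = res * (n - k + i) / i;
    return res;
}

int main() {
    // s = 5, n = 3: binomial(s + n - 1, s) = C(7, 5) = 21
    std::printf("%llu\n", binom(7, 5));
}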
I have my own formula for this. Together with my friend Gio, I made an investigative report concerning this. The formula that we got is [2 raised to (n-1)] - 1, where n is the number whose sums of addends we are counting.
Let's try.
If n is 1: it has no such addends. There are no two or more numbers (excluding 0) that we can add to get a sum of 1. Let's try a higher number.
Let's try 4. 4 has these addends: 1+1+1+1, 1+2+1, 1+1+2, 2+1+1, 1+3, 2+2, 3+1. The total is 7.
Let's check with the formula: 2 raised to (4-1), minus 1 = 2^3 - 1 = 8 - 1 = 7.
Let's try 15: 2 raised to (15-1), minus 1 = 2^14 - 1 = 16384 - 1 = 16383. Therefore, there are 16383 ways to add numbers that will equal 15.
(Note: Addends are positive numbers only.)
(You can try other numbers, to check whether our formula is correct or not.)
This can be calculated in O(s+n) (or O(1) if you don't mind an approximation) in the following way:
Imagine we have a string with n-1 X's in it and s o's. So for your example of s=5, n=3, one example string would be
oXooXoo
Notice that the X's divide the o's into three distinct groupings: one of length 1, length 2, and length 2. This corresponds to your solution of <1,2,2>. Every possible string gives us a different solution, by counting the number of o's in a row (a 0 is possible: for example, XoooooX would correspond to <0,5,0>). So by counting the number of possible strings of this form, we get the answer to your question.
There are s+(n-1) positions to choose for s o's, so the answer is Choose(s+n-1, s).
There is a fixed formula to find the answer. If you want to find the number of ways to get N as the sum of R elements, the answer is always:
(N+R-1)!/((R-1)!*(N)!)
or in other words:
(N+R-1) C (R-1)
This actually looks a lot like a Towers of Hanoi problem, without the constraint of stacking disks only on larger disks. You have S disks that can be in any combination on N towers. So that's what got me thinking about it.
What I suspect is that there is a formula we can deduce that doesn't require the recursive programming. I'll need a bit more time though.
