Minimal positive integer n which is divisible by d and has sum of digits equal to s - number-theory

I found this problem on codeforces (http://codeforces.com/problemset/problem/1070/A) and I'm trying to understand a pretty elegant solution that was posted:
#include <bits/stdc++.h>
using namespace std;

int d, s;

struct node {
    int mod, sum;
    char s[700];
    int len;
};

queue<node> Q;
bool v[512][5200];

int main() {
    scanf("%d%d", &d, &s);
    Q.push({0, 0, {0}, 0});
    while (!Q.empty()) {
        node a = Q.front(); Q.pop();
        for (int i = 0; i < 10; i++) {
            node b = a;
            b.s[b.len++] = i + '0';
            b.mod = (b.mod * 10 + i) % d;
            b.sum += i;
            if (v[b.mod][b.sum] || b.sum > s) continue;
            v[b.mod][b.sum] = 1;
            if (b.mod == 0 && b.sum == s) {
                puts(b.s);
                return 0;
            }
            Q.push(b);
        }
    }
    puts("-1");
    return 0;
}
I understand that a tree-like search is being done by appending digits to prefixes and putting them on a queue. The search goes like this: 1, 2, 3, 4, ... then 10, 11, 12, ... 20, 21, 22, ... etc.
What I don't understand is the following stop condition:
if(v[b.mod][b.sum] || b.sum>s) continue;
It is clear that if the sum of digits is greater than s, the current path is not worth pursuing. But what is the basis for discarding the path if we have previously encountered a number with the same remainder and sum of digits?
One example is this:
d = 13
s = 50
When the path hits 120, it triggers the condition because we have already seen the number 3, which has the same remainder as 120, and the same sum of digits.

Using your example of 120 and 3 and a bit of modular arithmetic, it's fairly easy to show why 120 is pruned based on the fact that 3 was already tested:
Addition and multiplication in modular arithmetic are defined as:
((a mod n) + (b mod n)) mod n = (a + b) mod n
((a mod n) * (b mod n)) mod n = (a * b) mod n
Using these, we can show that for any additional digit x, the remainder modulo d will remain the same:
(120 * 10 + x) mod d = ((120 mod d) * (10 mod d) + (x mod d)) mod d
( 3 * 10 + x) mod d = (( 3 mod d) * (10 mod d) + (x mod d)) mod d
Since we know that 3 mod d = 120 mod d, the two terms above will have the same remainder when tested with the same additional digit.
But their sums of digits are also equal, which means that the same set of new digits can be appended to both numbers. So 120 and 3 are equivalent as far as the problem is concerned, and the former can be discarded.
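The equivalence can be sanity-checked with a few lines of Python, using d = 13 from the example:

```python
# d = 13: the BFS state (remainder, digit sum) is identical for 120 and 3,
# so extending either prefix with any digit x yields the same new state.
d = 13
assert 120 % d == 3 % d          # same remainder
for x in range(10):
    # same new remainder after appending digit x...
    assert (120 * 10 + x) % d == (3 * 10 + x) % d
    # ...and the digit sums also stay equal: (1+2+0)+x == 3+x
```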

Related

Binary number with two ones inside it

Is there an algorithm that finds all binary numbers between a and b that contain exactly two ones?
For example:
a = 5
b = 10
find(a, b)
It will find
5 = 00000101
6 = 00000110
9 = 00001001
10 = 00001010
A bit-hacking trick (known as Gosper's hack) that iterates through all bit-patterns containing the same number of 1-bits looks as follows:
unsigned next_combination(unsigned x)
{
    unsigned u = x & -x;           // lowest set bit of x
    unsigned v = u + x;            // adding it carries through the low run of 1s
    x = v + (((v ^ x) / u) >> 2);  // move the remaining 1s back to the bottom
    return x;
}
It generates the values in ascending order. It takes the previous value and transforms it into the next one with the same number of 1-bits. This means that you just have to start from the minimal bit combination that is greater or equal to a and iterate until you encounter a value greater than b.
Of course, in this form it will only work if your a and b are within the range of unsigned.
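For illustration, here is the same trick in Python, where integers are unbounded, so the range limitation disappears; find is a hypothetical wrapper (my name, not part of the original answer) that applies it to the two-ones question:

```python
def next_combination(x):
    # Gosper's hack: smallest integer > x with the same number of 1-bits
    u = x & -x                         # lowest set bit of x
    v = u + x                          # carry through the low run of 1s
    return v + (((v ^ x) // u) >> 2)   # move remaining 1s back down

def find(a, b):
    # all values in [a, b] with exactly two 1-bits,
    # starting from 3 = 0b11, the smallest such value
    x = 3
    while x < a:
        x = next_combination(x)
    result = []
    while x <= b:
        result.append(x)
        x = next_combination(x)
    return result

# the example from the question: a = 5, b = 10
assert find(5, 10) == [5, 6, 9, 10]
```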
These numbers are of the form
2^m + 2^n
with m > n.
You can find them by exhaustive search on m, n.
M = 1
while M < b:
    N = 1
    while N < M and M + N <= b:
        if a <= M + N:
            print M + N
        N += N
    M += M
This can probably be slightly optimized to skip the search when 2^m < a, but the benefit will be tiny: the complexity is O(log² b), which is already small.

Multiples within interval

Given a number n and some interval (L : R), how can I count the multiples of n within this interval?
If I do (R-L+1)/n, it won't give the right answer, because for example within 3 and 5 there is one multiple of 4, but (5-3+1)/4 = 0; within 4 and 8 there are 2 multiples of 4, but (8-4+1)/4 = 1.
I tried this but it won't work either (it fails on div(4,4,13) = 2):
int div(int n, int l, int r) {
    let mod = n - l % n;
    let first = mod == n ? l : l + mod;
    return first > r ? 0 : (r - first + 1) / n + 1;
}
the point is: I don't wanna check a thousand things, I guess there's some fast way to do it.
Wouldn't
R/n - (L-1)/n
work assuming integer divisions here? Since R/n is the number of multiples of n <= R, and (L-1)/n the number of multiples of n < L, the difference is what you want.
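In Python, where integer division of non-negative numbers is the floor division used above, this becomes a one-liner (assuming n ≥ 1 and L ≥ 1):

```python
def count_multiples(n, L, R):
    # R // n counts the multiples of n that are <= R;
    # (L - 1) // n counts the multiples of n that are < L.
    # Their difference is the count inside [L, R].
    return R // n - (L - 1) // n

# the examples from the question
assert count_multiples(4, 3, 5) == 1    # just 4
assert count_multiples(4, 4, 8) == 2    # 4 and 8
assert count_multiples(4, 4, 13) == 3   # 4, 8, 12
```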

Double Squares: counting numbers which are sums of two perfect squares

Source: Facebook Hacker Cup Qualification Round 2011
A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2.
You need to solve this problem for 0 ≤ X ≤ 2,147,483,647.
Examples:
10 => 1
25 => 2
3 => 0
0 => 1
1 => 1
Factor the number n and check whether it has a prime factor p with odd valuation such that p ≡ 3 (mod 4). It has such a factor if and only if n is not a sum of two squares.
The number of solutions has a closed form expression involving the number of divisors of n. See this, Theorem 3 for a precise statement.
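The odd-valuation criterion can be checked against brute force with a short Python script (the function names are mine):

```python
from math import isqrt

def is_sum_of_two_squares(n):
    # n > 0 is a sum of two squares iff every prime p = 3 (mod 4)
    # appears to an even power in the factorization of n
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return False
        p += 1
    # any leftover factor > 1 is prime with valuation 1
    return not (n > 1 and n % 4 == 3)

def brute(n):
    # direct search: does some a with 0 <= a <= sqrt(n) give a square n - a^2?
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

assert all(is_sum_of_two_squares(n) == brute(n) for n in range(1, 500))
```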
Here is my simple answer in O(sqrt(n)) complexity
x^2 + y^2 = n
x^2 = n-y^2
x = sqrt(n - y^2)
x should be an integer, so (n - y^2) should be a perfect square. Loop over y in [0, sqrt(n)] and check whether (n - y^2) is a perfect square or not.
Pseudocode :
count = 0
for y in range(0, sqrt(n)):
    if isPerfectSquare(n - y^2):
        count++
return (count + 1) / 2   # pairs with x != y are counted twice; a pair with x == y only once
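A runnable Python version of this pseudocode (note that the halving has to round up when x == y is a solution, e.g. n = 2 = 1^2 + 1^2):

```python
from math import isqrt

def count_double_squares(n):
    # count unordered pairs x >= y >= 0 with x^2 + y^2 == n
    count = 0
    for y in range(isqrt(n) + 1):
        r = n - y * y
        if isqrt(r) ** 2 == r:   # is n - y^2 a perfect square?
            count += 1
    # pairs with x != y were counted twice, a pair with x == y once,
    # so the unordered count is the rounded-up half
    return (count + 1) // 2

# the examples from the question
assert count_double_squares(10) == 1
assert count_double_squares(25) == 2
assert count_double_squares(3) == 0
assert count_double_squares(0) == 1
assert count_double_squares(1) == 1
```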
Here's a much simpler solution:
create list of squares in the given range (that's 46340 values for the example given)
for each square value x
if list contains a value y such that x + y = target value (i.e. does [target - x] exist in list)
output √x, √y as solution (roots can be stored in a std::map lookup created in the first step)
Looping through all pairs (a, b) is infeasible given the constraints on X. There is a faster way though!
For fixed a, we can work out b: b = √(X - a^2). b won't always be an integer though, so we have to check this. Due to precision issues, perform the check with a small tolerance: if b is x.99999, we can be fairly certain it's an integer. So we loop through all possible values of a and count all cases where b is an integer. We need to be careful not to double-count, so we place the constraint that a <= b. For X = a^2 + b^2, a will be at most √(X/2) with this constraint.
Here is an implementation of this algorithm in C++:
const double EPS = 1e-7;  // small tolerance for the integer check
int count = 0;
// add EPS to avoid flooring x.99999 to x
for (int a = 0; a <= sqrt(X/2) + EPS; a++) {
    int b2 = X - a*a;  // b^2
    int b = (int) (sqrt(b2) + EPS);
    if (abs(b - sqrt(b2)) < EPS)  // check b is an integer
        count++;
}
cout << count << endl;
Here's a version which is trivially O(sqrt(N)) and avoids all loop-internal branches.
Start by generating all squares up to the limit, easily done without any multiplications, then initialize l and r indices.
In each iteration you calculate the sum, then update the two indices and the count based on a comparison with the target value. This is sqrt(N) iterations to generate the table and at most sqrt(N) iterations of the search loop. Estimated running time with a reasonable compiler is at most 10 clock cycles per sqrt(N), so for a maximum input value of 2^31 (sqrt(N) ~= 46341) this should correspond to less than 500K clock cycles, or a few tenths of a millisecond:
unsigned countPairs(unsigned n)
{
    unsigned sq = 0, i;
    unsigned square[65536];
    for (i = 0; sq <= n; i++) {
        square[i] = sq;
        sq += i + i + 1;   // (i+1)^2 = i^2 + 2i + 1
    }
    unsigned l = 0, r = i - 1, count = 0;
    do {
        unsigned sum = square[l] + square[r];
        l += sum <= n;      // Increment l if the sum is <= N
        count += sum == n;  // Increment the count if a match
        r -= sum >= n;      // Decrement r if the sum is >= N
    } while (l <= r);
    return count;
}
A good compiler can note that the three compares at the end are all using the same operands so it only needs a single CMP opcode followed by three different conditional move operations (CMOVcc).
I was in a hurry, so I solved it using a rather brute-force approach (very similar to marcog's) using Python 2.6:
import math

def is_perfect_square(x):
    rt = int(math.sqrt(x))
    return rt * rt == x

def double_squares(n):
    rng = int(math.sqrt(n))
    ways = 0
    for i in xrange(rng + 1):
        if is_perfect_square(n - i * i):
            ways += 1
    if ways % 2 == 0:
        ways = ways // 2
    else:
        ways = ways // 2 + 1
    return ways
Note: ways will be odd before the halving exactly when n is twice a perfect square (i.e. i^2 + i^2 = n for some i).
The number of solutions (x,y) of
x^2+y^2=n
over the integers is exactly 4 times the difference between the number of divisors of n congruent to 1 mod 4 and the number congruent to 3 mod 4.
Similar identities exist also for the problems
x^2 + 2y^2 = n
and
x^2 + y^2 + z^2 + w^2 = n.
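This divisor formula (Jacobi's two-square theorem) is easy to verify numerically for small n; the function names below are mine:

```python
from math import isqrt

def r2(n):
    # number of ordered integer pairs (x, y) with x^2 + y^2 == n, by brute force
    r = isqrt(n)
    return sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1)
               if x * x + y * y == n)

def four_times_divisor_excess(n):
    # 4 * (number of divisors = 1 mod 4  minus  number of divisors = 3 mod 4)
    d1 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)

assert all(r2(n) == four_times_divisor_excess(n) for n in range(1, 200))
```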

finding a^b^c^... mod m

I would like to calculate:
a^b^c^d^... mod m
Do you know any efficient way to compute this? The number is too big, but a, b, c, ... and m each fit in a simple 32-bit int.
Any Ideas?
Caveat: This question is different from finding a^b mod m.
Also please note that a^(b^c) is not the same as (a^b)^c. The latter is equal to a^(b*c). Exponentiation is right-associative.
a^(b^c) mod m = a^(b^c mod n) mod m, where n = φ(m) is Euler's totient function.
If m is prime, then n = m-1.
Edit: as Nabb pointed out, this only holds if a is coprime to m. So you would have to check this first.
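Both the reduction and the coprimality caveat can be checked with a few lines of Python (a brute-force φ is enough for small moduli):

```python
from math import gcd

def phi(n):
    # Euler's totient, brute force for small n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# a^(b^c) mod m == a^(b^c mod phi(m)) mod m, provided gcd(a, m) == 1
a, b, c, m = 7, 3, 4, 10
t = phi(m)                 # phi(10) == 4
assert gcd(a, m) == 1
assert pow(a, b ** c, m) == pow(a, pow(b, c, t), m)

# the coprimality condition matters: 2 and 4 are not coprime,
# and the naive reduction gives the wrong answer there
assert pow(2, 81, 4) != pow(2, 81 % phi(4), 4)
```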
This answer does not contain a full formal mathematical proof of correctness. I assumed that it is unnecessary here; besides, it would be very illegible on SO (no MathJax, for example).
I will use a (just a little bit) specific prime factorization algorithm. It's not the best option, but it's enough.
tl;dr
We want to calculate a^x mod m, using the function modpow(a,x,m) described below.
If x is small enough (not in exponential form, or p^x | m holds), just calculate it and return.
Split a into primes and calculate p^x mod m separately for each prime p, using the modpow function:
Calculate c' = gcd(p^x, m) and t' = totient(m/c')
Calculate w = modpow(x.base, x.exponent, t') + t'
Save pow(p, w - log_p c', m) * c' in the table A
Multiply all elements of A together and return the product modulo m.
Here pow should behave like Python's pow (the three-argument modular form).
Main problem:
Because the current best answer covers only the special case gcd(a,m) = 1, and the OP did not make this assumption in the question, I decided to write this answer. I will also use Euler's totient theorem. Quoting Wikipedia:
Euler's totient theorem:
If n and a are coprime positive integers, then a^φ(n) ≡ 1 (mod n), where φ(n) is Euler's totient function.
The assumption that the numbers are co-prime is very important, as Nabb shows in a comment. So, firstly we need to ensure that the numbers are co-prime. (For greater clarity assume x = b^(c^...).) Because a = p1^alpha * p2^beta * ..., we can factorize a and separately calculate q1 = (p1^alpha)^x mod m, q2 = (p2^beta)^x mod m, ..., and then calculate the answer in an easy way (q1 * q2 * q3 * ... mod m). The number a has at most O(log a) prime factors, so we will be forced to perform at most O(log a) such calculations.
In fact we don't have to split a at every prime factor (if they don't all occur in m with different exponents), and we can combine factors with the same exponent, but that is not important for now.
Now take a look at the problem (p^z)^x mod m, where p is prime. Notice an important observation:
If a, b are positive integers smaller than m, c is some positive integer, and a ≡ b (mod m), then a*c ≡ b*c (mod m*c).
Using the above observation, we can solve the actual problem. We can easily calculate c = gcd((p^z)^x, m); if z*x is big, c is p^k, where k is the number of times we can divide m by p. Let m' = m / c. (Notice (p^z)^x = p^(z*x).) Now we can easily (see below) calculate w = p^(z*x - k) mod m' using Euler's theorem, because p and m' are co-prime! Then, by the observation above, w*c mod m'*c = p^(z*x) mod m, so the answer for now is p^(z*x) mod m = w*c, and w, c are easy to calculate.
Therefore we can easily calculate a^x mod m.
Calculate a^x mod m using Euler's theorem
Now assume a, m are co-prime. If we want to calculate a^x mod m, we can calculate t = totient(m) and notice that a^x mod m = a^(x mod t) mod m. This is helpful if x is big and we only know a specific expression for x, for example x = 7^200.
Look at the example x = b^c. We can calculate t = totient(m) and x' = b^c mod t using the exponentiation-by-squaring algorithm in Θ(log c) time, and after that (using the same algorithm) a^x' mod m, which equals the solution.
If x = b^(c^(d^...)), we solve it recursively. First calculate t1 = totient(m), then t2 = totient(t1), and so on. For example, take x = b^(c^d). If t1 = totient(m), then a^x mod m = a^(b^(c^d) mod t1), and b^(c^d) mod t1 = b^(c^d mod t2) mod t1, where t2 = totient(t1). Everything is calculated using the exponentiation-by-squaring algorithm.
Note: If some base isn't co-prime to the corresponding totient, it is necessary to use the same trick as in the main problem (in fact, we should forget that it's an exponent and recursively solve the problem, as in the main problem). In the above example, if t2 isn't relatively prime with c, we have to use this trick.
Calculate φ(n)
Notice simple facts:
if gcd(a,b)=1, then φ(ab) = φ(a)*φ(b)
if p is prime φ(p^k)=(p-1)*p^(k-1)
Therefore we can factorize n (i.e. n = p1^k1 * p2^k2 * ...) and separately calculate φ(p1^k1), φ(p2^k2), ... using fact 2, then combine these using fact 1: φ(n) = φ(p1^k1)*φ(p2^k2)*...
It is worth remembering that, if we calculate totients repeatedly, we may want to use a Sieve of Eratosthenes and store the prime numbers in a table. It will reduce the constant factor.
Python example (it is correct for the same reason this factorization algorithm is):
def totient(n):          # n - unsigned int
    result = 1
    p = 2                # prime 'iterator'
    while p ** 2 <= n:
        if n % p == 0:              # multiply by (p-1)
            result *= p - 1
            n //= p
            while n % p == 0:       # multiply by p^(k-1)
                result *= p
                n //= p
        p += 1
    if n != 1:
        result *= n - 1
    return result        # runs in O(sqrt(n))
Case: a^(b^c) mod m
Because it is in fact doing the same thing many times, I believe this case will show you how to solve this generally.
Firstly, we have to split a into prime powers. The best representation will be a pair <number, exponent>.
c++11 example:
std::vector<std::tuple<unsigned, unsigned>> split(unsigned n) {
    std::vector<std::tuple<unsigned, unsigned>> result;
    for (unsigned p = 2; p * p <= n; ++p) {
        unsigned current = 0;
        while (n % p == 0) {
            current += 1;
            n /= p;
        }
        if (current != 0)
            result.emplace_back(p, current);
    }
    if (n != 1)
        result.emplace_back(n, 1);
    return result;
}
After the split, we have to calculate (p^z)^(b^c) mod m = p^(z*(b^c)) mod m for every pair. First we should check whether p^(z*(b^c)) < m; if so, the answer is just p^(z*(b^c)) itself, which is possible only when z, b, c are very small. I believe I don't have to show example code for that.
And finally, if p^(z*b^c) > m, we have to calculate the answer properly. First calculate c', the number of times p divides m, and m' = m / p^c'. Then we can calculate t = totient(m') and the exponent (z*b^c - c') mod t. This gives an easy way to get the answer.
function modpow(p, z, b, c, m : integers)  # (p^z)^(b^c) mod m
    c' = 0
    m' = m
    while m' % p == 0:
        c' += 1
        m' /= p
    # now m' = m / gcd((p^z)^(b^c), m)
    t = totient(m')
    exponent = (z*(b^c) - c') mod t
    return p^c' * (p^exponent mod m')
And below a working Python example:
def modpow(p, z, b, c, m):   # (p^z)^(b^c) mod m
    cp = 0
    while m % p == 0:
        cp += 1
        m //= p              # m = m' now
    t = totient(m)
    # exponent = z*(b^c) - cp  (mod t)
    exponent = ((pow(b, c, t) * z) % t + t - (cp % t)) % t
    return pow(p, cp) * pow(p, exponent, m)
Using this function we can easily calculate (p^z)^(b^c) mod m; afterwards we just have to multiply all the results together (mod m). We can also accumulate the product as we go. Example below. (I hope I didn't make a mistake writing it.) The only assumption is that b, c are big enough (b^c > log2(m), i.e. no p^(z*b^c) divides m); it's a simple check and I see no point in cluttering the code with it.
def solve(a, b, c, m):   # split a and solve
    result = 1
    p = 2                # primes
    while p ** 2 <= a:
        z = 0
        while a % p == 0:   # calculate the exponent z
            a //= p
            z += 1
        if z != 0:
            result *= modpow(p, z, b, c, m)
            result %= m
        p += 1
    if a != 1:           # possible last prime
        result *= modpow(a, 1, b, c, m)
    return result % m
Looks like it works.
For any relationship a = x^y, the relationship is invariant with respect to the numeric base you are using (base 2, base 6, base 16, etc.).
Since the mod N operation is equivalent to extracting the least significant digit (LSD) in base N,
and since the LSD of the result A in base N can only be affected by the LSD of X in base N, and not by digits in higher places (e.g. 34*56 = 30*50 + 30*6 + 50*4 + 4*6, and only the 4*6 term affects the last digit),
Therefore, from LSD(A)=LSD(X^Y) we can deduce
LSD(A)=LSD(LSD(X)^Y)
Therefore
A mod N = ((X mod N) ^ Y) mod N
and
(X ^ Y) mod N = ((X mod N) ^ Y) mod N)
Therefore you can do the mod before each power step, which keeps your result in the range of integers.
This assumes the base is not negative and that each intermediate power stays below MAXINT.
This answer answers the wrong question. (alex)
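The base-reduction identity itself is easy to verify by brute force (though, as the note above says, reducing only the base does not solve the tower problem, since the exponent also needs handling):

```python
# (X ** Y) % N == ((X % N) ** Y) % N: reducing the base mod N before
# exponentiating never changes the result, because only the base's
# residue class mod N matters
for X in range(60):
    for Y in range(5):
        for N in range(2, 15):
            assert (X ** Y) % N == ((X % N) ** Y) % N
```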
Modular Exponentiation is a correct way to solve this problem, here's a little bit of hint:
To find a^b^c^d % m
you have to start with calculating
a % m, then a^b % m, then a^(b^c) % m, and then a^(b^c^d) % m ... (you get the idea)
To find a^b % m, you basically need two ideas: [Let B = floor(b/2)]
a^b = (a^B)^2 if b is even, OR a^b = (a^B)^2 * a if b is odd.
(X*Y)%m = ((X%m) * (Y%m)) % m
(% = mod)
Therefore,
if b is even
a^b % m = (a^B % m)^2 % m
or if b is odd
a^b % m = (((a^B % m)^2) * (a % m)) % m
So if you know the value of a^B % m, you can calculate this value.
To find a^B % m, apply the same approach, dividing B until you reach 1.
e.g. To calculate 16^13 % 11:
16^13 % 11 = (16 % 11)^13 % 11 = 5^13 % 11
= ((5^6 % 11) * (5^6 % 11) * (5 % 11)) % 11 <---- (I)
To find 5^6 % 11:
5^6 % 11 = ((5^3 % 11) * (5^3 % 11)) % 11 <---- (II)
To find 5^3 % 11:
5^3 % 11 = ((5^1 % 11) * (5^1 % 11) * (5 % 11)) % 11
= (((5 * 5) % 11) * 5) % 11 = ((25 % 11) * 5) % 11 = (3 * 5) % 11 = 15 % 11 = 4
Plugging this value into (II) gives
5^6 % 11 = ((4 * 4) % 11) = 16 % 11 = 5
Plugging this value into (I) gives
5^13 % 11 = ((5 % 11) * (5 % 11) * 5) % 11 = ((25 % 11) * 5) % 11 = (3 * 5) % 11 = 15 % 11 = 4
This way 5^13 % 11 = 4
With this you can calculate anything of the form a^b % m, and so on...
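The two halving identities translate directly into a short recursive Python function (a sketch; in practice the built-in three-argument pow does the same thing):

```python
def powmod(a, b, m):
    # recursive exponentiation by squaring, using
    #   a^b = (a^(b//2))^2       if b is even
    #   a^b = (a^(b//2))^2 * a   if b is odd
    # and reducing mod m after every multiplication
    if b == 0:
        return 1 % m
    half = powmod(a, b // 2, m)
    result = (half * half) % m
    if b % 2 == 1:
        result = (result * (a % m)) % m
    return result

# the worked example above: 16^13 % 11 == 4
assert powmod(16, 13, 11) == 4
assert powmod(16, 13, 11) == pow(16, 13, 11)
```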
Look at the behavior of A^X mod M as X increases. It must eventually enter a cycle. Suppose the cycle has length P and starts after N steps. Then X >= N implies A^X = A^(X+P) = A^(X%P + (-N)%P + N) (mod M). Therefore we can compute A^B^C by computing y = B^C, setting z = y if y < N else y%P + (-N)%P + N, and returning A^z (mod M).
Notice that we can recursively apply this strategy up the power tree, because the derived equation either has an exponent < M or an exponent involving a smaller exponent tower with a smaller dividend.
The only question is whether you can efficiently compute N and P given A and M. Notice that overestimating N is fine; we can just set N to M and things will work out. P is a bit harder. If A and M are different primes, then P divides M-1. If A has all of M's prime factors, then we get stuck at 0 and P=1. I'll leave figuring that out as an exercise, because I don't know how.
/// Returns equivalent to list.reverse().aggregate(1, acc, item => item^acc) % M
func PowerTowerMod(Link<int> list, int M, int upperB = M)
    requires M > 0, upperB >= M
    var X = list.Item
    if list.Next == null: return X
    var P = GetPeriodSomehow(base: X, mod: M)
    var e = PowerTowerMod(list.Next, P, M)
    if e^X < upperB then return e^X  // todo: rewrite e^X < upperB so it doesn't blow up for large X
    return ModPow(X, M + (e-M) % P, M)
Tacet's answer is good, but substantial simplifications are possible.
The powers of x, mod m, are preperiodic. If x is relatively prime to m, the powers of x are periodic, but even without that assumption, the part before the period is not long: at most the maximum of the exponents in the prime factorization of m, which is at most log_2 m. The length of the period divides phi(m), and in fact lambda(m), where lambda is Carmichael's function, the maximum multiplicative order mod m. This can be significantly smaller than phi(m). Lambda(m) can be computed quickly from the prime factorization of m, just as phi(m) can: lambda(m) is the LCM of lambda(p_i^e_i) over all prime powers p_i^e_i in the prime factorization of m. For odd prime powers, lambda(p_i^e_i) = phi(p_i^e_i); lambda(2) = 1, lambda(4) = 2, and lambda(2^n) = 2^(n-2) for larger powers of 2.
Define modPos(a,n) to be the representative of the congruence class of a in {0, 1, ..., n-1}. For nonnegative a, this is just a%n. For negative a, many languages define a%n to be negative (or zero), so modPos(a,n) is (a%n)+n.
Define modMin(a,n,min) to be the least positive integer congruent to a mod n that is at least min. For a positive, you can compute this as min+modPos(a-min,n).
If b^c^... is smaller than log_2 m (and we can check whether this inequality holds by recursively taking logarithms), then we can simply compute a^b^c^... directly. Otherwise, a^b^c^... mod m = a^modMin(b^c^..., lambda(m), [log_2 m]) mod m = a^modMin(b^c^... mod lambda(m), lambda(m), [log_2 m]) mod m.
For example, suppose we want to compute 2^3^4^5 mod 100. Note that 3^4^5 only has 489 digits, so this is doable by other methods, but it's big enough that you don't want to compute it directly. However, by the methods I gave here, you can compute 2^3^4^5 mod 100 by hand.
Since 3^4^5 > log_2 100,
2^3^4^5 mod 100
= 2^modMin(3^4^5,lambda(100),6) mod 100
= 2^modMin(3^4^5 mod lambda(100), lambda(100),6) mod 100
= 2^modMin(3^4^5 mod 20, 20,6) mod 100.
Let's compute 3^4^5 mod 20. Since 4^5 > log_2 20,
3^4^5 mod 20
= 3^modMin(4^5,lambda(20),4) mod 20
= 3^modMin(4^5 mod lambda(20),lambda(20),4) mod 20
= 3^modMin(4^5 mod 4, 4, 4) mod 20
= 3^modMin(0,4,4) mod 20
= 3^4 mod 20
= 81 mod 20
= 1
We can plug this into the previous calculation:
2^3^4^5 mod 100
= 2^modMin(3^4^5 mod 20, 20,6) mod 100
= 2^modMin(1,20,6) mod 100
= 2^21 mod 100
= 2097152 mod 100
= 52.
Note that 2^(3^4^5 mod 20) mod 100 = 2^1 mod 100 = 2, which is not correct. You can't reduce down to the preperiodic part of the powers of the base.
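Here is a small Python sketch of this strategy. For simplicity it uses phi instead of lambda (a valid overestimate of the period, via the generalized Euler theorem a^e ≡ a^(phi(m) + (e mod phi(m))) (mod m) whenever e >= log_2 m); tower_mod and its helpers are my own names, and every tower entry is assumed to be at least 2:

```python
def phi(n):
    # Euler's totient via trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p - 1
            n //= p
            while n % p == 0:
                result *= p
                n //= p
        p += 1
    if n > 1:
        result *= n - 1
    return result

def tower_value(ts, cap):
    # exact value of ts[0]^ts[1]^... if it is <= cap, else None
    v = 1
    for t in reversed(ts):
        if v > 64 or t ** v > cap:
            return None
        v = t ** v
    return v

def tower_mod(ts, m):
    # ts[0]^(ts[1]^(ts[2]^...)) mod m, assuming every entry >= 2
    if m == 1:
        return 0
    if len(ts) == 1:
        return ts[0] % m
    e = tower_value(ts[1:], 10 ** 6)   # exponent tower, if small
    if e is not None:
        return pow(ts[0], e, m)
    t = phi(m)
    r = tower_mod(ts[1:], t)           # exponent tower mod phi(m)
    # generalized Euler: a^e = a^(t + e mod t) (mod m) since e > 10^6 >= log2(m)
    return pow(ts[0], t + r, m)

# the worked example above: 2^3^4^5 mod 100 == 52
assert tower_mod([2, 3, 4, 5], 100) == 52
```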

Is there an algorithm for calculating the multiplicative order of x modulo y (for y < 1000) that doesn't require a BigInteger type?

The algorithm I'm using at the moment runs into extremely large numbers very quickly. One step in the algorithm raises x to the result of the totient function applied to y, so you can run into very large numbers.
Eg. When calculating the multiplicative order of 10 modulo 53:
10^totient(53) == 10^52 == 1 * 10^52
The following algorithm fares a bit better in terms of avoiding large numbers, but it still fails when 10^mOrder exceeds the capacity of the data type:
mOrder = 1
while 10^mOrder % 53 != 1:
    if mOrder >= i:
        mOrder = 0
        break
    else:
        mOrder = mOrder + 1
Using Modular exponentiation, it is possible to calculate (10 ^ mOrder % 53) or in general, any (a ^ b mod c) without getting values much bigger than c. See Wikipedia for details, there's this sample code, too:
Bignum modpow(Bignum base, Bignum exponent, Bignum modulus) {
    Bignum result = 1;
    while (exponent > 0) {
        if ((exponent & 1) == 1) {
            // multiply in this bit's contribution while using modulus to keep result small
            result = (result * base) % modulus;
        }
        // move to the next bit of the exponent, square (and mod) the base accordingly
        exponent >>= 1;
        base = (base * base) % modulus;
    }
    return result;
}
Why exponentiate? Can't you just multiply modulo n in a loop?
(defun multiplicative-order (a n)
  (if (> (gcd a n) 1)
      0
      (do ((order 1 (+ order 1))
           (mod-exp (mod a n) (mod (* mod-exp a) n)))
          ((= mod-exp 1) order))))
Or, in ptheudo (sic) code:
def multiplicative_order (a, n) :
    if gcd (a, n) > 1 :
        return 0
    else :
        order = 1
        mod_exp = a mod n
        while mod_exp != 1 :
            order += 1
            mod_exp = (mod_exp * a) mod n
        return order
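The same loop as runnable Python (assuming n >= 2; the answer to the question's own example, the order of 10 mod 53, comes out to 13):

```python
from math import gcd

def multiplicative_order(a, n):
    # smallest k >= 1 with a^k = 1 (mod n), or 0 if gcd(a, n) > 1;
    # no intermediate value ever exceeds n * a
    if gcd(a, n) != 1:
        return 0
    order, mod_exp = 1, a % n
    while mod_exp != 1:
        order += 1
        mod_exp = (mod_exp * a) % n
    return order

assert multiplicative_order(10, 53) == 13
assert multiplicative_order(2, 7) == 3    # 2, 4, 1
```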
