The task is to find the sum of the series given n and a. So for the series 1a + 2a^2 + 3a^3 + ... + na^n, we can find the n-th element of the sequence with the following formula (from observation):
n-th element = a^n * ((n-(n-2))/(n-(n-1))) * ((n-(n-3))/(n-(n-2))) * ... * (n/(n-1))
(The product telescopes to n, so the n-th element is n * a^n.)
I think it's impossible to simplify the sum of the n elements by turning the above formula into a closed-form sum. Even if it is possible, I assume it would involve the exponent n, which would introduce an n-iteration loop and keep the solution from being O(log n). The best solution I can come up with is to find the ratio between consecutive elements, which is a(n+1)/n, and apply that to the (n-1)-th element to find the n-th element.
I think that I may be missing something. Could someone provide me with solution(s)?
You can solve this problem, and lots of problems like it, with matrix exponentiation.
Let's start with this sequence:
A[n] = a + a^2 + a^3 + ... + a^n
That sequence can be generated with a simple formula:
A[i] = a*(A[i-1] + 1)
Now if we consider your sequence:
B[n] = a + 2a^2 + 3a^3 + ... + na^n
We can generate that with a formula that makes use of the previous one:
B[i] = (B[i-1] + A[i-1] + 1) * a
If we make a sequence of vectors containing all the components we need:
V[n] = (B[n], A[n], 1)
Then we can construct a matrix M so that:
V[i] = M*V[i-1]
And so:
V[n] = (M^(n-1))V[1]
Since the size of the matrix is fixed at 3x3, you can use exponentiation by squaring on the matrix itself to calculate M^(n-1) in O(log n) time, and the final multiplication takes constant time.
Here's an implementation in Python with NumPy (so I don't have to include matrix multiply code):

import numpy as np

def getSum(a, n):
    # A[n] = a + a^2 + a^3 + ... + a^n
    # B[n] = a + 2a^2 + 3a^3 + ... + na^n
    # V[n] = [B[n], A[n], 1]
    M = np.matrix([
        [a, a, a],  # B[i] = B[i-1]*a + A[i-1]*a + a
        [0, a, a],  # A[i] = A[i-1]*a + a
        [0, 0, 1]
    ])
    # calculate MsupN = M^(n-1) by exponentiation by squaring
    n -= 1
    MsupN = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    while n > 0:
        if n % 2 > 0:
            MsupN *= M
            n -= 1
        M *= M
        n //= 2
    # calculate V[n] = MsupN * V[1], where V[1] = (B[1], A[1], 1) = (a, a, 1)
    Vn = MsupN * np.matrix([a, a, 1]).T
    return Vn.item(0, 0)
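As a quick sanity check (assuming the getSum function above is defined), for a = 2 and n = 3 the sum is 1*2 + 2*4 + 3*8 = 34:

print(getSum(2, 3))  # expected: 34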
I assume a, n are nonnegative integers. The explicit formula for a > 1 is
a * (n * a^{n + 1} - (n + 1) * a^n + 1) / (a - 1)^2
It can be evaluated efficiently in O(log(n)) using square-and-multiply for a^n.
To derive the formula, you could use the following ingredients:
- the explicit formula for the geometric series,
- the observation that this polynomial looks almost like the derivative of a geometric series,
- the Gaussian sum formula for the special case a = 1.
Now you can simply calculate:
  sum_{i = 1}^n i * a^i                                    // [0] ugly sum
= a * sum_{i = 1}^n i * a^{i-1}                            // [1] linearity
= a * d/da (sum_{i = 1}^n a^i)                             // [2] recognize the derivative
= a * d/da (sum_{i = 0}^n a^i - 1)                         // [3] + 1 - 1
= a * d/da ((a^{n + 1} - 1) / (a - 1) - 1)                 // [4] geometric series
= a * ((n + 1)*a^n / (a - 1) - (a^{n+1} - 1)/(a - 1)^2)    // [5] quotient rule
= a * (n * a^{n + 1} - (n + 1)*a^n + 1) / (a - 1)^2        // [6] explicit formula
This is just a simple arithmetic expression with a^n, which can be evaluated in O(log(n)) time using square-and-multiply.
This doesn't work for a = 0 or a = 1, so you have to treat those cases specially: for a = 0 you just return 0 immediately, for a = 1, you return n * (n + 1) / 2.
Scala snippet to test the formula:
def fast(a: Int, n: Int): Int = {
  def pow(a: Int, n: Int): Int =
    if (n == 0) 1
    else if (n == 1) a
    else {
      val r = pow(a, n / 2)
      if (n % 2 == 0) r * r else r * r * a
    }

  if (a == 0) 0
  else if (a == 1) n * (n + 1) / 2
  else {
    val aPowN = pow(a, n)
    val d = a - 1
    a * (n * aPowN * a - (n + 1) * aPowN + 1) / (d * d)
  }
}
Slower, but simpler version, for comparison:
def slow(a: Int, n: Int): Int = {
  def slowPow(a: Int, n: Int): Int = if (n == 0) 1 else slowPow(a, n - 1) * a
  (1 to n).map(i => i * slowPow(a, i)).sum
}
Comparison:
for (a <- 0 to 5; n <- 0 to 5) {
  println(s"${slow(a, n)} <-> ${fast(a, n)}")
}
Output:
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
1 <-> 1
3 <-> 3
6 <-> 6
10 <-> 10
15 <-> 15
0 <-> 0
2 <-> 2
10 <-> 10
34 <-> 34
98 <-> 98
258 <-> 258
0 <-> 0
3 <-> 3
21 <-> 21
102 <-> 102
426 <-> 426
1641 <-> 1641
0 <-> 0
4 <-> 4
36 <-> 36
228 <-> 228
1252 <-> 1252
6372 <-> 6372
0 <-> 0
5 <-> 5
55 <-> 55
430 <-> 430
2930 <-> 2930
18555 <-> 18555
So, yes, the O(log(n)) formula gives the same numbers as the O(n^2) formula.
a^n can indeed be computed in O(log n).
The method is called exponentiation by squaring, and the main idea is that if you know a^n, you also know a^(2*n), which is just a^n * a^n.
So if you want to compute a^n (for even n), you can compute a^(n/2) and multiply the result with itself: a^n = a^(n/2) * a^(n/2). So instead of having a loop up to n, you now only have a loop up to n/2. But n/2 is just another number, and it can be computed the same way, again doing only half the operations. Halving the number of operations each time leads to the logarithmic complexity.
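A minimal iterative Python sketch of this idea (the function name fast_pow is mine, not from any library):

def fast_pow(a, n):
    # Square-and-multiply: O(log n) multiplications.
    result = 1
    while n > 0:
        if n % 2 == 1:    # odd exponent: peel off one factor of a
            result *= a
        a *= a            # square the base
        n //= 2           # halve the exponent
    return result

assert fast_pow(3, 5) == 243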
As mentioned by @Sopel in the comments, the series can be written as a simple closed-form function:

         a * (n * a^(n+1) - (n+1) * a^n + 1)
f(a,n) = ------------------------------------
                      (a - 1)^2
So to find the answer you only have to compute the above formula, using the fast exponentiation described above to do it in O(log n) complexity.
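Putting it together, a sketch in Python (using the built-in pow, which already does fast exponentiation; the a = 0 and a = 1 special cases follow the previous answer):

def f(a, n):
    # f(a, n) = 1*a + 2*a^2 + ... + n*a^n
    if a == 0:
        return 0
    if a == 1:
        return n * (n + 1) // 2   # Gaussian sum
    a_pow_n = pow(a, n)           # O(log n) via square-and-multiply
    return a * (n * a_pow_n * a - (n + 1) * a_pow_n + 1) // ((a - 1) ** 2)

print(f(2, 3))  # 1*2 + 2*4 + 3*8 = 34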
The problem is as follows:
We have 3 integer values as input: n, i, j
Suppose we have an n × n matrix that is filled with the numbers from 1 to n² in a clockwise spiral. Find the number at index (i, j).
I know how to construct such a matrix, and I can solve the problem by filling the matrix and looking at index (i, j), but I consider that solution a bit too brute-force. I believe there should be some mathematical relationship between the number n, the indices i and j, and the number sitting at that cell. I've tried some approaches but couldn't find a way to do it. Any suggestions to help me in my attempt?
Edit: an example 5x5 matrix:
 1  2  3  4  5
16 17 18 19  6
15 24 25 20  7
14 23 22 21  8
13 12 11 10  9
You can do it with some basic arithmetic (assuming 0-based indexing, i stands for the row and j for the column):
Find the ring that the number is in. It is r = min(i, j, n - i - 1, n - j - 1). This counts the rings from the outside in. If we instead count from the inside out (which will come in handy later), we get q = (n - 1) / 2 - r for odd n or q = (n - 2) / 2 - r for even n. Or, generalized: q = (n - 2 + n % 2) / 2 - r, which is the same as q = (n - 1) / 2 - r with integer division (as mentioned by @Stef).
It is not that hard to see that the number of elements covered by a ring (including the numbers inside of it), going from the innermost ring outwards, is 1^2, 3^2, 5^2, ... if n is odd and 2^2, 4^2, 6^2, ... if n is even. So the side length of the square covered by ring q is (in generalized form) m = q * 2 + 2 - n % 2. This means the element in the upper left corner of the ring is b = n^2 - m^2 + 1.
Get the number:
If r == i: b + j - r (the element is on the top side).
If r == n - j - 1: b + m - 1 + i - r (the element is on the right side).
If r == n - i - 1: b + 2 * m - 2 + n - j - 1 - r (the element is on the bottom side).
Otherwise (r == j): b + 3 * m - 3 + n - i - 1 - r (the element is on the left side).
This is O(1). The formulas can be simplified, but then the explanation would be harder to understand.
Example: n = 5, i = 3, j = 2
r = min(2, 3, 1, 2) = 1, q = (3 + 1) / 2 - 1 = 1
m = 2 + 2 - 1 = 3, b = 25 - 9 + 1 = 17
Third condition applies: 17 + 6 - 2 + 5 - 2 - 1 - 1 = 22
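For reference, a minimal Python sketch of the steps above (the function name spiral_value is mine; 0-based indexing, i is the row and j the column, as stated):

def spiral_value(n, i, j):
    r = min(i, j, n - i - 1, n - j - 1)   # ring, counted from the outside
    q = (n - 1) // 2 - r                  # ring, counted from the inside
    m = q * 2 + 2 - n % 2                 # side length of the square covered by ring q
    b = n * n - m * m + 1                 # number in the upper left corner of the ring
    if r == i:                            # top side
        return b + j - r
    if r == n - j - 1:                    # right side
        return b + m - 1 + i - r
    if r == n - i - 1:                    # bottom side
        return b + 2 * m - 2 + n - j - 1 - r
    return b + 3 * m - 3 + n - i - 1 - r  # left side (r == j)

print(spiral_value(5, 3, 2))  # expected: 22, matching the worked example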
How to solve the following equation?
I am interested in the method of solution.

n^3 mod P = (n+1)^3 mod P

where P is a prime number. Here is a short example with the answer; could you give a step-by-step solution for it?
n^3 mod 61 = (n + 1)^3 mod 61
Integer solutions:

n = 61m + 4,
n = 61m + 56,
m ∈ Z

where Z is the set of integers.
Another way to state n^3 ≡ (n+1)^3 (mod P) is n^3 ≡ n^3 + 3n^2 + 3n + 1 (just expand the cube of n+1); the cubic terms then cancel out, leaving the nicer quadratic 3n^2 + 3n + 1 ≡ 0.
Then the usual quadratic formula applies, though all of its operations are now modulo P, and the discriminant is not always a quadratic residue, in which case the original equation has no solutions (this happens about half the time). This involves finding a square root modulo a prime, which is not hard for a computer to do, for example with the Tonelli–Shanks algorithm, though it is not trivial to implement.
By the way, 3n^2 + 3n + 1 ≡ 0 has the property that if n is a solution, then -n - 1 is too.
For example, with some Python, once all the support functions exist it is pretty simple:
def solve(p):
    # solve 3 n^2 + 3 n + 1 ≡ 0 (mod p)
    D = -3 % p
    sqrtD = modular_sqrt(D, p)
    if sqrtD == 0:
        return None
    else:
        n = (sqrtD - 3) * inverse(6, p) % p
        return (n, -(n + 1) % p)
Inverse modulo a prime is really easy,
def inverse(x, p):
    return pow(x, p - 2, p)
I adapted this implementation of Tonelli-Shanks to Python3 (// instead of / for integer division)
def modular_sqrt(a, p):
    """ Find a quadratic residue (mod p) of 'a'. p
        must be an odd prime.

        Solve the congruence of the form:
            x^2 = a (mod p)
        and returns x. Note that p - x is also a root.

        0 is returned if no square root exists for
        these a and p.

        The Tonelli-Shanks algorithm is used (except
        for some simple cases in which the solution
        is known from an identity). This algorithm
        runs in polynomial time (unless the
        generalized Riemann hypothesis is false).
    """
    # Simple cases
    #
    if legendre_symbol(a, p) != 1:
        return 0
    elif a == 0:
        return 0
    elif p == 2:
        return 0
    elif p % 4 == 3:
        return pow(a, (p + 1) // 4, p)

    # Partition p-1 to s * 2^e for an odd s (i.e.
    # reduce all the powers of 2 from p-1)
    #
    s = p - 1
    e = 0
    while s % 2 == 0:
        s //= 2
        e += 1

    # Find some 'n' with a legendre symbol n|p = -1.
    # Shouldn't take long.
    #
    n = 2
    while legendre_symbol(n, p) != -1:
        n += 1

    # Here be dragons!
    # Read the paper "Square roots from 1; 24, 51,
    # 10 to Dan Shanks" by Ezra Brown for more
    # information
    #
    # x is a guess of the square root that gets better
    # with each iteration.
    # b is the "fudge factor" - by how much we're off
    # with the guess. The invariant x^2 = ab (mod p)
    # is maintained throughout the loop.
    # g is used for successive powers of n to update
    # both a and b
    # r is the exponent - decreases with each update
    #
    x = pow(a, (s + 1) // 2, p)
    b = pow(a, s, p)
    g = pow(n, s, p)
    r = e

    while True:
        t = b
        m = 0
        for m in range(r):
            if t == 1:
                break
            t = pow(t, 2, p)

        if m == 0:
            return x

        gs = pow(g, 2 ** (r - m - 1), p)
        g = (gs * gs) % p
        x = (x * gs) % p
        b = (b * g) % p
        r = m
def legendre_symbol(a, p):
    """ Compute the Legendre symbol a|p using
        Euler's criterion. p is a prime, a is
        relatively prime to p (if p divides
        a, then a|p = 0).

        Returns 1 if a has a square root modulo
        p, -1 otherwise.
    """
    ls = pow(a, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls
You can see some results on ideone
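As a quick sanity check against the question's example (assuming the functions above are defined; either square root of the discriminant may be found first, so the pair can come out in either order):

print(solve(61))  # expected: n ≡ 4 and n ≡ 56 (mod 61), in either order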
I was solving a coding question, and found out the following relation to find the number of possible arrangements:
one[1] = two[1] = three[1] = 1
one[i] = two[i-1] + three[i-1]
two[i] = one[i-1] + three[i-1]
three[i] = one[i-1] + two[i-1] + three[i-1]
I could have easily used a for loop to find out the values of the individual arrays till n, but the value of n is of the order 10^9, and I won't be able to iterate from 1 to such a huge number.
For every value of n, I need to output the value of (one[n] + two[n] + three[n]) % (10^9 + 7) in O(1) time.
Some results:
For n = 1, result = 3
For n = 2, result = 7
For n = 3, result = 17
For n = 4, result = 41
I was not able to find a general formula for the above after spending hours on it. Can someone help me out?
Edit:
n = 1, result(1) = 3
n = 2, result(2) = 7
n = 3, result(3) = result(2)*2 + result(1) = 17
n = 4, result(4) = result(3)*2 + result(2) = 41
So, result(n) = result(n-1)*2 + result(n-2) OR
T(n) = 2T(n-1) + T(n-2)
You can use a matrix to represent the recurrence relation. (I've renamed one, two, three to a, b, c).
(a[n+1])   ( 0 1 1 ) (a[n])
(b[n+1]) = ( 1 0 1 ) (b[n])
(c[n+1])   ( 1 1 1 ) (c[n])
With this representation, it's feasible to compute values for large n by matrix exponentiation (modulo your large number), using exponentiation by squaring. That'll give you the result in O(log n) time.
(a[n])   ( 0 1 1 )^(n-1) (1)
(b[n]) = ( 1 0 1 )       (1)
(c[n])   ( 1 1 1 )       (1)
Here's some Python that implements this all from scratch:
# compute a*b mod K where a and b are square matrices of the same size
def mmul(a, b, K):
    n = len(a)
    return [
        [sum(a[i][k] * b[k][j] for k in range(n)) % K for j in range(n)]
        for i in range(n)]

# compute a^n mod K where a is a square matrix
def mpow(a, n, K):
    if n == 0: return [[int(i == j) for i in range(len(a))] for j in range(len(a))]
    if n % 2: return mmul(mpow(a, n - 1, K), a, K)
    a2 = mpow(a, n // 2, K)
    return mmul(a2, a2, K)

M = [[0, 1, 1], [1, 0, 1], [1, 1, 1]]

def f(n):
    K = 10**9 + 7
    return sum(sum(row) for row in mpow(M, n - 1, K)) % K

print(f(1), f(2), f(3), f(4))
print(f(10 ** 9))
Output:
3 7 17 41
999999966
It runs effectively instantly, even for the n=10**9 case.
I have a sequence of digits, say 1,2,4,0, and I have to find the number of subsequences divisible by 6.
So the qualifying subsequences are 0, 12, 24, 120, 240, which means the answer is 5.
The problem is that the algorithm I devised requires O(2^n) time: it naively iterates through all the possibilities.
Is there some way to decrease the complexity?
Edit 1: multiple copies of a digit are allowed; for example, the input can be 1,2,1,4,3.
Edit 2: digits should stay in order, so in the above example 42, 420, etc. are not allowed.
Code (this code, however, is not able to take 120 into account):
#include <stdio.h>
#include <string.h>
#define m 1000000007

int main(void) {
    int t;
    scanf("%d", &t);
    while (t--) {
        char arr[100000];
        int r = 0, count = 0, i, j, k;
        scanf("%s", arr);
        int a[100000];
        for (i = 0; i < strlen(arr); i++) {
            a[i] = arr[i] - '0';
        }
        for (i = 0; i < strlen(arr); i++) {
            for (j = i; j < strlen(arr); j++) {
                if (a[i] == 0) {
                    count++;
                    goto label;
                }
                r = a[i] % 6;
                for (k = j + 1; k < strlen(arr); k++) {
                    r = (r * 10 + a[k]) % 6;
                    if (r == 0)
                        count++;
                }
            }
            label:;
            r = 0;
        }
        printf("%d\n", count);
    }
    return 0;
}
You can use dynamic programming.
As usual, when we decide to solve a problem using dynamic programming, we start by turning some input values into parameters, and maybe adding some other parameters.
The obvious candidate for a parameter is the length of the sequence.
Let our sequence be a[1], a[2], ..., a[N].
So, we search for the value f(n) (for n from 0 to N) which is the number of subsequences of a[1], a[2], ..., a[n] which, when read as numbers, are divisible by D=6.
Computing f(n) when we know f(n-1) does not look obvious yet, so we dig into details.
On closer look, the problem we now face is that adding a digit to the end of a number can turn a number divisible by D into a number not divisible by D, and vice versa.
Still, we know exactly how the remainder changes when we add a digit to the end of a number.
If we have a sequence p[1], p[2], ..., p[k] and know r, the remainder of the number p[1] p[2] ... p[k] modulo D, and then add p[k+1] to the sequence, the remainder s of the new number p[1] p[2] ... p[k] p[k+1] modulo D is easy to compute: s = (r * 10 + p[k+1]) mod D.
To take that into account, we can make the remainder modulo D our new parameter.
So, we now search for f(n,r) (for n from 0 to N and r from 0 to D-1) which is the number of subsequences of a[1], a[2], ..., a[n] which, when read as numbers, have the remainder r modulo D.
Now, knowing f(n,0), f(n,1), ..., f(n,D-1), we want to compute f(n+1,0), f(n+1,1), ..., f(n+1,D-1).
For each possible subsequence of a[1], a[2], ..., a[n], when we consider element number n+1, we either add a[n+1] to it, or omit a[n+1] and leave the subsequence unchanged.
This is easier to express by forward dynamic programming rather than a formula:
let f (n + 1, *) = 0
for r = 0, 1, ..., D - 1:
    add f (n, r) to f (n + 1, (r * 10 + a[n + 1]) mod D)   // add a[n + 1]
    add f (n, r) to f (n + 1, r)                           // omit a[n + 1]
The resulting f (n + 1, s) (which, depending on s, is a sum of one or more terms) is the number of subsequences of a[1], a[2], ..., a[n], a[n+1] which yield the remainder s modulo D.
The whole solution follows:
let f (0, *) = 0
let f (0, 0) = 1   // there is one empty sequence, and its remainder is 0
for n = 0, 1, ..., N - 1:
    let f (n + 1, *) = 0
    for r = 0, 1, ..., D - 1:
        add f (n, r) to f (n + 1, (r * 10 + a[n + 1]) mod D)   // add a[n + 1]
        add f (n, r) to f (n + 1, r)                           // omit a[n + 1]
answer = f (N, 0) - 1
We subtract one from the answer since an empty subsequence is not considered a number.
The time and memory requirements are O (N * D).
We can lower the memory to O (D) when we note that, at each given moment, we only need to store f (n, *) and f (n + 1, *), so the storage for f can be 2 * D instead of (N + 1) * D.
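Here is a minimal Python sketch of that O(D)-memory version (the function name count_divisible_subsequences is mine):

def count_divisible_subsequences(digits, D=6):
    f = [0] * D
    f[0] = 1                              # one empty subsequence, remainder 0
    for d in digits:
        g = f[:]                          # omit the current digit
        for r in range(D):
            g[(r * 10 + d) % D] += f[r]   # append the current digit
        f = g
    return f[0] - 1                       # drop the empty subsequence

print(count_divisible_subsequences([1, 2, 4, 0]))  # expected: 5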
An illustration with your example sequence:
-----------------------------------
 a[n]             1    2    4    0
 f(n,r)  n   0    1    2    3    4
      r
-----------------------------------
      0      1    1    2    3    6
      1      0    1    1    1    1
      2      0    0    1    2    4
      3      0    0    0    0    0
      4      0    0    0    2    5
      5      0    0    0    0    0
-----------------------------------
Exercise: how to get rid of numbers with leading zeroes with this solution?
Will we need another parameter?
The rule in a particular game is that a character's power is proportional to the triangular root of the character's experience. For example, 15-20 experience gives 5 power, 21-27 experience gives 6 power, 28-35 experience gives 7 power, etc. Some players are known to have achieved experience in the hundreds of billions.
I am trying to implement this game on an 8-bit machine that has only three arithmetic instructions: add, subtract, and divide by 2. For example, to multiply a number by 4, a program would add it to itself twice. General multiplication is much slower; I've written a software subroutine to do it using a quarter-square table.
I had considered calculating the triangular root T(p) through bisection search for the successive triangular numbers bounding an experience number from above and below. My plan was to use a recurrence identity for T(2*p) until it exceeds experience, then use that as the upper bound for a bisection search. But I'm having trouble finding an identity for T((x+y)/2) in the bisection that doesn't use either x*y or (x+y)^2.
Is there an efficient algorithm to calculate the triangular root of a number with just add, subtract, and halve? Or will I end up having to perform O(log n) multiplications, one to calculate each midpoint in the bisection search? Or would it be better to consider implementing long division to use Newton's method?
Definition of T(x):
T(x) = (x * (x + 1))/2
Identities that I derived:
T(2*x) = 4*T(x) - x
# e.g. T(5) = 15, T(10) = 4*15 - 5 = 55
T(x/2) = (T(x) + x/2)/4
# e.g. T(10) = 55, T(5) = (55 + 5)/4 = 15
T(x + y) = T(x) + T(y) + x*y
# e.g. T(3) = 6, T(7) = 28, T(10) = 6 + 28 + 21 = 55
T((x + y)/2) = (T(x) + T(y) + x*y + (x + y)/2)/4
# e.g. T(3) = 6, T(7) = 28, T(5) = (6 + 28 + 21 + 10/2)/4 = 15
Do bisection search, but make sure that y - x is always a power of two. (This does not increase the asymptotic running time.) Then T((x + y) / 2) = T(x + h) = T(x) + T(h) + x * h, where h = (y - x) / 2 is a power of two, so x * h is computable with a shift.
Here's a Python proof of concept (hastily written, more or less unoptimized but avoids expensive operations).
def tri(n):
    return ((n * (n + 1)) >> 1)

def triroot(t):
    y = 1
    ty = 1
    # Find a starting point for bisection search by doubling y using
    # the identity T(2*y) = 4*T(y) - y. Stop when T(y) exceeds t.
    # At the end, y = 2*x, tx = T(x), and ty = T(y).
    while (ty <= t):
        assert (ty == tri(y))
        tx = ty
        ty += ty
        ty += ty
        ty -= y
        x = y
        y += y
    # Now do bisection search on the interval [x .. x + h),
    # using these identities:
    #   T(x + h) = T(x) + T(h) + x*h
    #   T(h/2) = (T(h) + h/2)/4
    th = tx
    h = x
    x_times_h = ((tx + tx) - x)
    while True:
        assert (tx == tri(x))
        assert (x_times_h == (x * h))
        # Divide h by 2
        h >>= 1
        x_times_h >>= 1
        if (not h):
            break
        th += h
        th >>= 1
        th >>= 1
        # Calculate the midpoint of the search interval
        tz = ((tx + th) + x_times_h)
        z = (x + h)
        assert (tz == tri(z))
        # If the midpoint is below the target, move the lower bound
        # of the search interval up to the midpoint
        if (t >= tz):
            tx = tz
            x = z
            x_times_h += ((th + th) - h)
    return x

for q in range(1, 100):
    p = triroot(q)
    assert (tri(p) <= q < tri((p + 1)))
    print(q, p)
As observed in the linked page on math.stackexchange.com, there is a direct formula for the solution of this problem: if x = n*(n+1)/2, then the inverse is

n = (sqrt(1 + 8*x) - 1)/2

Now this involves a square root, among other things, but I would suggest using the direct formula with an implementation like the following:
tmp = x + x;       '2*x
tmp += tmp;        '4*x
tmp += tmp + 1;    '8*x + 1
n = 0;
n2 = 0;
while (n2 <= tmp) {
    n2 += n + n + 1;    'remember that (n+1)^2 - n^2 = 2*n + 1
    n++;
}
'here after the loop n = floor(sqrt(8*x+1)) + 1
n -= 2;    'floor(sqrt(8*x+1)) - 1
n /= 2;    '(floor(sqrt(8*x+1)) - 1) / 2
Of course this can be improved for better performance if needed. For example, when 8*x + 1 is a perfect square its square root is odd, so the loop ends with an even n, and n could then be incremented in steps of 2 (rewriting the n2 update accordingly: n2 += n + n + n + n + 4, since (n+2)^2 - n^2 = 4*n + 4). Some care is needed when 8*x + 1 is not a perfect square, though, since a step-2 loop can overshoot the floor of the square root by one.
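For reference, here is a minimal Python transcription of the sketch above (plain one-step version; the function name triroot_direct is mine):

def triroot_direct(x):
    # n = (floor(sqrt(8*x + 1)) - 1) / 2, using only add, subtract and halve
    tmp = x + x         # 2*x
    tmp += tmp          # 4*x
    tmp += tmp + 1      # 8*x + 1
    n = 0
    n2 = 0              # n2 tracks n^2 via (n+1)^2 - n^2 = 2*n + 1
    while n2 <= tmp:
        n2 += n + n + 1
        n += 1
    # here n = floor(sqrt(8*x + 1)) + 1
    n -= 2              # floor(sqrt(8*x + 1)) - 1
    n >>= 1             # halve
    return n

for q in range(1, 100):
    n = triroot_direct(q)
    assert n * (n + 1) // 2 <= q < (n + 1) * (n + 2) // 2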