Finding the number at a specific index of spirally filled matrix - algorithm

The problem is as follows:
We have 3 integer values as input: n, i, j
Suppose we have an n × n matrix filled with the numbers from 1 to n² in a clockwise spiral. Find the number at index (i, j).
I know how to construct such a matrix, so I could solve this by filling the matrix and reading off index (i, j), but I consider that a bit too "brute force". I believe there should be some mathematical relationship between n, the indices i, j, and the number sitting in that cell. I've tried some approaches but couldn't find a way to do it. Any suggestions to help me in my attempt?
Edit: an example 5x5 matrix:
1 2 3 4 5
16 17 18 19 6
15 24 25 20 7
14 23 22 21 8
13 12 11 10 9

You can do it with some basic arithmetic (assuming 0-based indexing, i stands for the row and j for the column):
Find the ring the number is in: r = min(i, j, n - i - 1, n - j - 1). This counts rings from the outside in. If we count from the inside out (which will come in handy later), we get q = (n - 1) / 2 - r for odd n, or q = (n - 2) / 2 - r for even n. Generalized: q = (n - 2 + n % 2) / 2 - r, which is the same as q = (n - 1) / 2 - r under integer division (as mentioned by @Stef).
It is not hard to see that the number of elements covered by a ring (including everything inside it), going from the innermost ring outwards, is 1^2, 3^2, 5^2, ... if n is odd and 2^2, 4^2, 6^2, ... if n is even. So the side length of the square covered by ring q is (in generalized form) m = q * 2 + 2 - n % 2, and the element in the upper-left corner of the ring is b = n^2 - m^2 + 1.
Get the number:
If r == i: b + j - r (element is on top side).
If r == n - j - 1: b + m - 1 + i - r (element is on the right side)
If r == n - i - 1: b + 2 * m - 2 + n - j - 1 - r (element is on the bottom)
Otherwise (r == j): b + 3 * m - 3 + n - i - 1 - r (element is on the left side)
This is O(1). The formulas can be simplified, but then the explanation would be harder to understand.
Example: n = 5, i = 3, j = 2
r = min(3, 2, 1, 2) = 1, q = (3 + 1) / 2 - 1 = 1
m = 2 + 2 - 1 = 3, b = 25 - 9 + 1 = 17
Third condition applies: 17 + 6 - 2 + 5 - 2 - 1 - 1 = 22
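Putting the steps together, a small Python sketch of the O(1) lookup (the function name is mine):

```python
def spiral_value(n, i, j):
    # Ring containing (i, j), counted from the outside in.
    r = min(i, j, n - i - 1, n - j - 1)
    # Same ring counted from the inside out (integer division).
    q = (n - 1) // 2 - r
    # Side length of the square covered by ring q.
    m = q * 2 + 2 - n % 2
    # Value in the upper-left corner of the ring.
    b = n * n - m * m + 1
    if r == i:                      # top side
        return b + j - r
    if r == n - j - 1:              # right side
        return b + m - 1 + i - r
    if r == n - i - 1:              # bottom side
        return b + 2 * m - 2 + (n - j - 1) - r
    return b + 3 * m - 3 + (n - i - 1) - r  # left side (r == j)

# spiral_value(5, 3, 2) -> 22, matching the 5x5 example above
```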


O(log n) solution to 1a + 2a^2 + 3a^3 + ... + na^n

The task is to find the sum of the series given n and a. For the series 1a + 2a^2 + 3a^3 + ... + na^n, we can find the n-th element of the sequence with the following formula (from observation):
n-th element = a^n * (n-(n-2)/n-(n-1)) * (n-(n-3)/n-(n-2)) * ... * (n/(n-1))
I think it's impossible to turn this into a closed-form sum by modifying the above formula. Even if it were possible, I assume it would still involve the exponent n, which would introduce an n-step loop and so the solution would not be O(log n). The best I can do is find the ratio between consecutive elements, which is a(n+1)/n, and apply it to the n-th element to find the (n+1)-th element.
I think that I may be missing something. Could someone provide me with solution(s)?
You can solve this problem, and lots of problems like it, with matrix exponentiation.
Let's start with this sequence:
A[n] = a + a^2 + a^3 ... + a^n
That sequence can be generated with a simple formula:
A[i] = a*(A[i-1] + 1)
Now if we consider your sequence:
B[n] = a + 2a^2 + 3a^3 ... + na^n
We can generate that with a formula that makes use of the previous one:
B[i] = (B[i-1] + A[i-1] + 1) * a
If we make a sequence of vectors containing all the components we need:
V[n] = (B[n], A[n], 1)
Then we can construct a matrix M so that:
V[i] = M*V[i-1]
And so:
V[n] = (M^(n-1))V[1]
Since the size of the matrix is fixed at 3x3, you can use exponentiation by squaring on the matrix itself to calculate M^(n-1) in O(log n) time, and the final multiplication takes constant time.
Here's an implementation in Python with NumPy (so I don't have to include matrix-multiply code):
import numpy as np

def getSum(a, n):
    # A[n] = a + a^2 + a^3 + ... + a^n
    # B[n] = a + 2a^2 + 3a^3 + ... + na^n
    # V[n] = [B[n], A[n], 1]
    M = np.array([
        [a, a, a],  # B[i] = B[i-1]*a + A[i-1]*a + a
        [0, a, a],  # A[i] = A[i-1]*a + a
        [0, 0, 1],
    ])

    # calculate MsupN = M^(n-1) by exponentiation by squaring
    n -= 1
    MsupN = np.identity(3, dtype=int)
    while n > 0:
        if n % 2 > 0:
            MsupN = MsupN @ M
            n -= 1
        M = M @ M
        n //= 2

    # calculate V[n] = MsupN * V[1], with V[1] = [a, a, 1]
    Vn = MsupN @ np.array([a, a, 1])
    return int(Vn[0])
I assume a, n are nonnegative integers. The explicit formula for a > 1 is
a * (n * a^{n + 1} - (n + 1) * a^n + 1) / (a - 1)^2
It can be evaluated efficiently in O(log(n)) using
square and multiply for a^n.
To derive the formula, you could use the following ingredients:
explicit formula for geometric series
You have to notice that this polynomial looks almost like a derivative of a geometric series
Gaussian sum formula for the special case a = 1.
Now you can simply calculate:
sum_{i = 1}^n i * a^i // [0] ugly sum
= a * sum_{i = 1}^n i * a^{i-1} // [1] linearity
= a * d/da (sum_{i = 1}^n a^i) // [2] antiderivative
= a * d/da (sum_{i = 0}^n a^i - 1) // [3] + 1 - 1
= a * d/da ((a^{n + 1} - 1) / (a - 1) - 1) // [4] geom. series
= a * ((n + 1)*a^n / (a - 1) - (a^{n+1} - 1)/(a - 1)^2) // [5] derivative
= a * (n * a^{n + 1} - (n + 1)a^n + 1) / (a - 1)^2 // [6] explicit formula
This is just a simple arithmetic expression with a^n, which can be evaluated in O(log(n)) time using square-and-multiply.
This doesn't work for a = 0 or a = 1, so you have to treat those cases specially: for a = 0 you just return 0 immediately, for a = 1, you return n * (n + 1) / 2.
Scala snippet to test the formula:
def fast(a: Int, n: Int): Int = {
  def pow(a: Int, n: Int): Int =
    if (n == 0) 1
    else if (n == 1) a
    else {
      val r = pow(a, n / 2)
      if (n % 2 == 0) r * r else r * r * a
    }
  if (a == 0) 0
  else if (a == 1) n * (n + 1) / 2
  else {
    val aPowN = pow(a, n)
    val d = a - 1
    a * (n * aPowN * a - (n + 1) * aPowN + 1) / (d * d)
  }
}
Slower, but simpler version, for comparison:
def slow(a: Int, n: Int): Int = {
  def slowPow(a: Int, n: Int): Int = if (n == 0) 1 else slowPow(a, n - 1) * a
  (1 to n).map(i => i * slowPow(a, i)).sum
}
Comparison:
for (a <- 0 to 5; n <- 0 to 5) {
  println(s"${slow(a, n)} <-> ${fast(a, n)}")
}
Output:
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
0 <-> 0
1 <-> 1
3 <-> 3
6 <-> 6
10 <-> 10
15 <-> 15
0 <-> 0
2 <-> 2
10 <-> 10
34 <-> 34
98 <-> 98
258 <-> 258
0 <-> 0
3 <-> 3
21 <-> 21
102 <-> 102
426 <-> 426
1641 <-> 1641
0 <-> 0
4 <-> 4
36 <-> 36
228 <-> 228
1252 <-> 1252
6372 <-> 6372
0 <-> 0
5 <-> 5
55 <-> 55
430 <-> 430
2930 <-> 2930
18555 <-> 18555
So, yes, the O(log(n)) formula gives the same numbers as the O(n^2) formula.
a^n can be indeed computed in O(log n).
The method is called Exponentiation by squaring and the main idea is that if you know a^n you also know a^(2*n) which is just a^n * a^n.
So if you want to compute a^n (if n is even) you can just compute a^(n/2) and multiply the result with itself: a^n = a^(n/2) * a^(n/2). So instead of having a loop up to n, now you only have a loop up to n/2. But n/2 is just another number, and can be computed the same way, thus doing only half the operations. Halving the number of operations each time leads to the logarithmic complexity.
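As a sketch, the iterative form of exponentiation by squaring in Python (the function name is mine):

```python
def fast_pow(a, n):
    """Compute a**n with O(log n) multiplications."""
    result = 1
    while n > 0:
        if n % 2 == 1:   # odd exponent: peel off one factor of a
            result *= a
        a *= a           # square the base...
        n //= 2          # ...and halve the exponent
    return result

# fast_pow(2, 10) -> 1024
```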
As mentioned by @Sopel in the comments, the series can be written as a simple closed-form function:

           a * (n * a^(n+1) - (n+1) * a^n + 1)
  f(a,n) = ------------------------------------
                       (a - 1)^2

So to find the answer you only have to evaluate the above formula, using the fast exponentiation described above for a^n, which gives O(log n) complexity overall.

Proving by induction that a function gets called n-1 times

This is the pseudo-code from the problem:
Procedure Foo(A, f, L), precondition:
A[f..L] is an array of integers
f, L are two naturals >= 1 with f <= L.
Code:
procedure Foo(A, f, L)
    if (f = L) then
        return A[f]
    else
        m <- floor((f + L) / 2)
        return min(Foo(A, f, m), Foo(A, m+1, L))
    end if
The Question:
Using induction, argue that Foo invokes min at most n-1 times.
I am a little lost on how to continue my proof for part (iii). I have the claim written out as well as the base case, which I believe to be n >= 2. But how do I handle the k + 1 case, since this is a proof by induction?
Thanks
We will proceed by induction on n = L - f + 1.
Base case: when n = 1, f = L and we immediately return A[f], calling min zero times; and indeed n - 1 = 1 - 1 = 0.
Induction hypothesis: assume that the claim is true for all n up to and including k - 1.
Induction step: we must show the claim is true for k. Since L > f we execute the second branch, which calls min once and invokes Foo on subarrays of sizes floor(k/2) and ceiling(k/2).
If k is even, then floor(k/2) = ceiling(k/2) = k/2. Both subarrays are smaller than k, so by the hypothesis each recursive call invokes min exactly k/2 - 1 times. But 2(k/2 - 1) + 1 = k - 2 + 1 = k - 1, so the number of invocations for n = k is k - 1.
If k is odd, then floor(k/2) = ceiling(k/2) - 1. For k > 1 both subarrays are smaller than k, so the recursive calls invoke min exactly floor(k/2) - 1 and ceiling(k/2) - 1 times, respectively. Since floor(k/2) + ceiling(k/2) = k, the total is (floor(k/2) - 1) + (ceiling(k/2) - 1) + 1 = k - 1, as required.
Since k is either even or odd, and in both cases min is invoked exactly k - 1 times, it is in particular invoked at most n - 1 times. This completes the proof.
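To see the claim concretely, here is a Python transcription of Foo (0-based indices) with a counter tracking the min invocations; the names are mine:

```python
def foo(A, f, L, counter):
    """Recursive minimum of A[f..L] (inclusive); counter[0] counts min calls."""
    if f == L:
        return A[f]
    m = (f + L) // 2
    counter[0] += 1
    return min(foo(A, f, m, counter), foo(A, m + 1, L, counter))

A = [5, 3, 8, 1, 9, 2, 7]
counter = [0]
assert foo(A, 0, len(A) - 1, counter) == 1
assert counter[0] == len(A) - 1  # min invoked exactly n - 1 times
```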

Computing all infix products for a monoid / semigroup

Introduction: Infix products for a group
Suppose I have a group
G = (G, *)
and a list of elements
A = {0, 1, ..., n} ⊂ ℕ
x : A -> G
If our goal is to implement a function
f : A × A -> G
such that
f(i, j) = x(i) * x(i+1) * ... * x(j)
(and we don't care about what happens if i > j)
then we can do that by pre-computing a table of prefixes
m(-1) = 1
m(i) = m(i-1) * x(i)
(with 1 on the right-hand side denoting the unit of G) and then implementing f as
f(i, j) = m(i-1)⁻¹ * m(j)
This works because
m(i-1) = x(0) * x(1) * ... * x(i-1)
m(j) = x(0) * x(1) * ... * x(i-1) * x(i) * x(i+1) * ... * x(j)
and so
m(i-1)⁻¹ * m(j) = x(i) * x(i+1) * ... * x(j)
after sufficient reassociation.
My question
Can we rescue this idea, or do something not much worse, if G is only a monoid, not a group?
For my particular problem, can we do something similar if G = ([0, 1] ⊂ ℝ, *), i.e. we have real numbers from the unit line, and we can't divide by 0?
Yes, if G is ([0, 1] ⊂ ℝ, *), then the idea can be rescued, making it possible to compute ranged products in O(log n) time (or more accurately, O(log z) where z is the number of a in A with x(a) = 0).
For each i, compute the product m(i) = x(0)*x(1)*...*x(i), ignoring any zeros (so these products will always be non-zero). Also, build a sorted array Z of indices for all the zero elements.
Then the product of elements from i to j is 0 if there's a zero in the range [i, j], and m(j) / m(i-1) otherwise.
To find if there's a zero in the range [i, j], one can binary search in Z for the smallest value >= i in Z, and compare it to j. This is where the extra O(log n) time cost appears.
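A small Python sketch of this idea (the function names are mine):

```python
from bisect import bisect_left

def preprocess(xs):
    """Prefix products that skip zeros, plus the sorted zero positions."""
    prefix = [1.0]
    zeros = []
    for idx, v in enumerate(xs):
        if v == 0:
            zeros.append(idx)
            prefix.append(prefix[-1])   # leave the zero out of the product
        else:
            prefix.append(prefix[-1] * v)
    return prefix, zeros

def range_product(prefix, zeros, i, j):
    """Product xs[i] * ... * xs[j], inclusive."""
    k = bisect_left(zeros, i)           # smallest zero position >= i
    if k < len(zeros) and zeros[k] <= j:
        return 0.0
    return prefix[j + 1] / prefix[i]
```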
General monoid solution
In the case where G is any monoid, it's possible to precompute n products so that an arbitrary range product can be computed in O(log(j-i)) time, although it's a bit fiddlier than the more specific case above.
Rather than precomputing prefix products, compute m(i, j) for all i, j where j - i + 1 = 2^k for some k >= 0 and 2^k divides i (so the blocks are aligned). In fact, for k = 0 we don't need to compute anything, since m(i, i) is simply x(i).
So we need to compute n/2 + n/4 + n/8 + ... total products, which is at most n-1 things.
One can construct an arbitrary interval [i, j] from O(log_2(j-i+1)) of these building blocks (plus elements of the original array): pick the largest building block contained in the interval and append decreasing-sized blocks on either side until you reach [i, j], then multiply the precomputed products m(x, y) for each of the building blocks in order.
For example, suppose your array is of size 10. For example's sake, I'll assume the monoid is addition of natural numbers.
i: 0 1 2 3 4 5 6 7 8 9
x: 1 3 2 4 2 3 0 8 2 1
2: ---- ---- ---- ---- ----
4 6 5 8 3
4: ----------- ----------
10 13
8: ----------------------
23
Here, the 2, 4, and 8 rows show sums of aligned intervals of length 2, 4, 8 (ignoring bits left over if the array isn't a power of 2 in length).
Now, suppose we want to calculate x(1) + x(2) + x(3) + ... + x(8).
That's x(1) + m(2, 3) + m(4, 7) + x(8) = 3 + 6 + 13 + 2 = 24.
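A Python sketch of the general monoid scheme (padding to a power of two for simplicity; the names are mine). The query walks up the levels, picking up at most two blocks per level, so it stays O(log(j-i+1)) and preserves operand order for non-commutative monoids:

```python
from operator import add

def build_levels(xs, op, identity):
    """levels[k][t] holds the fold of xs[t*2^k : (t+1)*2^k]."""
    n = 1
    while n < len(xs):
        n *= 2
    levels = [list(xs) + [identity] * (n - len(xs))]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([op(prev[2 * t], prev[2 * t + 1])
                       for t in range(len(prev) // 2)])
    return levels

def range_fold(levels, op, identity, i, j):
    """Fold xs[i..j] (inclusive) from O(log) precomputed blocks."""
    lo, hi = i, j + 1                  # half-open interval at level 0
    left, right = identity, identity
    k = 0
    while lo < hi:
        if lo % 2 == 1:                # left fragment needed at this level
            left = op(left, levels[k][lo])
            lo += 1
        if hi % 2 == 1:                # right fragment needed at this level
            hi -= 1
            right = op(levels[k][hi], right)
        lo //= 2
        hi //= 2
        k += 1
    return op(left, right)

# With addition as the monoid, as in the example above:
levels = build_levels([1, 3, 2, 4, 2, 3, 0, 8, 2, 1], add, 0)
# range_fold(levels, add, 0, 1, 8) -> 24
```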

How to generate the continuous sequence which sum up equal to N [closed]

Given a number N, generate an arithmetic progression with common difference 1 such that summing finitely many consecutive terms gives N.
For example:
N=10
1 + 2 + 3 + 4 =10
N=20
2+3+4+5+6 = 20
N=30
4+5+6+7+8 = 30
N < 1000000
Start with sum = 0.
Let 1 be the current number.
Add the current number to the sum.
If sum > N, subtract numbers from the first number added to the sum until sum <= N.
Stop if sum = N (success).
Increase the current number.
Continue from step 3.
You'll just need to remember the first number added to the sum for step 4, increasing it by one each time you subtract it from the sum (thanks Niko).
As an optimization, you can also use a formula (n(n+1)/2) to add numbers in batch rather than adding them one by one (in case N is large).
Example:
N = 30
Sum = 0
Add 1 -> 1
Add 2 -> 3
Add 3 -> 6
Add 4 -> 10
Add 5 -> 15
Add 6 -> 21
Add 7 -> 28
Add 8 -> 36
36 > 30, so:
Subtract 1 -> 35
Subtract 2 -> 33
Subtract 3 -> 30
Done.
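The steps above amount to a sliding window; a minimal Python version (the function name is mine):

```python
def find_run(N):
    """Return (lo, hi) with lo + (lo+1) + ... + hi == N, for N >= 1."""
    lo = hi = s = 1            # window [lo, hi] with sum s
    while s != N:
        if s < N:
            hi += 1            # grow the window on the right
            s += hi
        else:
            s -= lo            # shrink it from the left
            lo += 1
    return lo, hi

# find_run(20) -> (2, 6), i.e. 2 + 3 + 4 + 5 + 6 = 20
```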
Let T be the number
So N(N+1)/2 = T in your first case where N=4
N(N+1)/2 - K(K+1)/2 = T in your second case where N=6 & K=1
N(N+1)/2 - K(K+1)/2 = T in your third case where N=8 & K=3
So you solve for N basically that is by multiplying & reducing you get
N^2 + N - (2T + K^2 + K) = 0
Applying quadratic formula for N that is
N= (-b + sqrt(b^2 - 4ac))/2a
So we get,
N = (-1 +- sqrt(1 + 8T + 4K^2 + 4K))/2
N has to be positive so we can remove the negative case
Therefore N has to be equal to
N = (sqrt(8T + (2k+1)^2) - 1)/2
You can iterate from K=0 till you get a natural number N which will be your answer
Hope it helps. I am trying to find a better way as I work through this (I appreciate the interesting problem).
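A quick Python sketch of that iteration (the function name is mine). Since 8T + (2K+1)^2 is odd, a perfect-square discriminant always has an odd root, so (root - 1)/2 is a natural number:

```python
import math

def find_sequence(T):
    """Search K = 0, 1, 2, ... until N = (sqrt(8T + (2K+1)^2) - 1) / 2
    is a natural number; the run is then K+1, K+2, ..., N."""
    for K in range(T):
        disc = 8 * T + (2 * K + 1) ** 2
        root = math.isqrt(disc)
        if root * root == disc:     # perfect square; root is odd here
            N = (root - 1) // 2
            return K + 1, N

# find_sequence(30) -> (4, 8), i.e. 4 + 5 + 6 + 7 + 8 = 30
```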
Let N = pq where p is an odd positive integer and q is any positive integer.
(1) You can write N as sum of p consecutive integers, with q as the middle value.
(2) And if both p and q are odd (say, q = 2k+1), you can also write N as sum of 2p consecutive integers, with k and k+1 in the middle.
For example, let N = 15 = 5 x 3.
If we choose p=5, then following rule (1) we have 1+2+3+4+5 = 15.
Or by rule (2) we could also write (-3)+(-2)+(-1)+0+1+2+3+4+5+6 = 15.
We can also choose p = 3 to get 4+5+6 = 15 and 0+1+2+3+4+5 = 15 too.
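Rule (1) in Python (the function name is mine):

```python
def run_for_odd_divisor(N, p):
    """N = p*q with p odd: the p consecutive integers centred on q sum to N."""
    q = N // p
    return list(range(q - p // 2, q + p // 2 + 1))

# run_for_odd_divisor(15, 5) -> [1, 2, 3, 4, 5]
# run_for_odd_divisor(15, 3) -> [4, 5, 6]
```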
int NumSum(int val)
{
    int n = 0, i = 0, j;
    while (n != val)
    {
        n = 0;
        j = ++i;
        while (n < val)
            n += j++;
    }
    return i;
}
No fancy maths, just the easy way of doing it.. Returns number to start counting from.
This is more of a trick method, but I think it might work.
Say the number is 10. Start the sequence from n/2, i.e. 5.
The sequence cannot be 5 + 6, since 11 > 10, so we have to work backwards. Also, 5 is the upper limit of the numbers we need to consider, since sequences like 6 + 7 would exceed 10; so the last (highest) number of the sequence is 5.
Moving backwards: 5 + 4 = 9 < 10.
5 + 4 + 3 = 12 > 10, so remove the first element, rather like a queue.
So for 20 we have
start = 20/2 = 10
10 + 9 = 19 -> do nothing
10 + 9 + 8 = 27 -> remove first element that is 10
9 + 8 + 7 = 24 -> remove 9
8 + 7 + 6 = 21 -> remove 8
7 + 6 + 5 = 18 -> do nothing
7 + 6 + 5 + 4 = 22 -> remove 7
6 + 5 + 4 + 3 = 18 -> do nothing
6 + 5 + 4 + 3 + 2 = 20 -> Answer we need
I guess this is a variation to the accepted answer but still thought i could add this as an alternative solution.
First, every odd number n is the sum of an AP of length 2, because n = floor(n/2) + ceil(n/2).
More interestingly, a number with an odd divisor d is a sum of an AP length d (with difference 1) around n/d. For instance, 30 is divisible by 5, so is the sum of an AP around 6: 30 = 4 + 5 + 6 + 7 + 8.
A number with no odd divisors is a power of 2. While 1 = 0 + 1 and 2 = (-1) + 0 + 1 + 2, larger powers are not the sum of any non-trivial arithmetic progression of positive integers. Why? Suppose 2^m = a + (a+1) + ... + (a+k-1) with a >= 1 and k >= 2. The sum of the series is k(2a + k - 1)/2. Exactly one of k and 2a + k - 1 is odd (their parities differ), and that one is greater than 1, giving 2^m an odd divisor greater than 1: a contradiction.

Adjacent Bit Counts

Here is the problem from SPOJ. It states:
For a string of n bits x1, x2, x3, ..., xn, the adjacent bit count of the string, AdjBC(x), is given by
x1*x2 + x2*x3 + x3*x4 + ... + x(n-1)*xn
which counts the number of times a 1 bit is adjacent to another 1 bit. For example:
AdjBC(011101101) = 3
AdjBC(111101101) = 4
AdjBC(010101010) = 0
And the question is: write a program which takes as input integers n and k and returns the number of bit strings x of n bits (out of 2ⁿ) that satisfy AdjBC(x) = k.
I have no idea how to solve this problem. Can you help me to solve this ?
Thanks
Often in combinatorial problems, it helps to look at the set of values it produces. Using brute force I calculated the following table:
k 0 1 2 3 4 5 6
n +----------------------------
1 | 2 0 0 0 0 0 0
2 | 3 1 0 0 0 0 0
3 | 5 2 1 0 0 0 0
4 | 8 5 2 1 0 0 0
5 | 13 10 6 2 1 0 0
6 | 21 20 13 7 2 1 0
7 | 34 38 29 16 8 2 1
The first column is the familiar Fibonacci sequence, and satisfies the recurrence relation f(n, 0) = f(n-1, 0) + f(n-2, 0)
The other columns satisfy the recurrence relation f(n, k) = f(n - 1, k) + f(n - 1, k - 1) + f(n - 2, k) - f(n - 2, k - 1)
With this, you can do some dynamic programming:
INPUT: n, k
row1 <- [2,0,0,0,...] (k+1 elements)
row2 <- [3,1,0,0,...] (k+1 elements)
repeat (n-2) times
for j = k downto 1 do
row1[j] <- row2[j] + row2[j-1] + row1[j] - row1[j-1]
row1[0] <- row1[0] + row2[0]
swap row1 and row2
return row2[k]
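The same DP in Python (the function name is mine); the inner loop runs from high k to low so that the in-place update still reads the old f(n-2, ·) values:

```python
def adj_bc(n, k):
    """Number of n-bit strings x with AdjBC(x) = k."""
    row1 = [2] + [0] * k                  # f(1, .)
    row2 = ([3, 1] + [0] * k)[:k + 1]     # f(2, .)
    if n == 1:
        return row1[k]
    for _ in range(n - 2):
        # overwrite row1 with the next row, highest index first
        for j in range(k, 0, -1):
            row1[j] = row2[j] + row2[j - 1] + row1[j] - row1[j - 1]
        row1[0] += row2[0]
        row1, row2 = row2, row1
    return row2[k]
```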
As a hint you can split it up into two cases: numbers ending in 0 and numbers ending in 1.
def f(n, k):
    return f_ending_in_0(n, k) + f_ending_in_1(n, k)

def f_ending_in_0(n, k):
    if n == 1: return k == 0
    return f(n - 1, k)

def f_ending_in_1(n, k):
    if n == 1: return k == 0
    return f_ending_in_0(n - 1, k) + f_ending_in_1(n - 1, k - 1)
This gives the correct output but takes a long time to execute. You can apply standard dynamic programming or memoization techniques to get this to perform fast enough.
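For instance, memoizing the two helpers with functools.lru_cache (a standard technique, not part of the original answer) is already enough:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f_ending_in_0(n, k):
    if n == 1:
        return 1 if k == 0 else 0
    return f(n - 1, k)

@lru_cache(maxsize=None)
def f_ending_in_1(n, k):
    if n == 1:
        return 1 if k == 0 else 0
    if k < 0:
        return 0        # can't have a negative adjacency count
    return f_ending_in_0(n - 1, k) + f_ending_in_1(n - 1, k - 1)

def f(n, k):
    return f_ending_in_0(n, k) + f_ending_in_1(n, k)
```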
I am late to the party, but I have a linear time complexity solution.
For me this is more of a mathematical problem. You can read the detailed solution in a blog post written by me; what follows is a brief outline. I wish I could include some LaTeX, but SO doesn't allow that.
Suppose for given n and k, our answer is given by the function f(n,k). Using Beggar's Method, we can arrive at the following formula
f(n,k) = SUM C(k+a-1,a-1)*C(n-k+1-a,a), where a runs from 1 to (n-k+1)/2
Here C(p,q) denotes binomial coefficients.
So to get our answer, we have to calculate both binomial coefficients for each value of a. We can precompute the binomial table beforehand, but building the table makes this approach O(n^2).
We can improve the time complexity by using the recursion formula C(p,q) = (p * C(p-1,q-1))/q to calculate the current value of binomial coefficients from their values in previous loop.
Our final code looks like this:
long long x = n - k, y = 1, p = n - k + 1, ans = 0;
ans += x * y;
for (int a = 2; a <= p / 2; a++)
{
    x = (x * (p - 2*a + 1) * (p - 2*a + 2)) / (a * (p - a + 1));
    y = (y * (k + a - 1)) / (a - 1);
    ans += x * y;
}
You can find the complete accepted solution in my GitHub repository.
