There are multiple queries of the form
Q(n,m) = (nC1 * mC1) + (nC2 * mC2) + (nC3 * mC3) + ... + (nCk * mCk), where
k = min(n,m)
How can I find the value of Q(n,m) in O(1) time per query?
I tried pre-computing an ncr[N][N] matrix and a dp[N][N][N] table where dp[n][m][min(n,m)] = Q(n,m).
This pre-computation takes O(N^3) time, and queries can then be answered in O(1) time. But I'm looking for an approach whose pre-computation takes no more than O(N^2) time.
The solution for the sum starting from C(n,0) * C(m,0) is pretty simple (Vandermonde's identity):
Q0(n,m) = C(n+m, m)
So for your formulation, just subtract the k = 0 term, which equals 1:
Q(n,m) = C(n+m, m) - 1
Example: n = 9, m = 5.
The dot product of the 9th and 5th rows of Pascal's triangle,
1 9 36 84 126 126 84 36 9 1
1 5 10 10 5 1
taken over the first min(n,m) + 1 = 6 terms, is
1 + 45 + 360 + 840 + 630 + 126 = 2002 = C(14,5)
It can be proved by mathematical induction starting from Q(n,1), but the expressions are rather long.
I have discovered a truly marvelous demonstration of this proposition that this margin is too narrow to contain © Fermat ;)
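For completeness, here is a minimal sketch of the resulting O(1)-per-query solution in Python, using math.comb (available since Python 3.8); if answers are needed modulo a prime, precompute factorials in O(N) instead:

import math

def Q(n, m):
    # Vandermonde: sum over k >= 1 of C(n,k) * C(m,k) = C(n+m, m) - 1
    return math.comb(n + m, m) - 1

print(Q(9, 5))  # 2002 - 1 = 2001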
Examples of perfect square numbers are 1, 4, 9, 16, 25, ...
How do we compute all the perfect square numbers up to a very large number like 10^20? Up to 10^20 there are 10^10 perfect square numbers.
So far, this is what I have tried:
Brute force: calculate x**2 for x in the range 1 to 10^10. Since my system can only handle on the order of 10^6 such operations, this didn't work.
Two-pointer approach: I took upper and lower bounds:
Upper bound: 10^20
Lower bound: 1
I then took two pointers, one at the start and one at the end. The next perfect square after the lower bound is
lower bound + (sqrt(lower bound) * 2 + 1)
Example: for 4, the next perfect square is
4 + (sqrt(4) * 2 + 1) = 9
In the same way, the upper bound decreases by
upper bound - (sqrt(upper bound) * 2 - 1)
Example: for 25, the previous perfect square is
25 - (sqrt(25) * 2 - 1) = 16
Both of the above approaches didn't work well because the upper bound, 10^20, is a very large number.
How can we efficiently compute all the perfect squares up to 10^20 in less time?
It's easy to note the difference between consecutive perfect squares:

0   1   4   9   16   25 ...
|___|___|___|____|____|
  1   3   5    7    9
So we have:
answer = 0;
for (i = 1; answer <= 10^20; i = i + 2) {
    answer = answer + i;
    print(answer);
}
Since you want all the perfect squares up to x, the time complexity is O(sqrt(x)), which may be slow for x = 10^20, whose square root is 10^10.
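The same idea as a runnable Python sketch (a small generator; the names are illustrative):

def squares_up_to(limit):
    # Each square differs from the previous one by the next odd number.
    square, odd = 0, 1
    while square <= limit:
        yield square
        square += odd
        odd += 2

print(list(squares_up_to(100)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100]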
I am trying to find an answer to the following problem:
T(n) = T(n - n^(1/q)), q > 2
T(c) = O(1), for a constant c
What I am interested in are recursive problems that do not branch and do not have any auxiliary calculations. Something similar to this problem. The difference is that the amount by which my problem is reduced also gets smaller in each recursive call. I've tried to solve this via an iterative expansion, leading to:
T(n) = T(n - n^(1/q))
     = T(n - n^(1/q) - (n - n^(1/q))^(1/q))
     = ...
for which I cannot really find a reasonable closed form. I therefore tried the substitution method for T(n) \in O(n) and T(n) \in O(log(n)), both of which hold, if I haven't made any mistake.
Given that, I can for now only assume that T(n) = T(n - n^(1/q)) \in O(log(n)) for q > 2, which seems reasonable, since it is very similar to binary search.
Sadly I haven't really covered such recursions, nor do I have a good idea of applications that follow such recursions.
Given all of that my questions are:
Can we reason about recursive problems which do not branch and do not have auxiliary calculations comparable to the statement above?
Are there applications of algorithms which have such behaviour? I guess it could appear when searching in a linear field, very similarly to binary search, but without cutting the field in half?
Similar problems are the following: T(n) = x + T(n-log(n)) and T(n) = T(n^(1/2)) + T(n-n^(1/2)) + n
Edit: Removed wrong expansion.
Perform a binomial expansion on the second term:
(n - n^(1/q))^(1/q) = n^(1/q) * (1 - n^(1/q - 1))^(1/q)
                    ≈ n^(1/q) * (1 - (1/q) * n^(1/q - 1))
                    = n^(1/q) - (1/q) * n^(2/q - 1)
This can be done because 1/q < 1, which in turn means n^(1/q - 1) << 1 for large values of n.
Substitute back into the recurrence:
T(n) = T(n - n^(1/q) - (n - n^(1/q))^(1/q))
     ≈ T(n - 2*n^(1/q) + (1/q) * n^(2/q - 1))
The exponent of the third term is 2/q - 1 = -(1 - 2/q), and 1 - 2/q > 0 because q > 2, so the third term quickly decays for large values of n, which means it can be ignored in the asymptotic limit:
T(n) ≈ T(n - 2*n^(1/q))
Using a similar technique, the next expansions would be:
T(n) ≈ T(n - 3*n^(1/q)) ≈ ... ≈ T(n - m*n^(1/q))
Therefore the recursion bottoms out after roughly m ≈ n / n^(1/q) = n^(1 - 1/q) expansions, i.e.:
T(n) \in Θ(n^(1 - 1/q))
Some numerical test results:
m\q |  3  4  5  6  7  8  9  10    (entries are T(10^m, q))
------------------------------------------------------------------------------------
1 | 8 10 10 10 10 10 10 10
2 | 38 54 65 81 100 100 100 100
3 | 164 272 389 486 563 627 755 1000
4 | 728 1447 2222 2994 3761 4554 5255 5511
5 | 3300 7835 13471 19497 25699 31681 36869 43686
6 | 15149 43199 82438 128804 177495 226212 275381 343686
7 | 69946 240337 511496 854128 1249628 1648028 2123037 2586014
8 | 323861 1343497 3193858 5735972 8792779 12228031 15705474 19268220
9 | 1501499 7529750 20024691 38712540 62366319 89586305 120308456 152184297
10 | 6965614 42264115 125845996 262058267 444972781 664753359 914166923 1188039787
Log-log plot of T(n) against n, for all values of q (figure omitted).
Linear log-log plots correspond to polynomial relationships, which agrees with the theoretical result of n^(1-1/q). The exponents are given by the values of the gradients:
q 1-1/q gradient
---------------------------
3 0.666666667 0.658929415
4 0.75 0.736456264
5 0.8 0.786448328
6 0.833333333 0.818586522
7 0.857142857 0.841039047
8 0.875 0.860906137
9 0.888888889 0.875722729
10 0.9 0.886571495
The gradients match the theoretical values reasonably well, but are on the low side. This may have been due to integer truncation in the code used for numerical tests.
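For reference, counts like those in the table can be reproduced with a short script. The following is a sketch under the stated assumptions (unit work per call, integer truncation), not necessarily the exact code used for the tests:

def T(n, q):
    # Count the recursive calls of T(n) = T(n - floor(n^(1/q))).
    count = 0
    while n > 1:
        n -= int(n ** (1.0 / q))  # integer truncation; always subtracts at least 1
        count += 1
    return count

print(T(10**4, 3))  # compare with the tabulated value 728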
I recently got into the book "Programming Challenges" by Skiena and Revilla and was somewhat surprised when I saw the solution to the 3n+1 problem, which was simply brute-forced. Basically, the algorithm generates a list of numbers, dividing by 2 if even and multiplying by 3 and adding 1 if odd, until the base case n = 1 is reached. The trick is to find the maximum list length between integers i and j, which in the problem both range between 1 and 1,000,000. So I was wondering how much more efficient (if at all) a program would be with dynamic programming. Basically, the program would do one pass on the first number, i, find the total length, and then store the length associated with each individual number within the sequence in a HashMap or other dictionary data type.
For Example:
Let's say i = 22 and j = 23
For 22:
22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
This means that the dictionary would store
(22,16), (11,15), (34,14), and so on... down to (1,1)
Now for 23:
23 70 35 106 53 160 80 40 ...
Since 40 was hit, and it is already in the dictionary, the
program would take the length from 23 up to 80, which is 7, and add it to the length stored previously for 40, which is 9, giving a total list length of 16. And of course the program would also store the lengths of 23, 70, 35, etc., so that bigger inputs compute faster.
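In code, my idea would look something like this rough Python sketch (the names are mine, just for illustration):

lengths = {1: 1}  # dictionary: number -> length of its 3n+1 list

def list_length(n):
    if n not in lengths:
        nxt = n // 2 if n % 2 == 0 else 3 * n + 1
        lengths[n] = 1 + list_length(nxt)  # store lengths along the way
    return lengths[n]

i, j = 22, 23
print(max(list_length(n) for n in range(i, j + 1)))  # 16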
So what are the opinions of approaching such a question in this manner?
I tried both approaches and submitted them to UVaOJ: the brute-force solution ran in ~0.3 s and the DP solution in ~0.0 s. The brute force gets pretty slow when the range is long (over 1e7 elements).
I just used an array (memo) to memoize the first 5 million (SIZE) values:
const int SIZE = 5000000; // memoize the first 5 million values
int memo[SIZE];           // 0 means "not computed yet"

int cycleLength(long long n)
{
    if (n < 1) // guard against overflow of 3n+1
        return 0;
    if (n == 1)
        return 1;
    if (n < SIZE && memo[n] != 0)
        return memo[n];
    int res = 1 + cycleLength(n % 2 == 0 ? n / 2 : 3 * n + 1);
    if (n < SIZE)
        memo[n] = res;
    return res;
}
There is a sequence S.
Every element of S is a product of powers of 2, 3, and 5 only.
S = {2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, ...}
How can I get the 1000th element of this sequence efficiently?
I checked each number starting from 1, but this method is too slow.
A geometric approach:
Let s = 2^i . 3^j . 5^k, where the triple (i, j, k) belongs to the first octant of a 3D state space.
Taking the logarithm,
ln(s) = i.ln(2) + j.ln(3) + k.ln(5)
so that in the state space the iso-s surfaces are planes, which intersect the first octant along a triangle. On the other hand, the feasible solutions are the nodes of a cubic integer grid.
If one wants to produce the s-values in increasing order, one can keep a list of the grid nodes closest to the current s-plane*, on its "greater than" side.
If I am right, to move from one s-value to the next, it suffices to discard the current (i, j, k) and replace it by the three triples (i+1, j, k), (i, j+1, k) and (i, j, k+1), unless they are already there, and pick the next smallest s.
An efficient implementation will be by storing the list as a binary tree with the log(s)-value as the key.
If you are asking for the first N values, you will explore a pyramidal volume of state-space of height O(³√N), and base area O(³√N²), which is the number of tree nodes, hence the spatial complexity. Every query in the tree will take O(log(N)) comparisons (and O(1) operations to fetch the minimum), for a total of O(N.log(N)).
*More precisely, the list will contain all triples on the "greater than" side and such that no index can be decreased without getting on the other side of the plane.
Here is Python code that implements these ideas.
You will notice that the logarithms are converted to fixed point (7 decimals) to avoid floating-point inaccuracies that could result in log(s)-values not comparing equal. This makes the printed s values inexact in the last digits, but that does not matter as long as the ordering of the values is preserved; recomputing the s-values from the indexes yields exact values.
import math
import bintrees

# Constants: logarithms in fixed point (7 decimals)
ln2 = round(10000000 * math.log(2))
ln3 = round(10000000 * math.log(3))
ln5 = round(10000000 * math.log(5))

# Initial list
t = bintrees.FastAVLTree()
t.insert(0, (0, 0, 0))

# Find the N first products
N = 100
for i in range(N):
    # Current s
    s = t.pop_min()
    print(math.pow(2, s[1][0]) * math.pow(3, s[1][1]) * math.pow(5, s[1][2]))
    # Update the list
    if s[0] + ln2 not in t:
        t.insert(s[0] + ln2, (s[1][0] + 1, s[1][1], s[1][2]))
    if s[0] + ln3 not in t:
        t.insert(s[0] + ln3, (s[1][0], s[1][1] + 1, s[1][2]))
    if s[0] + ln5 not in t:
        t.insert(s[0] + ln5, (s[1][0], s[1][1], s[1][2] + 1))
The first 100 values are
1 2 3 4 5 6 8 9 10 12
15 16 18 20 24 25 27 30 32 36
40 45 48 50 54 60 64 72 75 80
81 90 96 100 108 120 125 128 135 144
150 160 162 180 192 200 216 225 240 243
250 256 270 288 300 320 324 360 375 384
400 405 432 450 480 486 500 512 540 576
600 625 640 648 675 720 729 750 768 800
810 864 900 960 972 1000 1024 1080 1125 1152
1200 1215 1250 1280 1296 1350 1440 1458 1500 1536
The plot of the number of tree nodes confirms the O(³√N²) spatial behavior.
Update:
When there is no risk of overflow, a much simpler version (not using logarithms) is possible:
import bintrees

# Initial list
t = bintrees.FastAVLTree()
t[1] = None

# Find the N first products
N = 100
for i in range(N):
    # Current s
    (s, r) = t.pop_min()
    print(s)
    # Update the list
    t[2 * s] = None
    t[3 * s] = None
    t[5 * s] = None
Simply put, you just have to generate each i-th number consecutively. Let's call the set {2, 3, 5} Z. At the i-th iteration, assume you have all (i-1) of the values generated in the previous iterations. While generating the next one, what you basically have to do is try all the elements of Z and, for each of them, generate the least element it can form that is larger than the element generated at the (i-1)-th iteration. Then, you simply take the smallest one among them as the i-th value. A simple and not so efficient implementation is given below.
def generate_simple(N, Z):
    generated = [1]
    for i in range(1, N + 1):
        minFound = -1
        minElem = -1
        for j in range(0, len(Z)):
            # Find the least multiple of Z[j] that exceeds the last value.
            for k in range(0, len(generated)):
                candidateVal = Z[j] * generated[k]
                if candidateVal > generated[-1]:
                    if minFound == -1 or minFound > candidateVal:
                        minFound = candidateVal
                        minElem = j
                    break
        generated.append(minFound)
    return generated[-1]
As you may observe, this approach has a time complexity of O(N^2 * |Z|). An improvement in terms of efficiency would be to store, in a second array indicesToStart, where we left off scanning the array of generated values for each element of Z. Then, over the whole run of the algorithm, we scan the N values of the generated array only once per element, which means the time complexity after this improvement is O(N * |Z|).
A simple implementation of the improvement based on the simple version provided above, is given below.
def generate_improved(N, Z):
    generated = [1]
    indicesToStart = [0] * len(Z)  # where to resume scanning, per element of Z
    for i in range(1, N + 1):
        minFound = -1
        minElem = -1
        for j in range(0, len(Z)):
            for k in range(indicesToStart[j], len(generated)):
                candidateVal = Z[j] * generated[k]
                if candidateVal > generated[-1]:
                    if minFound == -1 or minFound > candidateVal:
                        minFound = candidateVal
                        minElem = j
                    break
                # This candidate can never be used again: skip it for good.
                indicesToStart[j] += 1
        generated.append(minFound)
        indicesToStart[minElem] += 1  # the chosen candidate has now been used
    return generated[-1]
If you have a hard time seeing how the complexity decreases with this algorithm, consider the difference in time complexity of any graph traversal algorithm when an adjacency list is used versus an adjacency matrix. The improvement adjacency lists achieve is almost exactly the kind of improvement we get here. In a nutshell, you keep an index for each element and, instead of scanning from the beginning each time, you continue from wherever you left off the last time you scanned the generated array for that element. Consequently, even though there are N iterations in the algorithm (i.e. the outermost loop), the overall number of operations is O(N * |Z|).
Important note: all the code above is a simple implementation for demonstration purposes; consider it pseudocode you can test. When implementing this for real, depending on the programming language you choose, you will have to consider issues like integer overflow when computing candidateVal.
I was taking a test for a company called Code Nation and came across a question that asked me to calculate how many times a number k appears in the submatrices of M[n][n]. The example input was:
5
1 2 3 2 5
36
M[i][j] is calculated as a[i] * a[j],
which gives:
1,2,3,2,5
2,4,6,4,10
3,6,9,6,15
2,4,6,4,10
5,10,15,10,25
Now I had to calculate how many times 36 appears in a submatrix of M.
The answer was 5.
I am unable to comprehend what these submatrices are and how to compute them. How are they represented?
I tried a naïve approach which resulted in many matrices, none of which I think are correct.
One of them is Submatrix[i][j]:
1 2 3 2 5
3 9 18 24 39
6 18 36 60 99
15 33 69 129 228
33 66 129 258 486
This was formed by summing all the numbers from (0,0) to (i,j).
In this, 36 does not appear 5 times, so I know it is incorrect. If you can back your answer up with some pseudocode, it will be icing on the cake.
I appreciate the help.
[Edit]: Referred to the following: link 1, link 2
My guess is that you have to compute how many submatrices of M have sum equal to 36.
Here is Matlab code:
a = [1, 2, 3, 2, 5];
n = length(a);
M = a' * a;
count = 0;
for a0 = 1:n
    for b0 = 1:n
        for a1 = a0:n
            for b1 = b0:n
                A = M(a0:a1, b0:b1);
                if (sum(A(:)) == 36)
                    count = count + 1;
                end
            end
        end
    end
end
count
This prints out 5.
So you are correctly computing M, but then you have to consider every submatrix of M. For example, M is
1,2,3,2,5
2,4,6,4,10
3,6,9,6,15
2,4,6,4,10
5,10,15,10,25
so one possible submatrix is
1,2,3
2,4,6
3,6,9
and if you add up all of these, then the sum is equal to 36.
There is an answer on cstheory which gives an O(n^3) algorithm for this.
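For reference, here is a sketch of one such O(n^3) approach in Python (my illustration, not necessarily the method from the cstheory answer): fix a pair of row boundaries, collapse the rows into column sums, and count equal-sum column ranges with a hash map of prefix sums.

from collections import defaultdict

def count_submatrices_with_sum(a, k):
    n = len(a)
    M = [[a[i] * a[j] for j in range(n)] for i in range(n)]
    total = 0
    for top in range(n):
        col = [0] * n  # col[c] = sum of M[top..bottom][c]
        for bottom in range(top, n):
            for c in range(n):
                col[c] += M[bottom][c]
            # Count contiguous column ranges of col summing to k.
            seen = defaultdict(int)
            seen[0] = 1
            prefix = 0
            for v in col:
                prefix += v
                total += seen[prefix - k]
                seen[prefix] += 1
    return total

print(count_submatrices_with_sum([1, 2, 3, 2, 5], 36))  # prints 5, matching the Matlab result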