static int isPascal(int n) {
    int sum = 0;
    int nthVal = 1;
    while (sum < n) {
        sum = sum + nthVal;
        nthVal++;
    }
    return sum == n ? 1 : 0;
}
This function checks whether a given number is a Pascal number. A Pascal number is a number that is the sum of the integers from 1 to i for some i.
For example 6 is a Pascal number because 6 = 1 + 2 + 3
What is the time complexity of this function? Is it O(log n)? If so, what would the base of the log be?
If you consider calculating the square root an O(1) operation, you can do this check in O(1) with the help of the formula for the sum of the first i natural numbers:
sum(i) = (i^2 + i)/2
Now in your case you don't know i, but you know sum(i), because that's the n you want to check for being a Pascal number. So you have
n = (i^2 + i) /2
or
i^2 + i - 2n = 0
Solving this quadratic equation with the quadratic formula gives
i = -1/2 + sqrt(2*n + 1/4)
You can discard the second solution to this equation, because i must be > 0 to be a valid solution. If that resulting i is an integer, n is a Pascal number. Otherwise it isn't.
It also follows from that formula that your iterative solution runs in O(sqrt(n)).
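As a rough sketch of that constant-time check (my own code, in Python; the function name is made up), you can compute the positive root and then verify it with integer arithmetic to guard against floating-point error:

import math

def is_pascal(n):
    # Positive root of i^2 + i - 2n = 0:  i = -1/2 + sqrt(2n + 1/4).
    i = round(-0.5 + math.sqrt(2 * n + 0.25))
    # Verify with integer arithmetic that i(i+1)/2 really equals n.
    return i * (i + 1) // 2 == n

print(is_pascal(6))   # True:  6 = 1 + 2 + 3
print(is_pascal(7))   # False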
I have this code to calculate the power of a certain number:
func power(x, n int) int {
    if n == 0 {
        return 1
    }
    if n == 1 {
        return x
    }
    if n%2 == 0 {
        return power(x, n/2) * power(x, n/2)
    } else {
        return power(x, n/2) * power(x, n/2) * x
    }
}
go playground:
So the total number of executions is 1 + 2 + 4 + ... + 2^k,
and according to the formula for the sum of a geometric progression,
a(1 - r^n) / (1 - r),
the total number of executions is about 2^k, where k is the height of the recursion tree.
Hence the time complexity is 2^(log n).
Am I correct? Thanks :)
Yes.
Another way of thinking about the complexity of recursive functions is (number of recursive calls per invocation) ** (height of the recursion tree).
In each call you make two calls, each of which divides n by two, so the height of the tree is log n; the time complexity is therefore 2**(log n), which is O(n).
See a much more formal proof here:
https://cs.stackexchange.com/questions/69539/time-complexity-of-recursive-power-code
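If it helps to see the call count concretely, here is a small sketch (my own, mirroring the structure of the Go code above); the count comes out roughly proportional to n, matching 2^(log₂ n) = n:

def count_calls(n):
    # Mirrors the structure of power(x, n): two recursive calls, each on n // 2.
    if n <= 1:
        return 1
    return 1 + count_calls(n // 2) + count_calls(n // 2)

for n in (8, 64, 1024):
    print(n, count_calls(n))   # 15, 127, 2047 -- the call count grows linearly with n

Note that storing the result of power(x, n/2) in a local variable before multiplying would collapse the tree into a single chain and bring the complexity down to O(log n).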
Every time, you divide n by 2 until n <= 1. So think: how many times can you reduce n to 1 just by dividing by 2? Let's see,
n = 26
n1 = 13
n2 = 6 (take floor of 13/2)
n3 = 3
n4 = 1 (take floor of 3/2)
Let's say the x-th power of 2 is greater than or equal to n. Then,
2^x >= n
or, log2(2^x) >= log2(n)
or, x >= log2(n)
That is how you find the time complexity of your algorithm as log2(n).
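A tiny loop (my own sketch, not part of the answer) makes the same count explicit:

def halvings(n):
    # Count how many times n can be halved (integer division) before reaching 1.
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings(26))   # 4 halvings: 26 -> 13 -> 6 -> 3 -> 1, about log2(26) ≈ 4.7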
This is a test question I failed: for f(n) = a_1*1 + a_2*2 + ... + a_n*n (with each a_i a positive constant), I thought the complexity would be O(n), but it appears I'm wrong and it's O(n^2). Why not O(n)?
First, notice that the question does not ask for the time complexity of a function calculating f(n), but rather the complexity of the function f(n) itself. You can think of f(n) as the time complexity of some other algorithm if that is more comfortable.
This is indeed O(n^2), when the sequence a_i is bounded by a constant and each a_i is at least 1.
By the assumption, for all i, a_i <= c for some constant c.
Hence, a_1*1+...+a_n*n <= c * (1 + 2 + ... + n). Now we need to show that 1 + 2 +... + n = O(n^2) to complete the proof.
1 + 2 + ... + n <= n + n + ... + n = n * n = n ^ 2
and
1 + 2 + ... + n >= n / 2 + (n / 2 + 1) + ... + n >= (n / 2) * (n / 2) = n^2/4
So the complexity is actually Theta(n^2).
Note that if a_i was not constant, e.g., a_i = i then the result is not correct.
in that case, f(n) = 1^2 + 2^2 + ... + n^2 and you can show easily (using the same method as before) that f(n) = Omega(n^3), which means it's not O(n^2).
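If a quick numeric sanity check helps (my own sketch, not part of the answer), with bounded a_i the ratio f(n)/n^2 settles near a constant, while with a_i = i it is f(n)/n^3 that does:

def f(a):
    # f(n) = a_1*1 + a_2*2 + ... + a_n*n, with a given as a list (1-indexed conceptually).
    return sum(ai * i for i, ai in enumerate(a, start=1))

n = 10_000
bounded = [3] * n                  # every a_i equals the constant 3
growing = list(range(1, n + 1))    # a_i = i

print(f(bounded) / n**2)   # close to 3/2, so f(n) = Theta(n^2)
print(f(growing) / n**3)   # close to 1/3, so f(n) = Theta(n^3)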
Preface, not super great with complexity-theory but I'll take a stab.
I think what is confusing is that it's not a time-complexity problem, but rather the function's complexity.
So for the easy part, i just goes up to n, i.e. 1, 2, 3, ..., n. Then for a_i, all entries must be above 0, meaning a could be something like 2, 5, 1, ... for n terms. Multiplying the two factors together gives n*n = O(n^2).
The best case would be if a is 1, 1, 1, ..., which drops the complexity down to O(n), but the average case will be n, so you get n squared.
Unless it's mentioned that a[i] is O(n), it's definitely O(n)
Here is another attempt that achieves O(n*n) if the sum should be returned as the result.
int sum = 0;
for (int i = 0; i <= n; i++) {
    for (int j = 0; j <= n; j++) {
        if (i == j) {
            sum += A[i] * j;
        }
    }
}
return sum;
I am given a formula f(n) where f(n) is defined, for all non-negative integers, as:
f(0) = 1
f(1) = 1
f(2) = 2
f(2n) = f(n) + f(n + 1) + n (for n > 1)
f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
My goal is to find, for any given number s, the largest n where f(n) = s. If there is no such n return None. s can be up to 10^25.
I have a brute force solution using both recursion and dynamic programming, but neither is efficient enough. What concepts might help me find an efficient solution to this problem?
I want to add a little complexity analysis and estimate the size of f(n).
If you look at one recursive call of f(n), you notice, that the input n is basically divided by 2 before calling f(n) two times more, where always one call has an even and one has an odd input.
So the call tree is basically a binary tree where, at any given depth k, roughly half of the nodes contribute a summand of approximately n/2^(k+1). The depth of the tree is log₂(n).
So the value of f(n) is in total about Θ(n/2 ⋅ log₂(n)).
Just to note: this holds for even and odd inputs, but for even inputs the value is bigger by an additional summand of about n/2. (I use Θ-notation so I don't have to think too much about constants.)
Now to the complexity:
Naive brute force
To calculate f(n) you have to call f Θ(2^(log₂ n)) = Θ(n) times.
So if you want to calculate the values of f(n) until you reach s (or notice that there is no n with f(n)=s) you have to calculate f(n) s⋅log₂(s) times, which is in total Θ(s²⋅log(s)).
Dynamic programming
If you store every result of f(n), the time to calculate a f(n) reduces to Θ(1) (but it requires much more memory). So the total time complexity would reduce to Θ(s⋅log(s)).
Notice: since we know f(n) ≤ f(n+2) for all n, you don't have to sort the values of f(n) before doing a binary search.
Using binary search
Algorithm (input is s):
1. Set l = 1 and r = s.
2. Set n = (l+r)/2 and round it to the next even number.
3. Calculate val = f(n).
4. If val == s, then return n.
5. If val < s, then set l = n; else set r = n.
6. Go to step 2.
If you found a solution, fine. If not: try it again but round in step 2 to odd numbers. If this also does not return a solution, no solution exists at all.
This will take you Θ(log(s)) for the binary search and Θ(s) for the calculation of f(n) each time, so in total you get Θ(s⋅log(s)).
As you can see, this has the same complexity as the dynamic programming solution, but you don't have to save anything.
Notice: r = s is not a valid initial upper limit for every s. However, if s is big enough, it is. To be safe, you can change the algorithm:
check first whether f(s) < s. If it is, set l = s and r = 2s (or 2s+1 if it has to be odd).
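As a rough, self-contained Python sketch of the even-n search (the names and bounds are mine, the memoized f mirrors the recurrence from the question, and the search returns some matching even n, not necessarily the largest):

from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # Memoized version of the recurrence from the question.
    if n <= 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        return f(m) + f(m + 1) + m
    return f(m - 1) + f(m) + 1

def find_even(s):
    # Binary search over even n = 2k, relying on f(n) <= f(n+2).
    lo, hi = 0, s                  # k <= s is a generous upper bound, since f(2s) >= s
    while lo <= hi:
        k = (lo + hi) // 2
        val = f(2 * k)
        if val == s:
            return 2 * k           # some even n with f(n) = s
        if val < s:
            lo = k + 1
        else:
            hi = k - 1
    return None

print(find_even(9992))             # an even n with f(n) = 9992, or None

The same search rounded to odd n covers the other parity; taking the larger of the two results answers the original question.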
Can you calculate the value of f(x) for every x from 0 to MAX_SIZE just once?
What I mean is: calculate the values by DP.
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 3
f(4) = 7
f(5) = 4
... ...
f(MAX_SIZE) = ???
If the first step is infeasible, exit. Otherwise, sort the values from smallest to largest.
Such as 1,1,2,3,4,7,...
Now you can find whether there exists an n satisfying f(n) = s in O(log(MAX_SIZE)) time.
Unfortunately, you don't mention how fast your algorithm should be. Perhaps you need to find some really clever rewrite of your formula to make it fast enough, in this case you might want to post this question on a mathematics forum.
The running time of your formula is O(n) for f(2n + 1) and O(n log n) for f(2n), according to the Master theorem, since:
T_even(n) = 2 * T(n / 2) + n / 2
T_odd(n) = 2 * T(n / 2) + 1
So the running time for the overall formula is O(n log n).
So if n is the answer to the problem, this algorithm would run in approx. O(n^2 log n), because you have to perform the formula roughly n times.
You can make this a little bit quicker by storing previous results, but of course, this is a tradeoff with memory.
Below is such a solution in Python.
D = {}

def f(n):
    if n in D:
        return D[n]
    if n == 0 or n == 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        # f(2n) = f(n) + f(n + 1) + n   (for n > 1)
        y = f(m) + f(m + 1) + m
    else:
        # f(2n + 1) = f(n - 1) + f(n) + 1   (for n >= 1)
        y = f(m - 1) + f(m) + 1
    D[n] = y
    return y

def find(s):
    n = 0
    y = 0
    even_sol = None
    while y < s:
        y = f(n)
        if y == s:
            even_sol = n
            break
        n += 2

    n = 1
    y = 0
    odd_sol = None
    while y < s:
        y = f(n)
        if y == s:
            odd_sol = n
            break
        n += 2

    print(s, even_sol, odd_sol)

find(9992)
This recursion produces increasing values for 2n and 2n+1, so the moment the value exceeds s you can stop the algorithm.
To make the algorithm efficient you either have to find a nice closed formula that computes the value directly, or compute the values in a small loop, which will be much, much more efficient than your recursion: the plain recursion keeps recomputing the same values, while the loop computes each value exactly once, in O(n) total.
This is how the loop could look:
int[] values = new int[1000];
values[0] = 1;
values[1] = 1;
values[2] = 2;
values[3] = values[0] + values[1] + 1;                   // f(3) = f(0) + f(1) + 1
for (int i = 2; i < values.length / 2; i++) {
    values[2 * i] = values[i] + values[i + 1] + i;       // f(2i)   = f(i)   + f(i+1) + i
    values[2 * i + 1] = values[i - 1] + values[i] + 1;   // f(2i+1) = f(i-1) + f(i)   + 1
}
And inside this loop, add a condition that breaks out early with success or failure.
I am somewhat confused by the running-time analysis of a program here which has recursive calls that depend on an RNG (randomly generated number).
Let's begin with the pseudo-code, and then I will go into what I have thought about so far related to this one.
Func1(A, i, j)
    /* A is an array of at least j integers */
    if (i ≥ j) then return (0);
    n ← j − i + 1;                       /* n = number of elements from i to j */
    k ← Random(n);
    s ← 0;                               // takes arbitrary constant time c
    for r ← i to j do
        A[r] ← A[r] − A[i] − A[j];       // arbitrary constant c
        s ← s + A[r];                    // arbitrary constant c
    end
    s ← s + Func1(A, i, i+k-1);          // recursive call 1
    s ← s + Func1(A, i+k, j);            // recursive call 2
    return (s);
Okay, now let's get into the math I have tried so far. I'll try not to be too pedantic here as it is just a rough, estimated analysis of expected run time.
First, let's consider the worst case. Note that K = Random(n) is at least 1 and at most n. Therefore, the worst case is that K = 1 is picked every time. This makes the total running time T(n) = cn + T(1) + T(n-1), which means that overall it takes somewhere around cn^2 time (you can use Wolfram Alpha to solve recurrence relations if you are stuck or rusty; this one is fairly simple).
Now, here is where I get somewhat confused. For the expected running time, we have to base our analysis on the probability distribution of the random number K. Therefore, we have to sum all the possible running times for the different values of k, weighted by their individual probabilities. By lemma / hopefully intuitive logic: the probability of any one randomly generated k, with k between 1 and n, is 1/n.
Therefore, (in my opinion/analysis) the expected run time is:
ET(n) = cn + (1/n)*Summation(from k=1 to n-1) of (ET(k-1) + ET(n-k))
Let me explain a bit. The cn is simply for the loop which runs i to j. This is estimated by cn. The summation represents all of the possible values for k. The (1/n) multiplied by this summation is there because the probability of any one k is (1/n). The terms inside the summation represent the running times of the recursive calls of Func1. The first term on the left takes ET(k-1) because this recursive call is going to do a loop from i to k-1 (which is roughly ck), and then possibly call Func1 again. The second is a representation of the second recursive call, which would loop from i+k to j, which is also represented by n-k.
Upon expansion of the summation, we see that the overall function ET(n) is of the order n^2. However, as a test case, plugging in k=(n/2) gives a total running time for Func 1 of roughly nlog(n). This is why I am confused. How can this be, if the estimated running time is of the order n^2? Am I considering a "good" case by plugging in n/2 for k? Or am I thinking about k in the wrong sense in some way?
The expected time complexity is ET(n) = O(n log n). The following is a proof I derived myself; please point out any errors:
ET(n) = P(k=1)*(ET(1)+ET(n-1)) + P(k=2)*(ET(2)+ET(n-2)).......P(k=n-1)*(ET(n-1)+ET(1)) + c*n
As the RNG is uniformly random P(k=x) = 1/n for all x
hence ET(n) = 1/n*(ET(1)*2+ET(2)*2....ET(n-1)*2) + c*n
ET(n) = 2/n*sum(ET(i)) + c*n i in (1,n-1)
ET(n-1) = 2/(n-1)*sum(ET(i)) + c*(n-1) i in (1,n-2)
sum(ET(i)) i in (1,n-2) = (ET(n-1)-c*(n-1))*(n-1)/2
ET(n) = 2/n*(sum(ET(i)) in (1,n-2) + ET(n-1)) + c*n
ET(n) = 2/n*((ET(n-1)-c*(n-1))*(n-1)/2+ET(n-1)) + c*n
ET(n) = 2/n*((n+1)/2*ET(n-1) - c*(n-1)*(n-1)/2) + c*n
ET(n) = (n+1)/n*ET(n-1) + c*n - c*(n-1)*(n-1)/n
ET(n) = (n+1)/n*ET(n-1) + c
solving recurrence
ET(n) = (n+1)ET(1) + c + (n+1)/n*c + (n+1)/(n-1)*c + (n+1)/(n-2)*c.....
ET(n) = (n+1) + c + (n+1)*sum(1/i) i in (1,n)
sum(1/i) i in (1,n) = O(logn)
ET(n) = (n+1) + c + (n+1)*logn
ET(n) = O(nlogn)
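If you want to sanity-check the O(n log n) result empirically, here is a small simulation sketch (my own, not part of the proof) that counts the loop work of Func1 with a uniformly random k and compares the average to n·ln(n); the ratio stays roughly constant:

import math
import random

def cost(i, j):
    # Counts the loop iterations Func1 performs on the subarray i..j with a random split k.
    if i >= j:
        return 0
    n = j - i + 1
    k = random.randint(1, n)           # k uniform in 1..n, like Random(n)
    return n + cost(i, i + k - 1) + cost(i + k, j)

for n in (1_000, 5_000, 20_000):
    avg = sum(cost(1, n) for _ in range(10)) / 10
    print(n, avg / (n * math.log(n)))  # roughly the same constant (about 2) for each n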
I came across the problem of finding the probability that two numbers picked from [1, n] are relatively prime, and my first attempt was the following algorithm, which counts the number of relatively prime pairs:
int rel = 0
int total = n * (n - 1) / 2
for i in [1, n)
    for j in [i+1, n)
        if gcd(i, j) == 1
            ++rel;
return rel / total
which is O(n^2).
Here is my attempt to reducing complexity:
Observation (1): 1 is relatively prime to [2, n] so n - 1 pairs are trivial.
Observation (2): 2 is not relatively prime to even numbers in the range [4, n] so remaining odd numbers are relatively prime to 2, so
#Relatively prime pairs = (n / 2) if n is even
= (n / 2 - 1) if n is odd.
So my improved algorithm would be:
int total = n * (n - 1) / 2
int rel = 0
if (n % 2)  // n is odd
    rel = (n - 1) + n / 2 - 1
else        // n is even
    rel = (n - 1) + n / 2
for i in [3, n)
    for j in [i+1, n)
        if gcd(i, j) == 1
            ++rel;
return rel / total
With this approach I could skip two of the outer-loop iterations, but the worst-case time complexity is still O(n^2).
Question: My question is can we exploit any mathematical properties other than above to find the desired probability in linear time?
Thanks.
You'll need to calculate Euler's totient function for all integers from 1 to n. Euler's totient (or phi) function, φ(n), is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n.
To calculate the function efficiently, you can use a modified version of the Sieve of Eratosthenes.
Here is sample C++ code:
#include <stdio.h>

#define MAXN 10000000

int phi[MAXN+1];
bool isPrime[MAXN+1];

void calculate_phi() {
    int i, j;
    for (i = 1; i <= MAXN; i++) {
        phi[i] = i;
        isPrime[i] = true;
    }
    for (i = 2; i <= MAXN; i++) if (isPrime[i]) {
        for (j = i+i; j <= MAXN; j += i) {
            isPrime[j] = false;
            phi[j] = (phi[j] / i) * (i-1);
        }
    }
    for (i = 1; i <= MAXN; i++) {
        if (phi[i] == i) phi[i]--;
    }
}

int main() {
    calculate_phi();
    return 0;
}
It uses Euler's product formula, described on the Wikipedia page for the totient function.
Calculating the complexity of this algorithm is a bit tricky, but it is much less than O(n^2) (roughly O(n log log n), like the sieve itself). You can get results for n = 10^7 pretty quickly.
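To spell out the last step (my own sketch, in Python rather than C++): the number of coprime pairs (i, j) with 1 ≤ i < j ≤ n equals φ(2) + φ(3) + ... + φ(n), since for j ≥ 2, φ(j) counts exactly the i < j coprime to j. Dividing by the total number of pairs gives the probability:

def coprime_pair_probability(n):
    # Sieve-style computation of phi(1..n), then sum phi(2..n) to count the coprime pairs.
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                  # p is still untouched, so p is prime
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p    # multiply phi[m] by (1 - 1/p)
    coprime_pairs = sum(phi[2:])         # phi(j) counts the i < j coprime to j (for j >= 2)
    total_pairs = n * (n - 1) // 2
    return coprime_pairs / total_pairs

print(coprime_pair_probability(10_000))  # approaches 6/pi^2 ≈ 0.6079 as n grows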
The number of integers in the range 1 .. n that are coprime to n is the Euler totient function of n. You are computing the sum of such values, the so-called summatory totient function. Methods to compute this sum fast are for example
described here. You should easily get a method with better than quadratic complexity,
depending on how fast you implement the totient function.
Even better are the references listed in the encyclopedia of integer sequences: http://oeis.org/A002088, though many of the references require some math skills.
Using these formulas you can even get an implementation that is sublinear.
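For example, one standard route to a sublinear computation uses the identity sum over d = 1..n of Φ(⌊n/d⌋) = n(n+1)/2 for the summatory totient Φ. A rough Python sketch (my own, with memoization and grouping of equal quotients; names are made up):

from functools import lru_cache

@lru_cache(maxsize=None)
def summatory_phi(n):
    # Phi(n) = phi(1) + ... + phi(n), via
    #   Phi(n) = n(n+1)/2 - sum over d >= 2 of Phi(n // d),
    # grouping equal quotients n // d so the total work stays well below O(n).
    if n <= 1:
        return n
    total = n * (n + 1) // 2
    d = 2
    while d <= n:
        q = n // d
        d_hi = n // q              # largest d' with n // d' == q
        total -= (d_hi - d + 1) * summatory_phi(q)
        d = d_hi + 1
    return total

n = 10_000
coprime_pairs = summatory_phi(n) - 1       # subtract phi(1); a pair needs j >= 2
print(coprime_pairs / (n * (n - 1) // 2))  # again close to 6/pi^2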
For each prime p, the probability that it divides a randomly picked number between 1 and n is
[n / p] / n
([x] being the biggest integer not greater than x). If n is large, this is approximately 1/p.
The probability of it dividing two such randomly picked numbers is
([n / p] / n)^2
Again, this is 1/p^2 for large n.
Two numbers are coprime if no prime divides both, so the probability in question is the product
Π over all primes p of (1 - ([n / p] / n)^2)
It is enough to calculate it for all primes less than or equal to n. As n goes to infinity, this product approaches 6/π^2.
I'm not sure you can use the totient function directly, as described in the other answers.
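For what it's worth, here is a short Python sketch (mine, not from the answer) of the product described above, taken over the primes up to n; for moderate n it already lands close to 6/π^2:

import math

def coprime_probability_product(n):
    # Product over primes p <= n of (1 - (floor(n/p) / n)^2), using a plain prime sieve.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    prob = 1.0
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
            prob *= 1.0 - ((n // p) / n) ** 2
    return prob

print(coprime_probability_product(10_000))  # compare with 6/pi^2:
print(6 / math.pi ** 2)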