I have this code to calculate the power of a certain number:
func power(x, n int) int {
    if n == 0 {
        return 1
    }
    if n == 1 {
        return x
    }
    if n%2 == 0 {
        return power(x, n/2) * power(x, n/2)
    } else {
        return power(x, n/2) * power(x, n/2) * x
    }
}
So the total number of calls is 1 + 2 + 4 + ... + 2^k,
and according to the formula for the sum of a geometric progression,
a(1 - r^n) / (1 - r)
the sum of the calls is roughly 2^k, where k is the height of the binary tree.
Hence the time complexity is 2^(log n).
Am I correct? Thanks :)
Yes.
Another way of thinking about the complexity of recursive functions is (branching factor)^(height of the recursion tree).
In each call you make two recursive calls and n is divided by two, so the height of the tree is log n and the time complexity is 2^(log n), which is O(n).
See a much more formal proof here:
https://cs.stackexchange.com/questions/69539/time-complexity-of-recursive-power-code
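To see this concretely, here is a small Python sketch (power_calls is a made-up helper name) that counts how many calls the original function makes; the count grows linearly with n, in line with 2^(log n) = n:

def power_calls(x, n):
    # number of invocations of power() for this n, following the same recursion
    if n == 0 or n == 1:
        return 1
    # one call at this level plus two recursive calls on n // 2
    return 1 + 2 * power_calls(x, n // 2)

for n in (8, 64, 1024):
    print(n, power_calls(2, n))   # 15, 127, 2047 -- roughly 2n, i.e. Θ(n)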
Every time you divide n by 2, until n <= 1. So think about how many times you can reduce n to 1 just by dividing by 2. Let's see,
n = 26
n1 = 13
n2 = 6 (take floor of 13/2)
n3 = 3
n4 = 1 (take floor of 3/2)
Let's say the x-th power of 2 is greater than or equal to n. Then,
2^x >= n
or, log2(2^x) >= log2(n)
or, x >= log2(n)
That is how you find that the time complexity of your algorithm is about log2(n).
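The same count can be verified with a few lines of Python (a throwaway sketch; halvings is a made-up name):

import math

def halvings(n):
    # how many times n can be halved (floor division) before it reaches 1
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings(26), math.floor(math.log2(26)))   # 4 4 -- about log2(n) steps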
static int isPascal(int n) {
    int sum = 0;
    int nthVal = 1;
    while (sum < n) {
        sum = sum + nthVal;
        nthVal++;
    }
    return sum == n ? 1 : 0;
}
Here the function checks whether a given number is a Pascal number or not. A Pascal number is a number that is the sum of the integers from 1 to i for some i.
For example 6 is a Pascal number because 6 = 1 + 2 + 3
What will be the time complexity of this function? Will it be O(log n)? If so, what will be the base of the log here?
If you consider calculating the square root an O(1) operation, you can do this check in O(1) with the help of the formula for the sum of the first i natural numbers:
sum(i) = (i^2 + i)/2
Now in your case you don't know i, but you know sum(i), because that's the n you want to check for being a Pascal number. So you have
n = (i^2 + i) /2
or
i^2 + i - 2n = 0
Solving this quadratic equation with the quadratic formula gives
i = -1/2 + sqrt(2*n + 1/4)
You can discard the second solution to this equation, because i must be > 0 to be a valid solution. If that resulting i is an integer, n is a Pascal number. Otherwise it isn't.
It also follows from that formula that your iterative solution runs in O(sqrt(n)).
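Here is a minimal Python sketch of that O(1) check (is_pascal is a made-up name; math.isqrt requires Python 3.8+), using an integer square root to avoid floating-point precision issues:

import math

def is_pascal(n):
    # i = -1/2 + sqrt(2n + 1/4) is a positive integer exactly when 8n + 1 is a perfect square
    if n < 1:
        return False
    r = math.isqrt(8 * n + 1)
    return r * r == 8 * n + 1

print(is_pascal(6), is_pascal(7))   # True False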
Could you help me please? I need a fast algorithm for the following: compute the remainder when the sum of i^i over a given range (from A to B, with 1 < A, B < 10^8) is divided by 987654321.
For instance , if I have A = 10 , B = 15, I should calculate
((11^11) + (12^12) + (13^13) + (14^14) ) % 987654321
If I use the direct approach, it takes forever to calculate this. Is there a trick to calculate such remainders?
Using fast modular exponentiation, we can calculate x^n in O(log(n)) time. In the worst case, if A = 1 and B = n where n can be up to 10^8, the total complexity will be around
log(2) + log(3) + log(4) + ... + log(n)
= log(n!)
~ n*log(n) - n + O(log(n))   (according to Stirling's approximation; see Wikipedia)
Fast Modular Exponentiation
This method quickly calculates powers of the form x^n (in O(log(n)) time).
It can be given as a recurrence relation:
x^n = (x^2)^(n/2)           if n is even
    = x * (x^2)^((n-1)/2)   if n is odd
So, essentially, instead of multiplying x by itself n times, we repeatedly do
x = x^2
n = n / 2
until we reach the trivial case n = 1.
Python code (with modulo for this case):
def fast(x, n, mod):
    # computes x^n % mod using O(log n) multiplications
    if n == 1:
        return x % mod
    if n % 2 == 0:
        return fast(x**2 % mod, n // 2, mod)
    else:
        return (x * fast(x**2 % mod, (n - 1) // 2, mod)) % mod
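For the original question you would then sum these fast powers over the range; a rough sketch, with range_power_sum being a made-up helper name:

MOD = 987654321

def range_power_sum(a, b, mod=MOD):
    # sum of i^i for a < i < b, reduced modulo mod, using fast() from above
    return sum(fast(i, i, mod) for i in range(a + 1, b)) % mod

print(range_power_sum(10, 15))   # ((11^11) + (12^12) + (13^13) + (14^14)) % 987654321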
Write a program that takes an integer and prints out all ways to multiply smaller integers that equal the original number, without repeating sets of factors. In other words, if your output contains 4 * 3, you should not print out 3 * 4 again as that would be a repeating set. Note that this is not asking for prime factorization only. Also, you can assume that the input integers are reasonable in size; correctness is more important than efficiency.
PrintFactors(12)
12 * 1
6 * 2
4 * 3
3 * 2 * 2
public void printFactors(int number) {
    printFactors("", number, number);
}

public void printFactors(String expression, int dividend, int previous) {
    if (expression.isEmpty())
        System.out.println(previous + " * 1");
    for (int factor = dividend - 1; factor >= 2; --factor) {
        if (dividend % factor == 0 && factor <= previous) {
            int next = dividend / factor;
            if (next <= factor && next <= previous)
                System.out.println(expression + factor + " * " + next);
            printFactors(expression + factor + " * ", next, factor);
        }
    }
}
I think it is O(N^d): if the given number is N and the number of prime factors of N is d, then the time complexity is O(N^d), because the recursion depth goes up to the number of prime factors. But that is not a tight bound. Any suggestions?
2 ideas:
The algorithm is output-sensitive. Outputting one factorization uses at most O(N) iterations of the loop, so overall we have O(N * number_of_factorizations).
Also, via the Master theorem, the recurrence is F(N) = d * F(N/2) + O(N), so overall we have O(N^log_2(d)).
The time complexity should be:
(number of iterations) * (number of sub-calls)^depth
There are O(log N) sub-calls instead of O(N), since the number of prime factors of N (with multiplicity) is O(log N).
The depth of recursion is also O(log N), and the number of iterations in every recursive call is less than N/(2^depth), so the overall time complexity is O(N * ((log N)/2)^(log N)).
The time complexity is calculated as: (total number of iterations) * (number of sub-iterations)^depth. So the overall complexity is
Y(N) = O(number of dividends) * O(number of factorizations) + O(number of (factors - 2) in the loop)
Example: PrintFactors(12) prints 12 * 1, 6 * 2, 4 * 3, 3 * 2 * 2
O(number of dividends) = 12
O(number of factorizations) = 3
O(number of factors - 2) (in the case of 3 * 2 * 2) = 1 extra
Overall: O(N^log_2(number of dividends))
I am given a formula f(n) where f(n) is defined, for all non-negative integers, as:
f(0) = 1
f(1) = 1
f(2) = 2
f(2n) = f(n) + f(n + 1) + n (for n > 1)
f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
My goal is to find, for any given number s, the largest n where f(n) = s. If there is no such n return None. s can be up to 10^25.
I have a brute force solution using both recursion and dynamic programming, but neither is efficient enough. What concepts might help me find an efficient solution to this problem?
I want to add a little complexity analysis and estimate the size of f(n).
If you look at one recursive call of f(n), you notice that the input n is basically divided by 2 before f is called twice more, where one call always gets an even input and the other an odd one.
So the call tree is basically a binary tree where roughly half of the nodes at depth k each contribute a summand of approximately n/2^(k+1). The depth of the tree is log₂(n).
So the value of f(n) is in total about Θ(n/2 ⋅ log₂(n)).
Just to note: this holds for even and odd inputs, but for even inputs the value is larger by an additional summand of about n/2. (I use Θ-notation so I don't have to think too much about constants.)
Now to the complexity:
Naive brute force
To calculate f(n) you have to call f Θ(2^(log₂(n))) = Θ(n) times.
So if you want to calculate the values of f(n) until you reach s (or notice that there is no n with f(n)=s) you have to calculate f(n) s⋅log₂(s) times, which is in total Θ(s²⋅log(s)).
Dynamic programming
If you store every result of f(n), the time to calculate each new f(n) reduces to Θ(1) (but it requires much more memory). So the total time complexity would reduce to Θ(s⋅log(s)).
Notice: since we know f(n) ≤ f(n+2) for all n, you don't have to sort the values of f(n) to do a binary search.
Using binary search
Algorithm (input is s):
1. Set l = 1 and r = s.
2. Set n = (l+r)/2 and round it to the next even number.
3. Calculate val = f(n).
4. If val == s, return n.
5. If val < s, set l = n.
6. Else set r = n.
7. Go to step 2.
If you find a solution, fine. If the interval stops shrinking without hitting s, there is no solution for this rounding: try it again, but round in step 2 to odd numbers. If this also does not return a solution, no solution exists at all.
This will take you Θ(log(s)) for the binary search and Θ(s) for the calculation of f(n) each time, so in total you get Θ(s⋅log(s)).
As you can see, this has the same complexity as the dynamic programming solution, but you don't have to save anything.
Notice: r = s is not a valid initial upper limit for every s. However, if s is big enough, it is. To be safe, you can extend the algorithm:
first check whether f(s) < s. If it is, set l = s and r = 2s (or 2s + 1 if n has to be odd).
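A minimal Python sketch of this search, assuming (as noted above) that f is non-decreasing within each parity class; the function names are made up for illustration, and the upper bound is the crude r = s from the algorithm:

def f(n):
    # the recurrence from the question, plain recursion, no memoization
    if n <= 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        return f(m) + f(m + 1) + m      # f(2m) = f(m) + f(m+1) + m, for m > 1
    return f(m - 1) + f(m) + 1          # f(2m+1) = f(m-1) + f(m) + 1, for m >= 1

def search(s, parity):
    # binary search over k, where n = 2k + parity; valid because f(n) <= f(n+2)
    lo, hi = 0, s                       # crude upper bound, see the caveat above
    while lo <= hi:
        k = (lo + hi) // 2
        n = 2 * k + parity
        val = f(n)
        if val == s:
            return n
        if val < s:
            lo = k + 1
        else:
            hi = k - 1
    return None

def find_largest(s):
    # try even and odd n separately and keep the larger hit, or None if neither matches
    hits = [n for n in (search(s, 0), search(s, 1)) if n is not None]
    return max(hits) if hits else None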
Can you calculate the value of f(x) for every x from 0 to MAX_SIZE just once?
What I mean is: calculate the values by DP.
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 3
f(4) = 7
f(5) = 4
... ...
f(MAX_SIZE) = ???
If the first step is infeasible (MAX_SIZE is too large), stop. Otherwise, sort the values from small to large,
such as 1, 1, 2, 3, 4, 7, ...
Now you can find whether there exists an n with f(n) = s in O(log(MAX_SIZE)) time using binary search.
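A rough Python sketch of that approach (MAX_SIZE is a made-up bound for illustration, and s must not exceed the largest precomputed value):

import bisect

MAX_SIZE = 10**6                     # assumed bound; pick it as large as memory allows

# Step 1: fill the table once by DP.
values = [0] * (MAX_SIZE + 1)
values[0] = values[1] = 1
values[2] = 2
for i in range(1, MAX_SIZE // 2 + 1):
    if i > 1 and 2 * i <= MAX_SIZE:
        values[2 * i] = values[i] + values[i + 1] + i        # f(2n) = f(n) + f(n+1) + n
    if 2 * i + 1 <= MAX_SIZE:
        values[2 * i + 1] = values[i - 1] + values[i] + 1    # f(2n+1) = f(n-1) + f(n) + 1

# Step 2: sort (value, n) pairs so we can binary-search by value and recover n.
pairs = sorted((v, n) for n, v in enumerate(values))

def lookup(s):
    # O(log MAX_SIZE) search for the largest n with f(n) = s
    i = bisect.bisect_right(pairs, (s, MAX_SIZE + 1))
    if i and pairs[i - 1][0] == s:
        return pairs[i - 1][1]
    return None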
Unfortunately, you don't mention how fast your algorithm should be. Perhaps you need to find some really clever rewrite of your formula to make it fast enough, in this case you might want to post this question on a mathematics forum.
The running time of your formula is O(n) for f(2n + 1) and O(n log n) for f(2n), according to the Master theorem, since:
T_even(n) = 2 * T(n / 2) + n / 2
T_odd(n) = 2 * T(n / 2) + 1
So the running time for the overall formula is O(n log n).
So if n is the answer to the problem, this algorithm would run in approx. O(n^2 log n), because you have to perform the formula roughly n times.
You can make this a little bit quicker by storing previous results, but of course, this is a tradeoff with memory.
Below is such a solution in Python.
D = {}

def f(n):
    if n in D:
        return D[n]
    if n == 0 or n == 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        # f(2n) = f(n) + f(n + 1) + n (for n > 1)
        y = f(m) + f(m + 1) + m
    else:
        # f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
        y = f(m - 1) + f(m) + 1
    D[n] = y
    return y
def find(s):
    n = 0
    y = 0
    even_sol = None
    while y < s:
        y = f(n)
        if y == s:
            even_sol = n
            break
        n += 2
    n = 1
    y = 0
    odd_sol = None
    while y < s:
        y = f(n)
        if y == s:
            odd_sol = n
            break
        n += 2
    print(s, even_sol, odd_sol)

find(9992)
This recurrence produces increasing values for 2n and 2n + 1, so the moment you get a value bigger than s you can stop your algorithm.
To make an efficient algorithm you either have to find a nice closed formula that computes the value directly, or compute the values in a small loop, which will be much, much more efficient than the recursion: the plain recursion recomputes the same values over and over, while the loop computes each value once, in O(n) total.
This is how loop can be looking:
int[] values = new int[1000];
values[0] = 1;
values[1] = 1;
values[2] = 2;
for (int i = 3; i < values.length /2 - 1; i++) {
values[2 * i] = values[i] + values[i + 1] + i;
values[2 * i + 1] = values[i - 1] + values[i] + 1;
}
Inside this loop you can add a condition to break out early with success or failure (for example, once the computed values exceed s).
How can I calculate the time complexity of the following piece of code? Suppose m is close to n. What I got is f(n) = 2*f(n-1), so the time complexity is O(2^n). Am I right?
int uniquePaths(int m, int n) {
    if (m < 1 || n < 1) return 0;
    if (m == 1 && n == 1) return 1;
    return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
}
There is some hand-waving involved in what follows, but I think it's essentially correct.
Every leaf of the call tree that returns 1 contributes 1 to the total result, so the number of such leaves is uniquePaths(m, n). Since uniquePaths(m, n) == C(m + n - 2, n - 1), when m and n are similar the execution time of your algorithm will be roughly the central binomial coefficient C(2n, n), which is in O(4^n).
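That claim is easy to sanity-check with a few lines of Python (count_calls is a made-up helper that mirrors the recursion above; math.comb requires Python 3.8+):

from math import comb

def count_calls(m, n):
    # total number of invocations of uniquePaths(m, n)
    if m < 1 or n < 1 or (m == 1 and n == 1):
        return 1
    return 1 + count_calls(m - 1, n) + count_calls(m, n - 1)

for k in (4, 6, 8):
    # the call count stays within a constant factor of C(2k - 2, k - 1)
    print(k, count_calls(k, k), comb(2 * k - 2, k - 1))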