What kind of algorithm does this recurrence relation represent? [closed]

Take a look at this recurrence relation for algorithm complexity:
T(n) = 2 T(n-1) - 1
What kind of algorithm does this recurrence relation represent? Notice that there's a minus instead of a plus, so it can't be a divide and conquer algorithm.
What kind of algorithm will have this as its complexity recurrence?

T(n) = 2 T(n-1)-1
T(n) = 4 T(n-2)-3
T(n) = 8 T(n-3)-7
T(n) = 16 T(n-4)-15
...
T(n) = 2^k T(n-k) - (2^k - 1)
If, for example, T(1) = O(1), then
T(n) = 2^(n-1) O(1) - (2^(n-1) - 1) = O(2^(n-1)) = O(2^n),
which is exponential growth.
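A quick sanity check of this unrolling (my own minimal sketch, assuming a concrete base case T(1) = 2 rather than an abstract O(1)):

# Compare the recurrence T(n) = 2*T(n-1) - 1 with its unrolled form.
def T(n, base=2):          # base = 2 is an assumed concrete value for T(1)
    return base if n == 1 else 2 * T(n - 1, base) - 1

def unrolled(n, base=2):
    # 2^(n-1) * T(1) - (2^(n-1) - 1), i.e. k = n - 1 in the formula above
    return 2 ** (n - 1) * base - (2 ** (n - 1) - 1)

for n in range(1, 16):
    assert T(n) == unrolled(n)   # the closed form matches the recurrence

For any base value greater than 1 the result simplifies to (base - 1) * 2^(n-1) + 1, i.e. exponential in n, in line with the O(2^n) bound above.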
Now let's see that O(1) - 1 = O(1). From CLRS:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c g(n) for all n >= n0 }
Thus, to absorb the effect of the -1, we just need to increase the hidden constant c by one.
So, as long as your base case has a complexity like O(1) or O(n), you don't need to worry about the -1. In other words, if your base case makes the recurrence T(n) = 2 T(n-1) at least exponential in n, the -1 doesn't matter.
Example: imagine you are asked to tell whether a string S with n characters contains a specified character, and you proceed like this: you run the algorithm recursively on S[0..n-2] and S[1..n-1], and you stop the recursion when the string is one character long, at which point you just check that character.
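A minimal sketch of that procedure (assuming Python; the function name contains is my own):

def contains(s, ch):
    # Base case: at most one character, just compare it.
    if len(s) <= 1:
        return s == ch
    # Recurse on the two overlapping substrings S[0..n-2] and S[1..n-1].
    # Two calls on inputs of size n-1, i.e. T(n) = 2*T(n-1) + (non-recursive work),
    # so the running time blows up exponentially even for this trivial task.
    return contains(s[:-1], ch) or contains(s[1:], ch)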

Based on the time complexity given, it is an exponential algorithm.
Each time the input size shrinks by 1, the running time (approximately) doubles.
So it does not fall under any of the polynomial-time algorithmic paradigms like divide and conquer, dynamic programming, and so on.

Related

Dropping the less significant terms in the middle of calculating time complexity?

We know that for some algorithm with a time complexity of, let's say, T(n) = n^2 + n + 1, we can drop the less significant terms and say that it has a worst case of O(n^2).
What about when we're in the middle of calculating time complexity of an algorithm such as T(n) = 2T(n/2) + n + log(n)? Can we just drop the less significant terms and just say T(n) = 2T(n/2) + n = O(n log(n))?
In this case, yes, you can safely discard the dominated (log n) term. In general, you can do this any time you only need the asymptotic behaviour rather than the exact formula.
When you apply the Master theorem to solve a recurrence relation like
T(n) = a T(n/b) + f(n)
asymptotically, then you don't need an exact formula for f(n), just the asymptotic behaviour, because that's how the Master theorem works.
In your example, a = 2, b = 2, so the critical exponent is c = 1. Then the Master theorem tells us that T(n) is in Θ(n log n) because f(n) = n + log n, which is in Θ(n^c) = Θ(n).
We would have reached the same conclusion using f(n) = n, because that's also in Θ(n). Applying the theorem only requires knowing the asymptotic behaviour of f(n), so in this context it's safe to discard dominated terms which don't affect f(n)'s asymptotic behaviour.
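A quick numeric illustration of this (my own sketch, evaluating both recurrences on powers of two with T(1) = 1 assumed):

import math

def T_full(n):
    # T(n) = 2 T(n/2) + n + log(n)
    return 1 if n == 1 else 2 * T_full(n // 2) + n + math.log2(n)

def T_drop(n):
    # Same recurrence with the dominated log(n) term dropped
    return 1 if n == 1 else 2 * T_drop(n // 2) + n

for k in (10, 15, 20):
    n = 2 ** k
    print(n, T_full(n) / (n * math.log2(n)), T_drop(n) / (n * math.log2(n)))
    # Both ratios approach 1 as n grows, so both recurrences are Theta(n log n).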
First of all, you need to understand that T(n) = n^2 + n + 1 is a closed-form expression; in simple terms that means you can plug in a value for n and get the value of the whole expression.
On the other hand, T(n) = 2T(n/2) + n + log(n) is a recurrence relation; the expression is defined recursively, and to get a closed-form expression you have to solve the recurrence.
Now to answer your question: in general we drop lower-order terms and coefficients when we can clearly see the highest-order term. In T(n) = n^2 + n + 1 that is n^2. But in a recurrence relation there is no such highest-order term, because it is not a closed-form expression.
One thing to observe, though, is that the highest-order term in the closed form of a recurrence is the depth of the recursion tree multiplied by the highest-order term inside the recurrence, so in your case that is depthOf(2T(n/2)) * n, which works out to log(n) * n. So in terms of big-O notation it is O(n log n); a level-by-level sketch of this is shown below.
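Here is a rough level-by-level version of that recursion-tree argument (a sketch of my own, not from the answer):

import math

def level_costs(n):
    # Recursion tree for T(n) = 2 T(n/2) + n + log(n):
    # level i has 2^i nodes, each holding a subproblem of size n / 2^i.
    costs = []
    level, size = 0, n
    while size > 1:
        per_node = size + math.log2(size)       # non-recursive work per node
        costs.append((2 ** level) * per_node)   # total work on this level
        level, size = level + 1, size // 2
    return costs

costs = level_costs(2 ** 20)
print(len(costs))               # about log2(n) levels deep
print(costs[0], costs[-1])      # every level costs on the order of n
# ~log(n) levels, each costing on the order of n, gives the O(n log n) total.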

How does the dominant term help determine time complexity using Big-O? [closed]

I don't quite understand the concept of dominant terms or how to determine time complexity using Big-O. For example, what is the dominant term of N(100N + 200N^3) + N^3? If anyone could explain it, that would be very helpful.
The dominant term is the one that gets biggest (i.e. dominates) as N gets bigger.
For example:
N(100N + 200N^3) + N^3
can be rewritten as
(100 * N^2) + (200 * N^4) + N^3
and as N gets very large, the N^4 is going to get biggest (irrespective of the 200 that you multiply it by).
So that would be O(N^4).
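To make that concrete (a small script of my own, not part of the answer), evaluate the three terms for increasing N and look at how much of the total the 200*N^4 term supplies:

for N in (10, 100, 1000, 10000):
    t2 = 100 * N ** 2
    t3 = N ** 3
    t4 = 200 * N ** 4
    total = t2 + t3 + t4
    print(N, t4 / total)   # fraction of the total contributed by 200*N^4

# The fraction rapidly approaches 1, which is why the N^4 term is dominant
# and the whole expression is O(N^4).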
Let f(N) = 100N^2 + 200N^4 + N^3. As N increases, the value of f(N) is dominated by the 200N^4 term. This would be the case even if the coefficient of the N^4 term were smaller than those of the lower-order terms.
Consider the simpler example g(N) = N^2 + 4N. When N = 1000, we get g(1000) = 10^6 + 4000, which is approximately 10^6, since a million plus a (few) thousand is still about a million. So the N^2 term dominates the lower-order terms. This would be the case even if the coefficients of the lower-order terms were large: in g(N) = N^2 + 10^100 N, the N^2 term still dominates the linear term; it's just that the value of N required for the quadratic term to exceed the linear term is larger. As N goes to infinity, we can approximate a function by just its leading term.
As for big-oh notation, you can prove using its definition that a polynomial f(N) can be expressed as O(N^k), where k is the exponent in the leading term. In your example, we can say f(N) = O(N^4). This notation discards the lower order terms and the coefficient of the leading term as they are often irrelevant when comparing different algorithms for their running time.
"Dominant" simply means "the one that grows faster in the long run". That's the best way I can put it.
Let's split your polynomial function into parts:
F(N) = 100N^2 + 200N^4 + N^3
and let g, h, k be three polynomial functions:
g(N) = 200N^4, h(N) = N^3, k(N) = 100N^2.
h is dominated by g, and k is dominated by h, so by transitivity k is dominated by g; hence both h and k are dominated by g.
By "domination" I mean, in the mathematical sense, that the limit of the ratio h(n)/g(n) (or k(n)/g(n)) as n goes to infinity is zero.
So, to know which function is dominated, you need to study the asymptotic behaviour and limits; a numerical check of these limits is shown below.
(This example is adapted from an illustration on an external website.)
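Those limits can be checked numerically as well (a quick sketch of my own, using the same g, h and k):

def g(n): return 200 * n ** 4
def h(n): return n ** 3
def k(n): return 100 * n ** 2

for n in (10, 1000, 100000):
    print(n, h(n) / g(n), k(n) / g(n))
    # Both ratios shrink toward 0, i.e. h and k are dominated by g.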

What is the upper bound of function f(n) = n in Big-O notation and why? [closed]

I was reading the book Algorithm by Karumanchi. In one of the examples it is stated that for the function f(n) = n the Big-O notation is O(n^2). But why is that, and why isn't it O(n) with c = 2 and n0 = 1?
f(n) = O(g(n)) sets an upper limit to the function f(n). But this upper limit need not be tight.
So for the function f(n) = n, we can say f(n) = O(n),
also f(n) = O(n^2), f(n) = O(n^3), and so on. The definition of Big-O doesn't say anything about the tightness of the bound.
Let's first be sure we understand what Karumanchi was saying. First, on page 61, he states that big-O notation "gives the tight upper bound of the given function." (his emphasis). So if O(n) is correct, then O(n^2) is incorrect by his definition.
Then, on page 62, we get the example you cite. He justifies O(n^2) by stating that n <= n^2 for all n >= 1. This is true.
But it is also true that n <= 2n for all n >= 1. (OP's constants.) That justifies the statement n = O(n) with c = 2 and n0 = 1.
So why did he say it's O(n^2)? Who knows? The book is wrong.
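For what it's worth, both bounds discussed above can be spot-checked against the definition over a finite range (a tiny sketch of my own; the constants are the ones mentioned in the question and the book):

# Big-O definition: 0 <= f(n) <= c*g(n) for all n >= n0.
def f(n):
    return n

# n = O(n) with c = 2, n0 = 1 (the OP's constants) ...
assert all(0 <= f(n) <= 2 * n for n in range(1, 10000))
# ... and n = O(n^2) with c = 1, n0 = 1 (the book's looser, non-tight bound).
assert all(0 <= f(n) <= n * n for n in range(1, 10000))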

Master Theorem confusion with the three cases

I know that we can apply the Master Theorem to find the running time of a divide and conquer algorithm, when the recurrence relation has the form of:
T(n) = a*T(n/b) + f(n)
We know the following :
a is the number of subproblems into which the algorithm divides the original problem,
b determines the size of each sub-problem, i.e. n/b,
and finally, f(n) encompasses the cost of dividing the problem and combining the results of the subproblems.
We then compute something (I will come back to the term "something")
and we have 3 cases to check.
The case that f(n) = O(n^(log_b(a) - ε)) for some ε > 0; then T(n) is O(n^(log_b a)).
The case that f(n) = O(n^(log_b a)); then T(n) is O(n^(log_b a) * log n).
If n^(log_b(a) + ε) = O(f(n)) for some constant ε > 0, and if a*f(n/b) <= c*f(n) for some constant c < 1 and almost all n, then T(n) = O(f(n)).
All fine; now I come back to the term "something". How can we use general examples (i.e. ones that use variables and not actual numbers) to decide which case the algorithm is in?
For instance, consider the following:
T(n) = 8T(n/2) + n
So a = 8, b = 2 and f(n) = n
How do I proceed then? How can I decide which case it is? And when f(n) is given as some big-O expression, how are these two things comparable?
The above is just an example to show where I get stuck, so the question is about the general approach.
Thanks
As CLRS suggests, the basic idea is comparing f(n) with n^(log_b a), i.e. n to the power (log of a to the base b). In your hypothetical example, we have:
f(n) = n
n^(log_b a) = n^3, i.e. n cubed, as your recurrence yields 8 subproblems of half the size at every step.
Thus, in this case, n^(log_b a) is larger, since n is O(n^3) (in fact polynomially smaller), and the solution is T(n) = Θ(n^3).
Clearly, the number of subproblems vastly outpaces the work (linear, f(n) = n) you are doing for each subproblem. Thus intuition tells us, and the Master theorem verifies, that it is the n^(log_b a) term that dominates the recurrence.
There is a subtle technicality: the Master theorem requires f(n) to be not just smaller than n^(log_b a) in the big-O sense, but polynomially smaller.
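For this particular example the comparison can also be done numerically (a sketch of my own, assuming T(1) = 1 and n restricted to powers of two):

import math

a, b = 8, 2
print(math.log(a, b))   # critical exponent log_b(a) = 3, so f(n) = n is compared with n^3

def T(n):
    # T(n) = 8 T(n/2) + n
    return 1 if n == 1 else 8 * T(n // 2) + n

for k in (5, 10, 15):
    n = 2 ** k
    print(n, T(n) / n ** 3)   # the ratio settles at a constant, so T(n) = Theta(n^3)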

Solving recurrence: T(n)=sqrt(2)T(n/2)+log(n) [closed]

Given the recurrence T(n) = sqrt(2) T(n/2) + log(n).
The solution points to case 1 of the Master theorem, with a complexity class of O(sqrt(n)). However, to my understanding, log(n) is polynomially greater than sqrt(n). Am I missing something?
I used the definition as follows: the critical exponent is e = log_b(a), where a = sqrt(2) and b = 2. This gives e = 1/2 < 1, and log n is obviously polynomially greater than n^e.
No. log_x n is not greater than √n.
Consider n=256,
√n = 16,
and
log_2 256 = 8 (let us assume base x = 2, as in many computational problems).
In your recurrence,
T(n)= √2 T(n/2) + log(n)
a = √2, b = 2 and f(n) = log(n)
log_b a = log_2 √2 = 1/2.
Since log n < n^a for any a > 0, we have Case 1 of the Master theorem.
Therefore T(n) = Θ(√n).
Using the Master theorem you get: a = sqrt(2), b = 2, and therefore c = log_b(a) = 1/2. Your f(n) = log(n), and therefore you fall into the first case.
So your complexity is O(sqrt(n)).
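A quick numerical check of that conclusion (my own sketch, assuming T(1) = 1 and powers of two):

import math

def T(n):
    # T(n) = sqrt(2) * T(n/2) + log(n)
    return 1.0 if n == 1 else math.sqrt(2) * T(n // 2) + math.log2(n)

for k in (10, 20, 30):
    n = 2 ** k
    print(n, T(n) / math.sqrt(n), math.log2(n) / math.sqrt(n))
    # T(n)/sqrt(n) levels off at a constant while log(n)/sqrt(n) goes to 0,
    # which is exactly why case 1 applies and T(n) = Theta(sqrt(n)).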
