Difficult asymptotic (recurrence) function (algorithm analysis)

I am stuck on this one and I don't know how to solve it. No matter what I try, I can't find a way to manipulate the function so that I can represent it in a form that lets me find a g(n) such that T(n) ∈ Θ(g(n)).
The function I am having trouble with is:
$T(n)=4n^4\,T(\sqrt n)+(n^6\lg n+3\lg^7 n)(2n^2\lg n+\lg^3 n)$
Additionally, if you can, could you please check whether I am on the right path with:
$T(n)=T(n-1)+\frac{1}{n}+\frac{1}{n^2}$
To solve it I tried to use $T(n)-T(n-1)=\frac{1}{n}+\frac{1}{n^2}$, so summing the telescoping differences gives $(T(n)-T(n-1))+(T(n-1)-T(n-2))+\ldots+(T(2)-T(1))=T(n)-T(1)=\left(\frac{1}{n}+\frac{1}{n-1}+\ldots+\frac{1}{2}\right)+\left(\frac{1}{n^2}+\frac{1}{(n-1)^2}+\ldots+\frac{1}{2^2}\right)$, i.e. $T(n)=T(1)+\sum_{k=2}^n\frac{1}{k}+\sum_{k=2}^n\frac{1}{k^2}$, and then I used the harmonic series formula. However, I don't know how to continue, finish it, and find the asymptotic bounds.
I hope I am on the right path with the second one; however, I don't know how to solve the first one at all. If I've made any mistakes, please show me the right way so I can correct them.
Thank you very much for your help.
Sorry that for some reason the math doesn't show correctly here.

Following on from the comments:
Solving (2) first since it is more straightforward.
Your expansion attempt is correct. Writing it slightly differently:
$T(n)=T(1)+\underbrace{\sum_{k=2}^n\frac{1}{k}}_{A}+\underbrace{\sum_{k=2}^n\frac{1}{k^2}}_{B}$
A, the harmonic series, is asymptotically equal to the natural logarithm:
$A=\sum_{k=2}^n\frac{1}{k}=\ln n+\gamma-1+o(1)=\Theta(\log n),$
where γ = 0.57721... is the Euler-Mascheroni constant.
B, the sum of inverse squares, is bounded by the infinite sum, which is the famous Basel problem:
$B=\sum_{k=2}^n\frac{1}{k^2}<\sum_{k=1}^{\infty}\frac{1}{k^2}=\frac{\pi^2}{6}=1.6449\ldots$
Therefore, since B is monotonically increasing and bounded above by this value, it is always O(1).
The total complexity of (2) is simply Θ(log n).
(1) is a little more tedious.
Little-o notation denotes a strictly lower complexity class, i.e.:
$f=o(g)\iff\lim_{n\to\infty}\frac{f(n)}{g(n)}=0$
Assume a set of N functions $\{F_i\}$ is ordered in decreasing order of complexity, i.e. $F_2=o(F_1)$ etc. Take a linear combination of them:
$\sum_{i=1}^{N}c_iF_i(n)=c_1F_1(n)\,(1+o(1))=\Theta(F_1(n))$
Thus a sum of different functions is asymptotically equal to the one with the highest growth rate.
To sort the terms in the expansion of the two parentheses, note that
$\lg^a n=o(n^b)\quad\text{for any constants }a>0,\ b>0,$
provable by applying L'Hopital's rule. So the only asymptotically significant term is $n^6\lg n\cdot 2n^2\lg n=2n^8\lg^2 n$.
Expand the recurrence as before, noting that (i) the factor $4n^4$ accumulates, and (ii) the parameter of the m-th expansion is $n^{1/2^m}$ (repeated square root).
The new term added by the m-th expansion is therefore (I will assume you can derive this since you were able to do the same for (2)):
$t_m=\underbrace{4^m n^{8(1-2^{-m})}}_{\text{accumulated factor}}\cdot\Theta\!\left(n^{8\cdot 2^{-m}}\,\tfrac{\lg^2 n}{4^m}\right)=\Theta\!\left(n^8\lg^2 n\right)$
Rather surprisingly, each added term is precisely equal to the first.
Assume that the stopping condition for the recursive expansion is n < 2 (which of course rounds down to T(1)). The maximum number of expansions M then satisfies
$n^{1/2^M}<2\iff\lg n<2^M\iff M>\lg\lg n,$
i.e. $M=\Theta(\lg\lg n)$.
Since each added term $t_m$ is always the same, simply multiply by the maximum number of expansions:
$T(n)=\Theta\!\left(M\cdot n^8\lg^2 n\right)=\Theta\!\left(n^8\lg^2 n\,\lg\lg n\right)$
Function (1) is therefore $\Theta\!\left(n^8\lg^2 n\,\lg\lg n\right)$.
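As an optional sanity check (not part of the original answer), here is a small Python sketch that evaluates both recurrences numerically and compares them with the bounds derived above. The base case T(1) = 1 and the cutoff n < 2 are assumptions made only for this experiment.

import math

# Recurrence (1): T(n) = 4 n^4 T(sqrt(n)) + (n^6 lg n + 3 lg^7 n)(2 n^2 lg n + lg^3 n)
def T1(n):
    if n < 2:
        return 1.0  # assumed base case
    lg = math.log2(n)
    return 4 * n**4 * T1(math.sqrt(n)) + (n**6 * lg + 3 * lg**7) * (2 * n**2 * lg + lg**3)

# Recurrence (2): T(n) = T(n-1) + 1/n + 1/n^2, with T(1) = 1 assumed.
def T2(n):
    t = 1.0
    for k in range(2, n + 1):
        t += 1.0 / k + 1.0 / k**2
    return t

# (1): the ratio T1(n) / (n^8 lg^2 n lg lg n) should stay roughly constant.
for p in (8, 16, 32, 64):
    n = 2.0**p
    lg = math.log2(n)
    print(p, T1(n) / (n**8 * lg**2 * math.log2(lg)))

# (2): T2(n) - ln(n) should approach a constant, i.e. T2(n) = Theta(log n).
for n in (10**3, 10**5, 10**6):
    print(n, T2(n) - math.log(n))

The roughly constant ratios in the first loop and the settling difference in the second are consistent with Θ(n^8 lg^2 n lg lg n) and Θ(log n) respectively.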


What is the Value of LogN [duplicate]

This question already has answers here:
What does O(log n) mean exactly?
So I have been studying Big O notation (noob here), and most of it looks like an alien language to me. I understand the basics of logs, e.g. log base 2 of 16 is the power you have to raise 2 to in order to get 16. But for binary search, O(log N) makes no sense to me: what exactly is the value of log N, and what is the base here? I have searched the internet, but everyone explains this mathematically, which I can't follow since I am not good with math. Can someone explain this to me in basic English, not alien language like "exponential"? I already know how binary search works.
Second question: I don't even know what the symbol in f = Ω(g) means. Can someone explain to me in plain English what is required here? I don't want the answer, just what this means.
Question:
In each of the following situations, indicate whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g)).
     f(n)                 g(n)
(a)  n - 100              n - 200
(b)  100n + log n         n + (log n)^2
(c)  log 2n               log 3n
Update: I just realized that I studied algorithms from MIT's videos. Here is the link to the first of those videos. Keep going to next lecture as far as you want.
Clearly, Log(n) has no value without fixing what n is and what base of log we are using. The purpose of mentioning log(n) so often is to help people understand the rate of growth of a particular algorithm or piece of code. It is only to help people see things in perspective. To build your perspective, see the comparison below:
1 < log n < n < n log n < n^2 < 2^n < n! < n^n
The line above says that after some value of n on the number line, the rate of growth of the above written functions is in the order mentioned there. This way, decision makers can decide which approach they want to take in solving their problem (and students can pass their Algorithm Design and Analysis exam).
Coming to your question: when books say "binary search's run time is Log(n)", they essentially mean that if you have n elements, the running time of binary search is proportional to Log(n), and if you have 17n elements then you can expect an answer in a time duration proportional to Log(17n). In this case, the base of the Log function is 2, because at every step binary search has exactly 2 paths to pick from.
Since the log function's base can be converted from any number to any other by multiplying by a constant, stating the base becomes irrelevant: in Big O notation, constants are ignored.
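To make the Log(n) claim concrete, here is a short Python sketch (my own illustration, not from the original post) that counts how many times binary search halves the search interval and compares that count with log2(N):

import math

def binary_search_steps(sorted_list, target):
    # Returns (index or -1, number of halving steps taken).
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

for n in (16, 1024, 1000000):
    data = list(range(n))
    _, steps = binary_search_steps(data, -1)  # target absent: worst case
    print(n, steps, math.ceil(math.log2(n)))  # steps is roughly log2(n)

Doubling n adds only about one extra step, which is exactly the behavior O(log N) describes.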
Coming to the answer to your second question, images will explain it the best.
Big O is only about an upper bound on a function. f(n) = O(g(n)) means there are positive constants c and k such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ k.
The importance of k is that beyond 'k' this Big O statement stays true, no matter what value n takes. If we can't fix a 'k', we cannot say that the growth rate will always stay below the function mentioned in O(...).
The importance of c is that it lets us say it is the function inside O(...) that really matters, not its constant factors.
Omega is simply the inversion of Big O. If f(n) = O(g(n)), then g(n) = Ω(f(n)). In other words, Ω() is about your function staying above what is mentioned in Ω(...) for a given value of another 'k' and another 'c'.
The pictorial visualization is the mirror image: beyond some k, f(n) stays above c * g(n).
Finally, Big Theta is about finding a mathematical function that grows at the same rate as your given function. But how do you prove that this function grows the same as yours? By using two constant values.
Since it grows at the same rate as your given function, you should be able to find two constants 'c1' and 'c2' such that c1 * g(n) lies above your function f(n) and c2 * g(n) lies below it.
The idea behind Big Theta is to provide a function with the same rate of growth. Note that there may be no single constant 'c' that makes c * g(n) and f(n) overlap exactly; nobody is concerned with that. The only concern is being able to sandwich f(n) between two scaled copies of g(n), so that we can confidently say we have found the rate of growth of f(n). (For example, f(n) = 3n^2 + 5n is Θ(n^2), since 3n^2 ≤ f(n) ≤ 4n^2 for all n ≥ 5.)
How to apply the above learned ideas to your question?
Let's take each of them one by one. You can use some online tool to plot these functions and see first hand, how these function behave when you go along the number line.
f(n) = n - 100 and g(n) = n - 200
Here, the rate of growth can be found by differentiating both functions with respect to n: d(f(n))/dn = d(g(n))/dn = 1. Therefore, even though the running times f(n) and g(n) may be different, their rate of growth is the same. Can you pick 'c1' and 'c2' such that c1 * g(n) < f(n) < c2 * g(n)?
f(n) = 100n + log(n) and g(n) = n + (log n)^2
Differentiate and tell if you can relate the functions as Big O or Big Theta or Big Omega.
f(n) = log (2n) and g(n) = log (3n)
Same as above.
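If you don't have a plotting tool handy, a quick Python sketch like the one below (my own, using the readings of the three pairs given above; note that I read (c) as log(2n) versus log(3n)) tabulates f(n)/g(n) for growing n. A ratio that settles near a nonzero constant suggests Θ; a ratio that blows up suggests f = Ω(g) but not O(g); a ratio that goes to zero suggests f = O(g) but not Ω(g).

import math

pairs = {
    "(a)": (lambda n: n - 100,               lambda n: n - 200),
    "(b)": (lambda n: 100 * n + math.log(n), lambda n: n + math.log(n) ** 2),
    "(c)": (lambda n: math.log(2 * n),       lambda n: math.log(3 * n)),
}

for name, (f, g) in pairs.items():
    # Ratio f(n)/g(n) for increasingly large n.
    print(name, ["%.3f" % (f(n) / g(n)) for n in (10**3, 10**6, 10**9)])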
(The images are taken from different pages on this website: http://xlinux.nist.gov/dads/HTML/)
My experience: Try to compare the growth rate of a lot of different functions. Eventually you will get the hang of it for all of them and it will become very intuitive for you. Given concentrated effort for one week or two, this concept cannot remain esoteric for anyone.
First of all, let's go through the notations. I'm assuming from the questions that
O(f) is upper bound,
Ω(f) is lower bound, and
Θ(f) is both
For O(log(N)) in this case, generally the base isn't given because the general form of log(N) is known regardless of the base. E.g.,
(graph of the log function omitted; source: rapidtables.com)
So if you've worked through the binary search algorithm (I suggest you do this if you haven't), you should find that the worst case scenario (upper bound) is log_2(N). So given N terms, it will take "log_2(N) computations" in the worst case in order to find the term.
For your second question,
You are simply comparing the asymptotic growth rates (run times) of f and g.
f = O(g)
means that g is an upper bound on f, i.e., asymptotically f takes no longer to compute than g. Alternately,
f = Ω(g)
means that g is a lower bound on f, i.e., asymptotically f takes at least as long to compute as g. Lastly,
f = Θ(g)
means that g is both an upper and a lower bound on f, i.e., the run times grow at the same rate.
You need to compare the two functions for each question and determine which will take longer to compute. As Mitch mentioned you can check here where this question has already been answered.
Edit: accidentally linked e^x instead of log(x)
The reason the base of the log is never specified is because it is actually completely irrelevant. You can convince yourself of this in three steps:
First, recall that log_2(x) = log_10(x)/log_10(2). But also recall that log_10(2) is a constant, which we'll call k2, so really, log_2(x) * k2 = log_10(x)
Second, recall that this is not unique to logs of base 2. The constants of conversion vary, but all the log functions are related to each other through multiplicative constants.
(You can prove this to yourself if you understand the mathematics behind log functions, or you can just work it up very quickly on a spreadsheet-- have a column of log_2(x) and a column of log_3(x) and divide them.)
Finally, remember that in Big Oh notation, constants basically drop out as being irrelevant. Trying to draw a distinction between O(log_2(N)) and O(log_3(N)) is like trying to draw a distinction between O(N) and O(2N). It is a distinction that does not matter because log_2 and log_3 are related through a constant.
Honestly, the base of the log does not matter.
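Here is the "spreadsheet" check suggested above, done in a few lines of Python (an illustrative sketch, not part of the original answer): the ratio log_2(x) / log_3(x) is the same constant for every x, which is exactly why the base drops out of Big O.

import math

for x in (10, 1000, 10**6, 10**9):
    print(x, math.log2(x) / math.log(x, 3))  # always about 1.585

print(math.log(3) / math.log(2))  # the conversion constant, log_2(3), itself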

Analyzing best-case run time of copy algorithm

I'm at a loss as to how to calculate the best-case run time, Ω(f(n)), for this algorithm. In particular, I don't understand how there could even be a "best case"; this seems like a very straightforward algorithm. Can someone give me some hints or a general approach for solving these types of questions?
for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print (k)
Have a look at this question and its answers, "Still sort of confused about Big O notation"; it may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.
As to your specific problem, you are asked to find some (asymptotic) lower bound on the time complexity of the algorithm (you should also clarify whether you mean Big Omega or small omega).
Otherwise you are right, this problem is quite simple. You just have to think about how many times print (k) will be executed for a given n.
You can see that the first loop runs n times, and so does the second.
For the third one, note that if i = 1 and j = n-1, then k runs from 1 to n-1-1 = n-2, which makes me wonder whether your example is correct and whether there should be i-j instead.
In any case, the innermost loop will execute at least n/2 times for a large share of the (i, j) pairs: you subtract i from j, and as j decreases while i increases the difference j-i shrinks to 0 and then goes negative, at which point the innermost loop no longer executes.
Therefore print(k) executes on the order of n * n * n/2 times, which is Omega(n^3), as you can easily verify from the definition.
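As an empirical check (my own sketch, translating the pseudocode directly and counting iterations instead of printing), the count divided by n^3 settles near a constant, which is consistent with the Ω(n^3) claim:

def count_prints(n):
    count = 0
    for i in range(0, n + 1):              # for i = 0 to n do
        for j in range(n, -1, -1):         # for j = n to 0 do
            for k in range(1, j - i + 1):  # for k = 1 to j-i do
                count += 1                 # stands in for print(k)
    return count

for n in (50, 100, 200):
    c = count_prints(n)
    print(n, c, c / n**3)  # the ratio hovers around a constant (about 1/6)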
Just beware, as @Yves points out in his answer, that this all assumes print(k) is done in constant time, which is probably what was meant in your exercise.
In the real world it wouldn't be quite true, because printing the number also takes time, and that time grows with log(n) if print(k) prints in base 2 or base 10 (it would be n for base 1).
Otherwise, the best case here is the same as the worst case. There is just one input of size n, so you cannot find "some best instance" of size n and a "worst instance" of size n; there is just the one instance of size n.
I do not see anything in that function that varies between different executions except for n.
Aside from that, as far as I can tell, there is only one case, and it is both the best case and the worst case at the same time.
It's O(n^3), in case you're wondering.
If this is a sadistic exercise, the complexity is probably Theta(n^3 * log(n)), because of the triple loop but also due to the fact that the number of digits to be printed increases with n.
As n is the only factor in the behavior of your algorithm, the best case is n being 0: one initialization, one test, and then done...
Edit:
The best case describes the algorithm's behavior under optimal conditions. You gave us a code snippet which depends on n. What is the best case? n being zero.
Another example: what is the best case of performing a linear search on an array? It is either that the key matches the first element, or that the array is empty.

Complexities and run times

I tried looking around to see if my question had already been answered, but I haven't stumbled upon anything that could help me.
When dealing with run-time complexities, do you account for the operands? From my understanding, each different operand can take some amount of time, so does counting only the loops give you just a lower bound? If this is incorrect, can you please explain to me where my logic is wrong.
for example:
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i,j] = b[i,j] + c[i,j]
Would that just be O(n^2), right? Or would it be O(a*n^2) because of the addition operand? And you usually use "O" for run time, correct?
for example:
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i,j] -= b[i,j] * c[i,j]
Would that just be O(n^2) again? Or would it be O(a^2*n^2) because of the subtraction and multiplication operands?
Thanks Stack!
I suggest you read up on what the O notation means, but let me present you with a brief overview:
When we say f(x) = O(g(x)), we mean that for some constant c independent of input size,
f(x) <= c*g(x) for all x >= k
In other words, beyond a certain point k, the curve f(n) is always bounded above by the curve c*g(n).
Now in the case you have considered, addition, subtraction, and multiplication are all primitive operations that take constant time (O(1)).
Let's say the addition of two numbers takes 'a' time and assigning the result takes 'b' time.
So for this code:
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i,j] = b[i,j] + c[i,j]
Let's be sloppy and ignore the for loop's own assignment and update operations.
The running time is T(n) = (a + b)*n^2.
Notice that this is just O(n^2). Why?
As per the definition, this means we can identify some point k beyond which, for some constant c, the curve T(n) is always bounded above by c*n^2.
Realize that this is indeed true: we can always pick a sufficiently large constant c so that the c*n^2 curve bounds the given curve from above.
This is why people say: drop the constants!
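As a concrete illustration (my own sketch, modeling each addition as cost a and each assignment as cost b, both hypothetical unit costs), the total cost is (a + b)*n^2, and dividing by n^2 always gives the same constant:

def copy_add_cost(n, a=1, b=1):
    # Models the nested loop: one addition (cost a) and one assignment (cost b) per iteration.
    cost = 0
    for i in range(n):
        for j in range(n):
            cost += a + b  # a[i][j] = b[i][j] + c[i][j]
    return cost

for n in (100, 200, 400):
    cost = copy_add_cost(n)
    print(n, cost, cost / n**2)  # the ratio is the constant (a + b), which Big O discards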
Bottomline:
Big O of f(n) is g(n) means: to the right of some vertical line, the curve f(n) is always bounded above by a constant multiple of g(n).

Max and min of functions in order-notation

Order notation question, big-o notation and the like:
What does the max and min of a function mean in terms of order notation?
for example:
DEFINITION:
"Maximum" rules: Suppose that f(n) and g(n) are positive functions for all n > n0.
Then:
O[f(n) + g(n)] = O[max (f(n),g(n)) ]
etc...
I need to use these definitions to prove something for homework.. thanks for the help!
EDIT: f(n) and g(n) are supposed to represent running times of algorithms with respect to input size
With Big-O notation, you're talking about upper bounds on computation. This means that you are only interested in the largest term of the combined function as n (the variable) tends to infinity. What's more, you drop any constant multipliers as well since the formal definition of the notation allows you to discard those parts, which is important as it allows you to focus on the algorithm's behavior and not on the implementation of the algorithm.
So, we are combining two functions through summation. Well, there are two cases (strictly three, but it's symmetric):
One function is of higher order than the other. At that point, the higher order function dominates the lesser; you can pretend that the lesser doesn't exist.
Both functions are of the same order. Then the sum is just a kind of weighted combination of the two, and since we've already thrown away the scaling factors, you end up with the same function again, just with a different scaling factor.
The net result looks very much like the max() function, even though it's really not (it's actually a generalization of max over the space of functions), so it's very convenient to use the notation.
It is a regular max between natural numbers. f is a function from the naturals to the naturals [f: N -> N], and so is g.
Thus, f(n) is in N, and so max(f(n),g(n)) is just standard max: f(n) > g(n) ? f(n) : g(n)
O[max(f(n), g(n))] means: whichever of f or g is more "expensive" is the upper bound.
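A short sketch of the inequality behind the "maximum" rule (my own write-up of the standard argument, assuming f and g are positive for all n > n0):
$\max(f(n),g(n)) \le f(n)+g(n) \le 2\max(f(n),g(n))$
The left inequality shows f + g = Ω(max(f, g)); the right one, with constant c = 2, shows f + g = O(max(f, g)). Together they give O[f(n) + g(n)] = O[max(f(n), g(n))], which is the rule you are asked to use.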

Is the time complexity of the empty algorithm O(0)?

So, given the empty program (one that does nothing):
Is the time complexity of this program O(0)? In other words, is 0 O(0)?
I thought answering this in a separate question would shed some light on this question.
EDIT: Lots of good answers here! We all agree that 0 is O(1). The question is, is 0 O(0) as well?
From Wikipedia:
A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
From this description, since the empty algorithm requires 0 time to execute, it has an upper bound performance of O(0). This means, it's also O(1), which happens to be a larger upper bound.
Edit:
More formally from CLR (1ed, pg 26):
For a given function g(n), we denote O(g(n)) the set of functions
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
The asymptotic time performance of the empty algorithm, executing in 0 time regardless of the input, is therefore a member of O(0).
Edit 2:
We all agree that 0 is O(1). The question is, is 0 O(0) as well?
Based on the definitions, I say yes.
Furthermore, I think there's a bit more significance to the question than many answers indicate. By itself the empty algorithm is probably meaningless. However, whenever a non-trivial algorithm is specified, the empty algorithm could be thought of as lying between consecutive steps of the algorithm being specified as well as before and after the algorithm steps. It's nice to know that "nothingness" does not impact the algorithm's asymptotic time performance.
Edit 3:
Adam Crume makes the following claim:
For any function f(x), f(x) is in O(f(x)).
Proof: let S be a subset of R and T be a subset of R* (the non-negative real numbers) and let f(x):S ->T and c ≥ 1. Then 0 ≤ f(x) ≤ f(x) which leads to 0 ≤ f(x) ≤ cf(x) for all x∈S. Therefore f(x) ∈ O(f(x)).
Specifically, if f(x) = 0 then f(x) ∈ O(0).
It takes the same amount of time to run regardless of the input, therefore it is O(1) by definition.
Several answers say that the complexity is O(1) because the time is a constant and the time is bounded by the product of some coefficient and 1. Well, it is true that the time is a constant and it is bounded that way, but that doesn't mean that the best answer is O(1).
Consider an algorithm that runs in linear time. It is ordinarily designated as O(n) but let's play devil's advocate. The time is bounded by the product of some coefficient and n^2. If we consider O(n^2) to be a set, the set of all algorithms whose complexity is small enough, then linear algorithms are in that set. But it doesn't mean that the best answer is O(n^2).
The empty algorithm is in O(n^2) and in O(n) and in O(1) and in O(0). I vote for O(0).
I have a very simple argument for the empty algorithm being O(0): For any function f(x), f(x) is in O(f(x)). Simply let f(x)=0, and we have that 0 (the runtime of the empty algorithm) is in O(0).
On a side note, I hate it when people write f(x) = O(g(x)), when it should be f(x) ∈ O(g(x)).
Big O is asymptotic notation. To use big O, you need a function - in other words, the expression must be parametrized by n, even if n is not used. It makes no sense to say that the number 5 is O(n), it's the constant function f(n) = 5 that is O(n).
So, to analyze time complexity in terms of big O you need a function of n. Your algorithm always makes arguably 0 steps, but without a varying parameter talking about asymptotic behaviour makes no sense. Assume that your algorithm is parametrized by n. Only now you may use asymptotic notation. It makes no sense to say that it is O(n2), or even O(1), if you don't specify what is n (or the variable hidden in O(1))!
As soon as you settle on the number of steps, it's a matter of the definition of big O: the function f(n) = 0 is O(0).
Since this is a low-level question it depends on the model of computation.
Under "idealistic" assumptions, it is possible you don't do anything.
But in Python, you cannot write just def f(x):, only def f(x): pass. If you assume that every instruction, even pass (a NOP), takes time, then the complexity is f(n) = c for some constant c, and since c != 0 you can only say that f is O(1), not O(0).
It's worth noting big O by itself does not have anything to do with algorithms. For example, you may say sin x = x + O(x3) when discussing Taylor expansion. Also, O(1) does not mean constant, it means bounded by constant.
All of the answers so far address the question as if there is a right and a wrong answer. But there isn't. The question is a matter of definition. Usually in complexity theory the time cost is an integer, although that too is just a definition. You're free to say that the empty algorithm that quits immediately takes 0 time steps or 1 time step. It's an abstract question because time complexity is an abstract definition. In the real world, you don't even have time steps, you have continuous physical time; it may be true that one CPU has clock cycles, but a parallel computer could easily have asynchronous clocks, and in any case a clock cycle is extremely small.
That said, I would say that it's more reasonable to say that the halt operation takes 1 time step rather than that it takes 0 time steps. It does seem more realistic. For many situations it's arguably very conservative, because the overhead of initialization is typically far greater than executing one arithmetic or logical operation. Giving the empty algorithm 0 time steps would only be reasonable to model, for example, a function call that is deleted by an optimizing compiler that knows that the function won't do anything.
It should be O(1). The coefficient is always 1.
Consider:
If something grows like 5n, you don't say O(5n), you say O(n) [in other words, O(1n)]
If something grows like 7n^2, you don't say O(7n^2), you say O(n^2) [in other words, O(1n^2)]
Likewise you should say O(1), not O(some other constant)
There is no such thing as O(0). Even an oracle machine or a hypercomputer requires time for one operation, e.g. solve(the_goldbach_conjecture), ergo:
All machines, theoretical or real, finite or infinite, produce algorithms with a minimum time complexity of O(1).
But then again, this code right here is O(0):
// Hello world!
:)
I would say it's O(1) by definition, but O(0) if you want to get technical about it: since O(k1g(n)) is equivalent to O(k2g(n)) for any constants k1 and k2, it follows that O(1 * 1) is equivalent to O(0 * 1), and therefore O(0) is equivalent to O(1).
However, the empty algorithm is not like, for example, the identity function, whose definition is something like "return your input". The empty algorithm is more like an empty statement, or whatever happens between two statements. Its definition is "do absolutely nothing with your input", presumably without even the implied overhead of simply having input.
Consequently, the complexity of the empty algorithm is unique in that O(0) has a complexity of zero times whatever function strikes your fancy, or simply zero. It follows that since the whole business is so wacky, and since O(0) doesn't already mean something useful, and since it's slightly ridiculous to even discuss such things, a reasonable special case for O(0) is something like this:
The complexity of the empty algorithm is O(0) in time and space. An algorithm with time complexity O(0) is equivalent to the empty algorithm.
So there you go.
Given the formal definition of Big O:
Let f(x) and g(x) be two functions defined over the set of real numbers. Then, we write:
f(x) = O(g(x)) as x approaches infinity iff there exists a real M and a real x0 so that:
|f(x)| <= M * |g(x)| for every x > x0
As I see it, if we substitute g(x) = 0 (in order to have a program with complexity O(0)), we must have:
|f(x)| <= 0, for every x > x0 (the constraint of existence of a real M and x0 is practically lifted here)
which can only be true when f(x) = 0.
So I would say that not only the empty program is O(0), but it is the only one for which that holds. Intuitively, this should've been true since O(1) encompasses all algorithms that require a constant number of steps regardless of the size of its task, including 0. It's essentially useless to talk about O(0); it's already in O(1). I suspect it's purely out of simplicity of definition that we use O(1), where it could as well be O(c) or something similar.
0 = O(f) for every function f, since 0 <= |f|, so it is also O(0).
Not only is this a perfectly sensible question, but it is important in certain situations involving amortized analysis, especially when "cost" means something other than "time" (for example, "atomic instructions").
Let's say there is a datastructure featuring multiple operation types, for which an amortized analysis is being conducted. It could well happen that one type of operation can always be funded fully using "coins" deposited during previous operations.
There is a simple example of this: the "multipop queue" described in Cormen, Leiserson, Rivest, Stein [CLRS09, 17.2, p. 457], and also on Wikipedia. Each time an item is pushed, a coin is put on the item, for a total amortized cost of 2. When (multi) pops occur, they can be fully paid for by taking one coin from each item popped, so the amortized cost of MULTIPOP(k) is O(0). To wit:
Note that the amortized cost of MULTIPOP is a constant (0)
...
Moreover, we can also charge MULTIPOP operations nothing. To pop the
first plate, we take the dollar of credit off the plate and use it to
pay the actual cost of a POP operation. To pop a second plate, we
again have a dollar of credit on the plate to pay for the POP
operation, and so on. Thus, we have always charged enough up front to
pay for MULTIPOP operations. In other words, since each plate on the
stack has 1 dollar of credit on it, and the stack always has a
nonnegative number of plates, we have ensured that the amount of
credit is always nonnegative.
Thus O(0) is an important "complexity class" for certain amortized operations.
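For concreteness, here is a small Python sketch of the coin argument (my own illustration, not code from CLRS): each PUSH is charged 2 units (1 for the actual work plus 1 coin left on the item), MULTIPOP is charged nothing, and yet the collected charges always cover the actual work and the credit never goes negative.

class MultipopStack:
    # Amortized accounting: PUSH is charged 2 (1 actual + 1 coin), MULTIPOP is charged 0.
    def __init__(self):
        self.items = []
        self.credit = 0        # coins sitting on stacked items
        self.actual_cost = 0   # total actual work done
        self.charged = 0       # total amortized charges collected

    def push(self, x):
        self.items.append(x)
        self.actual_cost += 1  # one unit of actual work
        self.credit += 1       # deposit one coin on the new item
        self.charged += 2      # amortized cost of PUSH is 2

    def multipop(self, k):
        popped = 0
        while self.items and popped < k:
            self.items.pop()
            self.actual_cost += 1  # one unit of actual work per popped element...
            self.credit -= 1       # ...paid for by the coin sitting on that element
            popped += 1
        self.charged += 0          # amortized cost of MULTIPOP is 0

s = MultipopStack()
for i in range(100):
    s.push(i)
s.multipop(40)
s.multipop(1000)  # pops the remaining 60
print(s.actual_cost, s.charged, s.credit)  # 200, 200, 0: charges cover actual cost, credit stays nonnegative

In this accounting scheme the amortized cost of MULTIPOP really is 0, which is exactly the sense in which O(0) is meaningful here.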
O(1) means the algorithm's time complexity is always constant.
Let's say we have this algorithm (in C):
int doSomething(int n[])
{
    int x = n[0]; // This line accesses an array position, so it takes (constant) time.
    int y = n[1]; // Same here.
    return x + y; // The function returns a value, so its return type is int rather than void.
}
I am ignoring the fact that the array could have less than 2 positions, just to keep it simple.
If we count the 2 most expensive lines, we have a total time of 2.
2 = O(1), because:
2 <= c * 1, if c = 2, for every n > 1
If we have this code:
public void doNothing(){}
And we count it as having 0 expensive lines, there is no difference between saying it has O(0), O(1), or O(1000), because for every one of these functions we can prove the same theorem.
Normally, if the algorithm takes a constant number of steps to complete, we say it has O(1) time complexity.
I guess this is just a convention, because you could use any constant number to represent the function inside the O().
No. It's O(c) by convention whenever you don't have dependence on input size, where c is any positive constant (typically 1 is used - O(1) = O(12.37)).
