Determine big-O for these functions

For the two questions below, the solution manual states:
b. O(n log n)
c. O(n log n)
But as per my understanding, (b) is O(log n) and (c) is O(sqrt(n)).

I agree with you on (c): it is O(sqrt(n)). Since n^n = e^(n log n), log log(n^n) = log(n log n) = log(n) + log log(n), and the square root grows much faster than this.
For (b), I'm interpreting log(n > 2) to mean log(n) where n > 2; maybe I'm wrong. In that case the second term is O(n log n), which is bigger than the 5 log(n^10) = 50 log(n) of the first term.
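To sanity-check (c) numerically, here is a small Python sketch (an illustration, not a proof) comparing log log(n^n) = log(n) + log log(n) against sqrt(n):

    import math

    # log(log(n^n)) = log(n*log(n)) = log(n) + log(log(n))
    for n in [10, 100, 10**4, 10**8]:
        lhs = math.log(n) + math.log(math.log(n))  # log(log(n^n))
        rhs = math.sqrt(n)
        print(n, round(lhs, 2), round(rhs, 2))

The square root overtakes quickly: at n = 10^8 the left side is about 21, while sqrt(n) is 10,000.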

Related

Question regarding asymptotic runtime behavior

We know that log(n) = O(sqrt(n)).
I am wondering whether it is valid to say that log(n) is Theta(sqrt(n)).
Numerically it looks right to me, yet I am not too sure about it.
I would like some help.
log n is NOT in Theta(sqrt n), since sqrt n is asymptotically greater than log n, meaning that log n isn't in Omega(sqrt n). In other words, sqrt n cannot be an asymptotic lower bound for log n.
Refer to this definition of big theta. Substitute sqrt n for g(n) and log n for f(n) in the definition and you will see that you can easily find a k2 and n0 such that the definition is satisfied (which is why log n is in O(sqrt n)), while finding a suitable k1 will prove impossible (which is why log n is NOT in Omega(sqrt n)).
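The gap can also be made precise with a limit: log n / sqrt n tends to 0, so log n is o(sqrt n), and no constant k1 can work. A short derivation (using L'Hôpital's rule):

    \lim_{n \to \infty} \frac{\log n}{\sqrt{n}}
      = \lim_{n \to \infty} \frac{1/n}{1/(2\sqrt{n})}  % L'Hôpital
      = \lim_{n \to \infty} \frac{2}{\sqrt{n}}
      = 0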

Where do the functions 2n^2 , 100n log n, and (log n) ^3 fit in the big-O hierarchy?

The big-O hierarchy, for any constants a, b > 0 and C > 1, is:
O(a) ⊂ O(log n) ⊂ O(n^b) ⊂ O(C^n).
I need some explanations, thanks.
Leading constants don't matter, so O(2n^2) = O(n^2) and O(100 n log n) = O(n log n).
If f and g are functions, then O(f * g) = O(f) * O(g). Now, you apparently accept that O(log n) < O(n). Multiply both sides by O(n) and you get O(n) * O(log n) = O(n log n) < O(n * n) = O(n^2).
Seeing that O((log n)^3) is less than O(n^a) for any positive a is a little trickier, but if you are willing to accept that O(log n) is less than O(n^a) for any positive a, then you can see it by taking the cube root of O((log n)^3) and O(n^a): you get O(log n) on one side and O(n^(a/3)) on the other, and the inequality you are looking for follows.
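A quick numeric illustration of the resulting ordering (values only, not a proof; note how the constant factor 100 makes 100 n log n exceed 2n^2 at n = 10):

    import math

    # For large n: (log n)^3 < 100*n*log(n) < 2*n^2,
    # though constants dominate at small n.
    for n in [10, 10**3, 10**6]:
        print(n, math.log(n)**3, 100 * n * math.log(n), 2 * n**2)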
You can also think about the number a function produces for a given input: the smaller the number, the faster the algorithm; the larger the number, the slower. For large enough n:
log n < n^b < C^n

What is the complexity of set difference using quicksort and binary search?

We have two sets A and B and we want to compute the set difference A - B. We first sort the elements of B with quicksort, which has average complexity O(n * log n), and then search for each element of A in B with binary search, which has complexity O(log n). What complexity will the entire set-difference algorithm described above have, given that we use quicksort and binary search? I tried the following way to compute the complexity: O(n * log n) + O(log n) = O(n * log n + log n) = O(log n * (n + 1)) = O((n + 1) * log n). Is it correct?
First, constants do not really count in O notation next to a term that grows without bound, so the +1 is absorbed by n, which means O((n + 1) * log n) is just O(n * log n).
Now the important issue: suppose A has m elements. You need to do m binary searches, each with complexity O(log n). So in total, the complexity should be O(n * log n) + O(m * log n) = O((n + m) * log n).
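A minimal sketch of the approach in Python, with the built-in sort standing in for quicksort and bisect_left performing the binary search:

    from bisect import bisect_left

    def set_difference(A, B):
        # Sort B: O(n log n); the built-in sort stands in for quicksort.
        B = sorted(B)
        result = []
        for x in A:  # m binary searches, O(m log n) total
            i = bisect_left(B, x)
            if i == len(B) or B[i] != x:  # x is not in B
                result.append(x)
        return result

    print(set_difference([1, 2, 3, 4], [2, 4, 5]))  # [1, 3]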
O(n * log n) + O(log n) = O(n * log n)
http://en.wikipedia.org/wiki/Big_O_notation#Properties
If a function may be bounded by a polynomial in n, then as n tends to
infinity, one may disregard lower-order terms of the polynomial.

O(n^2) vs O(n (log n)^2)

Is time complexity O(n^2) or O(n (log n)^2) better?
I know that when we simplify, it becomes
O(n) vs O((log n)^2),
and log n < n, but what about (log n)^2?
n is only less than (log n)^2 for values of n less than 0.49... (taking the natural log).
So in general (log n)^2 is better for large n...
But since these O(something) notations always leave out constant factors, in your case it might not be possible to say for sure which algorithm is better...
[Graph: the blue line is n, the green line is (log n)^2.]
Notice how the difference for small values of n isn't so big and might easily be dwarfed by the constant factors not included in the big-O notation.
But for large n, (log n)^2 wins hands down.
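The crossover is easy to verify numerically (a sketch using the natural log, as in the graph):

    import math

    # n < (ln n)^2 holds only below the crossover near n = 0.49
    for n in [0.3, 0.49, 0.5, 1.0, 10.0, 1000.0]:
        print(n, n < math.log(n) ** 2)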
For every constant k, asymptotically log(n)^k < n.
The proof is simple: take the log of both sides of the inequality, and you get
k * log(log(n)) < log(n).
It is easy to see that asymptotically this is correct.
Semantic note: log(n)^k is assumed here to mean log(n) * log(n) * ... * log(n) (k times), NOT log(log(log(...log(n))...)) (k nested applications), as the notation is sometimes also used.
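The "easy to see" step can be spelled out with the substitution m = log(n) (so m -> infinity as n -> infinity):

    (\log n)^k < n
    \iff k \log \log n < \log n   % take logs of both sides
    \iff k \log m < m             % substitute m = \log n

and k log m < m holds for all large m, because log m / m -> 0.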
O(n^2) vs. O(n * (log n)^2)
<=> O(n) vs. O((log n)^2) (divide both by n)
<=> O(sqrt(n)) vs. O(log n) (take square roots)
<=> polynomial vs. logarithmic
Logarithmic wins.
(log n)^2 is better: if you change variables with n = exp(m), the comparison becomes exp(m) vs. m^2, and m^2 is better (smaller) than exp(m).
(log n)^2 is also < n.
Take an example (using log base 10):
n = 5
log n = 0.6989...
(log n)^2 = 0.4885...
You can see that (log n)^2 is smaller still.
Even if you take a much bigger value of n, e.g. 1,000,000,000, then
log n = 9
(log n)^2 = 81,
which is far less than n.
O(n (log n)^2) is better (faster) for large n!
Take the log of both sides:
log(n^2) = 2 log(n)
log(n (log n)^2) = log(n) + 2 log(log(n))
lim n -> infinity [(log(n) + 2 log(log(n))) / (2 log(n))] = 0.5 (use l'Hôpital's rule: http://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule)
Since the ratio of the logs tends to 0.5 < 1, n (log n)^2 grows asymptotically slower than n^2.
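A numeric sketch of that limit (convergence is slow, but the trend toward 0.5 is visible):

    import math

    # ratio of the logs: (log n + 2*log(log n)) / (2*log n) -> 0.5
    for n in [10, 10**3, 10**6, 10**12]:
        num = math.log(n) + 2 * math.log(math.log(n))
        print(n, num / (2 * math.log(n)))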

What is the Big-O of this two-part algorithm?

Given the following algorithm on a dataset of size N:
Separate the data into M = N/lg N blocks in O(N) time.
Partition the blocks in O(M lg M) time. *
What is the big-O? How do I evaluate (N/lg N) * lg(N/lg N)?
If it is not O(N), is there an M small enough that the whole thing does become O(N)?
* The partition algorithm is the STL's stable_partition, which in this example will do M tests and at most M lg M swaps. But the items being swapped are blocks of size lg N. Does this push the practical time of step 2 back up to O(N lg N) if they must be swapped in place?
Not homework, just a working engineer poking around in comp-sci stuff.
You evaluate by doing a bit of math.
log(x/y) = log(x) - log(y)
->
log(N / log(N)) = log(N) - log(log(N))
So, plugging this back in and combining into a single fraction:
N * (log(N) - log(log(N))) / log(N)
=
N - N * (log(log(N)) / log(N))
<=
N, since for large N we have 0 <= log(log(N)) <= log(N), so the subtracted term is nonnegative.
So, the whole thing is O(N).
You can pretty easily guess that it is O(N log N) by noticing that M = N / log N is, itself, O(N). I don't know of a quick way to figure out that it's O(N) without a bit of doubt on my part due to having to multiply in the log M.
It is O(N):
N/lg N * lg(N/lg N) = N/lg N * (lg N - lg lg N) = N * (1 - lg lg N / lg N) <= N
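A quick numeric check of that bound (a sketch, using base-2 logs):

    import math

    # M * lg(M) with M = N / lg(N) stays below N
    for N in [2**10, 2**20, 2**30]:
        M = N / math.log2(N)
        print(N, M * math.log2(M), M * math.log2(M) <= N)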
