Is it correct to write the following equalities, written in the image?
I'm not sure about the passage marked in red.
Furthermore, I know that constants are neglected, e.g. O(12 log b) = O(log b).
Does this mean that O(log b^12) = O(log b)?
O(log b^12) = O(log b) is correct for the reason you state in the question.
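Spelled out with the standard log identity (my own addition, not part of the original answer):

\log\bigl(b^{12}\bigr) = 12 \log b, \qquad\text{so}\qquad O(\log b^{12}) = O(12 \log b) = O(\log b).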
The derivation in the image is not correct, because O(f) is an empty set whenever f is a function taking negative values for arbitrarily large inputs. You cannot therefore discard a constant negative factor like -12, because this changes the sign of the function; in general O(f) and O(-f) are not the same, and (at least) one of them is an empty set (unless f is identically zero beyond some bound).
I saw a very short algorithm for merging two binary search trees. I was surprised by how simple, and also how inefficient, it is. But when I tried to work out its time complexity, I failed.
Suppose we have two immutable binary search trees (not balanced) that contain integers, and we want to merge them with the following recursive algorithm in pseudocode. The function insert is auxiliary:
function insert(Tree t, int elem) returns Tree:
    if elem < t.elem:
        return new Tree(t.elem, insert(t.leftSubtree, elem), t.rightSubtree)
    elseif elem > t.elem:
        return new Tree(t.elem, t.leftSubtree, insert(t.rightSubtree, elem))
    else:
        return t

function merge(Tree t1, Tree t2) returns Tree:
    if t1 or t2 is Empty:
        return chooseNonEmpty(t1, t2)
    else:
        return insert(merge(merge(t1.leftSubtree, t1.rightSubtree), t2), t1.elem)
I guess it's an exponential algorithm, but I cannot find an argument for that. What is the worst-case time complexity of this merge algorithm?
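For anyone who wants to experiment with it, here is a rough, runnable Python transcription of the pseudocode above (the Tree class and the use of None for the Empty tree are my own choices, not part of the original):

class Tree:
    def __init__(self, elem, left=None, right=None):
        self.elem = elem
        self.left = left      # left subtree, or None for Empty
        self.right = right    # right subtree, or None for Empty

def insert(t, elem):
    # Returns a new tree; the input trees are never modified (immutability).
    if t is None:             # extra base case: inserting into an Empty tree
        return Tree(elem)
    if elem < t.elem:
        return Tree(t.elem, insert(t.left, elem), t.right)
    elif elem > t.elem:
        return Tree(t.elem, t.left, insert(t.right, elem))
    else:
        return t

def merge(t1, t2):
    if t1 is None or t2 is None:
        return t1 if t1 is not None else t2   # chooseNonEmpty
    return insert(merge(merge(t1.left, t1.right), t2), t1.elem)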
Let's consider the worst case:
At each stage every tree is in the maximally imbalanced state, i.e. each node has at least one sub-tree of size 1.
In this extremal case the complexity of insert is quite easily shown to be Θ(n), where n is the number of elements in the tree, as the height is ~ n/2.
Based on the above constraint, we can deduce a recurrence relation for the time complexity of merge:

T(n, m) = T(n - 2, 1) + T(n - 1, m) + Θ(n + m)

where n, m are the sizes of t1, t2. It is assumed without loss of generality that the right sub-tree always contains a single element. The terms correspond to:
T(n - 2, 1): the inner call to merge on the sub-trees of t1
T(n - 1, m): the outer call to merge on t2
Θ(n + m): the final call to insert
To solve this, let's re-substitute the first term and observe a pattern:
We can solve this sum by stripping out the first term:
Where in step (*) we used a change-in-variable substitution i -> i + 1. The recursion stops when k = n:
T(1, m) is just the insertion of an element into a tree of size m, which is obviously Θ(m) in our assumed setup.
Therefore the absolute worst-case time complexity of merge is
Notes:
The order of the parameters matters. It is thus common to insert the smaller tree into the larger tree (in a manner of speaking).
Realistically you are extremely unlikely to have maximally imbalanced trees at every stage of the procedure. The average case will naturally involve semi-balanced trees.
The optimal case (i.e. always perfectly balanced trees) is much more complex (I am unsure that an analytical solution like the above exists; see gdelab's answer).
EDIT: How to evaluate the exponential sum
Suppose we want to compute the sum:
where a, b, c, n are positive constants. In the second step we changed the base to e (the natural exponential constant). With this substitution we can treat ln c as a variable x, differentiate a geometrical progression with respect to it, then set x = ln c:
But the geometrical progression has a closed-form solution (a standard formula which is not difficult to derive):
And so we can differentiate this result with respect to x n times to obtain an expression for Sn. For the problem above we only need the first two powers:
So that troublesome term is given by:
which is exactly what Wolfram Alpha directly quoted. As you can see, the basic idea behind this was simple, although the algebra was incredibly tedious.
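As a self-contained illustration of the trick (a generic sum of the form Σ i·c^i with my own indices, not the exact sum from the images):

\sum_{i=0}^{k} c^{i} = \left.\sum_{i=0}^{k} e^{ix}\right|_{x=\ln c} = \left.\frac{e^{(k+1)x} - 1}{e^{x} - 1}\right|_{x=\ln c},
\qquad
\sum_{i=0}^{k} i\,c^{i} = \left.\frac{d}{dx}\sum_{i=0}^{k} e^{ix}\right|_{x=\ln c} = \frac{k\,c^{k+2} - (k+1)\,c^{k+1} + c}{(c-1)^{2}}.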
It's quite hard to compute exactly, but it looks like it's not polynomially bounded in the worst case (this is not a complete proof however, you'd need a better one):
insert has complexity O(h) at worst, where h is the height of the tree (i.e. at least log(n), possibly n).
The complexity of merge() could then be of the form: T(n1, n2) = O(h) + T(n1 / 2, n1 / 2) + T(n1 - 1, n2)
Let's consider F(n) such that F(1) = T(1, 1) and F(n+1) = log(n) + F(n/2) + F(n-1). We can probably show that F(n) is smaller than T(n, n) (since F(n+1) contains T(n, n) instead of T(n, n+1)).
We have F(n)/F(n-1) = log(n)/F(n-1) + F(n/2) / F(n-1) + 1
Assume F(n)=Theta(n^k) for some k. Then F(n/2) / F(n-1) >= a / 2^k for some a>0 (that comes from the constants in the Theta).
Which means that (beyond a certain point n0) we always have F(n) / F(n-1) >= 1 + epsilon for some fixed epsilon > 0, which is not compatible with F(n)=O(n^k), hence a contradiction.
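(To fill in why that ratio bound rules out polynomial growth: a ratio bounded below by 1 + epsilon forces at least geometric growth,

F(n) \ge (1+\varepsilon)\,F(n-1) \ge \cdots \ge (1+\varepsilon)^{\,n-n_0}\,F(n_0),

which eventually exceeds c·n^k for any fixed c and k.)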
So F(n) is not a Theta(n^k) for any k. Intuitively, you can see that the problem is probably not the Omega part but the big-O part, hence it's probably not O(n^k) (but technically we used the Omega part here to get a). Since T(n, n) should be even bigger than F(n), T(n, n) should not be polynomial, and is maybe exponential...
But then again, this is not rigorous at all, so maybe I'm actually dead wrong...
I would like some clarification regarding O(N) functions. I am using SICP.
Consider the factorial function in the book that generates a recursive process in pseudocode:
function factorial1(n) {
    if (n == 1) {
        return 1;
    }
    return n*factorial1(n-1);
}
I have no idea how to measure the number of steps. That is, I don't know how "step" is defined, so I used the statement from the book to define a step:
Thus, we can compute n! by computing (n-1)! and multiplying the result by n.
I thought that was what they meant by a step. For a concrete example, if we trace (factorial 5):
factorial(1) = 1 = 1 step (base case - constant time)
factorial(2) = 2*factorial(1) = 2 steps
factorial(3) = 3*factorial(2) = 3 steps
factorial(4) = 4*factorial(3) = 4 steps
factorial(5) = 5*factorial(4) = 5 steps
I think this is indeed linear (number of steps is proportional to n).
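If it helps, here is a small Python check of that count, where one "step" is one invocation of the function, as in the trace above (my own snippet, not from SICP):

def factorial_steps(n):
    # Returns (n!, steps), counting one "step" per invocation.
    if n == 1:
        return 1, 1
    value, steps = factorial_steps(n - 1)
    return n * value, steps + 1

print(factorial_steps(5))   # prints (120, 5)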
On the other hand, here is another factorial function I keep seeing, which has a slightly different base case.
function factorial2(n) {
    if (n == 0) {
        return 1;
    }
    return n*factorial2(n-1);
}
This is exactly the same as the first one, except another computation (step) is added:
factorial(0) = 1 = 1 step (base case - constant time)
factorial(1) = 1*factorial(0) = 2 steps
...
Now I believe this is still O(N), but am I correct if I say factorial2 is more like O(n+1) (where 1 is the base case) as opposed to factorial1 which is exactly O(N) (including the base case)?
One thing to note is that factorial1 is incorrect for n = 0: the recursion never reaches its base case (n keeps decreasing past 1), which in typical implementations eventually causes a stack overflow. factorial2 is correct for n = 0.
Setting that aside, your intuition is correct. factorial1 is O(n) and factorial2 is O(n + 1). However, since the effect of n dominates over the constant term (the + 1), it's typical to simplify this by saying it's O(n). The Wikipedia article on Big O notation describes this:
...the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
From another perspective though, it's more accurate to say that these functions execute in pseudo-polynomial time. This means that it is polynomial with respect to the numeric value of n, but exponential with respect to the number of bits required to represent the value of n. There is an excellent prior answer that describes the distinction.
What is pseudopolynomial time? How does it differ from polynomial time?
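Concretely (my own illustration, not from the linked answer): if n is a b-bit number, then

n \approx 2^{b} \quad\Longrightarrow\quad \Theta(n) \text{ multiplications} = \Theta\bigl(2^{b}\bigr) \text{ work in terms of the input length } b.

For example, n = 1,000,000 fits in 20 bits but still costs about a million multiplications.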
Your pseudocode is still pretty vague as to the exact details of its execution. A more explicit version could be:
function factorial1(n) {
    r1 = (n == 1);            // one step
    if (r1) { return 1; }     // second step ... will stop only if n == 1
    r2 = factorial1(n-1);     // third step ... in addition to however many steps
                              //   it takes to compute factorial1(n-1)
    r3 = n * r2;              // fourth step
    return r3;
}
Thus we see that computing factorial1(n) takes four more steps than computing factorial1(n-1), and computing factorial1(1) takes two steps:
T(1) = 2
T(n) = 4 + T(n-1)
Unrolling gives T(n) = 4(n-1) + T(1) = 4n - 2, i.e. roughly 4n operations overall, which is in O(n). One step more, or less, or any constant number of steps (i.e. independent of n), does not change anything.
I would argue that no, you would not be correct in saying that.
If something is O(N) then it is by definition also O(N+1), as well as O(2N+3), O(6N - e), or O(0.67777N - e^67). We use the simplest form, O(N), out of notational convenience; however, we have to be aware that it would be equally true to say that the first function is O(N+1), and likewise that the second is as much O(N) as it is O(N+1).
I'll prove it. If you spend some time with the definition of big-O, it isn't too hard to prove that

g(n) = O(f(n)), f(n) = O(k(n))  --implies-->  g(n) = O(k(n))

(Don't believe me? Just google the transitive property of big-O notation.) It is then easy to see that the implication below follows from the above:

n = O(n+1), factorial1 = O(n)  --implies-->  factorial1 = O(n+1)
So there is absolutely no difference between saying a function is O(N) or O(N+1). You just said the same thing twice. It is an isometry, a congruency, an equivalence. Pick your fancy word for it. They are different names for the same thing.
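Spelling the set equality out with the raw definition (my own wording, not from the original answer):

f(n) \le c\,(n+1) \le 2c\,n \;\;(n \ge 1) \;\Longrightarrow\; f \in O(n),
\qquad
f(n) \le c\,n \le c\,(n+1) \;\Longrightarrow\; f \in O(n+1),

so O(n) and O(n+1) contain exactly the same functions.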
If you look at Θ notation you can think of it as a bunch of mathematical sets full of functions, where all functions in a given set have the same growth rate. Some common sets are:
Θ(1) # Constant
Θ(log(n)) # Logarithmic
Θ(n) # Linear
Θ(n^2) # Quadratic
Θ(n^3) # Cubic
Θ(2^n) # Exponential (Base 2)
Θ(n!) # Factorial
A function will fall into one and exactly one Θ set. If a function fell into two sets, then by definition all functions in both sets could be proven to fall into both sets, and you would really just have one set. At the end of the day, Θ gives us a perfect segmentation of all possible functions into a collection of disjoint sets.
A function being in a big-O set means that it exists in some Θ set which has a growth rate no larger than the big-O function.
And that's why I would say you were wrong, or at least misguided, to say it is "more O(N+1)". O(N) is really just a way of notating "the set of all functions that have growth rate equal to or less than a linear growth". And so to say that:
a function is more O(N+1) and less O(N)
would be equivalent to saying
a function is more "a member of the set of all functions that have linear
growth rate or less growth rate" and less "a member of the set of all
functions that have linear or less growth rate"
Which is pretty absurd, and not a correct thing to say.
Assume f(x) goes to infinity as x tends to infinity and a,b>0. Find the f(x) that yields the lowest order for
as x tends to infinity.
By order I mean Big O and Little o notation.
I can only solve it roughly:
My solution: We can say ln(1+f(x)) is approximately equal to ln(f(x)) as x goes to infinity. Then, I have to minimize the order of
Since for any c > 0, y + c/y is minimized when y = sqrt(c), b + ln f(x) = sqrt(ax) is the answer. Equivalently, f(x) = e^(sqrt(ax) - b), and the lowest order for g(x) is 2 sqrt(ax).
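For reference, the standard fact used in that step, written out (my own addition): for y, c > 0,

y + \frac{c}{y} - 2\sqrt{c} \;=\; \Bigl(\sqrt{y} - \sqrt{\tfrac{c}{y}}\Bigr)^{2} \;\ge\; 0,
\qquad\text{with equality iff } y = \sqrt{c}.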
Can you help me obtain a rigorous answer?
The rigorous way to minimize (I should say extremize) a function of another function is to use the Euler-Lagrange relation:
Thus:
Taylor expansion:
If we only consider up to "constant" terms:
Which is of course the result you obtained.
Next, linear terms:
We can't solve this equation analytically; but we can explore the effect of a perturbation in the function f(x) (i.e. a small change in parameter to the previous solution). We can obviously ignore any linear changes to f, but we can add a positive multiplicative factor A:
sqrt(ax) and Af are obviously both positive, so the RHS has a negative sign. This means that ln(A) < 0, and thus A < 1, i.e. the new perturbed function gives a (slightly) tighter bound. Since the RHS must be vanishingly small (1/f), A must not be very much smaller than 1.
Going further, we can add another perturbation B to the exponent of f:
Since ln(A) and the RHS are both vanishingly small, the B-term on the LHS must be even smaller for the sign to be consistent.
So we can conclude that (1) A is very close to 1, (2) B is much smaller than 1, i.e. the result you obtained is in fact a very good upper bound.
The above also leads to the possibility of even tighter bounds for higher powers of f.
In a program, I'm using two data structures:

1: An array of pointers of size k; each pointer points to a linked list (hence k lists in total). The total number of nodes over all the lists is M (something like hashing with separate chaining; k is fixed, M can vary).

2: Another array of integers of size M (where M is the number of nodes above).

The question is: what is the overall space complexity of the program? Is it something like the following?

First part: O(k+M) or just O(M)? Both are correct, I guess!

Second part: O(2M) or just O(M)? Again, both correct?

Overall: O(k+M) + O(2M) ==> O(max(k+M, 2M))
Or just O(M)?
Please help.
O(k+M) is O(M) if M is always greater than k. So the final result is O(M).
First part: O(k+M) is not correct; it's just O(M).
Second part: O(2M) is not correct, because we don't keep constant factors in big-O notation, so the correct form is O(M).
Overall O(M) + O(M) ==> O(M).
Both are correct in both cases. But since O(k+M) = O(M), supposing k constant, everybody will use the simplest notation, which is O(M).
For the second part, a single array is O(M).
For the overall, it would be O(k+M+M) = O(max(k+M, 2M)) = O(M) (we can "forget" multiplicative and additive constants in big-O notation, except if you are in constant time).
As a reminder, g(x) = O(f(x)) iff there exist x0 and c > 0 such that x > x0 implies g(x) <= c·f(x).
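Applying that definition to the structures above (constants picked just for illustration):

k + M \le 2M \;\;(\text{for } M \ge k) \;\Longrightarrow\; k + M \in O(M),
\qquad
2M \le 2 \cdot M \;\Longrightarrow\; 2M \in O(M).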
As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.
Big O shows how the time increases as n scales, but once n reaches the point where the pattern resets, the time it takes drops back to low values as well. My thought was that it was O(n log n % k), where k+1 is the point at which it resets. Another thought is that since it has a hard limit, the value is O(1), since that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?
As an example of the reset, the k value is 31336.
At n=31336, it takes 31336 steps but at n=31337, it takes 1.
The code is:
def Entry(a1, q):
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = (q / 31336)
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)
    F = MergeSort(F)
    print lastnum * rows + F.index(lastnum) + 1
MergeSort is a standard merge sort with O(nlogn) complexity.
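For reference, here is a Python 3 sketch of the same routine, with sorted() standing in for MergeSort; the helper name entry and the parameter k are my own:

def entry(a1, q, k=31337):
    F = [a1]
    lastnum = a1
    q1 = q % (k - 1)        # q % 31336: the loop below runs at most k - 2 times, however large q is
    rows = q // (k - 1)     # integer division, as in the original Python 2 code
    for _ in range(1, q1):
        lastnum = (lastnum * (k - 3)) % k    # (lastnum * 31334) % 31337
        F.append(lastnum)
    F = sorted(F)           # stands in for MergeSort, also O(n log n)
    return lastnum * rows + F.index(lastnum) + 1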
It's O(1), and you can derive this from big O's definition. If f(x) is the complexity of your solution, then

f(x) <= M * g(x)

with

g(x) = 1

and with any M > 470040 (that's n log n for n = 31336) and x > 0. And this implies from the definition that

f(x) = O(g(x)) = O(1).
Well, an easy way that I use to think about big-O problems is to think of n as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (the cost of computing q % 31336 itself scales up as q goes to infinity and is not actually constant), then your intuition is right about it being O(1).
Imagining q as close to infinity, you can see that q % 31336 is obviously between 0 and 31335, as you noted. This fact limits the number of array elements, which limits the sort time to be some constant amount (n * log(n) ==> 31335 * log(31335) * C, for some constant C). So it is constant time for the whole algorithm.
But, in the real world, multiplication, division, and modulus all do scale based on input size. You can look up the Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.
If there are a few different instances of this problem, each with its own k value, then the complexity of the method is not O(1), but instead O(k·ln k).