Meaning of lg * N in Algorithmic Analysis - algorithm

I'm currently reading about algorithmic analysis and I read that a certain algorithm (weighted quick union with path compression) is of order N + M lg* N. Apparently, though, this is linear because lg* N is a constant in this universe. What mathematical operation is being referred to here? I am unfamiliar with the notation lg* N.

The answers given here so far are wrong. lg* n (read "log star") is the iterated logarithm. It is defined recursively as
lg* n = 0               if n <= 1
lg* n = 1 + lg*(lg n)   if n > 1
Another way to think of it is the number of times that you have to iterate the logarithm before the result is less than or equal to 1.
It grows extremely slowly. You can read more on Wikipedia which includes some examples of algorithms for which lg* n pops up in the analysis.
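For example, a minimal Python sketch of this definition (my own illustration, using math.log2 and floats, so only suitable for moderately sized n):

import math

def lg_star(n):
    # Iterated base-2 logarithm: how many times log2 must be applied
    # before the result drops to 1 or below.
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# lg* grows absurdly slowly:
for n in [2, 4, 16, 65536, 2.0 ** 1000]:
    print(n, lg_star(n))   # -> 1, 2, 3, 4, 5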

I'm assuming you're talking about the algorithm analyzed on slide 44 of this lecture:
http://www.cs.princeton.edu/courses/archive/fall05/cos226/lectures/union-find.pdf
Where they say "lg * N is a constant in this universe" I believe they aren't being entirely literal.
lg* N does appear to increase with N, as per their table on the right side of the slide; it just happens to grow at such a slow rate that it can't be considered much else (N = 2^65536 gives lg* N = 5). As such, it seems they're saying that you can just ignore lg* N as a constant because it will never increase enough to cause a problem.
I could be wrong, though. That's simply how I read it.
edit: it might help to note that in their table each successive N is 2 raised to the previous N, so lg*(2^N) = 1 + lg*(N). Meaning that an N value of 2^(2^65536) [a far larger number] would give lg* N = 6, for example.

The recursive definition of lg* n by Jason is equivalent to
lg* n = m when 2↑↑(m-1) < n <= 2↑↑m
where
2↑↑m = 2^2^...^2 (repeated exponentiation, m copies of 2)
is Knuth's double up arrow notation. Thus
lg* 2 = 1, lg* 2^2 = 2, lg* 2^{2^2} = 3, lg* 2^{2^{2^2}} = 4, lg* 2^{2^{2^{2^2}}} = 5.
Hence lg* n = 4 for 16 < n <= 65536, and lg* n = 5 for 65536 < n <= 2^65536.
The function lg*n approaches infinity extremely slowly.
(Faster than an inverse of the Ackermann function A(n,n) which involves n-2 up arrows.)
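To make the tower connection concrete, here is a small sketch (my own illustration, not from the post above) that builds 2↑↑m as a Python integer and checks that lg* of the tower equals m; it uses bit_length() for an exact base-2 log, which is valid here because every intermediate value is an exact power of two:

def tower(m):
    # 2 ↑↑ m: a tower of m twos, e.g. tower(4) = 2^2^2^2 = 65536
    result = 1
    for _ in range(m):
        result = 2 ** result
    return result

def lg_star_exact(n):
    # lg* for integers whose iterated logs stay exact powers of two
    count = 0
    while n > 1:
        n = n.bit_length() - 1   # exact log2(n) when n is a power of two
        count += 1
    return count

for m in range(1, 6):            # tower(5) = 2^65536 is still fine as a Python int
    print(m, lg_star_exact(tower(m)))   # prints m alongside m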
Stephen

lg is a logarithm, i.e. the inverse of exponentiation. lg typically refers to base 2, but for algorithmic analysis the base usually doesn't matter.

lg n refers to log base 2 of n. It is the answer to the equation 2^x = n. In Big O complexity analysis, the base of the log is irrelevant. Powers of 2 crop up often in CS, so it is no surprise that if we have to choose a base, it will be base 2.
A good example of where it crops up is a full binary tree of height h, which has 2^h - 1 nodes. If we let n be the number of nodes, this relationship means a tree with n nodes has height about lg n. Searching such a tree takes at most about lg n steps to see if a value is stored in it.
As is to be expected, Wikipedia has good additional info.
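As a quick illustration of the height/node-count relationship above (a trivial sketch):

import math

# A full binary tree of height h has 2^h - 1 nodes, so a tree with
# n nodes has height about log2(n + 1).
for h in range(1, 6):
    n = 2 ** h - 1
    print(h, n, math.log2(n + 1))   # height, node count, recovered height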

Logarithm is denoted by log or lg. In your case I guess the correct interpretation is N + M * log(N).
EDIT: The base of the logarithm does not matter when doing asymptotic complexity analysis.

Related

Is 2^(log n) = O(log(n))?

Are these two equal? I read somewhere that O(2lg n) = O(n). Going by this observation, I'm guessing the answer would be no, but I'm not entirely sure. I'd appreciate any help.
Firstly, O(2 * log(n)) isn't equal to O(n).
To use big O notation, you would find a function that represents the complexity of your algorithm, then you would find the term in that function with the largest growth rate. Finally, you would eliminate any constant factors you could.
e.g. say your algorithm iterates 4n^2 + 5n + 1 times, where n is the size of the input. First, you would take the term with the highest growth rate, in this case 4n^2, then remove any constant factors, leaving O(n^2) complexity.
In your example, O(2 * log(n)) can be simplified to O(log(n)), since the factor 2 is just a constant.
Now on to your question.
In computer science, unless specified otherwise, you can generally assume that log(n) actually means the log of n, base 2.
This means, using log laws, that 2^log(n) simplifies to n itself, so 2^log(n) is O(n), not O(log(n)).
Proof:
y = 2^log(n)
log(y) = log(2^log(n))
log(y) = log(n) * log(2) [Log(2) = 1 since we are talking about base 2 here]
log(y) = log(n)
y = n
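A quick numeric sanity check of that identity (a throwaway sketch, assuming log means log base 2):

import math

# 2^(log2 n) recovers n (up to floating-point rounding), which is why
# 2^(lg n) is on the order of n rather than log n.
for n in [4, 100, 1024, 10 ** 6]:
    print(n, 2 ** math.log2(n))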

Is complexity O(log(n)) equivalent to O(sqrt(n))?

My professor just taught us that, as a rule of thumb, any operation that halves the length of the input has an O(log(n)) complexity. Why is it not O(sqrt(n)); aren't both of them equivalent?
They are not equivalent: sqrt(N) will increase a lot more quickly than log2(N). There is no constant C such that you would have sqrt(N) < C * log(N) for all values of N greater than some minimum value.
An easy way to grasp this is that log2(N) will be a value close to the number of (binary) digits of N, while sqrt(N) will be a number that has itself half the number of digits that N has. Or, to state that with an equality:
        log2(N) = 2 * log2(sqrt(N))
So you need to take the logarithm(!) of sqrt(N) to bring it down to the same order of complexity as log2(N).
For example, for a binary number with 11 digits, 0b10000000000 (= 2^10 = 1024), the square root is 0b100000 (= 32), but the logarithm is only 10.
Assuming natural logarithms (otherwise just multiply by a constant), we have
lim {n->inf} log n / sqrt(n)    (an inf/inf form)
= lim {n->inf} (1/n) / (1/(2*sqrt(n)))    (by L'Hospital)
= lim {n->inf} 2*sqrt(n)/n
= lim {n->inf} 2/sqrt(n)
= 0 < inf
Refer to https://en.wikipedia.org/wiki/Big_O_notation for an alternative (limit-based) definition of O(.); from the above we can then say log n = O(sqrt(n)).
Also, comparing the growth of the two functions, log n (natural log) is upper bounded by sqrt(n) for all n > 0.
Just compare the two functions:
sqrt(n) ---------- log(n)
n^(1/2) ---------- log(n)
Take the log of both:
log( n^(1/2) ) --- log( log(n) )
(1/2) log(n) ----- log( log(n) )
It is clear that: const * log(n) > log(log(n))
No, It's not equivalent.
@trincot gave an excellent explanation with an example in his answer. I'm adding one more point. Your professor taught you that
any operation that halves the length of the input has an O(log(n)) complexity
It's also true that,
any operation that reduces the length of the input to 1/3rd (i.e. by 2/3rds) has O(log3(n)) complexity
any operation that reduces the length of the input to 1/4th (i.e. by 3/4ths) has O(log4(n)) complexity
any operation that reduces the length of the input to 1/5th (i.e. by 4/5ths) has O(log5(n)) complexity
And so on...
It's true for any reduction of the length of the input by (B-1)/Bths: the operation then has O(logB(n)) complexity (a small sketch after this list illustrates the point).
N.B.: O(logB(n)) means the base-B logarithm of n.
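A small sketch of that claim (my own illustration): count how many steps it takes to shrink the input when each step keeps only a 1/B fraction of it, and compare with the base-B logarithm of n.

import math

def steps_to_shrink(n, B):
    # How many times we can keep 1/B of the input (discarding (B-1)/B of it)
    # before only one element is left.
    count = 0
    while n > 1:
        n = n / B
        count += 1
    return count

for B in [2, 3, 4, 5]:
    n = 10 ** 6
    print(B, steps_to_shrink(n, B), math.log(n, B))   # step count is about log_B(n)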
One way to approach the problem is to compare the rates of growth of the two functions, i.e. their derivatives:
(1) d/dn sqrt(n) = 1/(2*sqrt(n))
(2) d/dn log(n) = 1/n (natural log)
As n increases we see that (2) becomes smaller than (1): when n = 10,000, (1) equals 0.005 while (2) equals 0.0001.
Hence log(n) grows more slowly, i.e. is the better complexity, as n increases.
No, they are not equivalent; you can even prove that
O(n**k) > O(log(n, base))
for any k > 0 and base > 1 (k = 1/2 in case of sqrt).
When talking about O(f(n)) we want to investigate the behaviour for large n;
limits are a good means for that. Suppose that both big Os are equivalent:
O(n**k) = O(log(n, base))
which means there's some finite constant C such that
n**k <= C * log(n, base)
starting from some large enough n; to put it in other terms (log(n, base) is not 0 for large n, and both functions are continuously differentiable):
lim {n->+inf} (n**k / log(n, base)) = C (some finite value)
To find out the limit's value we can use L'Hospital's Rule, i.e. take the derivatives of numerator and denominator and divide them:
lim (n**k / log(n, base)) =
lim ([k * n**(k-1)] / [1/(n * ln(base))]) =
ln(base) * k * lim (n**k) = +infinity
so we can conclude that there's no finite constant C such that n**k <= C * log(n, base) for all large n, or in other words
O(n**k) > O(log(n, base))
No, it isn't.
When we are dealing with time complexity, we think of the input as a very large number. So let's take n = 2^18. Now for sqrt(n) the number of operations will be 2^9, and for log(n) it will be 18 (we consider log base 2 here). Clearly 2^9 is much, much greater than 18.
So, we can say that O(log n) is smaller than O(sqrt n).
To prove that sqrt(n) grows faster than lg(n) (base 2), you can take the limit of the second over the first and show that it approaches 0 as n approaches infinity.
lim (n->inf) of (lg(n)/sqrt(n))
Applying L'Hopital's Rule:
= lim (n->inf) of (2/(sqrt(n)*ln 2))
Since sqrt(n) increases without bound as n increases, while 2 and ln 2 are constants, this proves
lim (n->inf) of (2/(sqrt(n)*ln 2)) = 0
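For a concrete feel for how differently the two functions grow, a quick table (a throwaway sketch):

import math

# log2(n) vs sqrt(n): the ratio log2(n)/sqrt(n) tends to 0, so
# O(log n) is a strictly smaller class than O(sqrt(n)).
for n in [10, 10 ** 3, 10 ** 6, 10 ** 9, 10 ** 12]:
    print(n, math.log2(n), math.sqrt(n), math.log2(n) / math.sqrt(n))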

why O(2n^2) and O(100 n^2) same as O(n^2) in algorithm complexity?

I am new in the algorithm analysis domain. I read here in the Stack Overflow question
"What is a plain English explanation of "Big O" notation?" that O(2n^2) and O(100 n^2) are the same as O(n^2). I don't understand this, because if we take n = 4, the number of operations will be:
O(2 n^2) = 32 operations
O(100 n^2) = 1600 operations
O(n^2) = 16 operations
Can anyone explain why we are supposed to treat these different operation counts as equivalent?
Why this is true can be derived directly from the formal definition. More specifically, f(n) = O(g(n)) if and only if |f(n)| <= M|g(n)| for all n >= n0, for some M and n0. Here you're free to pick M as you wish, so if M = 5 makes f(n) = O(100 n^2) true, then picking M = 5*100 = 500 makes f(n) = O(n^2) true as well: the factor 100 is simply absorbed into M.
Why this is useful is a bit of a different story.
Some concerns with having constants matter:
What operations are we measuring? Array accesses? Arithmetic operations? Multiplication only? Arithmetic with multiplication weighted double as much as addition? And you may want to compare algorithms (that have the same Big-O complexity) using this metric, when in fact there can be some subtle difference in the number of operations that even the most experienced computer scientists can miss.
Let's say you can assign a reasonable weight to each operation. Now there has to be across the board agreement to this, otherwise you'll have some near-meaningless analyses of algorithms done by someone using different weights (except for what information big-O would've given you).
The weights may be time-bound, as the speed of operations improves over time, and some operations may improve faster than others.
The weights may be environment-bound, as the speed of operations can differ on different environments. For example, disk read is a lot slower than memory read.
Big-O (which is part of asymptotic complexity) avoids all of these issues. You only check how many times some piece of code that takes a constant amount of time (i.e. independent of input size) is executed. As an example:
c = 0
for i = 1 to n
    for j = 1 to n
        for k = 1 to n
            x = input[i]*input[j]
            y = input[j]*input[k]
            z = input[i]*input[k]
            c += (x-y)*z
So there are 4 multiplications, 1 subtraction and 1 addition, each executed n^3 times, but here we just say that this code:
x = input[i]*input[j]
y = input[j]*input[k]
z = input[i]*input[k]
c += (x-y)*z
runs in constant time (it will always take the same amount of time, regardless of how many elements there are in the array) and will be executed O(n^3) times, thus the running time is O(n^3).
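For what it's worth, here is the same triple loop as runnable Python (indices shifted to 0-based; the input list is just a placeholder):

def triple_loop(data):
    # Runs the constant-time body n^3 times, so the whole function is O(n^3).
    n = len(data)
    c = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = data[i] * data[j]
                y = data[j] * data[k]
                z = data[i] * data[k]
                c += (x - y) * z
    return c

print(triple_loop([1, 2, 3, 4]))   # body executes 4^3 = 64 times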
Because O(f(n)) means that the said function is bounded by some constant times f(n). If g is bounded by a multiple of 100 * f(n), it is also bounded by a multiple of f(n). Specifically, O(2 n^2) does not mean it's not greater than 16 for n = 4, but that for all n it's not greater than C * 2n^2 for some fixed C, independent of n.
Because Big-O is a classification: it places algorithms into complexity classes. The classes are O(1), O(n), O(n log n), O(n^2), O(n^3), O(n^n), etc. By definition, two algorithms are in the same complexity class if they differ by at most a constant factor as n goes to infinity (the big-oh notation is for comparing algorithmic complexity for large values of n).

Compare Big O Notation

Sorting an n-element array takes:
in algorithm X: 10^-8 * n^2 sec,
in algorithm Y: 10^-6 * n * log2(n) sec,
in algorithm Z: 10^-5 sec.
My question is how do I compare them. For example, above which number of elements does Y work faster than X, and how do I choose between them based on the number of elements?
When comparing Big-Oh notations, you ignore all constants:
N^2 has a higher growth rate than N*log(N) which still grows more quickly than O(1) [constant].
The power of N determines the growth rate.
Example:
O(n^3 + 2n + 10) > O(200n^2 + 1000n + 5000)
Ignoring the constants (as you should for pure big-Oh comparison) this reduces to:
O(n^3 + n) > O(n^2 + n)
Further reduction ignoring lower order terms yields:
O(n^3) > O(n^2)
because for the power of N, 3 > 2.
Big-Oh follows a hierarchy that goes something like this:
O(1) < O(log[n]) < O(n) < O(n*log[n]) < O(n^x) < O(x^n) < O(n!)
(Where x is any amount greater than 1, even the tiniest bit.)
You can compare any other expression in terms of n via some rules which I will not post here, but should be looked up in Wikipedia. I list O(n*log[n]) because it is rather common in sorting algorithms; for details regarding logarithms with different bases or different powers, check a reference source (did I mention Wikipedia?)
Give the wiki article a shot: http://en.wikipedia.org/wiki/Big_O_notation
I propose this different solution since there is no accepted answer yet.
If you want to see at what value of n one algorithm starts to perform better than another, you should set the algorithm times equal to each other and solve for n.
For Example:
X = Z
10^-8 n^2 = 10^-5
n^2 = 10^3
n = sqrt(10^3)
let c = sqrt(10^3)
So when comparing X and Z, choose X if n is less than c (about 31.6), and Z if n is greater than c. This can be repeated for the other two pairs.
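A rough sketch that does this numerically for all three formulas (constants taken from the question, purely illustrative):

import math

# Hypothetical running times from the question, in seconds.
def t_x(n): return 1e-8 * n * n               # algorithm X
def t_y(n): return 1e-6 * n * math.log2(n)    # algorithm Y
def t_z(n): return 1e-5                       # algorithm Z (constant)

# Scan a few input sizes and report which algorithm is fastest.
for n in [10, 31, 32, 100, 10 ** 3, 10 ** 6]:
    times = {"X": t_x(n), "Y": t_y(n), "Z": t_z(n)}
    print(n, min(times, key=times.get), times)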

Determining complexity of an integer factorization algorithm

I'm starting to study computational complexity, Big-O notation and the like, and I was tasked to write an integer factorization algorithm and determine its complexity. I've written the algorithm and it is working, but I'm having trouble calculating the complexity. The pseudo code is as follows:
DEF fact (INT n)
BEGIN
    INT i
    FOR (i -> 2 TO i <= n / i STEP 1)
    DO
        WHILE ((n MOD i) = 0)
        DO
            PRINT("%int X", i)
            n -> n / i
        DONE
    DONE
    IF (n > 1)
    THEN
        PRINT("%int", n)
END
What I attempted to do, I think, is extremely wrong:
f(x) = n-1 + n-1 + 1 + 1 = 2n
so
f(n) = O(n)
Which I think is wrong, because factorization algorithms are supposed to be computationally hard; they can't even be polynomial. So what do you suggest to help me? Maybe I'm just too tired at this time of the night and I'm screwing this all up :(
Thank you in advance.
This phenomenon is called pseudopolynomiality: a complexity that seems to be polynomial, but really isn't. If you ask whether a certain complexity (here, n) is polynomial or not, you must look at how the complexity relates to the size of the input. In most cases, such as sorting (which e.g. merge sort can solve in O(n lg n)), n describes the size of the input (the number of elements). In this case, however, n does not describe the size of the input; it is the input value. What, then, is the size of n? A natural choice would be the number of bits in n, which is approximately lg n. So let w = lg n be the size of n. Now we see that O(n) = O(2^(lg n)) = O(2^w) - in other words, exponential in the input size w.
(Note that O(n) = O(2^(lg n)) = O(2^w) is always true; the question is whether the input size is described by n or by w = lg n. Also, if n describes the number of elements in a list, one should strictly speaking count the bits of every single element in the list in order to get the total input size; however, one usually assumes that in lists, all numbers are bounded in size (to e.g. 32 bits)).
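To make that concrete, here is a runnable Python version of the asker's trial-division pseudocode (my own translation); the loop count grows with the numeric value of n, not with its bit length w = lg n, which is exactly the pseudopolynomial behaviour described above:

def factor(n):
    # Trial division: returns the list of prime factors of n.
    factors = []
    i = 2
    while i <= n // i:            # same bound as the pseudocode's i <= n / i
        while n % i == 0:
            factors.append(i)
            n //= i
        i += 1
    if n > 1:
        factors.append(n)         # whatever remains is prime
    return factors

print(factor(360))                # [2, 2, 2, 3, 3, 5]
print(factor(2 ** 31 - 1))        # a prime: the loop runs about sqrt(n) times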
Use the fact that your algorithm can be analyzed recursively. If f(x) is the number of operations taken to factor x, and n is the first factor that is found, then f(x) = (n-1) + f(x/n). The worst case for any factoring algorithm is a prime number, for which the complexity of your algorithm is O(n).
Factoring algorithms are 'hard' mainly because they are used on obscenely large numbers.
In big-O notation, n should be the size of the input, not the input value itself (as in your case). The size of the input here is lg(n) bits. So basically your algorithm is exponential in the input size.
