What is the time complexity of the whole algorithm?

I am new to asymptotic notation, and here is the algorithm. What is the worst-case tight bound for its time complexity, and why?
F(A,B) { // A and B are positive
    while A > 0
        print(A mod B)
        A = A div B
}

The time complexity of this:
F(A,B) { // A and B are positive
    while A > 0
        A = A div B
}
is determined by the number of times the loop executes; let's call that l. It equals the number of times B has to divide A to make "A > 0" false.
From this question, we know that:
Algorithm D in 4.3.1 of Knuth's book "The Art of Computer Programming" (Volume 2) performs any long division in O(m) steps, where m is the number of digits of A, so we have an upper bound.
Thus the time complexity is O(l * m).
Now this:
print(A mod B)
assuming that IO is constant time (which of course isn't true in the real world), you need the complexity of the modulo operation itself, which, from this, we know is:
O(log A log B)
and will run l times.
As a result, we have:
O(l * (m + log A log B))
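As a sanity check, here is a sketch of the algorithm in Python (the print is replaced by a throwaway computation, and IO is treated as constant time, per the assumption above), confirming that the loop count l equals the number of base-B digits of A, i.e. floor(log_B A) + 1:

```python
def F(A, B):
    """The algorithm from the question, also counting loop iterations (B >= 2)."""
    iterations = 0
    while A > 0:
        _ = A % B          # stands in for print(A mod B)
        A = A // B         # A = A div B
        iterations += 1
    return iterations

# l is the number of base-B digits of A:
assert F(1_000_000, 10) == 7   # 1,000,000 has 7 digits in base 10
assert F(255, 2) == 8          # 255 has 8 bits
```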

It's not a very good question. First, if B is 1, the algorithm will never complete; presumably B must be 2 or greater. So we have O(log A) steps. But the question now is whether the division is itself an "operation" or not. If A and B are unbounded, then division must inherently be logarithmic as well. But normally the code will run on a processor that implements all divisions in 32 or 64 bits and can't divide a number out of range, so generally we say that divisions are "operations".
If we say division is logarithmic and B is small, then we're O((log A)^2).

Why can't we always just pick the biggest term with big-O notation?

I'm looking at Cracking the Coding Interview 6th edition page 49, example 8.
Suppose we have an algorithm that takes in an array of strings, sorts each string, and then sorts the full array. What would the runtime be?
If the length of the longest string is s and the length of the array is a, the book says sorting each string would be:
O(a*s log s)
What I understand is the complexity depends on the upper bound in this case, so it should be:
O(s log s)
For example, if s is the length of longest string and s1, s2, s3 are lengths other strings, the complexity would be:
O(s log s + s1 log s1 + s2 log s2 + s3 log s3)
which is ultimately:
O(s log s)
because the value of s is the highest. Why do we need to multiply it with a?
You can only ignore the smaller terms if there's a constant number of them.
The ability to ignore smaller terms actually follows from the fact that you can ignore constant factors, but this only works when you have a constant number of smaller terms (otherwise the constant factor wouldn't be constant).
Intuitively:
What if you have 50000000000 strings of length 10? Some predefined factor of 10 log 10 doesn't sound right for the running time, you need that 50000000000 in there somewhere.
Mathematically:
Let's say you have f(n) + g(n), with g(n) being smaller than f(n) as n tends to infinity.
You can say that:
f(n) <= f(n) + g(n) <= f(n) + f(n) = 2*f(n) (as n tends to infinity)
Now you've cancelled out g(n), and there's only a constant factor between 1 and 2 for f(n), which you can ignore with asymptotic notation; thus the complexity is simply O(f(n)).
If you have a variable number a of terms, you're going to have a*f(n), and you can't ignore that a.
The proof of a*f(n) = O(f(n)) (at least one way to prove it) involves picking a constant M such that |a*f(n)| <= M*f(n) from some value of n onward. No matter which value of M you pick, there can always exist a larger a (just M+1 would work), thus the proof fails.
As Dukeling points out, both a and s are parameters here.
So the runtime of your algorithm depends on both. Without more info about the relationship between the two, you can't simplify further. For instance, it doesn't sound like you're given a < s.
But say that you were given a < s. Then you could say that because your O(s log s) sorting operation needs to be performed a = O(s) times to sort all the strings in your array, the total is O(s^2 log s).
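A sketch of the algorithm in question (in Python; the function name is made up) makes it visible why both parameters appear. The book's full accounting also adds the array sort, giving O(a*s*(log a + log s)) overall, since each of the O(a log a) string comparisons can touch up to s characters:

```python
def sort_all(strings):
    # Step 1: sort each string -- a strings, each at most s chars,
    # so O(a * s log s) in total.
    sorted_each = ["".join(sorted(w)) for w in strings]
    # Step 2: sort the array -- O(a log a) comparisons, each of which
    # may compare up to s characters, so O(a * s log a).
    return sorted(sorted_each)

assert sort_all(["cab", "ba", "aa"]) == ["aa", "ab", "abc"]
```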

Big-O of `2n-2n^3`

I am working on an assignment, but a friend of mine disagrees with the answer to one part.
f(n) = 2n-2n^3
I find the complexity to be f(n) = O(n^3)
Am I wrong?
You aren't wrong, since O(n^3) does not provide a tight bound. However, typically you assume that f is increasing and try to find the smallest function g for which f=O(g) is true. Consider a simple function f=n+3. It's correct to say that f=O(n^3), since n+3 < n^3 for all n > 2 (just to pick an arbitrary constant). However, it's "more" correct to say that f=O(n), since n+3 < 2n for all n > 3, and this gives you a better feel for how f behaves as n increases.
In your case, f is decreasing as n increases, so it is true that f = O(g) for any function that stays positive as n increases. The "smallest" (or rather, slowest growing) such function is a constant function for some positive constant, and we usually write that as 2n - 2n^3 = O(1), since 2n - 2n^3 < 1 for all n>0.
You could even find some function of n that is decreasing as n increases, but decreases more slowly than your f, but such usage is rare. Big-O notation is most commonly used to describe algorithm running times as the input size increases, so n is almost universally assumed to be positive.
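A quick numeric check (a Python sketch) of the O(1) claim above:

```python
def f(n):
    return 2*n - 2*n**3

# f is decreasing for positive n and never reaches 1, so f = O(1):
assert f(1) == 0
assert all(f(n) < 1 for n in range(1, 1000))
```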

Constant c in BIG O NOTATION

Suppose f(n) is runtime of algorithm.
According to the formal definition of O(g(n)): if f(n) <= c*g(n) for all n >= n0, then f(n) = O(g(n)).
What is the range of values constant c can take?
By definition (e.g. here), any positive number, as long as it's constant.
For instance, n^2 is not in O(n) because there is no positive constant c such that n^2 <= c*n for all n; that inequality would require c >= n, but by definition n is not a constant.
c can be anything above zero, obviously. It doesn't matter whether it's 0.1, 1, or 1,000,000. The only requirement: it must be constant, i.e. it may be defined once and for all, and must not depend on n. Of course, c will affect total algorithm performance, but the purpose of big-O is to estimate performance, not to calculate it precisely (that follows from the definition).
It can be any positive number. If it is 0 you do nothing, if it is negative you break something.
Put simply, c is a constant made up of two halves:
The algorithm half. For example, your algorithm has to iterate 5 times through the entire input collection, so the constant will be 5. If another algorithm iterates 3 times, then its constant will be 3, but both algorithms have complexity O(n).
The hardware half. This is the time needed to compute one operation on your machine. If you run an app implementing your algorithm on the same collection on a Pentium 1 and on a modern Xeon, the Xeon will obviously compute the result much faster.
Any positive value.
c being a constant means that it does not depend on the size n of the problem. Whenever the value of n changes, c remains the same.
While c is independent of n, one is allowed to exclude some special cases for small n and determine the c only for n ≥ n0.
For instance,
n ≤ n^2 for n ≥ 1
but also
n ≤ 0.01*n^2 for n ≥ 100
Suppose f(n) = n+1 and g(n) = n^2.
We try to prove that f(n) = O(g(n)).
For n=1: f(n)=2, g(n)=1, and f(n) <= c * g(n) holds if c=2; here c > n.
For n=2: f(n)=3, g(n)=4, and f(n) <= c * g(n) holds if c=1; here c < n.
So a comparison between c and n is senseless.
NOTE: c is a constant (its value can never change).
So we can say f(n) = O(g(n)) for n >= 1 if c = 2,
but f(n) = O(g(n)) for n >= 2 if c = 1.
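The same point can be checked in code (a Python sketch mirroring the f(n) = n+1, g(n) = n^2 example above):

```python
f = lambda n: n + 1        # f(n) = n + 1
g = lambda n: n * n        # g(n) = n^2

# One fixed constant works for all n past some n0:
assert all(f(n) <= 2 * g(n) for n in range(1, 10_000))   # c = 2, n0 = 1
assert all(f(n) <= 1 * g(n) for n in range(2, 10_000))   # c = 1, n0 = 2
```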

Are all algorithms with a constant base to the n (e.g.: k^n) of the same time complexity?

In this example: http://www.wolframalpha.com/input/?i=2%5E%281000000000%2F2%29+%3C+%283%2F2%29%5E1000000000
I noticed that those two quantities are pretty similar no matter how high you go in n. Do all algorithms with a constant base raised to the n fall into the same time complexity category, such as 2^n, 3^n, 4^n, etc.?
They are in the same category, but this does not mean their complexity is the same: they are all exponential-time algorithms. Obviously 2^n < 4^n.
We can see that 4^n / 2^n = 2^(2n) / 2^n = 2^n.
This means the 4^n algorithm is exponentially slower (by a factor of 2^n) than the 2^n one.
The same thing happens with 3^n vs 2^n, whose ratio is 1.5^n.
But this does not mean 2^n is something far less than 4^n in practice: it is still exponential and will not be feasible when n > 50.
Note that this happens because n is in the exponent, not in the base. If the constant were only a multiplier, as in 4n^k vs n^k, the two algorithms would be asymptotically the same, differing only by a constant factor, just like O(n) vs c * O(n).
The time complexities O(a^n) and O(b^n) are not the same if 1 < a < b. As a quick proof, we can use the formal definition of big-O notation to show that b^n ≠ O(a^n).
This works by contradiction. Suppose that b^n = O(a^n) and that 1 < a < b. Then there must be some c and n0 such that for any n >= n0, we have b^n <= c * a^n. This means that b^n / a^n <= c for any n >= n0. Since b > a, it should start to become clear that this is impossible: as n grows larger, b^n / a^n = (b/a)^n gets larger and larger. In particular, if we pick any n >= n0 such that n > log_(b/a) c, then we have
(b/a)^n > (b/a)^(log_(b/a) c) = c
So if we pick n = max{n0, ceil(log_(b/a) c) + 1}, then it's not true that b^n <= c * a^n, contradicting our assumption that b^n = O(a^n).
This means, in particular, that O(2^n) ≠ O(1.5^n) and that O(3^n) ≠ O(2^n). This is why, when using big-O notation, it's still necessary to specify the base of any exponents that end up being used.
One more thing to notice: although 2^(1000000000/2) equals roughly 1.41^1000000000, which looks close to 1.5^1000000000, these are totally different numbers. The first is of the form 10^(10^8.1-ish) and the second of the form 10^(10^8.2-ish). That might not seem like a big difference, but it's absolutely colossal. Take, for example, 10^(10^1) and 10^(10^2). The first number is 10^10, written as a 1 followed by 10 zeros; the second is 10^100, one googol, written as a 1 followed by 100 zeros. There's a huge difference between them: the first is close to the world population, while the second is about the total number of atoms in the universe!
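The proof's core observation, that (b/a)^n outgrows any constant c, is easy to check numerically (a Python sketch; the bases and the candidate constant are arbitrary picks):

```python
import math

a, b = 2.0, 3.0            # bases with 1 < a < b
c = 1_000_000.0            # any candidate constant from the definition

# (b/a)^n grows without bound, so any n past log_(b/a)(c) breaks
# the inequality b^n <= c * a^n:
n = math.ceil(math.log(c, b / a)) + 1
assert (b / a) ** n > c    # equivalently, b**n > c * a**n
```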
Hope this helps!

Why are O(2n^2) and O(100 n^2) the same as O(n^2) in algorithm complexity?

I am new in the algorithm analysis domain. I read here in the Stack Overflow question
"What is a plain English explanation of "Big O" notation?" that O(2n^2) and O(100 n^2) are the same as O(n^2). I don't understand this, because if we take n = 4, the number of operations will be:
O(2 n^2) = 32 operations
O(100 n^2) = 1600 operations
O(n^2) = 16 operations
Can anyone explain why we are supposed to treat these different operation counts as equivalent?
Why this is true can be derived directly from the formal definition. More specifically, f(n) = O(g(n)) if and only if |f(n)| <= M|g(n)| for all n >= n0, for some M and n0. Here you're free to pick M as you wish, so if M = 5 makes f(n) = O(100 n^2) true, then M = 5*100 = 500 makes f(n) = O(n^2) true.
Why this is useful is a bit of a different story.
Some concerns with having constants matter:
What operations are we measuring? Array accesses? Arithmetic operations? Multiplication only? Arithmetic with multiplication weighted double as much as addition? You may want to compare algorithms (that have the same big-O complexity) using such a metric, when in fact there can be subtle differences in the number of operations that even the most experienced computer scientists can miss.
Let's say you can assign a reasonable weight to each operation. Now there has to be across the board agreement to this, otherwise you'll have some near-meaningless analyses of algorithms done by someone using different weights (except for what information big-O would've given you).
The weights may be time-bound, as the speed of operations improve with time, and some operations may improve faster than others.
The weights may be environment-bound, as the speed of operations can differ on different environments. For example, disk read is a lot slower than memory read.
Big-O (which is part of asymptotic complexity) avoids all of these issues. You only check how many times some piece of code that takes a constant amount of time (i.e. independent of input size) is executed. As an example:
c = 0
for i = 1 to n
    for j = 1 to n
        for k = 1 to n
            x = input[i]*input[j]
            y = input[j]*input[k]
            z = input[i]*input[k]
            c += (x-y)*z
So there are 4 multiplications, 1 subtraction and 1 addition, each executed n^3 times, but here we just say that this code:
x = input[i]*input[j]
y = input[j]*input[k]
z = input[i]*input[k]
c += (x-y)*z
runs in constant time (it will always take the same amount of time, regardless of how many elements there are in the array) and will be executed O(n^3) times, thus the running time is O(n^3).
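A runnable version of the pseudocode above (a Python sketch with a made-up function name), counting executions of the constant-time body:

```python
def triple_loop(values):
    n = len(values)
    c = 0
    body_runs = 0                       # executions of the O(1) body
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = values[i] * values[j]
                y = values[j] * values[k]
                z = values[i] * values[k]
                c += (x - y) * z
                body_runs += 1
    return c, body_runs

_, runs = triple_loop([1, 2, 3, 4])
assert runs == 4 ** 3                   # the body runs exactly n^3 times
```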
Because O(f(n)) means that the function in question is bounded by some constant times f(n). If g is bounded by a multiple of 100*f(n), it is also bounded by a multiple of f(n). Specifically, O(2n^2) does not promise an exact operation count such as 32 for n = 4; it says that for all n the count is not greater than C * 2n^2 for some fixed C, independent of n.
Because it is a classification, so it places algorithms in some complexity class. The classes are O(1), O(n), O(n log n), O(n ^ 2), O(n ^ 3), O(n ^ n), etc. By definition, two algorithms are in the same complexity class if the difference is a constant factor when n goes to infinity (the big-oh notation is for comparing algorithmic complexity for large values of n).