O-notation, O(∞) = O(1)? - infinity

So, a quick thought: could one argue that O(∞) is actually O(1)?
I mean, it doesn't depend on the input size.
So in some way it's constant, even though it's infinite.
Or is the only 'correct' way to express it O(∞)?

Infinity is not a number, or at least not a real number, so the expression is malformed. The correct way to express this is to simply state that a program doesn't terminate. Note: program, not algorithm, as an algorithm is guaranteed to terminate.
(If you wanted, you might be able to define big-O notation on transfinite numbers. I'm not sure if that would be of any use, though.)

Your argument is not quite correct.
Big O notation disregards constant multiples; there's no difference between O(1) and O(42), or between O(log(n)) and O(3π log(n)).
Standard convention is to not use any constant multiples.
However, O(∞) would mean an “algorithm” that never terminates, as opposed to O(1) which will terminate at some point.
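A minimal sketch of that distinction (hypothetical functions, purely for illustration): both routines below ignore their input size, but only the first one terminates, so only the first one has a running time to bound at all:
def constant_work(n):
    # O(1): a fixed number of operations, independent of n, and it terminates.
    return n % 2 == 0

def never_terminates(n):
    # Also independent of n, but this loop never ends, so there is no
    # running time to bound and no meaningful big-O statement to make.
    while True:
        pass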

To answer the question:
O-notation, O(∞) = O(1)?
No.
The main difference is that O(1) will end at some point, while O(∞) never ends.
Neither involves a variable, but they have different meanings:
O(1) (or O(121), or O(whatever, as long as it isn't infinity)): independent of the function's arguments, and terminating
O(∞): independent of the function's arguments, and never terminating
As pointed out in another answer, infinity isn't really in the domain of big-O notation, but the simple 'no' still stands, of course: O(1) and O(∞) are not the same.

Big-Oh is a measure of how the resources required scale as N increases. O(5 hours) and O(5 seconds) are both O(1), since no extra resources are needed as N increases.
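As a hedged illustration (hypothetical function), the work below is the same whether data holds ten items or ten million, which is exactly what O(1) captures:
import time

def five_second_task(data):
    # Hypothetical O(1) task: the fixed 5-second sleep dominates, and the
    # amount of work does not grow with len(data).
    time.sleep(5)
    return data[0] if data else None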

Related

Is an iterative solution for Fibonacci series pseudo-polynomial?

So when we do an iterative solution to find the nth number in a Fibonacci sequence, we run a for loop (n-2) times. This would mean that the time complexity would be O(n). Is this correct or would it actually be pseudo-polynomial depending on the number of bits of the input, much like the Knapsack problem?
Here, I assume Fib(n) is an iterative version of a program that computes Fibonacci numbers. Perhaps something like:
def Fib(n):
    a, b = 0, 1
    for _ in xrange(n):
        a, b = b, a + b
    return a
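(With this sketch, for example, Fib(10) returns 55 and Fib(0) returns 0.)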
"Fib(n) is pseudo-polynomial" means in this context that computing Fib is bounded by a polynomial of its argument, n, but isn't bounded by a polynomial function of the size of the argument, log(n). That's true in this case.
"Fib(n) is O(n)" is a statement about the running time of Fib with respect to the value of its argument. There's sometimes ambiguity what "n" is, but here there's none -- it's the input to Fib, otherwise "n" would refer to two different things in the original statement. That's true here (although see the technical side-note below).
"Fib is O(n)" is ambiguous. There are people who will tell you that n clearly refers to the argument, and there's others who will tell you that n always refers to the size of the argument. The truth is that it's ambiguous and if it's not clear in context you should say what you mean (or ask what it means if you hear it and are confused). One context where it's not ambiguous is when you're talking about classes of P/NP problems -- there it's assumed that complexities are always relative to the size of the input.
A technical side-note
The iterative version of Fib(n) performs O(n) arithmetic operations, but whether it's O(n) time depends on your computational model, and specifically whether it can perform arbitrary integer arithmetic operations in O(1) time. Personally, I'd be careful and say "Fib(n) performs O(n) arithmetic operations" rather than "Fib(n) is O(n)" -- and if you plot the running time of Fib(n), you'll find it's not linear time in practice, as real bignum implementations are certainly not O(1) for all basic operations.
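To make that side-note concrete, here is a rough, machine-dependent timing sketch (a Python 3 variant of the code above, using range instead of xrange): the loop performs n additions, but the additions themselves get slower as the numbers grow, so the measured time is not linear:
import time

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Each step is one addition, but the operands have roughly 0.69 * n bits,
# so the wall-clock time grows faster than linearly for large n.
for n in (10000, 20000, 40000):
    start = time.perf_counter()
    fib(n)
    print(n, round(time.perf_counter() - start, 4), "seconds")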
Yes, it is in fact O(n). The time complexity of the Knapsack problem is a really weird one and is an exception.

What's the Big-O of T(n)=Constant?

Would it be O(Constant)?
Example:
T(n) = 10
Would it be correct to say that the big-O is O(10)?
We usually write it as O(1) since constant factors aren't relevant.
Of course, a zero constant is mathematically quite distinct. For practical purposes, though, nothing takes zero time unless we simply aren't doing it at all, and if we aren't doing it, we probably don't care.
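Spelled out against the definition (a short worked step, not part of the original answer): T(n) = 10 <= 10 * 1 for every n >= 1, so the definition of big-O is satisfied with c = 10 and n0 = 1, which is exactly the claim T(n) = O(1). The same argument works for any fixed constant, which is why O(10) and O(1) denote the same class.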

Asymptotic Notations-Big Oh Notation

What is the clear interpretation of this?
O(1) + O(2) + O(3) + O(4) + O(5) + ... + O(n)
And how is this different from
sigma O(i), 1 <= i <= n?
CLRS says they are different, but does not explain how.
If I remember correctly, the asymptotic complexity is always expressed with the highest order function, so
O(1)+O(2)+...+O(n)
is just
O(n)
Which makes sense if n is reasonably large. If n is small, the whole complexity stuff makes little sense anyway.

Still sort of confused about Big O notation

So I've been trying to understand Big O notation as well as I can, but there are still some things I'm confused about. I keep reading that if something is O(n), it usually refers to the worst case of an algorithm, but that it doesn't necessarily have to refer to the worst case scenario, which is why we can say the best case of insertion sort, for example, is O(n). However, I can't really make sense of what that means. I know that if the worst case is O(n^2), it means that the function that represents the algorithm in its worst case grows no faster than n^2 (there is an upper bound). But if you have O(n) as the best case, how should I read that? In the best case, the algorithm grows no faster than n? What I picture is a graph with n as the upper bound.
If the best case scenario of an algorithm is O(n), then n is the upper bound of how fast the operations of the algorithm grow in the best case, so they cannot grow faster than n...but wouldn't that mean that they can grow as fast as O(log n) or O(1), since they are below the upper bound? That wouldn't make sense though, because O(log n) or O(1) is a better scenario than O(n), so O(n) WOULDN'T be the best case? I'm so lost lol
Big-O, Big-Θ, Big-Ω are independent from worst-case, average-case, and best-case.
The notation f(n) = O(g(n)) means f(n) grows no more quickly than some constant multiple of g(n).
The notation f(n) = Ω(g(n)) means f(n) grows no more slowly than some constant multiple of g(n).
The notation f(n) = Θ(g(n)) means both of the above are true.
Note that f(n) here may represent the best-case, worst-case, or "average"-case running time of a program with input size n.
Furthermore, "average" can have many meanings: it can mean the average input or the average input size ("expected" time), or it can mean in the long run (amortized time), or both, or something else.
Often, people are interested in the worst-case running time of a program, amortized over the running time of the entire program (so if something costs n initially but only costs 1 time for the next n elements, it averages out to a cost of 2 per element). The most useful thing to measure here is the least upper bound on the worst-case time; so, typically, when you see someone asking for the Big-O of a program, this is what they're looking for.
Similarly, to prove a problem is inherently difficult, people might try to show that the worst-case (or perhaps average-case) running time is at least a certain amount (for example, exponential).
You'd use Big-Ω notation for these, because you're looking for lower bounds on these.
However, there is no special relationship between worst-case and Big-O, or best-case and Big-Ω.
Both can be used for either, it's just that one of them is more typical than the other.
So, upper-bounding the best case isn't terribly useful. Yes, if the algorithm always takes O(n) time, then you can say it's O(n) in the best case, as well as on average, as well as the worst case. That's a perfectly fine statement, except the best case is usually very trivial and hence not interesting in itself.
Furthermore, note that f(n) = n = O(n^2) -- this is technically correct, because f grows more slowly than n^2, but it is not useful because it is not the least upper bound -- there's a very obvious upper bound that's more useful than this one, namely O(n). So yes, you're perfectly welcome to say the best/worst/average-case running time of a program is O(n!). That's mathematically perfectly correct. It's just useless, because when people ask for Big-O they're interested in the least upper bound, not just a random upper bound.
It's also worth noting that it may simply be insufficient to describe the running-time of a program as f(n). The running time often depends on the input itself, not just its size. For example, it may be that even queries are trivially easy to answer, whereas odd queries take a long time to answer.
In that case, you can't just give f as a function of n -- it would depend on other variables as well. In the end, remember that this is just a set of mathematical tools; it's your job to figure out how to apply it to your program and to figure out what's an interesting thing to measure. Using tools in a useful manner needs some creativity, and math is no exception.
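A small sketch of that last point (hypothetical function, not from the answer): the running time below depends on the parity of the query, not just on its size, so a single f(n) cannot describe it:
def answer_query(q):
    # Even queries: answered in constant time.
    # Odd queries: work proportional to q, so the running time depends on
    # which input of a given size we get, not just on the size itself.
    if q % 2 == 0:
        return 0
    return sum(range(q))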
Informally speaking, "best case has O(n) complexity" means that when the input meets certain conditions (i.e. is best for the algorithm performed), then the count of operations performed in that best case is linear with respect to n (e.g. is 1n or 1.5n or 5n). So if the best case is O(n), usually this means that in the best case it is exactly linear with respect to n (i.e. asymptotically no smaller and no bigger than that) - see (1). Of course, if in the best case that same algorithm can be proven to perform at most c * log(n) operations (where c is some constant), then this algorithm's best-case complexity would be informally denoted as O(log n) and not as O(n), and people would say it is O(log n) in its best case.
Formally speaking, "the algorithm's best case complexity is O(f(n))" is an informal and wrong way of saying that "the algorithm's complexity is Ω(f(n))" (in the sense of the Knuth definition - see (2)).
See also:
(1) Wikipedia "Family of Bachmann-Landau notations"
(2) Knuth's paper "Big Omicron and Big Omega and Big Theta"
(3) Big Omega notation - what is f = Ω(g)?
(4) What is the difference between Θ(n) and O(n)?
(5) What is a plain English explanation of "Big O" notation?
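To tie this back to the insertion sort mentioned in the question, here is a standard sketch (the usual in-place version): on already-sorted input the inner loop never runs, so the best case is linear, while on reverse-sorted input every element is shifted all the way back, which gives the quadratic worst case.
def insertion_sort(a):
    # Best case (already sorted): the while condition fails immediately each
    # time, so the total work is linear in len(a).
    # Worst case (reverse sorted): each key is shifted past all previous
    # elements, giving quadratic work.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a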
I find it easier to think of O() as about ratios than about bounds. It is defined as bounds, and so that is a valid way to think of it, but it seems a bit more useful to think about "if I double the number/size of inputs to my algorithm, does my processing time double (O(n)), quadruple (O(n^2)), etc...". Thinking about it that way makes it a little bit less abstract - at least to me...
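As a quick worked example of that ratio view (not from the original answer): if T(n) = c * n^2 for some constant c, then T(2n) / T(n) = 4 no matter what c is, which is the "quadruple" in the ratio view; for T(n) = c * n the same ratio is 2, and for T(n) = c it stays 1.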

Relation between worst case and average case running time of an algorithm

Let's say A(n) is the average running time of an algorithm and W(n) is the worst. Is it correct to say that
A(n) = O(W(n))
is always true?
The Big O notation is kind of tricky, since it only defines an upper bound on the execution time of a given algorithm.
What this means is: if f(x) = O(g(x)), then for every other function h(x) such that g(x) < h(x) you'll have f(x) = O(h(x)). The problem is, are those overestimated execution times useful? And the clear answer is: not at all. What you usually want is the "smallest" upper bound you can get, but this is not strictly required in the definition, so you can play around with it.
You can get a stricter bound using the other notations, such as Big Theta, as you can read here.
So, the answer to your question is yes, A(n) = O(W(n)), but that doesn't give any useful information about the algorithm.
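Spelled out, assuming A(n) and W(n) are the usual non-negative running-time functions: for every n, A(n) <= W(n), because an average over the inputs of size n can never exceed the maximum over those inputs. Taking c = 1 and n0 = 1 in the definition of big-O gives A(n) <= c * W(n) for all n >= n0, i.e. A(n) = O(W(n)).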
If you mean that A(n) and W(n) are functions, then yes, you can make such a statement in the general case - it follows from the formal definition of big-O.
Note that in terms of big-O there is little point in doing so, since it makes the real complexity harder to see. (In general, the three cases - worst, average, best - exist precisely to show the complexity more clearly.)
Yes, it is not a mistake to say so.
People use asymptotic notation to convey the growth of running time in specific cases in terms of input size. Comparing the average-case complexity with the worst-case complexity doesn't provide much insight into the function's growth in either case.
Whilst it is not wrong, it fails to provide more information than what we already know.
I'm unsure of exactly what you're trying to ask, but bear in mind the below.
The typical algorithm used to show the difference between average and worst case running time complexities is Quick Sort with poorly chosen pivots.
On average with a random sample of unsorted data, the runtime complexity is n log(n). However, with an already sorted set of data where pivots are taken from either the front/end of the list, the runtime complexity is n^2.
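A compact sketch of that behaviour (first-element pivot, purely illustrative; keep the sorted test input small, since the recursion depth also grows linearly in that case):
def quicksort(a):
    # First-element pivot: on shuffled data the partitions are balanced on
    # average, giving n log(n) work; on already-sorted data every partition
    # puts all remaining elements on one side, giving the n^2 worst case.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)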
