I was trying to write some sorting algorithms in Prolog and work out their complexity, and I started wondering whether they end up with a different complexity simply because they are written in a logic language.
Take quicksort, for example. It has an average complexity of O(n log n), and its code (not complete) goes like this:
quicksort([], []).
quicksort([Head|Tail], SortedList) :-
    split(Head, Tail, Left, Right),
    quicksort(Left, SortedLeft),
    quicksort(Right, SortedRight),
    append(SortedLeft, [Head|SortedRight], SortedList).
split is linear, and the quicksort recursion contributes the log n factor, which gives the (average) n log n. But what about append? It also has linear complexity. So is it (n^2) log n overall?
Doesn't the fact that in Prolog we can only access the elements of a list sequentially hurt the complexity of our programs? In that sense, isn't it better to use another language, like C for example?
When determining the complexity of algorithms, you should take the same steps as in other languages, and keep in mind that Prolog programs have a procedural/imperative reading as well as a logical one. The main difference comes from the possibility of backtracking, which in this case isn't very relevant.
The complexity of this program, assuming none of the steps backtracks, is given by the recurrence
T(n) ≤ 2T(n/2) + T_split(n) + T_append(n)
You've already observed that append/3 takes linear time, so if split/4 takes linear time as well,
T_split(n) + T_append(n) = Θ(n)
and you get T(n) ≤ 2T(n/2) + Θ(n), which is the same recurrence as for imperative quicksort.
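split/4 isn't shown in the question, but a typical definition along the following lines (my sketch, assuming the pivot-first argument order used above) makes a single pass over the list, so it is indeed linear:

    % Each element is consumed by exactly one recursive call,
    % so split/4 does Θ(n) work on a list of length n.
    split(_, [], [], []).
    split(Pivot, [X|Xs], [X|Left], Right) :-
        X =< Pivot,
        split(Pivot, Xs, Left, Right).
    split(Pivot, [X|Xs], Left, [X|Right]) :-
        X > Pivot,
        split(Pivot, Xs, Left, Right).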
Doesn't the fact that in Prolog we can only access the elements of a list sequentially hurt the complexity of our programs? In that sense, isn't it better to use another language, like C for example?
As I've shown, this is unrelated to the complexity problem that you presented. Yes, for many problems idiomatic C programs are faster than idiomatic Prolog programs. For other problems, idiomatic Prolog programs are a few lines of code whereas the corresponding C programs would include half a Prolog interpreter.
What is relevant here is that in C, quicksort can be written with O(lg n) space complexity, since you only need to maintain the recursion stack and can modify the array in place. A Prolog list, by contrast, is immutable, so your program builds a new list and has linear space complexity. Whether that's good or bad depends on the use case.
Related
I understand that Big Omega defines the lower bound of a function (or its best-case runtime).
Considering that almost every search algorithm could "luck out" and find the target element on the first iteration, would it be fair to say that its Big-Omega time complexity is O(1)?
I also understand that giving O(1) as the big-Omega bound may not be useful (other lower bounds may be tighter, i.e. closer to the evaluated function), but the question is: is it correct?
I've found multiple sources claiming that linear search is big-Omega O(n), even though some cases could complete in a single step, which differs from the best-case scenario as I understand it.
The lower bound (Ω) is not the fastest answer a given algorithm can give.
The lower bound of a given problem is equal to the worst case scenario of the best algorithm that solves the problem. When doing complexity analysis, you should never forget that "luck" is always in the hands of the input (the instance the algorithm is trying to solve).
When trying to find a lower bound, you imagine the "perfect algorithm" and try to "trap" it in a very hard case. Usually the algorithm is not defined and is only described based on its (hypothetical) performance. You use arguments such as "if the ideal algorithm is that fast, it cannot have this particular piece of knowledge and will therefore fail on this particular instance, i.e. the ideal algorithm doesn't exist". Replace "ideal" with the lower bound you are trying to prove.
For example, the lower bound for the min-search problem in an unsorted array is Ω(n). The proof is quite simple and, as is often the case, goes by contradiction: an algorithm A running in o(n) cannot inspect at least one item of the input array; if the item it did not inspect happened to be the minimum, A would fail. The contradiction proves that the problem is in Ω(n).
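That lower bound is also tight: a plain linear scan achieves it. As a sketch (in Prolog, to match the first thread on this page; list_min/2 is just an illustrative name):

    % list_min(+List, -Min): one pass, inspecting every element exactly once,
    % so the running time is Θ(n) for a list of length n.
    list_min([X], X).
    list_min([X|Xs], Min) :-
        list_min(Xs, MinTail),
        ( X =< MinTail -> Min = X ; Min = MinTail ).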
Maybe you can have a look at that answer I gave on a similar question.
The notations O, o, Θ, Ω, and ω are used in characterizing mathematical functions; for example, f(n) = n³ log n is in O(n⁴) and in Ω(n³).
So, the question is what mathematical functions we apply them to.
The mathematical functions that we tend to be interested in are things like "the worst-case time complexity of such-and-such algorithm, as a function of the size of its input", or "the average-case space complexity of such-and-such procedure, as a function of the largest element in its input". (Note: when we just say "the complexity of such-and-such algorithm", that's usually shorthand for its worst-case time complexity, as a function of some characteristic of its input that's hopefully obvious in context. Either way, it's still a mathematical function.)
We can use any of these notations in characterizing those functions. In particular, it's fine to use Ω in characterizing the worst case or average case.
We can also use any of these notations in characterizing things like "the best-case […]". That's unusual, but there are times when it may be relevant. Notably, we're not limited to Ω for that; just as we can use Ω in characterizing the worst case, we can also use O in characterizing the best case. It's all about what characterizations we're interested in.
You are confusing two different topics: Lower/upper bound, and worst-case/best-case time complexity.
The short answer to your question is: Yes, all search algorithms have a lower bound of Ω(1). Linear search (in the worst case, and on average) also has a lower bound of Ω(n), which is a stronger and more useful claim. The analogy is that 1 < π but also 3 < π, the latter being more useful. So in this sense, you are right.
However, your confusion seems to be between the notations for complexity classes (big-O, big-Ω, big-θ etc), and the concepts of best-case, worst-case, average case. The point is that the best case and the worst case time complexities of an algorithm are completely different functions, and you can use any of the notations above to describe any of them. (NB: Some claim that big-Ω automatically and exclusively describes best case time complexity and that big-O describes worst case, but this is a common misconception. They just describe complexity classes and you can use them with any mathematical functions.)
It is correct to say that the average time complexity of linear search is Ω(n), because we are just talking about the function that describes its average time complexity. Its best-case complexity is a different function, which happens not to be Ω(n), because, as you say, it can be constant time.
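To make the distinction concrete (a worked illustration, counting comparisons): for linear search over n elements, T_best(n) = 1 (the target sits in the first position), so T_best(n) = Θ(1), which is both O(1) and Ω(1); T_worst(n) = n (the target is last or absent), so T_worst(n) = Θ(n), which is both O(n) and Ω(n). These are two different functions of n, and each can be bounded with O, Ω, or Θ independently of the other.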
Is O(1/n) faster growing than O(1)? I am studying time complexity and saw O(1/n) being compared to O(n) as one of my exercise questions, and I had never seen that before. Not sure how to deduce the answer to this question.
A complexity of O(1/n) would mean that the more data you process, the faster the algorithm runs... Quite hard to believe, for a start. And second, let's do the math: the limit of 1/x as x goes to +∞ is zero...
So the algorithm would solve the problem instantly? Hey, let's forget about quantum computing, we've found something better! :)
Joking aside: such a complexity doesn't exist, because 1/n is a monotonically decreasing function, while complexities are non-decreasing functions of the input size. At best you get O(1), meaning constant time whatever the quantity of data. Even that is not such a common complexity for a whole algorithm, although it's quite frequent for individual operations / manipulations.
For example, retrieving the head of a standard linked list is indeed O(1), whether the list is empty or contains all the data in the universe (if that were storable...), because the head is exactly what the list handle points to. The same goes for every operation that only exchanges pointers/handles, for direct accesses (like the [] operator of most arrays), etc. But most algorithms don't have such a nice complexity.
But even a simple (deep) copy is O(n)... Most searches in a data structure are O(log₂(n)). Most sorts are O(n·log₂(n)). Most cross-comparisons are O(n²). All these functions are (strictly) increasing, and all of them tend to infinity as n tends to infinity.
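To make the contrast concrete with the Prolog-style lists from the first thread (a sketch; head/2 and copy_list/2 are illustrative names):

    % O(1): taking the head inspects only the first cell, no matter how long the list is.
    head([H|_], H).

    % O(n): a (deep) copy has to visit every cell.
    copy_list([], []).
    copy_list([X|Xs], [X|Ys]) :-
        copy_list(Xs, Ys).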
I have been learning Big-O at school as the "go-to" method for describing algorithm runtimes as better or worse than others. What I want to know is: will the algorithm with the better efficiency always outperform the worst of the lot, like bubble sort, in every single situation? Are there any situations where a bubble sort or another O(n²) algorithm will be better for a task than an algorithm with a lower O() runtime?
Generally, O() notation gives the asymptotic growth of a particular algorithm. That is, the category that an algorithm is placed into in terms of asymptotic growth indicates how long the algorithm will take to run as n, the number of items, grows.
For example, we say that if a given algorithm is O(n), then it "grows linearly", meaning that as n increases, its running time grows roughly in proportion to n, just like any other O(n) algorithm's.
That doesn't mean that it takes exactly as long as any other algorithm that grows as O(n), because we disregard constant factors. For example, if one algorithm's runtime is exactly 12n + 65 ms and another's is 8n + 44 ms, then for n = 1000 algorithm 1 takes 12065 ms and algorithm 2 takes 8044 ms. Clearly algorithm 2 requires less time to run, but they are both O(n).
There are also situations where, for small values of n, an algorithm that is O(n²) might outperform another algorithm that's O(n), due to constants in the runtime that the analysis ignores.
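As a worked illustration (with made-up constants): if the quadratic algorithm runs in 2n² steps and the linear one in 100n steps, then 2n² < 100n exactly when n < 50, so the O(n²) algorithm is the faster of the two for every input of fewer than 50 items, even though the O(n) algorithm wins for all larger inputs.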
Basically, Big-O notation gives you an estimate of the complexity of the algorithm, and can be used to compare different algorithms. In terms of application, though, you may need to dig deeper to find out which algorithm is best suited for a given project/program.
Big-O gives you the worst-case scenario. That means it assumes the input is in the worst possible order, and it also ignores the coefficients. If you use insertion sort on an array that is reverse-sorted, it will run in n^2 time; if you use insertion sort on an already-sorted array, it will run in n time. Therefore insertion sort runs faster than many other sorting algorithms on an already-sorted list, and slower than most (reasonable) algorithms on a reverse-sorted list.
Note: this applies to insertion sort, not selection sort; selection sort is always n^2, regardless of the input order.
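To see where insertion sort's two running times come from (a quick count of comparisons): the i-th insertion makes between 1 and i comparisons. On an already-sorted array every insertion stops after a single comparison, for about n comparisons in total, i.e. linear time; on a reverse-sorted array the i-th insertion makes i comparisons, for 1 + 2 + … + (n − 1) = n(n − 1)/2 in total, i.e. quadratic time.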
So I've been trying to understand Big-O notation as well as I can, but there are still some things I'm confused about. I keep reading that if something is O(n), it usually refers to the worst case of an algorithm, but that it doesn't necessarily have to refer to the worst-case scenario, which is why we can say, for example, that the best case of insertion sort is O(n). However, I can't really make sense of what that means.
I know that if the worst case is O(n^2), it means that the function that represents the algorithm in its worst case grows no faster than n^2 (there is an upper bound). But if you have O(n) as the best case, how should I read that? In the best case, the algorithm grows no faster than n? What I picture is a graph with n as the upper bound.
If the best case scenario of an algorithm is O(n), then n is the upper bound of how fast the operations of the algorithm grow in the best case, so they cannot grow faster than n...but wouldn't that mean that they can grow as fast as O(log n) or O(1), since they are below the upper bound? That wouldn't make sense though, because O(log n) or O(1) is a better scenario than O(n), so O(n) WOULDN'T be the best case? I'm so lost lol
Big-O, Big-Θ, Big-Ω are independent from worst-case, average-case, and best-case.
The notation f(n) = O(g(n)) means f(n) grows no more quickly than some constant multiple of g(n).
The notation f(n) = Ω(g(n)) means f(n) grows no more slowly than some constant multiple of g(n).
The notation f(n) = Θ(g(n)) means both of the above are true.
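Spelled out with the constants (the standard definitions behind the three statements above): f(n) = O(g(n)) means there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀; f(n) = Ω(g(n)) means there exist constants c > 0 and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀; f(n) = Θ(g(n)) means both hold (possibly with different constants c).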
Note that f(n) here may represent the best-case, worst-case, or "average"-case running time of a program with input size n.
Furthermore, "average" can have many meanings: it can mean the average input or the average input size ("expected" time), or it can mean in the long run (amortized time), or both, or something else.
Often, people are interested in the worst-case running time of a program, amortized over the running time of the entire program (so if something costs n initially but only costs 1 time for the next n elements, it averages out to a cost of 2 per element). The most useful thing to measure here is the least upper bound on the worst-case time; so, typically, when you see someone asking for the Big-O of a program, this is what they're looking for.
Similarly, to prove a problem is inherently difficult, people might try to show that the worst-case (or perhaps average-case) running time is at least a certain amount (for example, exponential).
You'd use Big-Ω notation for these, because you're looking for lower bounds on these.
However, there is no special relationship between worst-case and Big-O, or best-case and Big-Ω.
Both can be used for either, it's just that one of them is more typical than the other.
So, upper-bounding the best case isn't terribly useful. Yes, if the algorithm always takes O(n) time, then you can say it's O(n) in the best case, as well as on average, as well as the worst case. That's a perfectly fine statement, except the best case is usually very trivial and hence not interesting in itself.
Furthermore, note that f(n) = n = O(n²) -- this is technically correct, because f grows more slowly than n², but it is not useful because it is not the least upper bound -- there's a very obvious upper bound that's more useful than this one, namely O(n). So yes, you're perfectly welcome to say the best/worst/average-case running time of a program is O(n!). That's mathematically perfectly correct. It's just useless, because when people ask for Big-O they're interested in the least upper bound, not just a random upper bound.
It's also worth noting that it may simply be insufficient to describe the running-time of a program as f(n). The running time often depends on the input itself, not just its size. For example, it may be that even queries are trivially easy to answer, whereas odd queries take a long time to answer.
In that case, you can't just give f as a function of n -- it would depend on other variables as well. In the end, remember that this is just a set of mathematical tools; it's your job to figure out how to apply it to your program and to figure out what's an interesting thing to measure. Using tools in a useful manner needs some creativity, and math is no exception.
Informally speaking, "the best case has O(n) complexity" means that when the input meets certain conditions (i.e. is best for the algorithm being performed), then the number of operations performed in that best case is linear with respect to n (e.g. it is 1n or 1.5n or 5n).
So if the best case is O(n), usually this means that in the best case it is exactly linear with respect to n (i.e. asymptotically no smaller and no bigger than that); see (1). Of course, if in the best case that same algorithm can be proven to perform at most c · log N operations (where c is some constant), then its best-case complexity would informally be denoted as O(log N) rather than O(N), and people would say it is O(log N) in its best case.
Formally speaking, "the algorithm's best-case complexity is O(f(n))" is an informal, and strictly speaking incorrect, way of saying that "the algorithm's complexity is Ω(f(n))" (in the sense of Knuth's definition; see (2)).
See also:
(1) Wikipedia, "Family of Bachmann–Landau notations"
(2) Knuth, "Big Omicron and Big Omega and Big Theta"
(3) Big Omega notation - what is f = Ω(g)?
(4) What is the difference between Θ(n) and O(n)?
(5) What is a plain English explanation of "Big O" notation?
I find it easier to think of O() in terms of ratios than in terms of bounds. It is defined via bounds, so that is a valid way to think of it, but it seems a bit more useful to ask "if I double the number/size of inputs to my algorithm, does my processing time double (O(n)), quadruple (O(n²)), etc.?". Thinking about it that way makes it a little less abstract - at least to me...
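As a small worked check of that intuition: if an algorithm's running time is roughly T(n) = c·n², then T(2n) = c·(2n)² = 4·c·n² = 4·T(n), so doubling the input quadruples the time; for T(n) = c·n, T(2n) = 2·T(n), and the time merely doubles.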
I am trying to understand Big-O notation. Any help would be appreciated.
Say there is a program that creates a max-heap and then pushes and removes items.
Say there are n items.
Creating the heap takes O(n): read the items into an array and then heapify it.
To push an item, it takes O(1) and to remove it, it takes O(1)
To restore the heap after that, it takes log n for each removal, so n log n for n items.
So is the Big-O O(n + n log n)?
Or is it only O(n log n), because we choose the biggest term?
The complexity of heapifying (pushing) a new element into the heap is O(log N), not O(1) (unless you use a Fibonacci heap, which doesn't seem to be the case here).
Also, there is no separate class O(N + N log N): since N log N grows faster than N, this is simply written as O(N log N).
EDIT: Big-O notation only describes the asymptotic behavior of a function, that is, how fast it grows. As you get close to infinity, 2·f(x) and 11021392103·f(x) behave similarly, and that is why, when writing Big-O notation, we ignore any constant factor in front of the function.
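Putting the corrected costs together (a quick tally, assuming a standard binary heap): building the heap is O(N), and each of the N removals is O(log N), i.e. O(N log N) for all of them, so the whole procedure is O(N + N log N) = O(N log N).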
Formally speaking, O(N + N log N) is equivalent to O(N log N).
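One way to see the equivalence (a short worked bound, taking logarithms base 2): for N ≥ 2 we have log N ≥ 1, so N ≤ N log N and therefore N + N log N ≤ 2·N log N; conversely, N log N ≤ N + N log N. The two expressions bound each other within constant factors, which is exactly what it means for them to define the same O-class.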
That said, it's assumed that there are coefficients buried in each of these, à la O(aN + bN log(cN)). If you have very large N values, these coefficients become unimportant and the algorithm is bounded only by its largest term, which, in this case, is N log(N).
But it doesn't mean the coefficients are entirely unimportant. This is why in discussions of graph algorithms you'll often see authors say something like "the Floyd-Warshall algorithm runs in O(N^3) time, but has small coefficients".
If we could somehow write O(0.5N^3) in this case, we would. But it turns out that the coefficients vary depending on how you implement an algorithm and which computer you run it on. Thus, we settle for asymptotic comparisons, not necessarily because it is the best way, but because there isn't really a good alternative.
You'll also see things like "Worst-case: O(N^2), Average case: O(N)". This is an attempt to capture how the behavior of the algorithm varies with the input. Often times, presorted or random inputs can give you that average case, whereas an evil villain can construct inputs that produce the worst case.
Ultimately, what I am saying is this: O(N + N log N)=O(N log N). This is true, and it's the right answer for your homework. But we use this big-O notation to communicate and, in the fullness of time, you may find situations where you feel that O(N + N log N) is more expressive, perhaps if your algorithm is generally used for small N. In this case, do not worry so much about the formalism - just be clear about what it is you are trying to convey with it.