It seems like the best complexity would be linear O(n).
The case doesn't really matter; I'm speaking of greedy algorithms in general.
Sometimes it pays off to be greedy?
The specific case I'm interested in is computing change.
Say you need to give 35 cents in change, and you have coins of 1, 5, 10, and 25. The greedy algorithm, coded simply, would solve this problem quickly and easily: first grab 25 cents, the highest value that goes into 35, then 10 cents to complete the total. This would be the best case. Of course there are bad cases, and cases where this greedy algorithm has issues; I'm talking about the best-case complexity for determining this type of problem.
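Coded simply, that greedy idea looks something like this (a Python sketch; the function name and the default coin tuple are just made up for illustration):

    def greedy_change(amount, coins=(25, 10, 5, 1)):
        """Greedily take the largest coin that still fits, until nothing is left."""
        result = []
        for coin in coins:            # coins assumed sorted from highest to lowest
            while amount >= coin:
                result.append(coin)
                amount -= coin
        return result                 # e.g. greedy_change(35) -> [25, 10]

For 35 cents this returns [25, 10]; with the denominations above the greedy choice happens to be optimal, but for other coin sets it may not be.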
Any algorithm that has an output of n items that must be taken individually has at best O(n) time complexity; greedy algorithms are no exception. A more natural greedy version of e.g. a knapsack problem converts something that is NP-complete into something that is O(n^2)--you try all items, pick the one that leaves the least free space remaining; then try all the remaining ones, pick the best again; and so on. Each step is O(n). But the complexity can be anything--it depends on how hard it is to be greedy. (For example, a greedy clustering algorithm like hierarchical agglomerative clustering has individual steps that are O(n^2) to evaluate (at least naively) and requires O(n) of these steps.)
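As an illustration of that greedy knapsack heuristic, here is a rough Python sketch (my own, assuming items are just sizes and we only care about how full the knapsack gets):

    def greedy_fill(capacity, sizes):
        """Repeatedly pick the item that leaves the least free space remaining."""
        remaining = list(sizes)
        chosen = []
        while True:
            # among the items that still fit, the largest one leaves the least free space
            fitting = [s for s in remaining if s <= capacity]
            if not fitting:
                break
            best = max(fitting)
            chosen.append(best)
            remaining.remove(best)
            capacity -= best
        return chosen

Each pass scans all remaining items (O(n)), and there can be up to n passes, which is where the O(n^2) comes from.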
When you're talking about greedy algorithms, typically you're talking about the correctness of the algorithm rather than the time complexity, especially for problems such as change making.
Greedy heuristics are used because they're simple. This means easy implementations for easy problems, and reasonable approximations for hard problems. In the latter case you'll find time complexities that are better than guaranteed correct algorithms. In the former case, you can't hope for better than optimal time complexity.
GREEDY APPROACH
Knapsack problem: sort the given elements using merge sort, which is O(n log n).
Find the maximum deadline, which takes O(n).
Select the elements one by one using linear search, which is O(n²).
O(n log n) + O(n) + O(n²) = O(n²) in the worst case.
Now, can we apply binary search instead of linear search?
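Those steps match the classic job-sequencing-with-deadlines greedy (that's my reading; the post says knapsack), so here is a hypothetical Python sketch of the O(n²) version with the linear slot search:

    def job_sequencing(jobs):
        """jobs: list of (profit, deadline). Greedy: sort by profit, then place each
        job in the latest free slot at or before its deadline (linear search)."""
        jobs = sorted(jobs, key=lambda j: j[0], reverse=True)   # O(n log n)
        max_deadline = max(d for _, d in jobs)                  # O(n)
        slots = [None] * (max_deadline + 1)                     # slots 1..max_deadline
        for profit, deadline in jobs:                           # O(n) jobs...
            for t in range(deadline, 0, -1):                    # ...O(n) slot scan each
                if slots[t] is None:
                    slots[t] = (profit, deadline)
                    break
        return [job for job in slots if job is not None]

A plain binary search cannot directly replace the inner scan, because the free slots are not contiguous; the standard speed-up is a disjoint-set structure that jumps straight to the latest free slot, which brings the placement phase down to near-linear time and leaves the O(n log n) sort as the dominant term.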
Greedy or not has essentially nothing to do with computational complexity, other than the fact that greedy algorithms tend to be simpler than other algorithms to solve the same problem, and hence they tend to have lower complexity.
Related
I'm trying to answer this question about algorithms and I don't understand what the answer could possibly be. I don't have any example to provide; I'm sharing it the same way it was shared with me:
"If the complexity of the X algorithm for the worst case is equal to the complexity of the Y algorithm for the best case, which of these two algorithms is faster? Explain why!"
They're not looking for any specific answer. They're looking for how you reason about the question. For example, you can reason as follows:
Obviously, one would prefer an algorithm whose worst case is as good as another algorithm's best case, because in the worst case they're equal and in the best case it's better. But complexity isn't the only criterion by which algorithms should be judged, ...
This is one of those "see how you reason about things" questions and not a "get the right answer" question.
Let me try to explain it in steps:
Understand that an algorithm can have different best/average/worst time complexity depending on factors such as the size and arrangement of the input.
If algorithm X's worst-case complexity is equal to algorithm Y's best-case complexity, then you can reason that, overall, algorithm X is faster than algorithm Y; but this considers only the asymptotic complexity. See the next point.
Of course there are many other factors you have to consider. Consider the scenario where algorithm X performs better than Y only for very specific inputs, but on average and in the worst case both X and Y perform the same; then it is worth understanding the trade-offs between the two algorithms, such as space complexity and amortized complexity.
I know that for some problems, no matter what algorithm you use to solve them, there will always be a certain minimum amount of time required. I know Big-O captures the worst case (maximum time needed), but how can you find the minimum time required as a function of n? Can we find the minimum time needed for sorting n integers, or perhaps for finding the minimum of n integers?
What you are looking for is called best-case complexity. It is a largely useless analysis for algorithms; worst-case analysis is the most important, and average-case analysis is sometimes used in special scenarios.
The best-case complexity depends on the algorithm. For example, in a linear search the best case is when the searched number is at the beginning of the array, and in a binary search it is when the number sits at the first dividing point. In these cases the complexity is O(1).
For a single problem, best-case complexity may vary depending on the algorithm. For example, let's discuss some basic sorting algorithms.
In bubble sort, the best case is when the array is already sorted. But even in this case you have to check all elements to be sure, so the best case here is O(n) (see the sketch after this answer). The same goes for insertion sort.
For quicksort/mergesort/heapsort the best-case complexity is O(n log n).
For selection sort it is O(n^2).
So from the above cases you can see that the complexity (whether best, worst, or average) depends on the algorithm, not on the problem.
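To make the bubble-sort best case above concrete, here is a small sketch (assuming the common early-exit variant, which is what gives the O(n) best case on already-sorted input):

    def bubble_sort(a):
        """O(n^2) in the worst case, but O(n) in the best case:
        one pass with no swaps means the array is already sorted."""
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:        # already sorted: stop after a single pass
                break
        return a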
I am attempting to prepare a presentation to explain the basics of algorithm analysis to my co-workers - some of them have never had a lecture on the subject before, but everyone has at least a few years programming behind them and good math backgrounds, so I think I can teach this. I can explain the concepts fine, but I need concrete examples of some code structures or patterns that result in factors so I can demonstrate them.
Geometric factors (n, n^2, n^3, etc) are easy, embedded loops using the same sentinel, but I am getting lost on how to describe and show off some of the less common ones.
I would like to incorporate exponential (2^n or c^n), logarithmic (n log(n) or just log(n)), and factorial (n!) factors in the presentation. What are some short, teachable ways to get these in code?
A divide-and-conquer algorithm that does a constant amount of work for each time it divides the problem in half is O(log n). For example a binary search.
A divide-and-conquer algorithm that does a linear amount of work for each time it divides the problem in half is O(n * log n). For example a merge sort.
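Minimal sketches of both, assuming plain Python and made-up function names:

    def binary_search(a, target):
        """O(log n): constant work each time the search range is halved."""
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def merge_sort(a):
        """O(n log n): linear merge work per level, log n levels of halving."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]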
Exponential and factorial are probably best illustrated by iterating respectively over all subsets of a set, or all permutations of a set.
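For instance, a quick sketch using Python's itertools (the item list is just a placeholder):

    from itertools import combinations, permutations

    items = ['a', 'b', 'c']

    # Exponential, 2^n: every subset of the set.
    subsets = [set(c) for r in range(len(items) + 1)
                      for c in combinations(items, r)]

    # Factorial, n!: every ordering of the set.
    orderings = list(permutations(items))

    print(len(subsets))    # 8 = 2^3
    print(len(orderings))  # 6 = 3!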
Exponential: naive Fibonacci implementation (see the sketch after this list).
n log(n) or just log(n): Sorting and Binary search
Factorial: Naive traveling salesman solutions. Many naive solutions to NP-complete problems.
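Here is the naive Fibonacci sketch referred to above; each call spawns two more calls, so the number of calls grows exponentially in n:

    def fib(n):
        """Naive recursion: fib(n) calls fib(n-1) and fib(n-2), so the
        number of calls grows exponentially (bounded by O(2^n))."""
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)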
n! problems are pretty simple. There are many NP-complete problems with naive n!-time solutions, such as the travelling salesman problem.
If in doubt, pick one of the sorting algorithms - everyone knows what they're supposed to do and therefore they're easy to explain in relation to the complexity stuff. Wikipedia has quite a good overview.
Why do divide-and-conquer algorithms often run faster than brute force? For example, finding the closest pair of points. I know you can show me the mathematical proof, but intuitively, why does this happen? Magic?
Theoretically, is it true that "divide and conquer is always better than brute force"? If it is not, is there any counterexample?
For your first question, the intuition behind divide-and-conquer is that in many problems the amount of work that has to be done is based on some combinatorial property of the input that scales more than linearly.
For example, in the closest pair of points problem, the runtime of the brute-force answer is determined by the fact that you have to look at all O(n^2) possible pairs of points.
If you take something that grows quadratically and cut it into two pieces, each half the size of before, it takes one quarter of the initial time to solve the problem in each half, so solving the problem in both halves takes roughly half the time required for the brute-force solution. Cutting it into four pieces would take one fourth of the time, cutting it into eight pieces one eighth, etc.
The recursive version ends up being faster in this case because at each step, we avoid doing a lot of work from dealing with pairs of elements by ensuring that there aren't too many pairs that we actually need to check. Most algorithms that have a divide and conquer solution end up being faster for a similar reason.
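A tiny illustration of that arithmetic, just counting pairs rather than solving anything (the value of n is arbitrary):

    def pairs(n):
        """Number of distinct pairs among n points."""
        return n * (n - 1) // 2

    n = 1000
    print(pairs(n))            # 499500 pairs for brute force over the whole set
    print(2 * pairs(n // 2))   # 249500 pairs if we only look within each half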
For your second question, no, divide and conquer algorithms are not necessarily faster than brute-force algorithms. Consider the problem of finding the maximum value in an array. The brute-force algorithm takes O(n) time and uses O(1) space as it does a linear scan over the data. The divide-and-conquer algorithm is given here:
If the array has just one element, that's the maximum.
Otherwise:
Cut the array in half.
Find the maximum in each half.
Compute the maximum of these two values.
This takes time O(n) as well, but uses O(log n) memory for the stack space. It's actually worse than the simple linear algorithm.
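For reference, the recursive version described above might look like this in Python (a sketch, assuming a non-empty array):

    def max_dc(a, lo=0, hi=None):
        """Divide-and-conquer maximum: O(n) time, but O(log n) stack depth."""
        if hi is None:
            hi = len(a) - 1
        if lo == hi:                        # one element: that's the maximum
            return a[lo]
        mid = (lo + hi) // 2                # cut the array in half
        return max(max_dc(a, lo, mid),      # maximum of each half,
                   max_dc(a, mid + 1, hi))  # then the larger of the two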
As another example, the maximum single-sell profit problem has a divide-and-conquer solution, but the optimized dynamic programming solution is faster in both time and memory.
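For comparison, the usual linear-time single-pass solution to that problem can be sketched as follows (assuming the problem is: given a list of prices, choose one buy and one later sell to maximize profit):

    def max_single_sell_profit(prices):
        """O(n) time, O(1) space: track the cheapest price seen so far and the
        best profit achievable by selling at the current price."""
        best_profit = 0
        min_price = prices[0]
        for price in prices[1:]:
            best_profit = max(best_profit, price - min_price)
            min_price = min(min_price, price)
        return best_profit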
Hope this helps!
I recommend you read through chapter 5 of Algorithm Design; it explains divide-and-conquer very well.
Intuitively, for a problem, if you can divide it into two sub-problems with the same pattern as the original one, and the time complexity of merging the results of the two sub-problems into the final result is sufficiently small, then it's faster than solving the original complete problem by brute force.
As said in Algorithm Design, you actually cannot gain too much from divide-and-conquer in terms of time; generally you can only reduce the time complexity from a higher polynomial to a lower polynomial (e.g. from O(n^3) to O(n^2)), but hardly from exponential to polynomial (e.g. from O(2^n) to O(n^3)).
I think the most you can gain from divide-and-conquer is the mindset for problem solving. It's always a good attempt to break the original big problem down to smaller and easier sub-problems. Even if you don't get a better running time, it still helps you think through the problem.
Is there any way to test an algorithm for perfect optimization?
There is no easy way to prove that any given algorithm is asymptotically optimal.
A proof of optimality, if one ever comes, sometimes follows years or decades after the algorithm has been written. A classic example is the Union-Find/disjoint-set data structure.
Disjoint-set forests are a data structure where each set is represented by a tree data structure, in which each node holds a reference to its parent node. They were first described by Bernard A. Galler and Michael J. Fischer in 1964, although their precise analysis took years.
[...] These two techniques complement each other; applied together, the amortized time per operation is only O(α(n)), where α(n) is the inverse of the function f(n) = A(n,n), and A is the extremely quickly-growing Ackermann function.
[...] In fact, this is asymptotically optimal: Fredman and Saks showed in 1989 that Ω(α(n)) words must be accessed by any disjoint-set data structure per operation on average.
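For concreteness, here is a minimal disjoint-set forest sketch with the two standard optimizations (union by rank and path compression, presumably the "two techniques" the quote refers to):

    class DisjointSet:
        """Disjoint-set forest with union by rank and path compression;
        amortized near-constant (O(alpha(n))) time per operation."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            if self.parent[x] != x:
                self.parent[x] = self.find(self.parent[x])   # path compression
            return self.parent[x]

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:                # union by rank
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1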
For some algorithms optimality can be proven after very careful analysis, but generally speaking, there's no easy way to tell if an algorithm is optimal once it's written. In fact, it's not always easy to prove if the algorithm is even correct.
See also
Wikipedia/Matrix multiplication
The naive algorithm is O(N^3), Strassen's is roughly O(N^2.807), Coppersmith-Winograd is O(N^2.376), and we still don't know what is optimal.
Wikipedia/Asymptotically optimal
it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(nα(n)) algorithm for finding minimum spanning trees. Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way.
Practical considerations
Note that sometimes asymptotically "worse" algorithms are better in practice due to many factors (e.g. ease of implementation, actually better performance for the given input parameter range, etc).
A typical example is quicksort with a simple pivot selection that may exhibit quadratic worst-case performance, but is still favored in many scenarios over a more complicated variant and/or other asymptotically optimal sorting algorithms.
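For instance, a quicksort sketch with the simplest possible pivot choice (the first element); on already-sorted input every partition is maximally unbalanced and the running time degrades to quadratic:

    def quicksort(a):
        """Simple first-element pivot: O(n log n) on average, O(n^2) worst case
        (e.g. an already-sorted list makes every partition maximally unbalanced)."""
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)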
For those of us mortals who merely want to know if an algorithm:
reasonably works as expected;
is faster than others;
there is an easy step called 'benchmark'.
Pick up the best contenders in the area and compare them with your algorithm.
If your algorithm wins, then it better matches your needs (the ones defined by your benchmarks).
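A minimal benchmarking sketch using Python's timeit; the two contenders here (the built-in sorted and a placeholder my_sort) are just stand-ins for whatever you are comparing:

    import random
    import timeit

    def my_sort(a):                      # placeholder for your own algorithm
        return sorted(a)

    data = [random.random() for _ in range(10_000)]

    for name, fn in [("builtin sorted", sorted), ("my_sort", my_sort)]:
        t = timeit.timeit(lambda: fn(list(data)), number=20)
        print(f"{name}: {t:.3f} s for 20 runs")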