What is the difference between dynamic programming and greedy approach?

What is the main difference between dynamic programming and greedy approach in terms of usage?
As far as I understand, the greedy approach sometimes gives an optimal solution; in other cases, it is the dynamic programming approach that does.
Are there any particular conditions which must be met in order to use one approach (or the other) to obtain an optimal solution?

Based on Wikipedia's articles.
Greedy Approach
A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
Dynamic programming
The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memo-ized": the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.
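To make the memoization idea concrete, here is a minimal Python sketch using Fibonacci numbers (my own example, not part of the quoted article). The naive recursion would solve the same subproblems an astronomical number of times; storing each result makes every subproblem cost constant work after its first computation.

    from functools import lru_cache

    @lru_cache(maxsize=None)      # store each subproblem's answer ("memo-ize" it)
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(90))  # instant; the unmemoized recursion would never finish in practice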
Difference
Greedy choice property
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices.
This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to solution.
For example, let's say that you have to get from point A to point B as fast as possible, in a given city, during rush hour. A dynamic programming algorithm will look into the entire traffic report, looking into all possible combinations of roads you might take, and will only then tell you which way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and only then can you start driving. The path you will take will be the fastest one (assuming that nothing changed in the external environment).
On the other hand, a greedy algorithm will start you driving immediately and will pick the road that looks the fastest at every intersection. As you can imagine, this strategy might not lead to the fastest arrival time, since you might take some "easy" streets and then find yourself hopelessly stuck in a traffic jam.
Some other details...
In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
Dynamic programming is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure.

I would like to cite a paragraph which describes the major difference between greedy algorithms and dynamic programming algorithms, stated in the book Introduction to Algorithms (3rd edition) by Cormen et al., Chapter 15.3, page 381:
One major difference between greedy algorithms and dynamic programming is that instead of first finding optimal solutions to subproblems and then making an informed choice, greedy algorithms first make a greedy choice, the choice that looks best at the time, and then solve a resulting subproblem, without bothering to solve all possible related smaller subproblems.
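To make that structural difference concrete, here is a small sketch (my own instance, not from the book) using coin denominations {1, 3, 4}, where the two strategies visibly diverge on amount 6:

    # Minimum number of coins for `amount` with denominations {1, 3, 4},
    # a made-up instance chosen so that the greedy choice is not optimal.
    COINS = [4, 3, 1]

    def greedy_min_coins(amount):
        # Make the locally best choice first, then solve the one remaining subproblem.
        count = 0
        for c in COINS:
            while amount >= c:
                amount -= c
                count += 1
        return count

    def dp_min_coins(amount):
        # Solve *all* smaller subproblems first, then combine their answers.
        best = [0] + [float("inf")] * amount
        for a in range(1, amount + 1):
            best[a] = min(best[a - c] + 1 for c in COINS if c <= a)
        return best[amount]

    print(greedy_min_coins(6), dp_min_coins(6))  # 3 vs 2: greedy takes 4+1+1, DP finds 3+3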

Differences between the greedy method and dynamic programming are given below:
The greedy method never reconsiders its choices, whereas dynamic programming may revise decisions made at previous stages.
A greedy algorithm is typically faster, whereas dynamic programming costs more time and space but is guaranteed to find the optimum.
A greedy algorithm makes one local choice per subproblem, whereas dynamic programming solves all subproblems and then selects the one that leads to an optimal solution.
A greedy algorithm commits to each decision in one go, whereas dynamic programming weighs the decision at every stage.

In simple words: with dynamic programming (say, for the problem of sending a message across a network), one first examines the possible paths, finds the one that takes the shortest time, and only then starts the journey.
A greedy algorithm, on the other hand, makes the locally optimal decision on the spot without thinking about the next step, then makes a fresh local decision at the next step, and so on.
Note: dynamic programming is reliable, while greedy algorithms are not always reliable.

With reference to Biswajit Roy:
dynamic programming first plans and then goes,
while a greedy algorithm goes first and then continuously plans along the way.

The major difference between the greedy method and dynamic programming is that in the greedy method only one decision sequence is ever generated, whereas in dynamic programming more than one decision sequence may be generated.

Related

Is it possible that a greedy algorithm is also a dynamic programming algorithm?

I took an Analysis of Algorithms class, but I am still not sure about the two concepts.
I understand that the greedy approach uses the currently optimal choice to find a global optimum, while a DP algorithm reuses overlapping sub-results.
I believe the answer is "YES", but I couldn't find a good example of an algorithm that is both greedy and DP.
Could someone give me an example?
If the answer to the above question is "NO" then could someone explain to me why?
From looking at the Bellman equation, J(x) = min over u of [f(x, u) + J(g(x, u))], i.e. the current-stage cost f plus the optimal cost-to-go J of the successor state:
if in the minimization we can separate the f part (the current period) from the J part (the optimum over the other periods), then this corresponds precisely to the greedy approach. An easy example of this is when the objective function is the sum of the costs at each period:
J(u_1, u_2, ...) = sum_i f_i(u_i).
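A tiny sketch of that separable case (the stage costs f_i below are made up for illustration): because the objective is a plain sum with no coupling between stages, minimizing each stage independently, which is exactly the greedy choice, matches the exhaustive optimum.

    from itertools import product

    # Separable objective: J(u1, u2, u3) = f1(u1) + f2(u2) + f3(u3).
    U = [0, 1, 2, 3]                      # the same choice set at every stage
    f = [lambda u: (u - 1) ** 2,          # made-up per-stage costs
         lambda u: abs(u - 3),
         lambda u: 2 * u]

    greedy = [min(U, key=fi) for fi in f]         # minimize one stage at a time
    brute = min(product(U, repeat=3),             # exhaust all combinations
                key=lambda us: sum(fi(u) for fi, u in zip(f, us)))

    print(greedy, list(brute))  # both give [1, 3, 0]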
Here's my understanding:
greedy algorithms and dynamic programming are two different things. A greedy algorithm always makes the choice that seems best at that moment; it commits as soon as a new option pops up, regardless of what is going to happen next.
A dynamic programming algorithm combines the solutions of subproblems to get the final solution. It makes decisions based on the results of those subproblems, and it is used when the final solution depends on quantities carried between them. These are two different ways of thinking.
A dynamic programming algorithm always works on a problem that a greedy algorithm can solve, but its time and space costs are much higher than those of the greedy algorithm, while a greedy algorithm mostly cannot solve a DP problem.
So the answer is No
In optimization algorithms, the greedy approach and the dynamic programming approach are basically opposites. The greedy approach is to choose the locally optimal option, while the whole purpose of dynamic programming is to efficiently evaluate the whole range of options.
BUT that doesn't mean you can't have an algorithm that takes advantage of both strategies. The A* path-finding algorithm, for example, does just that, and is both a greedy algorithm and a dynamic programming algorithm. It uses the greedy approach to optimize the best cases, and the dynamic programming approach to optimize the worst cases.
See: https://en.wikipedia.org/wiki/A*_search_algorithm
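A compact sketch of that combination on a grid (the maze, unit step costs, and Manhattan heuristic are my own choices): the priority queue greedily expands the node that currently looks closest to the goal, while the table of best known per-cell costs is stored and reused in dynamic programming fashion.

    import heapq

    def astar(grid, start, goal):
        def h(p):                         # Manhattan-distance heuristic (the greedy part)
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        best_g = {start: 0}               # best known cost per cell (the DP part)
        frontier = [(h(start), 0, start)]
        while frontier:
            f, g, pos = heapq.heappop(frontier)
            if pos == goal:
                return g
            if g > best_g.get(pos, float("inf")):
                continue                  # stale queue entry; a cheaper path was found
            r, c = pos
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < best_g.get((nr, nc), float("inf")):
                        best_g[(nr, nc)] = ng
                        heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
        return None

    maze = [[0, 0, 0],
            [1, 1, 0],   # 1 = wall
            [0, 0, 0]]
    print(astar(maze, (0, 0), (2, 0)))  # 6: the wall forces a detour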

The greedy algorithm and implementation

Hello, I've just started learning greedy algorithms, and I first looked at the classic coin-changing problem. I could understand the greediness in the algorithm (i.e., choosing a locally optimal solution in the hope of reaching a global optimum), since I am choosing the highest-value coin such that sum + {value of chosen coin} <= total value. Then I started to solve some greedy algorithm problems on various sites. I could solve most of the problems, but couldn't figure out exactly where the greediness was applied. I coded the only solution I could think of for each problem and got it accepted. The editorials show the same way of solving the problem, but I could not understand the application of the greedy paradigm in the algorithm.
Are greedy algorithms the only way of solving a particular range of problems? Or are they one way of solving problems that can be more efficient?
Could you give me pseudocode for the same problem with and without the application of the greedy paradigm?
There are lots of real-life examples of greedy algorithms. One of the most obvious is the coin-changing problem: to make change in a certain currency, we repeatedly dispense the largest denomination. Thus, to give out seventeen dollars and sixty-one cents in change, we give out a ten-dollar bill, a five-dollar bill, two one-dollar bills, two quarters, one dime, and one penny. By doing this, we are guaranteed to minimize the number of bills and coins. This algorithm does not work in all monetary systems.
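A minimal sketch of that dispensing procedure (denominations in cents; the code is my own illustration of the example above):

    # Greedy change-making: always dispense the largest denomination that fits.
    US_CENTS = [1000, 500, 100, 25, 10, 5, 1]  # $10, $5, $1, quarter, dime, nickel, penny

    def make_change(amount_cents):
        change = {}
        for d in US_CENTS:
            change[d], amount_cents = divmod(amount_cents, d)
        return change

    # $17.61 -> one $10, one $5, two $1, two quarters, one dime, one penny
    print(make_change(1761))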
I think that there is always another way to solve a problem, but sometimes, as you've stated, it probably will be less efficient.
For example, you can always check all the options (coins permutations), store the results and choose the best, but of course the efficiency is terrible.
Hope it helps.
Greedy algorithms are just a class of algorithms that iteratively construct or improve a solution.
Imagine the most famous problem, TSP. You can formulate it as an Integer Linear Programming problem and give it to an ILP solver, and it will give you a globally optimal solution (if it has enough time). But you could also do it in a greedy way: construct some solution (e.g., randomly) and then look for changes (e.g., switching the order of two cities) that improve it, and keep making such changes until no improving change is possible.
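Here is a rough sketch of that construct-then-improve idea (the coordinates and helper names are made up; this is a plain swap-based local search, not a complete TSP solver):

    import random

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def tour_length(tour, pts):
        return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def greedy_improve(pts):
        tour = list(range(len(pts)))
        random.shuffle(tour)                             # arbitrary starting tour
        best = tour_length(tour, pts)
        improved = True
        while improved:                                  # repeat until no swap helps
            improved = False
            for i in range(len(tour)):
                for j in range(i + 1, len(tour)):
                    tour[i], tour[j] = tour[j], tour[i]  # try switching two cities
                    length = tour_length(tour, pts)
                    if length < best:
                        best, improved = length, True    # keep the improvement
                    else:
                        tour[i], tour[j] = tour[j], tour[i]  # undo the swap
        return tour, best

    pts = [(0, 0), (2, 1), (1, 3), (4, 0), (3, 3)]       # made-up cities
    print(greedy_improve(pts))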
So the bottom line is: greedy algorithms are only one method of solving hard problems efficiently (in time, though not necessarily in solution quality), and there are other classes of algorithms for solving such problems.
For coins, the greedy algorithm also happens to be optimal, so the "greediness" is not as visible as with some other problems.
In some cases you prefer a solution which is not the best one but which you can compute much faster (computing the real best solution can take years, for example).
You then choose a heuristic that should give you the best results based on the average input data, its structure, and what you want to accomplish.
On Wikipedia there is a good example: finding the path with the biggest sum of numbers in a tree.
Imagine that you have, for example, 2^1000 nodes in this tree. To find the optimal solution, you would have to visit each node once, and no personal computer today can do that within your lifetime, so you want a heuristic. A greedy algorithm, however, finds a solution in just 1000 steps (which takes no more than a millisecond).
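On a toy tree (values invented for the illustration) the contrast looks like this: greedy descends in one pass, always taking the larger child, while the exhaustive search examines every root-to-leaf path and can find a large value hidden under a small one.

    # levels[d][i] is the value of node i at depth d of a complete binary tree;
    # the children of node i sit at positions 2*i and 2*i + 1 of the next level.
    levels = [[7], [3, 8], [99, 1, 0, 4]]

    def greedy_path_sum(levels):
        total, i = levels[0][0], 0
        for row in levels[1:]:
            left, right = row[2 * i], row[2 * i + 1]
            i = 2 * i if left >= right else 2 * i + 1   # take the bigger child
            total += row[i]
        return total

    def best_path_sum(levels, d=0, i=0):                # visits every node
        if d == len(levels) - 1:
            return levels[d][i]
        return levels[d][i] + max(best_path_sum(levels, d + 1, 2 * i),
                                  best_path_sum(levels, d + 1, 2 * i + 1))

    print(greedy_path_sum(levels), best_path_sum(levels))  # 19 vs 109: greedy misses the 99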

How to spot a "greedy" algorithm?

I am reading a tutorial about "greedy" algorithms, but I have a hard time spotting them when solving real "Top Coder" problems.
If I know that a given problem can be solved with a "greedy" algorithm, it is pretty easy to code the solution. However, if I am not told that a problem is "greedy", I cannot spot it.
What are the common properties and patterns of the problems solved with "greedy" algorithms? Can I reduce them to one of the known "greedy" problems (e.g. MST)?
Formally, you'd have to prove the matroid property, of course. However, I assume that in a topcoder setting you'd rather find out quickly whether a problem can be approached greedily or not.
In that case, the most important point is the optimal sub-structure property. For this, you have to be able to spot that the problem can be decomposed into sub-problems and that their optimal solution is part of the optimal solution of the whole problem.
Of course, greedy problems come in such a wide variety that it's next to impossible to offer a general correct answer to your question. My best advice would hence be to think somewhere along these lines:
Do I have a choice between different alternatives at some point?
Does this choice result in sub-problems that can be solved individually?
Will I be able to use the solution of the sub-problem to derive a solution for the overall problem?
Together with loads and loads of experience (just had to say that, too), this should help you to quickly spot greedy problems. Of course, you may eventually classify a problem as greedy when it is not. In that case, you can only hope to realize it before working on the code for too long.
(Again, for reference, I assume a topcoder context... for anything more realistic and of practical consequence, I strongly advise actually verifying the matroid structure before selecting a greedy algorithm.)
Part of your problem may be caused by thinking in terms of "greedy problems". There are greedy algorithms, and there are problems where a greedy algorithm leads to an optimal solution. There are other hard problems that can also be solved by greedy algorithms, but the result will not necessarily be optimal.
For example, for the bin packing problem there are several greedy algorithms, all of them with much better complexity than an exact exponential algorithm, but you can only be sure that you'll get a solution within a certain bound of the optimal solution.
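For instance, a minimal sketch of the first-fit greedy heuristic for bin packing (the item list is made up): it is fast, and its result is known to stay within a small constant factor of the optimal bin count, but not to match it.

    # First-fit: place each item into the first bin that still has room;
    # open a new bin only when none fits.
    def first_fit(items, capacity):
        bins = []                      # each bin is the list of items it holds
        for item in items:
            for b in bins:
                if sum(b) + item <= capacity:
                    b.append(item)
                    break
            else:
                bins.append([item])    # no existing bin fits: open a new one
        return bins

    print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6], 1.0))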
Only regarding problems where greedy algorithms will lead to an optimal solution, my guess would be that an inductive correctness proof feels totally natural and easy. For every single one of your greedy steps, it is quite clear that this was the best thing to do.
Typically problems with optimal, greedy solutions are easy anyway, and others will force you to come up with a greedy heuristic, because of complexity limitations. Usually a meaningful reduction would be showing that your problem is in fact at least NP-hard and hence you know you'll have to find a heuristic. For those problems, I'm a big fan of trying out. Implement your algorithm and try to find out if solutions are "pretty good" (ideal if you also have a slow but correct algorithm you can compare results against, otherwise you might need manually created ground truths). Only if you have something that works well, try to think why and maybe even try to come up with proof of boundaries. Maybe it works, maybe you'll spot border cases where it doesn't work and needs refinement.
"A term used to describe a family of algorithms. Most algorithms try to reach some "good" configuration from some initial configuration, making only legal moves. There is often some measure of "goodness" of the solution (assuming one is found).
The greedy algorithm always tries to perform the best legal move it can. Note that this criterion is local: the greedy algorithm doesn't "think ahead", agreeing to perform some mediocre-looking move now, which will allow better moves later.
For instance, the greedy algorithm for egyptian fractions is trying to find a representation with small denominators. Instead of looking for a representation where the last denominator is small, it takes at each step the smallest legal denominator. In general, this leads to very large denominators at later steps.
The main advantage of the greedy algorithm is usually simplicity of analysis. It is usually also very easy to program. Unfortunately, it is often sub-optimal."
--- ariels
(http://www.everything2.com/title/greedy+algorithm?searchy=search)
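For concreteness, here is a minimal sketch of the greedy Egyptian-fraction expansion described in the quote (the function name and the example fraction are mine): at each step it takes the smallest legal denominator, and the later denominators indeed blow up.

    from fractions import Fraction
    from math import ceil

    def greedy_egyptian(frac):
        # At each step, take the largest unit fraction (smallest denominator)
        # that does not exceed what remains.
        terms = []
        while frac > 0:
            d = ceil(frac.denominator / frac.numerator)  # smallest legal denominator
            terms.append(Fraction(1, d))
            frac -= Fraction(1, d)
        return terms

    # 4/13 = 1/4 + 1/18 + 1/468: the last denominator is already large
    print(greedy_egyptian(Fraction(4, 13)))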

Divide and conquer, dynamic programming and greedy algorithms!

When I have a problem with optimal substructure and no subproblem shares subsubproblems, can I use a divide-and-conquer algorithm to solve it?
But when the subproblems share subsubproblems (overlapping subproblems), can I then use dynamic programming to solve the problem?
Is this correct?
And how are greedy algorithms similar to dynamic programming?
When I have a problem with optimal substructure and no subproblem shares subsubproblems, can I use a divide-and-conquer algorithm to solve it?
Yes, as long as you can find an optimal algorithm for each kind of subproblem.
But when the subproblems share subsubproblems (overlapping subproblems), can I then use dynamic programming to solve the problem?
Is this correct?
Yes. Dynamic programming is basically a special case of the family of divide-and-conquer algorithms: the one where subproblems recur, so each is solved once and its solution reused.
And how are greedy algorithms similar to dynamic programming?
They're different.
Dynamic programming gives you the optimal solution.
A greedy algorithm usually gives a good/fair solution in a small amount of time, but it doesn't guarantee reaching the optimum.
It is, let's say, similar because it usually divides solution construction into several stages, at each of which it makes a locally optimal choice. But if the stages are not optimal substructures of the original problem, then normally it doesn't lead to the best solution.
EDIT:
As pointed out by rrenaud, there are some greedy algorithms that have been proven to be optimal (e.g., Dijkstra's, Kruskal's, Prim's, etc.).
So, to be more correct, the main difference between greedy and dynamic programming is that the former is not exhaustive on the space of solutions while the latter is.
In fact greedy algorithms are short-sighted on that space, and each choice made during solution construction is never reconsidered.
Dynamic programming uses a bottom-up approach: it saves previous solutions and refers back to them, which lets it pick the optimal solution from among all available ones. The greedy approach works top-down: it takes the locally optimal choice at each step without consulting previous-level solutions, which can lead to a less optimized result.
Dynamic programming = bottom-up, optimal solution.
Greedy = top-down, less optimal, less time-consuming.

Solutions to problems using dynamic programming or greedy methods?

What properties should a problem have so that I can decide which method to use: dynamic programming or the greedy method?
Dynamic programming problems exhibit optimal substructure. This means that the solution to the problem can be expressed as a function of solutions to subproblems that are strictly smaller.
One example of such a problem is matrix chain multiplication.
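As a hedged sketch of that substructure (the dimensions below are chosen for illustration): the cheapest way to multiply a chain of matrices is expressed through the cheapest ways to multiply its strictly smaller sub-chains, with memoization so each sub-chain is solved once.

    from functools import lru_cache

    # dims[i], dims[i+1] are the dimensions of matrix i in the chain.
    def min_multiplications(dims):
        @lru_cache(maxsize=None)
        def cost(i, j):            # cheapest way to multiply matrices i..j
            if i == j:
                return 0
            return min(cost(i, k) + cost(k + 1, j)
                       + dims[i] * dims[k + 1] * dims[j + 1]
                       for k in range(i, j))
        return cost(0, len(dims) - 2)

    # A (10x30), B (30x5), C (5x60): (AB)C costs 4500, A(BC) would cost 27000
    print(min_multiplications([10, 30, 5, 60]))  # 4500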
Greedy algorithms can be used only when a locally optimal choice leads to a totally optimal solution. This can be harder to see right away, but generally easier to implement because you only have one thing to consider (the greedy choice) instead of multiple (the solutions to all smaller subproblems).
One famous greedy algorithm is Kruskal's algorithm for finding a minimum spanning tree.
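A compact sketch of Kruskal's greedy rule (the edge-list representation and union-find helper are my own choices): the single thing considered at each step is the cheapest edge that joins two previously unconnected components.

    # Kruskal's MST: sort edges by weight, then greedily keep each edge
    # that connects two components not yet joined (tracked via union-find).
    def kruskal(n, edges):                 # edges: (weight, u, v) triples
        parent = list(range(n))

        def find(x):                       # root of x's component, with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                   # greedy choice: cheapest safe edge
                parent[ru] = rv
                mst.append((u, v, w))
        return mst

    # A small made-up graph on 4 vertices
    print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]))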
The second edition of Cormen, Leiserson, Rivest, and Stein's Introduction to Algorithms has a section (16.4) titled "Theoretical foundations for greedy methods" that discusses when greedy methods yield an optimum solution. It covers many cases of practical interest, but not all greedy algorithms that yield optimum results can be understood in terms of this theory.
I also came across a paper titled "From Dynamic Programming To Greedy Algorithms" which discusses how certain greedy algorithms can be seen as refinements of dynamic programming. From a quick scan, it may be of interest to you.
There's no really strict rule for knowing it. As someone already said, there are some things that should turn the red light on, but in the end only experience will be able to tell you.
We apply the greedy method when a decision can be made on the local information available at each stage. We are sure that, following the set of decisions made at each stage, we will find the optimal solution.
In the dynamic approach, however, we may not be sure about a decision at a given stage, so we carry a set of probable decisions, one of which may lead to a solution.
