Dynamic programming and backtrack search - algorithm

Can a backtracking or "branch and bound" problem always be solved using dynamic programming? That is, can a problem which can be solved using a backtracking method also be solved using dynamic programming?

In the general case: can dynamic programming be applied? Maybe. Will dynamic programming definitely lead to an efficient, or even pseudo-polynomial, solution? No.
For example, there are NP-complete Integer Linear Programming problems that need to be solved using branch & bound or brute-force backtracking, since no dynamic programming formulation is possible.
For example, in this question that I asked some time back, I could not form a DP formulation and had to resort to finding a solver for my ILP problem: Strange but practical 2D bin packing optimization

There is certainly no clean comparison of backtracking and DP, because in general DP is used for optimization problems, where you need the best of many possible solutions, whereas backtracking is used to search for a single solution to a problem. You may have a good DP solution to a problem that can also be solved using branch and bound, but not always: some problems cannot be decomposed into smaller subproblems, and then no DP solution exists.

Related

Dynamic Programming technique for solving problems

Is it possible to solve any dynamic programming problem using recursion + memoization instead of tabulation/iteration? Or are there problems where tabulation/iteration is a must?
Also, can we obtain the same time complexity when solving any problem using recursion + memoization? (I know the space complexity differs, and that recursion has overhead costs.)
Every dynamic programming problem can be expressed as a recurrence relation, which can be solved using recursion + memoization, which in turn can be converted into tabulation + iteration.
When you solve a DP problem using tabulation you solve the problem bottom up, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the original problem is then computed.
When you solve a DP problem using memoization, you do it by maintaining a map of already solved subproblems. You do it top down in the sense that you solve the "top" problem first (which typically recurses down to solve the subproblems).
The time complexity of a DP solution that uses tabulation + iteration is the same as that of an equivalent, correct memoization + recursion version. It is usually easier to determine the time complexity of a tabulation + iteration method. On the other hand, the memoization + recursion version of a DP solution is often more intuitive and readable.
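To make the two styles concrete, here is a minimal sketch of both, using the Fibonacci recurrence as a stand-in problem (my choice of example, not taken from the question above):

```python
from functools import lru_cache

# Top-down: recursion + memoization. Solved subproblems are cached,
# so each fib(i) is computed only once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: tabulation + iteration. The table is filled from the
# smallest subproblem upward; no recursion is involved.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_memo(30) == fib_tab(30) == 832040
```

Both versions do O(n) work here, which illustrates the point about equal asymptotic time complexity.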

Is it possible that a greedy algorithm is also a dynamic programming algorithm?

Is it possible that a greedy algorithm is also a dynamic programming algorithm?
I took an Analysis of Algorithms class, but I am still not sure about the two concepts.
I understand that the greedy approach uses the currently optimal choice to find a global optimal solution, and that a DP algorithm reuses overlapping sub-results.
I believe the answer is "YES" but I couldn't find a good example which is both greedy and DP algorithm.
Could someone give me an example?
If the answer to the above question is "NO" then could someone explain to me why?
From looking at the Bellman equation: if, in the minimization, we can separate the f part (current period) from the J part (optimal value of the previous periods), then this corresponds precisely to the greedy approach. An easy example of this is when the objective function is the sum of the costs at each period:
J(u_1, u_2, ...) = sum_i f_i(u_i).
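A sketch of that idea in standard Bellman notation (the symbols f_k, g_k, and J_k are my assumptions, since the post does not spell them out): the general recursion couples the current-period cost to the cost-to-go, and the separable case collapses it into independent per-period minimizations.

```latex
% General Bellman recursion (stage k, state x, decision u):
%   J_k(x) = \min_u \left[ f_k(x, u) + J_{k-1}(g_k(x, u)) \right]
% Separable case: the objective is a plain sum of per-period costs,
% with no coupling through the state, so the minimum distributes:
\[
  \min_{u_1, \dots, u_n} \sum_{i=1}^{n} f_i(u_i)
  \;=\; \sum_{i=1}^{n} \min_{u_i} f_i(u_i)
\]
% i.e., picking each u_i greedily is already globally optimal.
```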
Here's my understanding:
A greedy algorithm and a dynamic programming algorithm are two different things. The greedy algorithm always makes the choice that seems best at that moment. It commits to a choice as soon as the new option pops up, regardless of what is going to happen next.
The dynamic programming algorithm combines the solutions of the subproblems to get the final solution. It makes its decision based on the results of the subproblems, and it usually applies when some variable influences the final solution. So these are two different ways of thinking.
The dynamic programming algorithm always works on a problem that can be solved by a greedy algorithm, but its time and space costs are much higher than those of the greedy algorithm. The greedy algorithm mostly cannot solve a DP problem.
So the answer is no.
In optimization algorithms, the greedy approach and the dynamic programming approach are basically opposites. The greedy approach is to choose the locally optimal option, while the whole purpose of dynamic programming is to efficiently evaluate the whole range of options.
BUT that doesn't mean you can't have an algorithm that takes advantage of both strategies. The A* path-finding algorithm, for example, does just that, and is both a greedy algorithm and a dynamic programming algorithm. It uses the greedy approach to optimize the best cases, and the dynamic programming approach to optimize the worst cases.
See: https://en.wikipedia.org/wiki/A*_search_algorithm

What is the difference between dynamic programming and greedy approach?

What is the main difference between dynamic programming and greedy approach in terms of usage?
As far as I understood, the greedy approach sometimes gives an optimal solution; in other cases, the dynamic programming approach gives an optimal solution.
Are there any particular conditions which must be met in order to use one approach (or the other) to obtain an optimal solution?
Based on Wikipedia's articles.
Greedy Approach
A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
Dynamic programming
The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memo-ized": the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.
Difference
Greedy choice property
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices.
This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to solution.
For example, let's say that you have to get from point A to point B as fast as possible, in a given city, during rush hour. A dynamic programming algorithm will look into the entire traffic report, looking into all possible combinations of roads you might take, and will only then tell you which way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and only then can you start driving. The path you will take will be the fastest one (assuming that nothing changed in the external environment).
On the other hand, a greedy algorithm will start you driving immediately and will pick the road that looks the fastest at every intersection. As you can imagine, this strategy might not lead to the fastest arrival time, since you might take some "easy" streets and then find yourself hopelessly stuck in a traffic jam.
Some other details...
In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
Dynamic programming is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure.
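A standard example that shows both definitions in action (my choice, not from the quoted articles) is minimum-coin change with coins {1, 3, 4}: for amount 6, the greedy choice 4 + 1 + 1 uses three coins, while the DP answer 3 + 3 uses two.

```python
def greedy_coins(amount, coins):
    # Locally optimal: always take the largest coin that still fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count  # not guaranteed minimal

def dp_coins(amount, coins):
    # Optimal substructure: best(a) = 1 + min over coins c of best(a - c).
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(greedy_coins(6, [1, 3, 4]))  # 3  (4 + 1 + 1)
print(dp_coins(6, [1, 3, 4]))      # 2  (3 + 3)
```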
I would like to cite a paragraph which describes the major difference between greedy algorithms and dynamic programming algorithms stated in the book Introduction to Algorithms (3rd edition) by Cormen, Chapter 15.3, page 381:
One major difference between greedy algorithms and dynamic programming is that instead of first finding optimal solutions to subproblems and then making an informed choice, greedy algorithms first make a greedy choice, the choice that looks best at the time, and then solve a resulting subproblem, without bothering to solve all possible related smaller subproblems.
The differences between the greedy method and dynamic programming are given below:
The greedy method never reconsiders its choices, whereas dynamic programming may revisit previous states.
A greedy algorithm is cheap to run but may miss the optimum, whereas dynamic programming is costlier but dependable where it applies.
A greedy algorithm makes a local choice at each subproblem, whereas dynamic programming solves all the subproblems and then selects the ones that lead to an optimal solution.
A greedy algorithm commits to each decision at once, whereas dynamic programming makes its decisions only after weighing every stage.
In simple words, in dynamic programming (say, the problem of sending a message over a network) one can first examine the path which takes the shortest time and then start the journey.
On the other hand, a greedy algorithm takes the seemingly optimal decision on the spot, without thinking about the next step, and at the next step changes its decision again, and so on...
Note: dynamic programming is reliable, while greedy algorithms are not always reliable.
To paraphrase Biswajit Roy: dynamic programming first plans, then goes; a greedy algorithm first goes, then continuously plans.
The major difference between the greedy method and dynamic programming is that in the greedy method only one decision sequence is ever generated, whereas in dynamic programming more than one decision sequence may be generated and compared.

How do you reason whether an exercise has a dynamic programming solution or not? If it has then how do you develop the algorithm to solve it?

What conditions does a programming problem need to meet in order to be solvable through dynamic programming? What reasoning do you do in order to find out?
Once you conclude that it indeed has a DP solution, how do you go about creating a DP algorithm that solves it? What's the logic behind creating such algorithms?
A problem must meet two conditions in order to be solvable through dynamic programming. Citing Introduction to Algorithms, chapter 15:
A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
When a recursive algorithm revisits the same subproblems repeatedly, we say that the optimization problem has overlapping subproblems.
In The Algorithm Design Manual, chapter 8, the author describes the three steps involved in solving a problem by dynamic programming:
Formulate the answer as a recurrence relation or recursive algorithm.
Show that the number of different parameter values taken on by your recurrence is bounded by a (hopefully small) polynomial.
Specify an order of evaluation for the recurrence so the partial results you need are always available when you need them.
As usual, Wikipedia contains an extensive discussion of the topic, if you want to dig deeper.
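As a concrete walk through those three steps, consider counting monotone paths in an m-by-n grid (my choice of illustration): the recurrence is paths(i, j) = paths(i-1, j) + paths(i, j-1), the recurrence takes on only m*n parameter values, and row-major order guarantees every partial result is ready when needed.

```python
def grid_paths(m, n):
    # Step 1: recurrence -- paths(i, j) = paths(i-1, j) + paths(i, j-1).
    # Step 2: only m * n distinct (i, j) states, a small polynomial.
    # Step 3: row-major evaluation order makes every dependency ready.
    table = [[1] * n for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            table[i][j] = table[i - 1][j] + table[i][j - 1]
    return table[m - 1][n - 1]

print(grid_paths(3, 3))  # 6 monotone paths through a 3x3 grid
```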

Difference between backtracking and dynamic programming

I heard that the only difference between dynamic programming and backtracking is that DP allows overlapping subproblems, e.g.
fib(n) = fib(n-1) + fib(n-2)
Is it right? Are there any other differences?
Also, I would like to know some common problems solved using these techniques.
There are two typical implementations of the Dynamic Programming approach: bottom-to-top and top-to-bottom.
Top-to-bottom Dynamic Programming is nothing else than ordinary recursion, enhanced with memorizing the solutions for intermediate sub-problems. When a given sub-problem arises second (third, fourth...) time, it is not solved from scratch, but instead the previously memorized solution is used right away. This technique is known under the name memoization (no 'r' before 'i').
This is actually what your example with the Fibonacci sequence is supposed to illustrate. Just use the recursive formula for the Fibonacci sequence, but build the table of fib(i) values along the way, and you get a top-to-bottom DP algorithm for this problem (so that, for example, if you need to calculate fib(5) a second time, you get it from the table instead of calculating it again).
In Bottom-to-top Dynamic Programming the approach is also based on storing sub-solutions in memory, but they are solved in a different order (from smaller to bigger), and the resultant general structure of the algorithm is not recursive. LCS algorithm is a classic Bottom-to-top DP example.
Bottom-to-top DP algorithms are usually more efficient, but they are generally harder (and sometimes impossible) to build, since it is not always easy to predict which primitive sub-problems you are going to need to solve the whole original problem, and which path you have to take from small sub-problems to get to the final solution in the most efficient way.
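For instance, the LCS table mentioned above can be filled bottom-to-top like this (a minimal sketch; the variable names are mine):

```python
def lcs_length(a, b):
    # table[i][j] = length of the LCS of a[:i] and b[:j].
    # Smaller prefixes are solved first, so each entry only
    # reads entries that were already filled in.
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 ("GTAB")
```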
Dynamic programming also requires "optimal substructure".
According to Wikipedia:
Dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of 1) overlapping subproblems which are only slightly smaller and 2) optimal substructure.
Backtracking is a general algorithm for finding all (or some) solutions to some computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
For a detailed discussion of "optimal substructure", please read the CLRS book.
Common problems for backtracking I can think of are:
Eight queens puzzle
Map coloring
Sudoku
DP problems:
This website at MIT has a good collection of DP problems with nice animated explanations.
A chapter from a book from a professor at Berkeley.
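To illustrate the backtracking side of those lists, here is a minimal sketch of the eight queens puzzle: each prefix of rows is a partial candidate, and a branch is abandoned as soon as a new queen would be attacked.

```python
def solve_queens(n, cols=()):
    # cols[r] is the column of the queen in row r; a prefix of rows
    # is a partial candidate in the sense of the quote above.
    row = len(cols)
    if row == n:
        return 1  # a complete, valid placement
    count = 0
    for col in range(n):
        # Abandon ("backtrack from") any column that is attacked
        # by a previously placed queen, along a file or a diagonal.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            count += solve_queens(n, cols + (col,))
    return count

print(solve_queens(8))  # 92 solutions to the eight queens puzzle
```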
One more difference could be that dynamic programming problems usually rely on the principle of optimality, which states that in an optimal sequence of decisions or choices, each subsequence must also be optimal.
Backtracking procedures are usually NOT optimal along the way! They can only be applied to problems which admit the concept of a partial candidate solution.
Say that we have a solution tree, whose leaves are the solutions for the original problem, and whose non-leaf nodes are the suboptimal solutions for part of the problem. We try to traverse the solution tree for the solutions.
Dynamic programming is more like BFS: we find all possible suboptimal solutions represented by the non-leaf nodes, and we only grow the tree by one layer under those non-leaf nodes.
Backtracking is more like DFS: we grow the tree as deep as possible and prune the tree at one node if the solutions under the node are not what we expect.
Then there is one inference derived from the aforementioned theory: dynamic programming usually takes more space than backtracking, because BFS usually takes more space than DFS (O(N) versus O(log N) for a balanced tree). In fact, dynamic programming requires memorizing all the suboptimal solutions from the previous step for later use, while backtracking does not require that.
DP allows solving a large, computationally intensive problem by breaking it down into subproblems whose solution requires only knowledge of the immediately prior solution. You will get a very good idea of this by picking up the Needleman-Wunsch algorithm and working through a sample, because the application is so easy to see.
Backtracking seems more complicated: the solution tree is pruned when it is known that a specific path will not yield an optimal result.
Therefore one could say that backtracking optimizes for memory, whereas DP assumes that all the computations are performed and the algorithm then steps back through the lowest-cost nodes.
IMHO, the difference is very subtle since both (DP and BCKT) are used to explore all possibilities to solve a problem.
As of today, I see two subtleties:
BCKT is a brute-force solution to a problem; DP is not. Thus, you might say: DP explores the solution space more optimally than BCKT. In practice, when you want to solve a problem using the DP strategy, it is recommended to first build a recursive solution. That recursive solution could also be considered the BCKT solution.
There are hundreds of ways to explore a solution space (welcome to the world of optimization) "more optimally" than brute-force exploration. DP is DP because at its core it implements a mathematical recurrence relation, i.e., the current value is a combination of past values (bottom-to-top). So we might say that DP is DP because the problem space lets us explore its solution space by using a recurrence relation. If you explore the solution space based on another idea, then that won't be a DP solution. As with any problem, the problem itself may lend itself to one optimization technique or another, based on its structure. The structure of some problems enables the DP optimization technique. In this sense, BCKT is more general, though not all problems allow BCKT either.
Example: Sudoku allows BCKT to explore its whole solution space. However, it does not allow DP to explore that space more efficiently, since no recurrence relation can be derived anywhere. However, there are other optimization techniques that fit the problem and improve on brute-force BCKT.
Example: just finding the minimum of a classic mathematical function. This problem does not allow BCKT to explore the state space of the problem.
Example: any problem that can be solved using DP can also be solved using BCKT. In this sense, the recursive solution of the problem could be considered the BCKT solution.
Hope this helps a bit.
In a very simple sentence I can say: dynamic programming is a strategy for solving optimization problems, where an optimization problem asks for a minimum or maximum result (a single result). Backtracking, by contrast, is a brute-force approach, not meant for optimization problems; it is for when you have multiple results and you want all or some of them.
Depth-first node generation of the state space tree with a bounding function is called backtracking. Here the current node depends on the node that generated it.
Depth-first node generation of the state space tree with a memory function is called top-down dynamic programming. Here the current node depends on the nodes it generates.
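A small sketch of that contrast, using subset sum as the state space (my choice of example): the backtracking version prunes with a bounding function, while the top-down DP version caches each (index, remaining) state with a memory function.

```python
from functools import lru_cache

def subset_sum_backtrack(nums, target, i=0, total=0):
    # Bounding function: abandon the branch if the target is overshot,
    # or can no longer be reached even by taking everything left.
    if total == target:
        return True
    if total > target or total + sum(nums[i:]) < target:
        return False
    return (subset_sum_backtrack(nums, target, i + 1, total + nums[i])
            or subset_sum_backtrack(nums, target, i + 1, total))

def subset_sum_memo(nums, target):
    # Memory function: each (index, remaining) state is solved once.
    @lru_cache(maxsize=None)
    def reachable(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0:
            return False
        return (reachable(i + 1, remaining - nums[i])
                or reachable(i + 1, remaining))
    return reachable(0, target)

nums = [3, 34, 4, 12, 5, 2]
print(subset_sum_backtrack(nums, 9))  # True (4 + 5)
print(subset_sum_memo(nums, 9))       # True
```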
