Dynamic Programming technique for solving problems

Is it possible to solve any dynamic programming problem using recursion+memoization instead of tabulation/iteration? Or are there problems where tabulation/iteration is a must?
Also, can we obtain the same time complexity when solving any problem using recursion+memoization? (I know the space complexity differs, and there is also recursion overhead.)

Every dynamic programming problem can be expressed as a recurrence relation, which can be solved using recursion+memoization, and that in turn can be converted into tabulation+iteration.
When you solve a DP problem using tabulation you solve the problem bottom up, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the original problem is then computed.
When you solve a DP problem using memoization, you do it by maintaining a map of already solved sub-problems. You do it top down in the sense that you solve the "top" problem first (which typically recurses down to solve the sub-problems).
The time complexity of a DP solution that uses tabulation+iteration is the same as that of an equivalent, correct memoization+recursion version. It is usually easier to determine the time complexity of the tabulation+iteration form. On the other hand, the memoization+recursion version of a DP solution is more intuitive and readable.
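For concreteness, here is a minimal Python sketch of both versions of the Fibonacci recurrence. Both do O(n) work across distinct subproblems, so their asymptotic time complexity is the same; the memoized version just adds recursion overhead and stack depth:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib_memo(n):  # top-down: recursion + memoization
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    def fib_tab(n):  # bottom-up: tabulation + iteration
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    assert fib_memo(30) == fib_tab(30) == 832040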


What is the complexity of the simplex algorithm for binary integer programming?

What is the complexity of the simplex algorithm for a binary integer programming problem, in the worst case and on average?
I'm solving the assignment problem.
References:
https://en.wikipedia.org/wiki/Integer_programming
https://en.wikipedia.org/wiki/Simplex_algorithm
Since it's for the assignment problem, that changes matters. In that case, as the wiki page notes, the constraint matrix is totally unimodular, which is exactly what you need to make your problem an instance of normal linear programming as well (that is, you can drop the integrality constraint, and the result will still be integral).
So it can be solved in polynomial time. The Simplex algorithm doesn't guarantee that, however.
Of course there are also other polynomial time algorithms to solve the assignment problem.
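For instance, SciPy's linear_sum_assignment solves the assignment problem in polynomial time without any integer-programming machinery; a minimal sketch with a toy cost matrix:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Rows are workers, columns are tasks; entry [i][j] is the cost of
    # assigning worker i to task j.
    cost = np.array([[4, 1, 3],
                     [2, 0, 5],
                     [3, 2, 2]])

    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)), cost[rows, cols].sum())  # optimal total cost: 5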
In a general sense, binary integer programming is one of Karp's 21 NP-complete problems, so assuming P != NP it's safe to say that Simplex's worst-case running time on such instances is superpolynomial, i.e. not bounded by any polynomial in n. Again, in general, and much as with SAT solvers, the "average" case is going to depend heavily on what you're averaging across. Until you have more specific information about the class of problems you're trying to solve with Simplex, I don't think there is a good answer.
I'll do some more thinking and update when I have more information.

Dynamic programming and backtrack search

Can a backtracking or "branch and bound" problem always be solved using dynamic programming? I.e., can a problem that can be solved using a backtracking method also be solved using dynamic programming?
In the general case: can dynamic programming be applied? Maybe. Will dynamic programming definitely lead to an efficient, or even pseudo-polynomial, solution? No.
For example, there are NP-complete integer linear programming problems that have to be solved using branch and bound or brute-force backtracking, because no dynamic programming formulation is possible.
For example, for this question that I asked some time back, I could not find a DP formulation and had to resort to an ILP solver: Strange but practical 2D bin packing optimization.
There is no direct comparison of backtracking and DP, because in general DP is used for optimization problems, where you need the best of many possible solutions, whereas backtracking is used to search for a single solution to a problem. You may have a good DP solution to a problem that can also be solved using branch and bound, but not always: some problems cannot be decomposed into smaller subproblems, in which case no DP solution exists.

Memoization or Tabulation approach for Dynamic programming

There are many problems that can be solved using dynamic programming, e.g. longest increasing subsequence. Such a problem can be solved using two approaches:
Memoization (top down) - using recursion to solve the sub-problems and storing the results in some hash table.
Tabulation (bottom up) - using an iterative approach to solve the problem by solving the smaller sub-problems first and then using their results while solving the bigger problem.
My question is which is better approach in terms of time and space complexity?
Short answer: it depends on the problem!
Memoization usually requires more code and is less straightforward, but has computational advantages in some problems, mainly those where you do not need to compute all the values in the whole table to reach the answer.
Tabulation is more straightforward, but may compute unnecessary values. If you do need to compute all the values, this method is usually faster, though, because of its smaller overhead.
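As a sketch of the kind of problem where memoization wins, consider a recurrence in the style of the "Bytelandian gold coins" puzzle, where f(n) depends only on f(n//2), f(n//3) and f(n//4). Top-down memoization touches only the states reachable from n by repeated division, while a bottom-up table would have to fill all n+1 entries:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best(n):
        # best(n) = max(n, best(n//2) + best(n//3) + best(n//4))
        if n == 0:
            return 0
        return max(n, best(n // 2) + best(n // 3) + best(n // 4))

    print(best(10**9))  # fast: only a tiny fraction of the 10**9 states is visited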
First, understand what dynamic programming is.
If the problem at hand can be broken down into sub-problems whose solutions are also optimal and can be combined to reach the solution of the original, bigger problem, then dynamic programming can be applied.
It's a way of solving a problem by storing the results of sub-problems in program memory and reusing them instead of recalculating them at a later stage.
Remember, the ideal case for dynamic programming is when you can reuse the solutions of sub-problems more than once; otherwise, there is no point in storing the results.
Now, dynamic programming can be applied in a bottom-up approach (tabulation) or a top-down approach (memoization).
Tabulation: we start by calculating solutions to the smallest sub-problems and progress one level up at a time, following a bottom-up approach.
Note that here we exhaustively find solutions for each of the sub-problems, without knowing whether they are really needed later.
Memoization: we start with the original problem and keep breaking it down one level at a time until we reach base cases whose solutions we know. In most cases, such breaking down (the top-down approach) is recursive, so when the problem actually uses every step's sub-solutions, memoization is slower because of the recursive calls. But when not all sub-solutions are needed, memoization performs better than tabulation.
I found this short video quite helpful: https://youtu.be/p4VRynhZYIE
Asymptotically, a top-down dynamic programming implementation is the same as going bottom up, assuming you're using the same recurrence relation. However, bottom up is generally more efficient because of the overhead of the recursion used in memoization.
If the problem has the overlapping sub-problems property, then use memoization; otherwise, it depends on the problem.
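For the longest increasing subsequence example from the question, here is a minimal sketch of both approaches built on the same O(n^2) recurrence (helper names are illustrative):

    from functools import lru_cache

    def lis_tab(a):
        # Bottom-up: dp[i] = length of the LIS ending at index i.
        dp = [1] * len(a)
        for i in range(len(a)):
            for j in range(i):
                if a[j] < a[i]:
                    dp[i] = max(dp[i], dp[j] + 1)
        return max(dp, default=0)

    def lis_memo(a):
        # Top-down: the same recurrence, cached on the ending index.
        @lru_cache(maxsize=None)
        def end_at(i):
            return 1 + max((end_at(j) for j in range(i) if a[j] < a[i]),
                           default=0)
        return max((end_at(i) for i in range(len(a))), default=0)

    data = [10, 9, 2, 5, 3, 7, 101, 18]
    assert lis_tab(data) == lis_memo(data) == 4  # e.g. 2, 3, 7, 18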

DP approach for the n-puzzle problem

Is there a DP approach for the n-puzzle problem?
Dynamic Programming is a technique used to solve problems by reducing difficult cases to simpler cases, recursively, until you reach a case simple enough to solve "by inspection". Therefore, there can only be a sensible DP approach to the n-puzzle problem if, at each stage, you can consider a move which reduces the complexity of the problem.
For instance, if the first "move" in a n-puzzle always made it into an "(n-1)-puzzle" (for some concrete definition of "move", and assuming an (n-1)-puzzle made sense), then you could apply DP, eventually solving the "1-puzzle", and composing back upwards to solve the n-puzzle.
I don't know of any such simplification process for the n-puzzle; and I can't think of one at the moment. However, that doesn't mean one doesn't exist.

Difference between backtracking and dynamic programming

I heard the only difference between dynamic programming and backtracking is that DP allows overlapping of sub-problems, e.g.:
fib(n) = fib(n-1) + fib(n-2)
Is that right? Are there any other differences?
Also, I would like to know some common problems solved using these techniques.
There are two typical implementations of the Dynamic Programming approach: bottom-to-top and top-to-bottom.
Top-to-bottom Dynamic Programming is nothing other than ordinary recursion, enhanced with memorizing the solutions for intermediate sub-problems. When a given sub-problem arises a second (third, fourth...) time, it is not solved from scratch; instead, the previously memorized solution is used right away. This technique is known under the name memoization (no 'r' before 'i').
This is actually what your example with the Fibonacci sequence is supposed to illustrate. Just use the recursive formula for the Fibonacci sequence, but build the table of fib(i) values along the way, and you get a top-to-bottom DP algorithm for this problem (so that, for example, if you need to calculate fib(5) a second time, you get it from the table instead of calculating it again).
In Bottom-to-top Dynamic Programming the approach is also based on storing sub-solutions in memory, but they are solved in a different order (from smaller to bigger), and the resultant general structure of the algorithm is not recursive. LCS algorithm is a classic Bottom-to-top DP example.
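As an illustration, a minimal bottom-up LCS sketch in Python, where dp[i][j] holds the LCS length of the first i characters of x and the first j characters of y:

    def lcs(x, y):
        m, n = len(x), len(y)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    assert lcs("ABCBDAB", "BDCABA") == 4  # e.g. "BCBA"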
Bottom-to-top DP algorithms are usually more efficient, but they are generally harder (and sometimes impossible) to build, since it is not always easy to predict which primitive sub-problems you are going to need to solve the whole original problem, and which path you have to take from small sub-problems to get to the final solution in the most efficient way.
Dynamic programming also requires "optimal substructure".
According to Wikipedia:
Dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of 1) overlapping subproblems which are only slightly smaller and 2) optimal substructure.
Backtracking is a general algorithm for finding all (or some) solutions to some computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
For a detailed discussion of "optimal substructure", please read the CLRS book.
Common problems for backtracking I can think of are (a small sketch of the first follows this list):
Eight queen puzzle
Map coloring
Sudoku
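To illustrate the "abandons each partial candidate" idea, here is a minimal eight-queens backtracking sketch in Python (function and variable names are just illustrative):

    def n_queens(n):
        # Place one queen per row; abandon any partial placement as soon
        # as two queens attack each other, then backtrack and try the
        # next column.
        solutions, cols = [], []

        def safe(col):
            row = len(cols)
            return all(c != col and abs(c - col) != row - r
                       for r, c in enumerate(cols))

        def place():
            if len(cols) == n:
                solutions.append(tuple(cols))
                return
            for col in range(n):
                if safe(col):
                    cols.append(col)
                    place()
                    cols.pop()  # backtrack: undo the partial candidate

        place()
        return solutions

    print(len(n_queens(8)))  # -> 92, the classic eight-queens count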
DP problems:
This website at MIT has a good collection of DP problems with nice animated explanations.
A chapter from a book from a professor at Berkeley.
One more difference could be that dynamic programming problems usually rely on the principle of optimality. The principle of optimality states that in an optimal sequence of decisions or choices, each subsequence must also be optimal.
Backtracking problems are usually not concerned with optimality along the way! They can only be applied to problems which admit the concept of a partial candidate solution.
Say that we have a solution tree, whose leaves are the solutions for the original problem, and whose non-leaf nodes are the suboptimal solutions for part of the problem. We try to traverse the solution tree for the solutions.
Dynamic programming is more like BFS: we find all possible suboptimal solutions represented by the non-leaf nodes, and only grow the tree by one layer under those non-leaf nodes.
Backtracking is more like DFS: we grow the tree as deep as possible and prune the tree at one node if the solutions under the node are not what we expect.
Then there is one inference derived from the aforementioned theory: dynamic programming usually takes more space than backtracking, because BFS usually takes more space than DFS (a whole frontier of nodes versus a single root-to-node path). In fact, dynamic programming requires memorizing all the suboptimal solutions from the previous step for later use, while backtracking does not require that.
DP allows for solving a large, computationally intensive problem by breaking it down into subproblems whose solution requires only knowledge of the immediately prior solutions. You will get a very good idea of it by picking up Needleman-Wunsch and solving a sample, because it is so easy to see the application.
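A minimal Needleman-Wunsch sketch, assuming toy scores of +1 for a match and -1 for a mismatch or gap:

    def needleman_wunsch(x, y, match=1, mismatch=-1, gap=-1):
        # score[i][j] = best global-alignment score of x[:i] and y[:j];
        # each cell needs only the immediately prior row/column cells.
        m, n = len(x), len(y)
        score = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            score[i][0] = i * gap
        for j in range(1, n + 1):
            score[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = score[i - 1][j - 1] + (match if x[i - 1] == y[j - 1]
                                              else mismatch)
                score[i][j] = max(diag,
                                  score[i - 1][j] + gap,
                                  score[i][j - 1] + gap)
        return score[m][n]

    print(needleman_wunsch("GATTACA", "GCATGCU"))  # -> 0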
Backtracking seems more complicated: the solution tree is pruned when it is known that a specific path will not yield an optimal result.
Therefore one could say that backtracking optimizes for memory, since DP assumes that all the computations are performed and then the algorithm goes back, stepping through the lowest-cost nodes.
IMHO, the difference is very subtle, since both (DP and BCKT) are used to explore all possibilities to solve a problem.
As of today, I see two subtleties:
BCKT is a brute-force solution to a problem; DP is not a brute-force solution. Thus, you might say: DP explores the solution space more optimally than BCKT. In practice, when you want to solve a problem using the DP strategy, it is recommended to first build a recursive solution. Well, that recursive solution could also be considered the BCKT solution.
There are hundreds of ways to explore a solution space (welcome to the world of optimization) "more optimally" than a brute-force exploration. DP is DP because at its core it implements a mathematical recurrence relation, i.e., the current value is a combination of past values (bottom-to-top). So we might say that DP is DP because the problem space allows exploring its solution space by means of a recurrence relation. If you explore the solution space based on another idea, then that won't be a DP solution. As with any problem, the problem itself may make it easier to use one optimization technique or another, depending on the problem's structure. The structure of some problems enables the DP optimization technique. In this sense, BCKT is more general, though not all problems allow BCKT either.
Example: Sudoku allows BCKT to explore its whole solution space. However, it does not allow the use of DP to explore that space more efficiently, since no recurrence relation can be derived. However, there are other optimization techniques that fit the problem and improve on brute-force BCKT.
Example: Just get the minimum of a classic mathematical function. This problem does not allow BCKT to explore the state space of the problem.
Example: Any problem that can be solved using DP can also be solved using BCKT. In this sense, the recursive solution of the problem could be considered the BCKT solution.
Hope this helps a bit.
In a very simple sentence I can say: dynamic programming is a strategy to solve optimization problems, where an optimization problem is about a minimum or maximum result (a single result). Backtracking, by contrast, is a brute-force approach, not aimed at optimization problems; it is for when you have multiple results and you want all or some of them.
Depth-first node generation of the state space tree with a bounding function is called backtracking. Here the current node depends on the node that generated it.
Depth-first node generation of the state space tree with a memory function is called top-down dynamic programming. Here the current node depends on the node it generates.
