LCS using Dynamic programming - algorithm

I am learning how to solve the longest common subsequence problem using dynamic programming. I understand how the table works, but I don't get why, when x[i] != y[j], the formula is c[i, j] = max{c[i − 1, j], c[i, j − 1]}. Why not some other formula?
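For reference, here is a minimal Python sketch of the table fill (the function name and example strings are mine, not from the question). The x[i] != y[j] case is exactly the max{c[i − 1, j], c[i, j − 1]} line: we either drop the last character of x or the last character of y, and keep whichever leftover pair of prefixes has the longer LCS.

def lcs_length(x, y):
    # c[i][j] = length of the LCS of the first i characters of x
    # and the first j characters of y
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                # drop the last character of x, or of y, and keep the better result
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCBA"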

Related

Solving fractional knapsack problem with dynamic programming

A few days ago, I was reading about greedy algorithms and dynamic programming for the fractional knapsack problem, and I saw that this problem can be solved optimally with the greedy method. Can anyone give an example or a solution that solves this problem with the dynamic programming method?
P.S.: I know that the greedy method is the best way to solve this problem, but I want to know how dynamic programming would work for it.
Yes, you can solve the problem with dynamic programming.
Let f(i, j) denote the maximum total value that can be obtained from the first i elements with a knapsack whose capacity is j.
If you are familiar with the 0-1 knapsack problem, then you may remember that we had the exact same function. However, the recurrence for the 0-1 knapsack problem was f(i, j) = max{f(i - 1, j), V[i] + f(i - 1, j - W[i])} (the first argument considers the case in which we don't take the item at index i, and the second argument considers the case in which we do take the item at index i).
In the fractional knapsack problem, we are allowed to take fractional amounts of an item. Thus, our recurrence would look something like f(i, j) = max{f(i - 1, j), max over all possible values of delta of (delta * V[i] + f(i - 1, j - delta * W[i]))}, where delta represents the fraction of item i that we take.
Now, if you step delta through sufficiently small increments, you should get (arbitrarily close to) the correct answer.
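A rough Python sketch of that idea, discretizing delta into multiples of 1/k (the resolution parameter k, the capacity-scaling trick and all names here are mine, and the result only approximates the true fractional optimum):

def fractional_knapsack_dp(V, W, capacity, k=10):
    # f[i][j] = best value using the first i items with (scaled) capacity j.
    # Capacities and weights are scaled by k so that fractional amounts
    # delta = step/k stay on an integer grid; larger k = finer approximation.
    cap = int(capacity * k)
    n = len(V)
    f = [[0.0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(cap + 1):
            best = f[i - 1][j]                          # take none of item i
            for step in range(1, k + 1):                # take delta = step/k of item i
                delta = step / k
                w = int(round(delta * W[i - 1] * k))    # scaled weight of that fraction
                if w <= j:
                    best = max(best, delta * V[i - 1] + f[i - 1][j - w])
            f[i][j] = best
    return f[n][cap]

# With V = [60, 100, 120], W = [10, 20, 30] and capacity 50, the greedy
# optimum is 240; the DP result approaches it as the resolution k grows.
print(fractional_knapsack_dp([60, 100, 120], [10, 20, 30], 50, k=10))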

Number of partitions of `n` into a sum of three squares (fast algorithm)

A few years ago I found an interesting programming problem:
"To find number of partition of n into sum of three squares with n < 10^9 and 1 second time limit."
Question: Does anyone know how to solve this problem with given constraints?
I think it can be do purely with asymptotic time complexity faster than O(n) only? Is there some clever math approach or it is code optimization engineering problem?
I found some info on https://oeis.org/A000164, but there are an O(n)-algo in FORMULA section
(because we need to find all divisors of each n-k^2 number for compute e(n-k^2)) and O(n)-algo in MAPLE section.
Yes. First factor each candidate n - z^2 into primes, decompose those primes into Gaussian integer conjugates, and expand the different products to obtain the possible values a + bi; each one yields a representation a^2 + b^2. We can rule out any candidate n - z^2 that contains a prime of the form 4k + 3 raised to an odd power.
This is based on expressing numbers as products of Gaussian integer conjugates: (a + bi)*(a - bi) = a^2 + b^2. See https://mathoverflow.net/questions/29644/enumerating-ways-to-decompose-an-integer-into-the-sum-of-two-squares and https://stackoverflow.com/a/54839035/2034787
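To make the two-squares piece concrete, here is a Python sketch that counts the ordered integer pairs (a, b) with a^2 + b^2 = m from the prime factorization, which is where the 4k + 3 odd-power test comes from. This is only the per-candidate step, and the function name is mine; a full solution to A000164 still has to loop over z, convert ordered, signed counts into unordered triples, and factor faster than plain trial division.

def r2(m):
    # Number of ordered integer pairs (a, b), signs and zeros included,
    # with a^2 + b^2 = m. By the sum-of-two-squares theorem: zero if any
    # prime 4k + 3 divides m to an odd power, otherwise 4 * prod(e + 1)
    # over the primes 4k + 1 with exponent e (the prime 2 contributes nothing).
    count = 1
    d = 2
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            if d % 4 == 3 and e % 2 == 1:
                return 0
            if d % 4 == 1:
                count *= e + 1
        d += 1
    if m > 1:            # leftover prime factor (it is odd here)
        if m % 4 == 3:
            return 0
        count *= 2       # leftover prime is of the form 4k + 1
    return 4 * count

print(r2(25))  # 12: (0, ±5), (±5, 0), (±3, ±4), (±4, ±3)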

Is this Dynamic Programming Solution to Text Justification just Brute Force?

I'm having trouble understanding the dynamic programming solution to the text justification problem as specified in the MIT open courseware lecture here. Some notes from that lecture are here, and page 3 of the notes is what I am referring to.
I thought that Dynamic Programming meant you memoize some of the computations so that you don't need to recompute them, thus saving time, but in the algorithm given in the lecture I don't see any use of memoization, just a whole bunch of deep recursive calls, i.e. the main recurrence is this:
DP[i] = min(badness (i, j) + DP[j] for j in range (i + 1, n + 1))
DP[n] = 0
where badness is a function that determines the amount of unused space after subtracting the length of the words from the line length. To me it looks like this algorithm calculates all possible "badness" values and chooses the smallest total, which seems like brute force to me. Where is the advantage Dynamic Programming usually gives us by memoizing past calculations so we don't have to recompute them?
If you memoize the results, you don't have to compute each DP[i] several times.
That is, DP[0] "calls" DP[2], for example, but so does DP[1]. The second time DP[2] is called, it won't be necessary to compute it again; you can just return the memoized value.
This also makes it easy to verify a polynomial upper bound for this algorithm. Since each DP[i] will perform O(n) operations, and there are n of them, the overall algorithm is O(n^2), assuming, of course, that badness(i, j) is O(1).
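A rough top-down Python sketch of the same recurrence with the memoization made explicit (the badness definition, word list and line width here are mine, following the usual presentation of the lecture rather than its exact code):

from functools import lru_cache

def justify(words, width):
    n = len(words)

    def badness(i, j):
        # Cost of putting words[i:j] on one line: cube of the unused space,
        # or "infinite" if the words do not fit.
        length = sum(len(w) for w in words[i:j]) + (j - i - 1)  # words plus single spaces
        return float('inf') if length > width else (width - length) ** 3

    @lru_cache(maxsize=None)                 # <-- the memoization
    def dp(i):
        # dp(i) = minimum total badness of justifying the suffix words[i:]
        if i == n:
            return 0
        return min(badness(i, j) + dp(j) for j in range(i + 1, n + 1))

    return dp(0)

print(justify("the quick brown fox jumped over the lazy dog".split(), 16))

With lru_cache in place, each dp(i) is computed once and then reused, which is what gives the O(n) states times O(n) choices = O(n^2) bound from the answer (plus whatever badness itself costs).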

Proving Heap's Algorithm for Generating Permutations

I need to prove the correctness of Heap's algorithm for generating permutations. The pseudocode for it is as follows:
HeapPermute(n)
//Implements Heap’s algorithm for generating permutations
//Input: A positive integer n and a global array A[1..n]
//Output: All permutations of elements of A
if n = 1
    write A
else
    for i ← 1 to n do
        HeapPermute(n − 1)
        if n is odd
            swap A[1] and A[n]
        else swap A[i] and A[n]
(taken from Introduction to the Design and Analysis of Algorithms by Levitin)
I know I need to use induction to prove its correctness, but I'm not sure exactly how to go about doing so. I've proved mathematical equations but never algorithms.
I was thinking the proof would look something like this...
1) For n = 1, heapPermute is obviously correct. {1} is printed.
2) Assume heapPermute() outputs a set of n! permutations for a given n. Then
??
I'm just not sure how to go about finishing the induction step. Am I even on the right track here? Any help would be greatly appreciated.
For n = 1, heapPermute is obviously correct. {1} is printed.
Assume heapPermute() outputs a set of n! permutations for a given n. Then
??
Now, given the first two assumptions, show that heapPermute(n+1) returns all the (n+1)! permutations.
Yes, that sounds like a good approach. Think about how to recursively define the set of all permutations, i.e. how can the permutations of {1..n} be expressed in terms of permutations of {1..n-1}. For this, recall the inductive proof that there are n! permutations. How does the inductive step proceed there?
A recursive approach is definitely the way to go. Given your first two steps, to prove that heapPermute(n+1) returns all (n+1)! permutations, you may want to explain that each element is adjoined to each permutation of the rest of the elements.
If you would like to have a look at an explanation by example, this blog post provides one.
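For concreteness, here is a direct Python transcription of the quoted pseudocode (a sketch using 0-based indexing; the function name is mine). Tracing it for n = 3 or n = 4 while setting up the induction step can help make the argument explicit.

def heap_permute(a, n=None):
    # Transcription of HeapPermute above: A[1] becomes a[0], A[n] becomes a[n - 1].
    if n is None:
        n = len(a)
    if n == 1:
        print(a)
        return
    for i in range(n):
        heap_permute(a, n - 1)
        if n % 2 == 1:                        # n is odd: swap A[1] and A[n]
            a[0], a[n - 1] = a[n - 1], a[0]
        else:                                 # n is even: swap A[i] and A[n]
            a[i], a[n - 1] = a[n - 1], a[i]

heap_permute([1, 2, 3])   # prints all 3! = 6 permutations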

Dynamic programming aspect in Kadane's algorithm

Initialize:
    max_so_far = 0
    max_ending_here = 0

Loop for each element of the array
    (a) max_ending_here = max_ending_here + a[i]
    (b) if (max_ending_here < 0)
            max_ending_here = 0
    (c) if (max_so_far < max_ending_here)
            max_so_far = max_ending_here

return max_so_far
Can anyone help me understand the optimal substructure and overlapping subproblems (the bread and butter of DP) in the above algorithm?
According to this definition of overlapping subproblems, the recursive formulation of Kadane's algorithm (f[i] = max(f[i - 1] + a[i], a[i])) does not exhibit this property. Each subproblem would only be computed once in a naive recursive implementation.
It does however exhibit optimal substructure according to its definition here: we use the solution to smaller subproblems in order to find the solution to our given problem (f[i] uses f[i - 1]).
Consider the dynamic programming definition here:
In mathematics, computer science, and economics, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure (described below). When applicable, the method takes far less time than naive methods that don't take advantage of the subproblem overlap (like depth-first search).
The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations.
This leaves room for interpretation as to whether or not Kadane's algorithm can be considered a DP algorithm: it does solve the problem by breaking it down into easier subproblems, but its core recursive approach does not generate overlapping subproblems, which is what DP is meant to handle efficiently - so this would put it outside DP's specialty.
On the other hand, you could say that it is not necessary for the basic recursive approach to lead to overlapping subproblems, but that would make almost any recursive algorithm a DP algorithm, which would give DP much too broad a scope in my opinion. I am not aware of anything in the literature that definitively settles this, however, so I wouldn't mark down a student, or dismiss a book or article, based on which way they labeled it.
So I would say that it is not a DP algorithm, just a greedy and/or recursive one, depending on the implementation. I would label it as greedy from an algorithmic point of view for the reasons listed above, but objectively I would consider other interpretations just as valid.
Note that I derived my explanation from this answer. It demonstrates how Kadane’s algorithm can be seen as a DP algorithm which has overlapping subproblems.
Identifying subproblems and recurrence relations
Imagine we have an array a from which we want to get the maximum subarray. To determine the max subarray that ends at index i the following recursive relation holds:
max_subarray_to(i) = max(max_subarray_to(i - 1) + a[i], a[i])
In order to get the maximum subarray of a we need to compute max_subarray_to() for each index i in a and then take the max() from it:
max_subarray = max(max_subarray_to(i) for i = 1 to n)
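Written directly as naive recursive Python (a sketch; the function names simply mirror the pseudocode above), with no memoization this does quadratic work, which is exactly the recomputation the example below traces:

def max_subarray_to(a, i):
    # straight from the recurrence relation, no memoization
    if i == 0:
        return a[0]
    return max(max_subarray_to(a, i - 1) + a[i], a[i])

def max_subarray(a):
    return max(max_subarray_to(a, i) for i in range(len(a)))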
Example
Now, let's assume we have an array [10, -12, 11, 9] from which we want to get the maximum subarray. This would be the work required when running Kadane's algorithm in this recursive formulation:
result = max(max_subarray_to(0), max_subarray_to(1), max_subarray_to(2), max_subarray_to(3))
max_subarray_to(0) = 10 # base case
max_subarray_to(1) = max(max_subarray_to(0) + (-12), -12)
max_subarray_to(2) = max(max_subarray_to(1) + 11, 11)
max_subarray_to(3) = max(max_subarray_to(2) + 9, 9)
As you can see, max_subarray_to() is evaluated more than once for every i apart from the last index 3, thus showing that Kadane's algorithm does have overlapping subproblems.
Kadane's algorithm is usually implemented using a bottom-up DP approach to take advantage of the overlapping subproblems and to compute each subproblem only once, making it O(n).
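A bottom-up Python version of that DP (a sketch; the names are mine) keeps only the previous subproblem's value, which plays the role of max_ending_here in the pseudocode from the question. Note that it follows the recurrence f[i] = max(f[i - 1] + a[i], a[i]), so unlike the zero-initialized pseudocode above it also handles all-negative arrays by returning the largest single element:

def kadane(a):
    # f[i] (the best subarray sum ending at i) depends only on f[i - 1],
    # so a single running value replaces the whole DP table.
    max_ending_here = max_so_far = a[0]
    for x in a[1:]:
        max_ending_here = max(max_ending_here + x, x)   # f[i] = max(f[i-1] + a[i], a[i])
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far

print(kadane([10, -12, 11, 9]))   # 20, from the subarray [11, 9]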
