Dynamic Programming - "maximize" matrix chain multiplication - algorithm

I'm currently practicing dynamic programming on my own. The classic problem "matrix-chain multiplication" is to find the minimum number of scalar multiplications, which is:
M[i,j] = 0, if i = j
M[i,j] = min over i <= k < j of { M[i,k] + M[k+1,j] + p[i-1]*p[k]*p[j] }, otherwise
(where matrix A_i has dimensions p[i-1] x p[i])
and its time complexity is O(n^3)
But I'm curious: what if I want to find the maximum (instead of the minimum) number of scalar multiplications? Does the problem still have optimal substructure, and is it possible to solve it in polynomial time?

The exact same reasoning as the minimization applies:
If you multiply a1 ... ai, the dimensions of the resulting matrix do not depend on the internal parenthesization.
It follows that if the optimal - that is, most expensive - partition of a1 ... ai ... an is to multiply the matrices from 1 to i and from i + 1 to n, then it is composed of the optimal solutions to a1 ... ai and a(i+1) ... an.
Since the optimal substructure remains, you can use the same algorithm as minimization (of course, changing the criteria for optimality from minimum to maximum).
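For concreteness, here is a minimal sketch of that interval DP with min replaced by max; the method name maxChainCost is my own choice, and p is the dimensions array where matrix A_i is p[i-1] x p[i], as in the recurrence above.

// Sketch: the same O(n^3) interval DP as the minimization, with max instead of min.
// p has length n+1; matrix A_i has dimensions p[i-1] x p[i].
static long maxChainCost(int[] p) {
    int n = p.length - 1;                 // number of matrices
    long[][] m = new long[n + 1][n + 1];  // m[i][j] = max cost of multiplying A_i..A_j; m[i][i] stays 0
    for (int len = 2; len <= n; len++) {          // chain length
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = Long.MIN_VALUE;
            for (int k = i; k < j; k++) {         // split into A_i..A_k and A_(k+1)..A_j
                long cost = m[i][k] + m[k + 1][j] + (long) p[i - 1] * p[k] * p[j];
                m[i][j] = Math.max(m[i][j], cost);
            }
        }
    }
    return m[1][n];
}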

Related

Number of partitions of `n` into a sum of three squares (fast algorithm)

A few years ago I found an interesting programming problem:
"To find number of partition of n into sum of three squares with n < 10^9 and 1 second time limit."
Question: does anyone know how to solve this problem within the given constraints?
Can it be done with asymptotic time complexity faster than O(n)? Is there some clever math approach, or is it a code-optimization engineering problem?
I found some info at https://oeis.org/A000164, but the FORMULA section only gives an O(n) algorithm (because we need to find all divisors of each number n - k^2 in order to compute e(n - k^2)), and the MAPLE section gives an O(n) algorithm as well.
Yes. First factor the number n - z^2 into primes, decompose each prime into its Gaussian-integer conjugate factors, and combine the different choices of factors to obtain expressions a + bi; expanding and simplifying each gives a representation a^2 + b^2. We can rule out any candidate n - z^2 that contains a prime of the form 4k + 3 raised to an odd power.
This is based on expressing numbers as products of Gaussian-integer conjugates: (a + bi)*(a - bi) = a^2 + b^2. See https://mathoverflow.net/questions/29644/enumerating-ways-to-decompose-an-integer-into-the-sum-of-two-squares and https://stackoverflow.com/a/54839035/2034787
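Here is a minimal sketch of the counting ingredient described above, under the assumption that the candidate m = n - z^2 fits in a long; the helper names are mine. It counts the representations m = a^2 + b^2 with 0 <= a <= b directly from the prime factorization: candidates with a prime 4k + 3 to an odd power are ruled out, and otherwise the count comes from the product of (exponent + 1) over primes of the form 4k + 1. Combining these per-candidate counts into the final A000164 value still requires iterating over z and making sure the same triple is not counted more than once (e.g. when a representation contains a value larger than z), which this sketch leaves out.

// Sketch: count representations m = a^2 + b^2 with 0 <= a <= b, using the
// prime factorization of m instead of enumerating a and b.
static long countTwoSquares(long m) {
    if (m == 0) return 1;                         // 0 = 0^2 + 0^2
    long nDiv = 1;                                // product of (e+1) over primes p = 1 (mod 4)
    long rest = m;
    for (long p = 2; p * p <= rest; p++) {
        if (rest % p != 0) continue;
        int e = 0;
        while (rest % p == 0) { rest /= p; e++; }
        if (p % 4 == 3 && e % 2 == 1) return 0;   // rule out: prime 4k+3 to an odd power
        if (p % 4 == 1) nDiv *= e + 1;
    }
    if (rest > 1) {                               // leftover prime factor (exponent 1)
        if (rest % 4 == 3) return 0;
        if (rest % 4 == 1) nDiv *= 2;
    }
    long z = isSquare(m) ? 1 : 0;                             // representation with a = 0
    long e2 = (m % 2 == 0 && isSquare(m / 2)) ? 1 : 0;        // representation with a = b
    return (nDiv + z + e2) / 2;                   // unordered non-negative pairs
}

static boolean isSquare(long x) {
    long r = (long) Math.sqrt((double) x);
    while (r * r > x) r--;
    while ((r + 1) * (r + 1) <= x) r++;
    return r * r == x;
}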

Solving knapsack with fractional knapsack approach

There are two well-known knapsack problems:
1) Given n items, each with a weight and a cost, we need to select items that fit in our knapsack and have the maximum total cost. It can be easily solved using dynamic programming.
2) Fractional knapsack: the same as the first, except that we may take a fraction of an item rather than only the whole item. This problem can be easily solved with a greedy algorithm.
Imagine we use the greedy algorithm from the second problem to solve the first one. How can I prove that the solution we get is no more than two times worse than the optimal one?
As far as I can see, the greedy solution can be as inefficient as you want.
Imagine that you have a knapsack with capacity 1 and two (n = 2) items:
item   weight   cost   density
------------------------------
  A      ε       ε        1       <- greedy choice
  B      1      1-ε      1-ε      <- optimal choice
so the greedy algorithm takes A with cost ε, when the optimal solution is to take B with cost 1-ε. The chosen (greedy) solution is
(1-ε)/ε = 1/ε - 1
times worse than the optimal one. Make ε as small as you want (say, ε = 1e-100) and you have an arbitrarily bad greedy solution.
Edit: if only integer values are allowed, just scale the example above: you have a knapsack with capacity X and two (n = 2) items
item   weight   cost    density
-------------------------------
  A       1       1        1       <- greedy choice
  B       X      X-1     1-1/X     <- optimal choice
in this case the greedy solution is
(X - 1) / 1 = X - 1
times worse than the optimal one. Finally, make X large enough (say, X = 1e100).
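To see the counterexample run, here is a minimal sketch of plain density-ordered greedy on the integer instance above; the class and method names are my own, and the value of X is arbitrary.

class GreedyCounterexample {
    // Sketch: plain 0/1 greedy that sorts by cost/weight density and takes whatever still fits.
    static long greedyByDensity(long[] weight, long[] cost, long capacity) {
        Integer[] order = new Integer[weight.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // highest density first
        java.util.Arrays.sort(order, (a, b) ->
                Double.compare((double) cost[b] / weight[b], (double) cost[a] / weight[a]));
        long total = 0, left = capacity;
        for (int i : order) {
            if (weight[i] <= left) { left -= weight[i]; total += cost[i]; }  // take the whole item if it fits
        }
        return total;
    }

    public static void main(String[] args) {
        long X = 1_000_000L;                       // knapsack capacity
        long[] weight = {1, X};                    // item A, item B
        long[] cost   = {1, X - 1};
        System.out.println(greedyByDensity(weight, cost, X));  // prints 1; the optimum is X - 1
    }
}

For what it's worth, the factor-2 bound the question asks about is usually stated for a modified greedy that also considers taking just the single most valuable item and returns the better of the two packings; plain density greedy on its own has no constant-factor guarantee, as the example shows.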

Rewrite O(N W) in terms of N

I have this question that asks to rewrite the subset sum problem in terms of only N.
In case you're unfamiliar with it: given a set of weights, each with cost 1, find the optimal selection for a given maximum weight to achieve.
O(NW) here is the time and space cost, where the space is the 2D matrix used by the dynamic program. This problem is a special case of the knapsack problem.
I'm not sure how to approach this; the only thing I could think of was to take the sum of all the weights and use that as a general worst case. Thanks
If the weight is not bounded, so that the complexity must depend solely on N, there is at least an O(2^N) approach: try all possible subsets of the N elements and compute their sums.
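As a concrete illustration of that brute-force idea, here is a minimal sketch that enumerates all subsets with bitmasks; the method name is mine, and it assumes N is small enough for a long bitmask.

// Sketch: try every one of the 2^N subsets and compare its sum to the target W.
static boolean someSubsetSumsTo(long[] w, long target) {
    int n = w.length;                              // assumes n < 63
    for (long mask = 0; mask < (1L << n); mask++) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            if ((mask & (1L << i)) != 0) sum += w[i];   // element i is in this subset
        }
        if (sum == target) return true;
    }
    return false;
}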
If you are willing to use exponential space rather than polynomial space, you can solve the problem in O(n * 2^(n/2)) time and O(2^(n/2)) space. Split your set of n weights into two sets A and B of roughly equal size and compute the sum of weights of every subset of each set. Hash all the subset sums of A, and hash W - x for every subset sum x of B. If you get a collision between a subset of A and a subset of B in the hash table, then you have found a subset that sums to W.
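A minimal sketch of that meet-in-the-middle idea; the class and method names are my choices, and a HashSet stands in for the hash table.

import java.util.HashSet;

class MeetInTheMiddle {
    // Sketch: does some subset of w sum exactly to target?
    // O(n * 2^(n/2)) time, O(2^(n/2)) space; assumes n/2 < 31 so the masks fit in an int.
    static boolean subsetSums(long[] w, long target) {
        int n = w.length, half = n / 2;
        // all subset sums of the first half (set A)
        HashSet<Long> sumsA = new HashSet<>();
        for (int mask = 0; mask < (1 << half); mask++) {
            long s = 0;
            for (int i = 0; i < half; i++) if ((mask & (1 << i)) != 0) s += w[i];
            sumsA.add(s);
        }
        // for every subset sum s of the second half (set B), look up target - s
        int rest = n - half;
        for (int mask = 0; mask < (1 << rest); mask++) {
            long s = 0;
            for (int i = 0; i < rest; i++) if ((mask & (1 << i)) != 0) s += w[half + i];
            if (sumsA.contains(target - s)) return true;   // collision: subset of A + subset of B sums to target
        }
        return false;
    }
}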

Finding maximum subsequence below or equal to a certain value

I'm learning dynamic programming and I've been having a great deal of trouble understanding the more complex problems. When given a problem, I've been taught to find a recursive algorithm, memoize it, and then create an iterative, bottom-up version. I have an issue at almost every step. For the recursive algorithm, I can write it in several different ways, but often only one of them is suitable for dynamic programming, and I can't tell which aspects of a recursive algorithm make memoization easier. For memoization, I don't understand which values to use as indices. For the conversion to a bottom-up version, I can't figure out in which order to fill the array/2D array.
This is what I understand:
- it should be possible to split the main problem into subproblems
In terms of the problem mentioned, I've come up with a recursive algorithm that has these important lines of code:
int optionOne = values[i] + find(values, i+1, limit - values[i]); // take values[i], reduce the remaining limit
int optionTwo = find(values, i+1, limit);                         // skip values[i]
If I'm being unclear or this is not the correct Q&A site, let me know.
Edit:
Example: Given array x: [4,5,6,9,11] and max value m: 20
The maximum subsequence of x with sum under or equal to m would be [4,5,11], as 4+5+11 = 20.
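For reference, a memoized version of the two-line recursion above might look like the following sketch. It is indexed by the pair (i, remaining limit), which is the choice of indices the memo table needs; the added bounds check (values[i] <= limit) keeps the limit from going negative. Apart from find and values, the names are illustrative.

import java.util.Arrays;

class MaxSubseqMemo {
    // memo[i][limit] caches the best sum achievable from index i with `limit` capacity left
    static int[][] memo;

    static int find(int[] values, int i, int limit) {
        if (i == values.length || limit == 0) return 0;
        if (memo[i][limit] != -1) return memo[i][limit];
        int best = find(values, i + 1, limit);                 // optionTwo: skip values[i]
        if (values[i] <= limit) {                              // optionOne: take values[i] if it fits
            best = Math.max(best, values[i] + find(values, i + 1, limit - values[i]));
        }
        return memo[i][limit] = best;
    }

    public static void main(String[] args) {
        int[] x = {4, 5, 6, 9, 11};
        int m = 20;
        memo = new int[x.length][m + 1];
        for (int[] row : memo) Arrays.fill(row, -1);
        System.out.println(find(x, 0, m));                     // prints 20 (4 + 5 + 11)
    }
}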
I think this problem is NP-hard, meaning that unless P = NP there isn't a polynomial-time algorithm for solving the problem.
There's a simple reduction from the subset-sum problem to this problem. In subset-sum, you're given a set of n numbers and a target number k and want to determine whether there's a subset of those numbers that adds up to exactly k. You can solve subset-sum with a solver for your problem as follows: create an array of the numbers in the set and find the largest subsequence whose sum is less than or equal to k. If that adds up to exactly k, the set has a subset that adds up to k. Otherwise, it does not.
This reduction takes polynomial time, so because subset-sum is NP-hard, your problem is NP-hard as well. Therefore, I doubt there's a polynomial-time algorithm.
That said - there is a pseudopolynomial-time algorithm for subset-sum, which is described on Wikipedia. This algorithm uses DP in two variables and isn't strictly polynomial time, but it will probably work in your case.
Hope this helps!
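To connect this to the bottom-up version the question asks about, here is a minimal sketch of a pseudopolynomial DP of the kind mentioned in the last paragraph, run on the example above; the names are mine, and it assumes the values and the limit are non-negative integers.

class MaxSubseqBottomUp {
    // Sketch: classic subset-sum DP, O(n * limit) time and O(limit) space.
    // reachable[s] == true means some subsequence of values sums to exactly s.
    static int maxSumAtMost(int[] values, int limit) {
        boolean[] reachable = new boolean[limit + 1];
        reachable[0] = true;                         // the empty subsequence
        for (int v : values) {
            for (int s = limit; s >= v; s--) {       // go downward so each value is used at most once
                if (reachable[s - v]) reachable[s] = true;
            }
        }
        for (int s = limit; s >= 0; s--) {
            if (reachable[s]) return s;              // largest achievable sum <= limit
        }
        return 0;                                    // unreachable; reachable[0] is always true
    }

    public static void main(String[] args) {
        System.out.println(maxSumAtMost(new int[]{4, 5, 6, 9, 11}, 20));  // prints 20 (4 + 5 + 11)
    }
}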

Polynomial multiplication with an even # of coefs. in 3 distinct multiplications

Looking for some help with an upcoming exam; this is a question from the review. I'm hoping someone could restate a) so I can better understand what it is asking.
So it wants me to, instead of using extra multiplications, obtain some of the terms in the answer (PQ) by adding and subtracting already-computed products, as Strassen does in his algorithm for computing the product of 2x2 matrices in 7 multiplications instead of 8.
a) Suppose P(x) and Q(x) are two polynomials of (even) size n.
Let P1(x) and P2(x) denote the polynomials of size n/2 determined by the first n/2 and last n/2 coefficients of P(x). Similarly define Q1(x) and Q2(x),
i.e., P = P1 + x^(n/2) P2 and Q = Q1 + x^(n/2) Q2.
Show how the product PQ can be computed using only 3 distinct multiplications of polynomials of size n/2.
b) Briefly explain how the result in a) can be used to design a divide-and-conquer algorithm for multiplying two polynomials of size n (explain what the recursive calls are and what the bootstrap condition is).
c) Analyze the worst-case complexity of the algorithm you gave in part b). In particular, derive a recurrence formula for W(n) and solve it. As usual, to simplify the math, you may assume that n is a power of 2.
Here is a link I found which does polynomial multiplication.
http://algorithm.cs.nthu.edu.tw/~course/Extra_Info/Divide%20and%20Conquer_supplement.pdf
Notice that if we do polynomial multiplication the way we learned in high school, it takes Ω(n^2) time. The question wants you to see that there is a more efficient algorithm: first split each polynomial into two pieces, then reuse the sub-products. The linked lecture gives a pretty detailed explanation of how to do this.
In particular, look at page 12 of the link. It shows explicitly how a process that naively needs 4 multiplications can be done with 3 when multiplying polynomials.
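For part a), the identity the lecture is pointing at is the standard Karatsuba-style trick: with P = P1 + x^(n/2) P2 and Q = Q1 + x^(n/2) Q2,
PQ = P1Q1 + x^(n/2) ((P1+P2)(Q1+Q2) - P1Q1 - P2Q2) + x^n P2Q2,
so only the three products P1Q1, P2Q2 and (P1+P2)(Q1+Q2) are needed. Below is a minimal sketch of the resulting divide-and-conquer algorithm over coefficient arrays, assuming n is a power of 2 as part c) allows; the class and method names are mine.

class PolyMultiply {
    // Sketch: divide-and-conquer polynomial multiplication with 3 recursive products.
    // a and b are coefficient arrays of the same size n (index i holds the coefficient of x^i).
    // Recurrence for part c): W(n) = 3 W(n/2) + O(n), so W(n) = O(n^(log2 3)) ≈ O(n^1.585).
    static long[] multiply(long[] a, long[] b) {
        int n = a.length;
        long[] result = new long[2 * n - 1];
        if (n == 1) {                          // bootstrap: a single coefficient each
            result[0] = a[0] * b[0];
            return result;
        }
        int half = n / 2;
        long[] a1 = java.util.Arrays.copyOfRange(a, 0, half);      // low half of P
        long[] b1 = java.util.Arrays.copyOfRange(b, 0, half);      // low half of Q
        long[] a2 = java.util.Arrays.copyOfRange(a, half, n);      // high half of P
        long[] b2 = java.util.Arrays.copyOfRange(b, half, n);      // high half of Q

        long[] low  = multiply(a1, b1);                            // P1 * Q1
        long[] high = multiply(a2, b2);                            // P2 * Q2
        long[] aSum = new long[half], bSum = new long[half];
        for (int i = 0; i < half; i++) { aSum[i] = a1[i] + a2[i]; bSum[i] = b1[i] + b2[i]; }
        long[] mid  = multiply(aSum, bSum);                        // (P1 + P2) * (Q1 + Q2)

        for (int i = 0; i < low.length; i++) {
            result[i] += low[i];                                   // P1Q1
            result[i + n] += high[i];                              // x^n * P2Q2
            result[i + half] += mid[i] - low[i] - high[i];         // x^(n/2) * (mid - low - high)
        }
        return result;
    }
}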

Resources