Pseudo-polynomial time algorithm for modified coin change

I thought of the following problem while thinking of coin change, but I don't know an efficient (pseudo-polynomial time) algorithm to solve it. I'd like to know if there is any pseudo-polynomial time solution or maybe some classic literature on it that I'm missing.
There is a well-known pseudo-polynomial time dynamic programming solution to the coin change problem, which asks the following:
You have n coins, the i-th of which has a value of v_i. How many subsets of the coins exist, such that the sum of the coin values is T?
The dynamic programming solution runs in O(nT), and is a classic. But I'm interested in the following slight generalization:
What if the i-th coin no longer has just a single value v_i, but instead can assume any one value from a set of values V_i? Now, the question is: How many subsets of coins exist, such that there exists an assignment of those coins to values (each coin i taking some value from its V_i) whose sum is T?
(The classic coin change problem is actually an instance of this problem with |V_i| = 1 for all i, so I do not expect a polynomial time solution.)
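For reference, that classic counting DP can be sketched as follows (a minimal sketch; the function and variable names are my own):

```python
def count_subsets(values, T):
    # dp[t] = number of subsets of the coins seen so far whose values sum to t
    dp = [0] * (T + 1)
    dp[0] = 1  # the empty subset sums to 0
    for v in values:
        # iterate t downward so each coin is used at most once
        for t in range(T, v - 1, -1):
            dp[t] += dp[t - v]
    return dp[T]
```

This runs in O(nT) time and O(T) space, assuming positive coin values.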
For example, given n = 3 and T = 10 (with suitable value sets for the three coins), there are 4 such subsets:
Take coin 1 only, and assign it the value 10.
Take coin 2 only, and assign it the value 10.
Take coins 1 and 3, and assign them the values 7 and 3, respectively, or 6 and 4, respectively; these are considered to be the same way.
Take coins 1, 2, and 3, and assign them values that sum to 10.
The issue I'm having is precisely the third case: it would be easy to modify the classic dynamic programming solution for this problem, but it would count those two assignments as separate subsets, even though the set of coins taken is the same.
So far, I have only been able to find the straightforward exponential time algorithm for this problem: consider each of the 2^n subsets, and run the standard dynamic programming coin change algorithm (which is O(nT)) to check whether the subset is valid, giving an O(2^n · nT) time algorithm. That's not what I'm looking for here; I'm trying to find a solution without the 2^n factor.
Can you provide a pseudo-polynomial time algorithm for this problem, or can it be proven that none exists unless, say, P = NP (i.e., that the problem is NP-complete or similar)?

Maybe this is hard, but the way you asked the question it doesn't seem so.
Take coins 1 and 3, assign them to have values 7 and 3, respectively or 6 and 4, respectively—these are considered to be the same way.
Let us encode the two equivalent solutions in this way:
((1, 3), (7, 3))
((1, 3), (6, 4))
Now, when we memoize a solution, can't we just ask if the first element of the pair, (1, 3), has already been solved? Then we would suppress computing the (6, 4) solution.
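Memoizing on the subset still enumerates exponentially many subsets, but it does guarantee each subset is counted exactly once. A minimal sketch of that idea (names are my own, and the value sets in the test are a hypothetical instance, not the one from the question):

```python
from itertools import combinations

def count_valid_subsets(coin_values, T):
    # coin_values[i] is the set of values coin i may assume (all positive)
    count = 0
    n = len(coin_values)
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            # standard reachability DP for this subset, O(nT) per subset
            reachable = {0}
            for i in subset:
                reachable = {s + v for s in reachable
                             for v in coin_values[i] if s + v <= T}
            if T in reachable:
                count += 1  # one count per subset, however many assignments
    return count
```

Each subset is visited once, so (7, 3) and (6, 4) can never both be counted for the same pair of coins; the 2^n factor, however, remains.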

Related

Difficulty in thinking a divide and conquer approach

I am self-learning algorithms. As we know, divide and conquer is one of the algorithm design paradigms. I have studied mergeSort, QuickSort, Karatsuba multiplication, and counting inversions of an array as examples of this particular design pattern. Although it sounds very simple (divide the problem into subproblems, solve each subproblem recursively, and merge the results), I find it very difficult to develop an idea of how to apply that logic to a new problem. To my understanding, all of the above-mentioned canonical examples come with a very clever trick to solve the problem. For example, I am trying to solve the following problem:
Given a sequence of n numbers such that the difference between two consecutive numbers is constant, find the missing term in logarithmic time.
Example: [5, 7, 9, 11, 15]
Answer: 13
First, I figured it can be solved using the divide and conquer approach, since the naive approach takes O(n) time. From my understanding of divide and conquer, this is how I approached it:
The original problem can be divided into two independent subproblems. I can search for the missing term in the two subproblems recursively. So, I first divide the problem.
leftArray = [5,7,9]
rightArray = [11, 15]
Now it says I need to solve the subproblems recursively until they become trivial to solve. In this case, a subproblem becomes trivial at size 1: if there is only one element, there are 0 missing elements. Then I need to combine the results, but I am not sure how to do it or how that solves my original problem.
Definitely, I am missing something crucial here. My question is how to approach solving this type of divide and conquer problem. Should I come up with a trick like the ones in mergeSort or QuickSort? The more solutions to this kind of problem I see, the more it feels like I am memorizing approaches rather than understanding them, since each problem is solved differently. Any help or suggestion regarding the mindset for solving divide and conquer problems would be greatly appreciated. I have been trying for a long time to develop my algorithmic skills but have improved very little. Thanks in advance.
You have the right approach. The only missing part is an O(1) way to decide which side you are discarding.
First, note that the numbers in your problem must be ordered; otherwise you can't do better than O(n). There also need to be at least three numbers, otherwise you couldn't figure out the "step".
With this understanding in place, you can determine the "step" in O(1) time by examining the initial three terms and seeing what the difference between consecutive ones is. Two outcomes are possible:
Both differences are the same, and
One difference is twice as big as the other.
Case 2 hands you a solution by luck, so we will consider only the first case from now on. With the step in hand, you can determine whether a range has a gap in it by subtracting its endpoints and comparing the result to the number of gaps times the step. If the two agree, the range has no missing term and can be discarded. When both halves can be discarded, the missing term lies between them.
As @Sergey Kalinichenko points out, this assumes the incoming set is ordered.
However, if you're certain the input is ordered (which is likely in this case), observe that the value at the nth position should be start + jumpsize * n; this allows you to bisect to find where the values shift.
Example: [5, 7, 9, 11, 15]
Answer: 13
start = 5
jumpsize = 2
check midpoint: 5 + 2 * 2 -> 9
this is valid, so the shift must be after the midpoint
recurse
You can find the jumpsize by checking the first 3 values
a, b, c = arr[0], arr[1], arr[2]  # language-dependent retrieval
gap1 = b - a
gap2 = c - b
if gap1 != gap2:
    if arr[3] - c == gap1:
        missing value is b + gap1  # 2nd gap doesn't match
    else:
        missing value is a + gap2  # 1st gap doesn't match
bisect remaining values
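Putting both answers together, a complete bisection might look like this (a sketch assuming the input is sorted, ascending or descending, with exactly one missing term and at least three elements; all names are mine):

```python
def find_missing(arr):
    # The true step is the smaller (in magnitude) of the two leading gaps;
    # the gap containing the missing term is twice as large.
    gap1 = arr[1] - arr[0]
    gap2 = arr[2] - arr[1]
    step = min(gap1, gap2, key=abs)
    lo, hi = 0, len(arr) - 1
    # Invariant: arr[i] == arr[0] + step * i for every index before the gap,
    # and the missing term lies between indices lo and hi.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if arr[mid] == arr[0] + step * mid:
            lo = mid  # left half is intact; the gap is to the right
        else:
            hi = mid  # the gap is at or before mid
    return arr[lo] + step
```

Each iteration halves the search range, giving the required O(log n) time.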

Number of ways to represent a number as a sum of K numbers in subset S

Let the set S be {1, 2, 4, 5, 10}.
Now I want to find the number of ways to represent x as a sum of K numbers from the set S (a number can be included any number of times).
If x = 10 and k = 3,
then the answer should be 2 => (5,4,1), (4,4,2).
The order of the numbers doesn't matter, i.e. (4,4,2) and (4,2,4) count as one.
I did some research and found that the set can be represented as a polynomial x^1+x^2+x^4+x^5+x^10, and after raising the polynomial to the power K, the coefficients of the product polynomial give the answer.
But the answer includes (4,4,2) and (4,2,4) as distinct terms, which I don't want.
Is there any way to make (4,4,2) and (4,2,4) count as the same term?
This is NP-complete, a variant of the subset-sum problem as described here.
So frankly, I don't think you can solve it without an exponential (iterate through all combinations) solution, absent any restrictions on the problem input (such as a maximum number range, etc.).
Without any restrictions on the problem domain, I suggest iterating through all your possible k-set instances (as described in the pseudo-polynomial time dynamic programming solution) and seeing which are solutions.
Checking whether 2 solutions are identical is nothing compared to the complexity of the overall algorithm. So an order-insensitive hash of the solution's elements will work just fine:
E.g. hash-order-insensitive(4,4,2) == hash-order-insensitive(4,2,4) => check the whole multiset; otherwise the solutions are distinct.
PS: you can also describe step-by-step your current solution.
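Alternatively, the double counting can be avoided structurally rather than by hashing: enforce a canonical (say, sorted) order on the parts while searching, so (4,4,2) and (4,2,4) are generated only once. A sketch of that idea (function names are mine):

```python
from functools import lru_cache

def count_unordered(S, x, k):
    # Count size-k multisets from S (repetition allowed) summing to x.
    # Picking elements in a fixed sorted order makes each multiset unique.
    S = sorted(set(S))

    @lru_cache(maxsize=None)
    def f(i, k, x):
        # multisets of size k summing to x, using only S[i:]
        if k == 0:
            return 1 if x == 0 else 0
        if i == len(S) or x < 0:
            return 0
        # either skip S[i] entirely, or use one copy (and allow more copies)
        return f(i + 1, k, x) + f(i, k - 1, x - S[i])

    return f(0, k, x)
```

The memoized state space is |S| · k · x, so this is pseudo-polynomial rather than exponential.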

Knapsack with unique elements

I'm trying to solve the following:
The knapsack problem is as follows: given a set of integers S={s1,s2,…,sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S={1,2,5,9,10} there is a subset that adds up to T=22 but not T=23. Give a correct programming algorithm for knapsack that runs in O(nT) time.
but the only algorithm I could come up with is generating all the combinations and trying the sums out (exponential time).
I can't devise a dynamic programming solution, since the fact that I can't reuse an element makes this problem different from a coin change problem and from the general (unbounded) knapsack problem.
Can somebody help me out with this or at least give me a hint?
The O(nT) running time gives you the hint: do dynamic programming on two axes. That is, let f(a,b) denote the maximum sum <= b which can be achieved with the first a integers.
f satisfies the recurrence
f(a,b) = max( f(a-1,b), f(a-1,b-s_a)+s_a )
since the first value is the maximum without using s_a and the second is the maximum including s_a. From here the DP algorithm should be straightforward, as should outputting the correct subset of S.
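The two-axis recurrence can be collapsed to one dimension by scanning b downward, which is exactly what enforces each integer being used at most once. A sketch assuming positive integers (names are mine):

```python
def max_subset_sum(S, T):
    # f[b] = maximum sum <= b achievable with the integers processed so far
    f = [0] * (T + 1)
    for s in S:
        # scan b downward so each s contributes to any subset at most once
        for b in range(T, s - 1, -1):
            f[b] = max(f[b], f[b - s] + s)
    return f[T]
```

A subset summing exactly to T exists iff max_subset_sum(S, T) == T; recording which (s, b) updates improved f lets you recover the subset itself.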
I did find a solution, but with O(n²·T) time complexity, by filling a table from the bottom up. In other words, sort the array and start with the greatest number available, and build a table whose columns are the target values and whose rows are the provided numbers. For each target i we need to consider the sum over all possible ways of making i - cost[j] for every j, which takes O(n²) time; multiplied by the target, this gives the stated bound.

Dividing set of integers

I've got a problem with a solution for an algorithmic problem, described below.
We have a set (e.g. an array) of integers. Our task is to divide them into groups (they don't have to have the same number of elements) whose sums are equal to each other. In case the primal set cannot be divided, we have to give the answer "impossible to divide".
For example:
Set A is given [-7 3 3 1 2 5 14]. The answer is [-7 14], [3 3 1], [2 5].
It seems easy to say when it is for sure impossible: when the sum of the primal set isn't divisible by 3, i.e. sum(A) % 3 != 0.
Do you have any idea how to solve that problem?
This is a three-group variant of the partition problem; the classic partition problem splits the set into two sets (not three) whose sums are equal to each other. The problem is NP-complete, so you're almost certainly not going to find a polynomial time solution for it; like the 2-partition problem, though, the three-equal-groups version admits a pseudopolynomial time solution. (Note that this is different from the classic 3-partition problem, which splits the set into triples of exactly three elements each; that problem is strongly NP-complete and has no pseudopolynomial algorithm unless P = NP.)
See this answer for an outline of how to adapt the 2-partition algorithm to a 3-partition algorithm. See also this paper for a parallel solution.
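A sketch of one such adaptation of the 2-partition dynamic program: the state is the pair of running sums of the first two groups, and the third group gets the remainder. All names are mine, and the state space grows with the range of achievable partial sums:

```python
def split_three_ways(A):
    # Split A into three groups with equal sums, or return None.
    total = sum(A)
    if total % 3 != 0:
        return None
    target = total // 3
    # dp maps (s1, s2) -> a group label (0, 1, or 2) for each element so far,
    # where s1 and s2 are the running sums of groups 0 and 1.
    dp = {(0, 0): []}
    for a in A:
        nxt = {}
        for (s1, s2), labels in dp.items():
            for g, state in ((0, (s1 + a, s2)), (1, (s1, s2 + a)), (2, (s1, s2))):
                if state not in nxt:
                    nxt[state] = labels + [g]
        dp = nxt
    labels = dp.get((target, target))
    if labels is None:
        return None
    groups = [[], [], []]
    for a, g in zip(A, labels):
        groups[g].append(a)
    return groups
```

The dictionary keys keep only one representative per (s1, s2) pair, which is what makes the running time pseudopolynomial instead of 3^n.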

Choosing permutations with constraints [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How to solve the “Mastermind” guessing game?
I have to choose k items out of n choices, and my selection needs to be in the correct order (i.e. a permutation, not a combination). After I make a choice, I receive a hint that tells me how many of my selections were correct, and how many were in the correct position.
For example, if I'm trying to choose k=4 out of n=6 items, and the correct ordered set is 5, 3, 1, 2, then an exchange may go as follows:
0,1,2,3
(3, 0) # 3 correct, 0 in the correct position
0,1,2,5
(3, 0)
0,1,5,3
(3, 0)
0,5,2,3
(3, 0)
5,1,2,3
(4, 1)
5,3,1,2
(4, 4)
-> correct order, the game is over
The problem is I'm only given a limited number of tries to get the order right: if n=6, k=4, then I only get t=6 tries; if n=10, k=5, then t=5; and if n=35, k=6, then t=18.
Where do I start writing an algorithm that solves this? It almost seems like a constraint-satisfaction problem. The hard part is that I only know something for sure if I change just one thing at a time, but the upper bound on doing that is far more than the number of tries I get.
A simple strategy for an algorithm is to come up with a next guess that is consistent with all previous hints. This will eventually lead to the right solution, but most likely not in the lowest possible number of guesses.
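A sketch of that consistent-guess strategy (the scoring convention follows the transcript above: the first number is how many chosen items appear anywhere in the secret, the second is how many are in the correct position; function names are mine):

```python
from itertools import permutations

def score(guess, secret):
    # (items appearing anywhere in the secret, items in the correct position)
    anywhere = len(set(guess) & set(secret))
    in_place = sum(g == s for g, s in zip(guess, secret))
    return anywhere, in_place

def next_consistent_guess(n, k, history):
    # history: list of (guess, feedback) pairs from previous turns.
    # Return the first k-permutation of range(n) that would have produced
    # every recorded feedback if it were the secret.
    for cand in permutations(range(n), k):
        if all(score(g, cand) == fb for g, fb in history):
            return cand
    return None
```

This brute-forces over n!/(n-k)! candidates, which is fine for n=6, k=4 but needs pruning for n=35, k=6.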
As I can see, this is a variation of the Mastermind board game: http://en.m.wikipedia.org/wiki/Mastermind_(board_game)
You can also find more details about the problem in this paper:
http://arxiv.org/abs/cs.CC/0512049
