Choosing permutations with constraints [duplicate]

Possible Duplicate:
How to solve the “Mastermind” guessing game?
I have to choose k items out of n choices, and my selection needs to be in the correct order (i.e. permutation, not combination). After I make a choice, I receive a hint that tells me how many of my selections were correct, and how many were in the correct order.
For example, if I'm trying to choose k=4 out of n=6 items, and the correct ordered set is 5, 3, 1, 2, then an exchange may go as follows:
0,1,2,3
(3, 0) # 3 correct, 0 in the correct position
0,1,2,5
(3, 0)
0,1,5,3
(3, 0)
0,5,2,3
(3,0)
5,1,2,3
(4,1)
5,3,1,2
(4,4)
-> correct order, the game is over
The problem is that I'm only given a limited number of tries to get the order right: if n=6, k=4, I only get t=6 tries; if n=10, k=5, then t=5; and if n=35, k=6, then t=18.
Where do I start writing an algorithm that solves this? It looks like a constraint-satisfaction problem. The hard part is that I only learn something for certain if I change one item at a time, but the number of guesses that would take is far more than the number of tries I get.

A simple strategy is to make each next guess consistent with all previous hints. This will eventually lead to the right solution, but most likely not in the lowest possible number of guesses.
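A minimal sketch of that strategy (names and the 0..n-1 item labels are mine; the hint is assumed to be the pair from the example above: how many guessed items appear anywhere in the secret, and how many sit in the correct position):

    from itertools import permutations

    def feedback(guess, secret):
        # (items that appear in the secret at all, items in the correct position)
        return (len(set(guess) & set(secret)),
                sum(g == s for g, s in zip(guess, secret)))

    def solve(n, k, secret):
        # Always guess some permutation that would have produced every hint so far.
        candidates = list(permutations(range(n), k))
        history = []
        for tries in range(1, len(candidates) + 1):
            guess = next(c for c in candidates
                         if all(feedback(g, c) == h for g, h in history))
            hint = feedback(guess, secret)
            if hint == (k, k):
                return guess, tries
            history.append((guess, hint))

    print(solve(6, 4, (5, 3, 1, 2)))  # finds (5, 3, 1, 2)

To stay within tight try budgets, you would instead guess the candidate that best splits the remaining consistent set (Knuth's minimax idea for Mastermind), rather than the first one found.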

As far as I can see, this is a variation of the Mastermind board game: http://en.m.wikipedia.org/wiki/Mastermind_(board_game)
You can also find more details about the problem in this paper:
http://arxiv.org/abs/cs.CC/0512049

Related

Pseudo-polynomial time algorithm for modified coin change

I thought of the following problem while thinking about coin change, but I don't know an efficient (pseudo-polynomial time) algorithm for it. I'd like to know whether there is any pseudo-polynomial time solution, or maybe some classic literature on it that I'm missing.
There is a well-known pseudo-polynomial time dynamic programming solution to the coin change problem, which asks the following:
You have n coins, the i-th of which has a value of v_i. How many subsets of coins exist such that the sum of the coin values is exactly V?
The dynamic programming solution runs in O(nV) time, and is a classic. But I'm interested in the following slight generalization:
What if the i-th coin no longer has just a single value v_i, but instead can assume any one value from a set S_i? Now, the question is: how many subsets of coins exist such that there is an assignment of values to the chosen coins, each coin i taking a value from its S_i, whose sum is V?
(The classic coin change problem is the instance of this problem with |S_i| = 1 for all i, so I do not expect a polynomial time solution.)
For example, given n = 3 and V = 10, and the following coins: S_1 = {6, 7, 10}, S_2 = {1}, S_3 = {3, 4, 10}.
Then, there are 4 subsets:
Take coin 1 only, assign this coin to have value 10.
Take coin 3 only, assign this coin to have value 10.
Take coins 1 and 3, assign them to have values 7 and 3, respectively, or 6 and 4, respectively; these are considered to be the same way.
Take coins 1, 2 and 3, and assign them to have values 6, 1 and 3, respectively.
The issue I'm having is precisely the third case: it would be easy to modify the classic dynamic programming solution for this problem, but it would count those two assignments as separate, even though the set of coins taken is the same.
So far, I have only been able to find the straightforward exponential time algorithm for this problem: consider each of the 2^n subsets, and run the standard dynamic programming coin change check (which is pseudo-polynomial) to see whether that subset of coins can be assigned values summing to V. That gives an algorithm with a 2^n factor, which is not what I'm looking for here; I'm trying to find a solution without the 2^n factor.
Can you provide a pseudo-polynomial time algorithm to solve this question, or can it be proven that none exists unless, say, P = NP, i.e. that the problem is NP-complete (or similar)?
Maybe this is hard, but the way you've asked the question, it doesn't seem so.
Take coins 1 and 3, assign them to have values 7 and 3, respectively, or 6 and 4, respectively; these are considered to be the same way.
Let us encode the two equivalent solutions in this way:
((1, 3), (7, 3))
((1, 3), (6, 4))
Now, when we memoize a solution, can't we just ask if the first element of the pair, (1, 3), has already been solved? Then we would suppress computing the (6, 4) solution.
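Here is a rough sketch of that idea taken to its logical end: enumerate each subset of coins exactly once and run a per-subset reachability check, so the (7, 3) and (6, 4) assignments above are counted as one. This is still the exponential baseline from the question, not a pseudo-polynomial solution; the example values are the ones from the question above:

    def count_subsets(value_sets, V):
        # value_sets[i] is the set of values coin i may assume
        n = len(value_sets)
        count = 0
        for mask in range(1, 1 << n):          # each non-empty subset once
            chosen = [value_sets[i] for i in range(n) if mask >> i & 1]
            reachable = {0}                    # sums attainable so far
            for s in chosen:
                reachable = {r + v for r in reachable for v in s if r + v <= V}
            if V in reachable:                 # some assignment hits V
                count += 1
        return count

    print(count_subsets([{6, 7, 10}, {1}, {3, 4, 10}], 10))  # 4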

How does the recursion unfold in the coin change algorithm?

I know there are tons of editorials and blog posts explaining this, but there is one common point where I get stuck.
Considering the recursion given below:
coin_change(coins,i,N) = coin_change(coins,i-1,N) + coin_change(coins,i-1,N-val[i])
Now this seems pretty straightforward: either we exclude the coin, or we include it and solve the problem for the remaining sum.
But my doubt is: since there is an infinite supply of coins, we can take as many copies of a coin as needed to achieve the sum, so how is that incorporated in the recursive solution?
Also, I am not able to understand the base cases for this problem!
This creates a binary tree, where the right branch searches by subtracting the same coin again and again, and the left branch searches all the other coins.
Take the simple case of N = 3 and coins = {1, 2}:
The right hand branch would be:
{1,2}: 1->1->1 (1,1,1)
{2}: ->2 (1,2)
The left hand branch would be:
{2}: 2->X (No solution)
It would give the same result if 2 were the first coin:
Right hand branch:
{2,1}: 2->X (No solution)
{1} ->1 (2,1)
Left hand branch:
{1}: 1->1->1 (1,1,1)
Note 1: you shouldn't have -1 on the second call:
coin_change(coins,i,N) = coin_change(coins,i-1,N) + coin_change(coins,i,N-val[i])
Note 2: this isn't dynamic programming.
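To make Note 2 concrete: with memoization on (i, N) the corrected recursion does become dynamic programming. A minimal sketch (function names are mine), which also shows the base cases the question asked about:

    from functools import lru_cache

    def coin_change(coins, N):
        @lru_cache(maxsize=None)
        def go(i, n):
            if n == 0:              # exact sum reached: one valid way
                return 1
            if n < 0 or i < 0:      # overshot, or no denominations left
                return 0
            # exclude denomination i entirely, or use one more copy of it
            return go(i - 1, n) + go(i, n - coins[i])
        return go(len(coins) - 1, N)

    print(coin_change((1, 2), 3))   # 2: (1,1,1) and (1,2)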
If there is an infinite supply of coins, then the first term allows us to exclude an entire denomination from the solution, e.g. no more nickels at all. The val array could look like [1, 5, 10, 25, ...].
Note that the problem with a limited number of coins is slightly more complex: we would have to organize the array with repeated values [1, 1, 1, 5, 5, 10, 10, 10, 10, 10, ...] or use an array of counters for every denomination [1: 3; 5: 0; 10: 12; ...].
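A sketch of the counter-array variant (the denominations and counters below are made up for illustration): branch on how many copies of each denomination are used, bounded by its counter:

    from functools import lru_cache

    def coin_change_limited(denoms, counts, N):
        @lru_cache(maxsize=None)
        def go(i, n):
            if n == 0:
                return 1
            if n < 0 or i < 0:
                return 0
            # use j copies of denomination i, where 0 <= j <= counts[i]
            return sum(go(i - 1, n - j * denoms[i])
                       for j in range(counts[i] + 1))
        return go(len(denoms) - 1, N)

    print(coin_change_limited((1, 5, 10), (3, 0, 12), 21))  # 1: 10+10+1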

Sorting algorithm based on subset inversion

I'm looking for a sorting algorithm based on subset inversion. It's like pancake sort, only instead of taking all the pancakes on top of the spatula, you can invert any contiguous subset you want. The length of the subset doesn't matter.
Like this:
http://www.yourgenome.org/sites/default/files/illustrations/diagram/dna_mutations_inversion_yourgenome.png
So we can't simply swap numbers without inverting everything in between.
We're doing this to determine how one subspecies of fruitfly can mutate into the other. Both have the same genes but in a different order. The second subspecies' genome is 'sorted', i.e. the gene numbers are 1-25. The first subspecies genome is unsorted. Hence, we're looking for a sorting algorithm.
This is the "genome" we're looking at (though we should be able to have this work on all lists of numbers):
[23, 1, 2, 11, 24, 22, 19, 6, 10, 7, 25, 20, 5, 8, 18, 12, 13, 14, 15, 16, 17, 21, 3, 4, 9];
We're looking at two separate problems:
1) To sort a list of 25 numbers with the fewest inversions
2) To sort a list of 25 numbers with the fewest numbers moved
We also want to establish upper and lower bounds for both.
We've already found a way to sort like this by just going from left to right, searching for the next lowest value and inverting everything in between, but we're absolutely certain we should be able to do this faster. However, we still haven't found any other methods so I'm asking for your help!
UPDATE: the method we currently use is based on the above method, but works from both ends at once. It looks at the next elements needed at both ends (e.g. 1 and 25 at the beginning) and then calculates which inversion would be cheapest. Values that reach the ends can be ignored for the rest of the algorithm, because they are put into their correct place immediately. Our first method took 18/19 steps and moved 148 genes; this one does it in 17 steps and 101 genes. By both optimization criteria (the two mentioned above), this is a better method. It is, however, not cheaper in terms of code and processing.
Right now, we're working in Python because we have most experience with that, but I'd be happy with any pseudocode ideas on how we can more efficiently tackle this. If you think another language might be better suited, please let me know. Pseudocode, ideas, thoughts and actual code are all welcome!
Thanks in advance!
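For reference, a sketch of the first method described above (scan left to right, find the next lowest remaining value, invert everything in between); the function name is mine:

    def greedy_reversal_sort(a):
        a = list(a)
        steps = 0
        for i in range(len(a)):
            j = min(range(i, len(a)), key=a.__getitem__)  # next lowest value
            if j != i:
                a[i:j + 1] = a[i:j + 1][::-1]             # invert the segment
                steps += 1
        return a, steps

    genome = [23, 1, 2, 11, 24, 22, 19, 6, 10, 7, 25, 20, 5,
              8, 18, 12, 13, 14, 15, 16, 17, 21, 3, 4, 9]
    print(greedy_reversal_sort(genome))  # sorted list, number of inversions used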
Regarding the first question: Do you know (and care about) which of the two strands the genes are on?
If so, you're in luck: This is called the inversion distance between signed permutations problem, and there is a linear-time algorithm for it: http://www.ncbi.nlm.nih.gov/pubmed/11694179. I haven't looked at the details.
If not, then unfortunately (as described on p. 2 of that paper) the problem is NP-hard, so it's very unlikely that any algorithm exists that is efficient (polynomial-time) in the worst case.
Regarding the second question: Assuming you mean that you want to find the minimum number of swaps needed to sort a list of numbers, you should be able to find solutions to this by searching here on SO and elsewhere. I think this is a clear and concise explanation. You can also use the optimal solution to this problem to get an upper bound for your first question: Any swap of positions i and j can be simulated using the two interval reversals (i, j) and (i+1, j-1). (This upper bound might be very bad, though, and in particular could be worse than your existing greedy algorithm.)
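That simulation is easy to check with a tiny sketch: reversing the interval (i, j) and then (i+1, j-1) exchanges the two endpoints and restores everything in between:

    def swap_via_reversals(a, i, j):
        a[i:j + 1] = a[i:j + 1][::-1]   # reverse the whole interval (i, j)
        a[i + 1:j] = a[i + 1:j][::-1]   # reverse the interior back
        return a

    print(swap_via_reversals([5, 6, 1, 2, 3, 4, 7, 8], 0, 5))
    # [4, 6, 1, 2, 3, 5, 7, 8]: only positions 0 and 5 changed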
I think what you're looking for in the second question is the minimum number of swaps of adjacent elements needed to sort a sequence, which equals the number of inversions in the sequence (pairs with a[i] > a[j] and i < j).
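Counting those inversions takes O(n log n) by piggybacking on merge sort; a standard sketch:

    def count_inversions(a):
        if len(a) <= 1:
            return list(a), 0
        mid = len(a) // 2
        left, x = count_inversions(a[:mid])
        right, y = count_inversions(a[mid:])
        merged, inv, i, j = [], x + y, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                inv += len(left) - i    # left[i:] are all > right[j]
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged, inv

    print(count_inversions([2, 1, 4, 3, 5, 6, 7, 8])[1])  # 2 inversions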
The first question seems quite a bit more complicated to me. One potential heuristic might be to think of the subset inversion as similar to the adjacent swap of more than one element. For example, if you've managed to get a sequence to this position,
5,6,1,2,3,4,7,8
we can "adjacent swap" indexes [0,1] with [2,3] (so inverting [0,1,2,3]),
2,1,6,5,3,4,7,8
and then [2,3] with [4,5] (inverting [2,3,4,5]),
2,1,4,3,5,6,7,8
and arrive at a sequence that now has significantly fewer element inversions, meaning fewer single adjacent swaps are needed to complete the sort.
So maybe attempting to quantify inversions (in the sense of a[i] > a[j] and i < j) of sections, rather than single elements, could help move toward estimating, or building a method for, the first question.

Minimum number of coins required to get a value [duplicate]

Duplicate of: Coin changing algorithm (4 answers)
Given a list of denomination of coins, I need to find the minimum number of coins required to get a given value.
My approach uses a greedy algorithm:
Divide the value by the maximum denomination, take the remainder and divide it by the second-largest denomination, and so on until we reach the required value.
But this approach fails for some cases.
I want to know:
an approach which works for all cases;
why the greedy approach fails.
Example where the approach fails:
Coins Denomination (1,3,4,5)
Value Required 7
Using the greedy approach:
7/5 = 1 coin of value 5, with remainder 2; since 3 and 4 can't be used, we need two coins of value 1. So a total of 3 coins.
However, the optimum is 4+3 (2 coins).
There is a classic solution for the problem you described, a kind of knapsack problem.
It can be solved using a dynamic programming approach.
For a clear explanation you may follow this tutorial.
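In case the tutorial link rots, here is a compact sketch of that dynamic programming solution, where dp[v] is the fewest coins summing to v:

    def min_coins(denoms, value):
        INF = float("inf")
        dp = [0] + [INF] * value              # dp[v] = fewest coins for sum v
        for v in range(1, value + 1):
            for c in denoms:
                if c <= v and dp[v - c] + 1 < dp[v]:
                    dp[v] = dp[v - c] + 1
        return dp[value] if dp[value] != INF else -1   # -1: unreachable

    print(min_coins([1, 3, 4, 5], 7))   # 2, via 4 + 3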
Let's say you want to reach the value €2.10 and have coins of value 2, 1 and 0.50, plus three coins of 0.20.
Greedy will always take the biggest coin first, so it will choose the coin of value 2.
Now it is no longer possible to reach 2.10, as you do not have a coin of value 0.10, even though it would have been possible had you chosen the 1, the 0.50 and the three 0.20 coins.
In this case (just as in most cases) I would recommend backtracking, which would not fail here.
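A sketch of that backtracking idea on the euro example, with amounts in cents to avoid floating point trouble:

    def reachable(coins, target):
        def go(i, remaining):
            if remaining == 0:
                return True
            if i == len(coins) or remaining < 0:
                return False
            # either take coin i or skip it, then recurse on the rest
            return go(i + 1, remaining - coins[i]) or go(i + 1, remaining)
        return go(0, target)

    coins = [200, 100, 50, 20, 20, 20]   # 2, 1, 0.50 and three 0.20 coins
    print(reachable(coins, 210))         # True: 100 + 50 + 20 + 20 + 20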

Can someone help me with my algorithm homework? [closed]

I have a bit of a problem with an algorithm proposed as homework by our teachers. It goes something like this:
Given a number of sticks, like so:
4 (the number of distinct stick lengths)
11 7 5 4 (the lengths of the sticks)
1 1 3 3 (how many sticks of each length)
I have to figure out an algorithm that will form the minimal number of sticks by merging them. The solution for the previous example is this:
15 3 (15 (optimal sum) * 3 (minimal sticks) = 45 = 11*1 + 7*1 + 5*3 + 4*3)
11 4
7 4 4
5 5 5
Now I am not asking you guys to solve this problem for me, but to give me a line to follow. I have tried to reduce it to a "make change" problem; it went well until the part where I had to select the good solutions from the remaining ones.
The desired complexity is exponential, and the restrictions are:
0 < sticks < 100
0 < max_sum_of_sticks < 1000
So do you guys have a second thought on this?
Thank you very much for your time.
Explanation of "minimal number of sticks": if, for instance, the sum to be formed is 80, I have a fair number of candidate solutions:
1 stick of length 80
2 sticks of length 40
4 sticks of length 20, and so on.
The first one is trivial and we discard it. For the remaining solutions, I have to test whether I can actually build them with the set of sticks I have, because a chosen solution, for example 2*40, may not be feasible: it could leave some sticks unused.
This looks a lot like the Knapsack problem.
You might also take a look at Branch and bound, which is a general algorithm for all kinds of optimization problems.
Like Julian, I'm not entirely sure what you mean, but it sounds a lot like the "Knapsack Problem" over multiple knapsacks, which is NP-complete. There are many different ways of approaching it, from simple heuristics like "use the big stuff first" down to metaheuristics such as ant-colony or genetic optimisation. And almost everything in between.
In fact, there are almost as many approaches as there are candidate sets... I wonder if the question is NP-complete? ;-p
Note: I'm calling merged sticks "piles" in this answer.
Of course, the solution is always "1". Merge all of your sticks into one big pile; you have optimal sum = total length of all sticks and minimal piles = 1.
Now, assuming you want the next smallest number after 1, there are a few feasible options. Would you try 2 minimal piles? Why not? What about 4? 5?
Let's say you are left with two candidates, 3 and 5. (i.e. optimal sum=15,minimal piles=3 and optimal sum=9,minimal piles=5) If you know you can arrange your sticks into 3 piles of length 15, do you need to check 5 piles (and what length would they be)?
So, the problem comes down to finding whether you can arrange your sticks into m piles of length n.
I'm sure there's a lot of literature on this problem, but if I were doing it for homework, I'd start by solving it on my own.
And I would start by trying to form one pile of length n. Then trying to form m-1 piles of length n with the remaining sticks...
The thing to be careful about with this approach is that you may form a wrong pile at any given time, so you'd need a way of backtracking and trying another combination. For example, suppose we have these sticks: 20 1 7 7 7 7 14 6 15 and are trying to form 4 piles of length 21. This is possible with the combination (20 1) (7 7 7) (7 14) (6 15), but if you start with (14 6 1), there's no solution that will give you 3x21 piles for the rest of the sticks. Now, I'm not sure if this indicates that 4x21 is not the answer. (The answer is, in fact, 2x42.) If that were the case, you would not run into this problem of "wrong" piles if you always started with the smaller number, i.e. tried 2x42 before trying 4x21. However, not being sure, I'd write code that would backtrack and try all the different combinations before giving up.
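A sketch of that backtracking (names and pruning choices are mine): fill piles stick by stick and undo on failure; skipping piles with identical current fill avoids retrying symmetric placements:

    def can_partition(sticks, m, target):
        if sum(sticks) != m * target:          # lengths must add up exactly
            return False
        sticks = sorted(sticks, reverse=True)  # big sticks first fail faster
        piles = [0] * m
        def place(i):
            if i == len(sticks):
                return True
            tried = set()
            for p in range(m):
                if piles[p] + sticks[i] <= target and piles[p] not in tried:
                    tried.add(piles[p])        # identical piles: try only one
                    piles[p] += sticks[i]
                    if place(i + 1):
                        return True
                    piles[p] -= sticks[i]      # backtrack
            return False
        return place(0)

    sticks = [20, 1, 7, 7, 7, 7, 14, 6, 15]
    print(can_partition(sticks, 4, 21))  # True: (20 1) (7 7 7) (7 14) (6 15)
    print(can_partition(sticks, 2, 42))  # True: (20 15 7) (1 7 7 7 14 6)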
I am not sure I understand the problem.
I assume each pile has to be the same length? That is, the sum of the stick lengths has to be the same in all piles?
We have here three piles. Where did this three come from? If it were possible to make 2 piles, which one would you choose? For example, if you only have six sticks of X length, would you make three piles each with two sticks or two piles, each with three?
I guess the brute force method is: try to make X piles, put every permutation/combination into the piles, and see if you end up with the same total length in each.
Would it help if you give unique names to each stick? In this case, you have 11-1, 7-1, 5-1, 5-2, 5-3, etc.
