I'm trying to figure out how to solve a problem that seems to be a tricky variation of a common algorithmic problem, but requires additional logic to handle specific requirements.
Given a list of coins and an amount, I need to count the total number of ways to make the given amount using an unlimited supply of the available coins (this is the classical change-making problem, https://en.wikipedia.org/wiki/Change-making_problem, easily solved using dynamic programming), subject to some additional requirements:
the extracted coins can be split into two sets of equal size (but not necessarily of equal sum)
the order of elements inside a set doesn't matter, but the order of the two sets does.
Examples
Amount of 6 euros and coins [1, 2]: 4 solutions
[(1,1), (2,2)]
[(1,1,1), (1,1,1)]
[(2,2), (1,1)]
[(1,2), (1,2)]
Amount of 8 euros and coins [1, 2, 6]: 7 solutions
[(1,1,2), (1,1,2)]
[(1,2,2), (1,1,1)]
[(1,1,1,1), (1,1,1,1)]
[(2), (6)]
[(1,1,1), (1,2,2)]
[(2,2), (2,2)]
[(6), (2)]
So far I have tried different approaches, but the only way I found was to collect all the possible solutions (using dynamic programming) and then filter out non-splittable solutions (those with an odd number of coins) and duplicates. I'm quite sure there is a combinatorial way to calculate the total number of duplicates, but I can't figure it out.
(The following method first enumerates partitions. My other answer generates the assignments in a bottom-up fashion.) If you'd like to count the splits of the coin exchange by coin count, excluding redundant assignments of coins to each party (for example, splitting 1 + 2 + 2 + 1 into two parts of equal cardinality can only be done as (1,1) | (2,2), (2,2) | (1,1) or (1,2) | (1,2), since element order within each part does not matter), we can rely on enumerating partitions where order is disregarded.
However, we would need to know the multiset of elements in each partition (or an aggregate of similar ones) in order to count the possibilities of dividing it in two. For example, to count the ways to split 1 + 2 + 2 + 1, we would first count how many of each coin we have:
Python code:
def partitions_with_even_number_of_parts_as_multiset(n, coins):
    results = []

    def C(m, n, s, p):
        # m: number of denominations still usable, n: remaining amount,
        # s: multiset counts so far, p: parity flag for the number of parts
        if n < 0 or m <= 0:
            return
        if n == 0:
            if not p:  # only keep partitions with an even number of parts
                results.append(s)
            return
        C(m - 1, n, s, p)  # skip denomination m-1 entirely
        _s = s[:]
        _s[m - 1] += 1     # use one more coin of denomination m-1
        C(m, n - coins[m - 1], _s, not p)

    C(len(coins), n, [0] * len(coins), False)
    return results
Output:
=> partitions_with_even_number_of_parts_as_multiset(6, [1,2,6])
=> [[6, 0, 0], [2, 2, 0]]
(the second entry, [2, 2, 0], represents two 1's, two 2's and no 6's)
Now since we are counting the ways to choose half of these, we need to find the coefficient of x^2 in the polynomial multiplication
(x^2 + x + 1) * (x^2 + x + 1) = ... 3x^2 ...
which represents the three ways to choose two from the multiset count [2,2]:
2,0 => 1,1
0,2 => 2,2
1,1 => 1,2
In Python, we can use numpy.polymul to multiply polynomial coefficients. Then we look up the appropriate coefficient in the result.
For example:
import numpy

def count_split_partitions_by_multiset_count(multiset):
    # multiply the polynomials (1 + x + ... + x^count), one per denomination
    coefficients = (multiset[0] + 1) * [1]
    for i in range(1, len(multiset)):
        coefficients = numpy.polymul(coefficients, (multiset[i] + 1) * [1])
    # the coefficient for picking exactly half of the coins
    return coefficients[sum(multiset) // 2]
Output:
=> count_split_partitions_by_multiset_count([2,2,0])
=> 3
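Putting the two functions together (a sketch, assuming both of the functions above are in scope) reproduces the totals from the question:

def count_all_splits(n, coins):
    # sum the split counts over all partitions with an even number of parts
    return sum(count_split_partitions_by_multiset_count(multiset)
               for multiset in partitions_with_even_number_of_parts_as_multiset(n, coins))

print(count_all_splits(6, [1, 2]))     # 4
print(count_all_splits(8, [1, 2, 6]))  # 7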
(Posted a similar answer here.)
Here is a table implementation and a little elaboration on algrid's beautiful answer. This produces an answer for f(500, [1, 2, 6, 12, 24, 48, 60]) in about 2 seconds.
The declaration C(n, k, S) = sum(C(n - s_i, k - 1, S[i:])) means adding up all the ways to reach the current sum, n, using k coins. Then, if we split n into all the ways it can be partitioned in two, we can just add up all the ways each of those parts can be made from the same number, k, of coins.
The beauty of fixing the subset of coins we choose from to a diminishing list means that any arbitrary combination of coins will only be counted once - it will be counted in the calculation where the leftmost coin in the combination is the first coin in our diminishing subset (assuming we order them in the same way). For example, the arbitrary subset [6, 24, 48], taken from [1, 2, 6, 12, 24, 48, 60], would only be counted in the summation for the subset [6, 12, 24, 48, 60] since the next subset, [12, 24, 48, 60] would not include 6 and the previous subset [2, 6, 12, 24, 48, 60] has at least one 2 coin.
Python code:
import time

def f(n, coins):
    t0 = time.time()
    min_coins = min(coins)
    # m[_n][k][i]: multisets of k coins summing to _n, drawn from coins[0..i]
    m = [[[0] * len(coins) for k in range(n // min_coins + 1)] for _n in range(n + 1)]
    # Initialize base case
    for i in range(len(coins)):
        m[0][0][i] = 1
    for i in range(len(coins)):
        for _i in range(i + 1):
            for _n in range(coins[_i], n + 1):
                for k in range(1, _n // min_coins + 1):
                    m[_n][k][i] += m[_n - coins[_i]][k - 1][_i]
    result = 0
    for a in range(1, n + 1):
        b = n - a
        for k in range(1, n // min_coins + 1):
            result = result + m[a][k][len(coins) - 1] * m[b][k][len(coins) - 1]
    total_time = time.time() - t0
    return (result, total_time)

print(f(500, [1, 2, 6, 12, 24, 48, 60]))
I found this problem in a programming forum Ohjelmointiputka:
https://www.ohjelmointiputka.net/postit/tehtava.php?tunnus=ahdruu and
https://www.ohjelmointiputka.net/postit/tehtava.php?tunnus=ahdruu2
Somebody said that there is a solution found by a computer, but I was unable to find a proof.
Prove that there is a matrix of 117 entries, filled with digits, from which one can read the squares of the numbers 1, 2, ..., 100.
Here "read" means that you fix a starting position and a direction (8 possibilities) and then move in that direction, concatenating the digits. For example, if you find the digits 1,0,0,0,0,4 consecutively, you have found the integer 100004, which contains the squares of 1, 2, 10, 100 and 20, since you can read off 1, 4, 100, 10000, and 400 (reversed) from that sequence.
But there are so many numbers to be found (100 square numbers, to be precise, or 81 if you remove those contained in another square number, with 312 digits in total) and so few cells in the matrix that the square numbers have to be packed very densely; finding such a matrix is difficult, at least for me.
I found that if there is such an m×n matrix, we may assume without loss of generality that m <= n. The matrix must therefore be of type 1x117, 3x39 or 9x13. But what kind of algorithm will find the matrix?
I have managed to write the program that checks whether a number can be placed on the board. But how can I implement the search algorithm?
# -*- coding: utf-8 -*-

# Returns -1 if the number cannot be placed, otherwise a score telling how
# good the placement is: the count of cells that already hold the matching
# digit (a bigger value of x is better).
def can_put_on_grid(grid, number, start_x, start_y, direction):
    # Per-direction x/y steps, kept consistent with end_coordinates below.
    A = [0, -1, 0, 1, -1, 1, 1, -1]   # x-step for directions 0-7
    B = [1, -1, -1, -1, 0, 1, 0, 1]   # y-step for directions 0-7
    # Check that the new number lies inside the grid.
    if start_x < 0 or start_x > len(grid[0]) - 1 or start_y < 0 or start_y > len(grid) - 1:
        return -1
    end = end_coordinates(number, start_x, start_y, direction)
    if end[0] < 0 or end[0] > len(grid[0]) - 1 or end[1] < 0 or end[1] > len(grid) - 1:
        return -1
    # Test that the new number does not clash with any previously placed digit.
    x = 0
    for i in range(len(number)):
        cell = grid[start_y + B[direction] * i][start_x + A[direction] * i]
        if cell not in ("X", number[i]):
            return -1
        if cell == number[i]:
            x += 1
    return x
def end_coordinates(number, start_x, start_y, direction):
end_x = None
end_y = None
l = len(number)
if direction in (1, 4, 7):
end_x = start_x - l + 1
if direction in (3, 6, 5):
end_x = start_x + l - 1
if direction in (2, 0):
end_x = start_x
if direction in (1, 2, 3):
end_y = start_y - l + 1
if direction in (7, 0, 5):
end_y = start_y + l - 1
if direction in (4, 6):
end_y = start_y
return (end_x, end_y)
if __name__ == "__main__":
A = [['X' for x in range(13)] for y in range(9)]
numbers = [str(i*i) for i in range(1, 101)]
directions = [0, 1, 2, 3, 4, 5, 6, 7]
for i in directions:
C = can_put_on_grid(A, "10000", 3, 5, i)
if C > -1:
print("One can put the number to the grid!")
exit(0)
I also think that brute-force search or best-first search is too slow. I think there might be a solution using simulated annealing, a genetic algorithm, or a bin-packing algorithm. I also wondered whether one could somehow apply Markov chains to find the grid. Unfortunately, those seem too hard for me to implement at my current skill level.
There is a program for that in https://github.com/minkkilaukku/square-packing/blob/master/sqPackMB.py . Just change M = 9, N = 13 on lines 20 and 21.
First off, I'm not even sure the terminology is right, as I haven't found anything similar (especially since I don't even know what keywords to use).
The problem:
There is a population of people, and I want to assign them into groups. I have a set of rules to give each assignation a score. I want to find the best one (or at least a very good one).
For example, with a population of four {A,B,C,D} and assigning to two groups of two, the possible assignations are:
{A,B},{C,D}
{A,C},{B,D}
{A,D},{B,C}
And for example, {B,A},{C,D} and {C,D},{A,B} are both the same as the first one (I don't care about the order inside the groups and the order of the groups themselves).
The number of people, the amount of groups and how many people fit in each group are all inputs.
My idea was to list each possible assignation, calculate its score, and keep track of the best one; that is, to brute-force it. Since the population can be big, I was thinking of going through them in a random order and returning the best one found when time runs out (probably when the user gets bored or thinks it is a good enough find). The population can vary from very small (the four listed) to really big (maybe 200+), so just trying random assignations without caring about repeats breaks down for the small ones, where brute force is feasible (plus I wouldn't know when to stop if I used plain random permutations).
The population is big enough that listing all the assignations in order to shuffle them doesn't fit into memory. So I need either a method to enumerate all the possible assignations in a random order, or a method that, given an index, generates the corresponding assignation; then I can shuffle an array of indices (the second would be better, because I could then easily distribute the work across multiple servers).
A simple recursive algorithm for generating these pairings is to pair the first element with each of the remaining elements, and for each of those couplings, recursively generate all the pairings of the remaining elements. For groups, generate all the groups made up of the first element and all the combinations of the remaining elements, then recurse for the remainders.
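For pairs, that recursion might look like the following Python sketch (larger groups generalize the same way):

def all_pairings(elems):
    # pair the first element with each remaining element,
    # then recursively pair whatever is left
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for pairing in all_pairings(remaining):
            yield [(first, partner)] + pairing

for pairing in all_pairings(['A', 'B', 'C', 'D']):
    print(pairing)  # prints the three pairings from the question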
You can compute how many possible sets of groups there are like this:
public static int numGroupingCombinations(int n, int groupSize)
{
if(n % groupSize != 0)
return 0; // n must be a multiple of groupSize
int count = 1;
while(n > groupSize)
{
count *= nCr(n - 1, groupSize - 1);
n -= groupSize;
}
return count;
}
public static int nCr(int n, int r)
{
int ret = 1;
for (int k = 0; k < r; k++) {
ret = ret * (n-k) / (k+1);
}
return ret;
}
You said you need either a method to enumerate all the possible assignations in a random order, or a method that, given an index, generates the corresponding assignation (the second being better, since you could then easily distribute the work across multiple servers).
To generate a grouping from an index, take the index modulo the number of possible combinations to choose which combination of items to group with the first element, and generate that combination from the result (see the getKthCombination function below). Then divide the index by that same number and recursively generate the rest of the set.
public static void generateGrouping(String[] elements, int groupSize, int start, int index)
{
if(elements.length % groupSize != 0)
return;
int remainingSize = elements.length - start;
if(remainingSize == 0)
{
// output the elements:
for(int i = 0; i < elements.length; i += groupSize)
{
System.out.print("[");
for(int j = 0; j < groupSize; j++)
System.out.print(((j==0)?"":",")+elements[i+j]);
System.out.print("]");
}
System.out.println("");
return;
}
int combinations = nCr(remainingSize - 1, groupSize - 1);
// decide which combination of remaining elements to pair the first element with:
int[] combination = getKthCombination(remainingSize - 1, groupSize - 1, index % combinations);
// swap elements into place
for(int i = 0; i < groupSize - 1; i++)
{
String temp = elements[start + 1 + i];
elements[start + 1 + i] = elements[start + 1 + combination[i]];
elements[start + 1 + combination[i]] = temp;
}
generateGrouping(elements, groupSize, start + groupSize, index / combinations);
// swap them back:
for(int i = groupSize - 2; i >= 0; i--)
{
String temp = elements[start + 1 + i];
elements[start + 1 + i] = elements[start + 1 + combination[i]];
elements[start + 1 + combination[i]] = temp;
}
}
public static void getKthCombination(int n, int r, int k, int[] c, int start, int offset)
{
if(r == 0)
return;
if(r == n)
{
for(int i = 0; i < r; i++)
c[start + i] = i + offset;
return;
}
int count = nCr(n - 1, r - 1);
if(k < count)
{
c[start] = offset;
getKthCombination(n-1, r-1, k, c, start + 1, offset + 1);
return;
}
getKthCombination(n-1, r, k-count, c, start, offset + 1);
}
public static int[] getKthCombination(int n, int r, int k)
{
int[] c = new int[r];
getKthCombination(n, r, k, c, 0, 0);
return c;
}
The start parameter is just how far along the list you are, so pass zero when calling the function at the top level. The function could easily be rewritten to be iterative. You could also pass an array of indices instead of an array of objects that you want to group, if swapping the objects is a large overhead.
What you call "assignations" are partitions with a fixed number of equally sized parts. Well, mostly. You didn't specify what should happen if (# of groups) * (size of each group) is less than or greater than your population size.
Generating every possible partition in a non-specific order is not too difficult, but it is only good for small populations or for filtering and finding any partition that matches some independent criteria. If you need to optimize or minimize something, you'll end up looking at the whole set of partitions, which may not be feasible.
Based on the description of your actual problem, you want to read up on local search and optimization algorithms, of which the aforementioned simulated annealing is one such technique.
With all that said, here is a simple recursive Python function that generates fixed-length partitions with equal-sized parts in no particular order. It is a specialization of my answer to a similar partition problem, and that answer is itself a specialization of this answer. It should be fairly straightforward to translate into JavaScript (with ES6 generators).
def special_partitions(population, num_groups, group_size):
"""Yields all partitions with a fixed number of equally sized parts.
Each yielded partition is a list of length `num_groups`,
and each part a tuple of length `group_size`.
"""
assert len(population) == num_groups * group_size
groups = [] # a list of lists, currently empty
def assign(i):
if i >= len(population):
yield list(map(tuple, groups))
else:
# try to assign to an existing group, if possible
for group in groups:
if len(group) < group_size:
group.append(population[i])
yield from assign(i + 1)
group.pop()
# assign to an entirely new group, if possible
if len(groups) < num_groups:
groups.append([population[i]])
yield from assign(i + 1)
groups.pop()
yield from assign(0)
for partition in special_partitions('ABCD', 2, 2):
print(partition)
print()
for partition in special_partitions('ABCDEF', 2, 3):
print(partition)
When executed, this prints:
[('A', 'B'), ('C', 'D')]
[('A', 'C'), ('B', 'D')]
[('A', 'D'), ('B', 'C')]
[('A', 'B', 'C'), ('D', 'E', 'F')]
[('A', 'B', 'D'), ('C', 'E', 'F')]
[('A', 'B', 'E'), ('C', 'D', 'F')]
[('A', 'B', 'F'), ('C', 'D', 'E')]
[('A', 'C', 'D'), ('B', 'E', 'F')]
[('A', 'C', 'E'), ('B', 'D', 'F')]
[('A', 'C', 'F'), ('B', 'D', 'E')]
[('A', 'D', 'E'), ('B', 'C', 'F')]
[('A', 'D', 'F'), ('B', 'C', 'E')]
[('A', 'E', 'F'), ('B', 'C', 'D')]
Let's say we have a total of N elements that we want to organize in G groups of E (with G*E = N). Neither the order of the groups nor the order of the elements within groups matter. The end goal is to produce every solution in a random order, knowing that we cannot store every solution at once.
First, let's think about how to produce one solution. Since order doesn't matter, we can normalize any solution by sorting the elements within groups as well as the groups themselves, by their first element.
For instance, if we consider the population {A, B, C, D}, with N = 4, G = 2, E = 2, then the solution {B,D}, {C,A} can be normalized as {A,C}, {B,D}. The elements are sorted within each group (A before C), and the groups are sorted (A before B).
When the solutions are normalized, the first element of the first group is always the first element of the population. The second element is one of the N-1 remaining, the third element is one of the N-2 remaining, and so on, except these elements must remain sorted. So there are (N-1)!/((N-E)!*(E-1)!) possibilities for the first group.
Similarly, the first element of each subsequent group is fixed: it is the first of the elements remaining after the previous groups have been formed. Thus, the number of possibilities for the (n+1)th group (n from 0 to G-1) is (N-nE-1)!/((N-(n+1)E)!*(E-1)!) = ((G-n)E-1)!/(((G-n-1)E)!*(E-1)!).
This gives us one possible way of indexing a solution. The index is not a single integer, but rather an array of G integers, the integer n (still from 0 to G-1) being in the range 1 to (N-nE-1)!/((N-nE-E)!*(E-1)!), and representing the group n (or "(n+1)th group") of the solution. This is easy to produce randomly and to check for duplicates.
The last thing we need is a way to produce a group from its corresponding integer. We need to choose E-1 elements from the N-nE-1 remaining. You could imagine listing every combination and picking the one at the given index; of course, this can be done without generating every combination: see this question.
For curiosity, the total number of solutions is (GE)!/(G!*(E!)^G).
In your example, it is (2*2)!/(2!*(2!)^2) = 3.
For N = 200 and E = 2, there are 6.7e186 solutions.
For N = 200 and E = 5, there are 6.6e243 solutions (the maximum I found for 200 elements).
Additionally, for N = 200 and E > 13, the number of possibilities for the first group is greater than 2^64 (so it cannot be stored in a 64-bit integer), which is problematic for representing an index. But as long as you don't need groups with more than 13 elements, you can use arrays of 64-bit integers as indices.
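As an illustration, here is a Python sketch of the unranking described above (the function names are mine; math.comb needs Python 3.8+, and the population elements are assumed distinct):

from math import comb

def kth_combination(k, items, r):
    # the k-th r-subset of items, in lexicographic order
    combo = []
    i = 0
    while r > 0:
        c = comb(len(items) - i - 1, r - 1)
        if k < c:
            combo.append(items[i])
            r -= 1
        else:
            k -= c
        i += 1
    return combo

def solution_from_index(population, group_size, indices):
    # indices[n] picks the companions of group n's fixed first element;
    # it ranges over 0 .. C(N-nE-1, E-1) - 1
    remaining = sorted(population)
    solution = []
    for k in indices:
        leader, rest = remaining[0], remaining[1:]
        companions = kth_combination(k, rest, group_size - 1)
        solution.append(tuple([leader] + companions))
        remaining = [x for x in rest if x not in companions]
    return solution

print(solution_from_index('ABCD', 2, [1, 0]))  # [('A', 'C'), ('B', 'D')]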
Perhaps a simulated annealing approach might work. You can start with a non-optimal initial solution and iterate using heuristics to improve.
Your scoring criteria may help you choose the initial solution, e.g. make the best scoring first group you can, then with what's left make the best scoring second group, and so on.
Good choices of "neighboring states" may be implied by your scoring criteria, but at the very least, you could consider two states neighboring if they differ by a single swap.
So the iteration portion of the algorithm would be to try a bunch of randomly sampled swaps, accepting or rejecting them according to the annealing schedule.
I'm hoping you can find a better choice of the adjacent states! That is, I'm hoping you can find better rules for iteratively improving based on your scoring criteria.
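For concreteness, a minimal annealing loop might look like this sketch (score and neighbor stand in for your scoring rules and your swap move; higher scores are assumed better):

import math
import random

def anneal(initial, score, neighbor, steps=100000, t0=1.0, t1=0.001):
    best = current = initial
    best_score = current_score = score(current)
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)  # geometric cooling schedule
        candidate = neighbor(current)         # e.g. swap two members between groups
        candidate_score = score(candidate)
        delta = candidate_score - current_score
        # always accept improvements; accept worse moves with probability e^(delta/t)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current, current_score = candidate, candidate_score
            if current_score > best_score:
                best, best_score = current, current_score
    return best, best_score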
If you have a sufficiently large population that you can't fit all assignations in memory and are unlikely to ever test all possible assignations, then the simplest method will just be to choose test assignations randomly. For example:
repeat
randomly shuffle population
put 1st n/2 members of the shuffled pop into assig1 and 2nd n/2 into assig2
score assignation and record it if best so far
until bored
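In Python, that loop might look like the following sketch (score stands in for your scoring rules):

import random
import time

def random_search(population, score, seconds=10.0):
    # shuffle-and-score loop: keep the best assignation seen so far
    deadline = time.time() + seconds
    pop = list(population)
    best, best_score = None, float('-inf')
    while time.time() < deadline:  # "until bored"
        random.shuffle(pop)
        half = len(pop) // 2
        assignation = (tuple(sorted(pop[:half])), tuple(sorted(pop[half:])))
        s = score(assignation)
        if s > best_score:
            best, best_score = assignation, s
    return best, best_score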
If you have a large population, little efficiency is lost to duplicated tests, as it is unlikely that you would chance upon the same assignation twice.
Depending on your scoring rules, it may be more efficient to choose the next assignation to be tested by, for example, swapping a pair of members in the best assignation found so far, but you haven't provided enough information to determine whether that is the case.
Here is an approach targeting your optimization problem (and ignoring your permutation-based approach).
I formulate the problem as a mixed-integer program and use a specialized solver to calculate good solutions.
As your problem is not fully specified, it might need some modifications. But the general message is: this approach will be hard to beat!
Code
import numpy as np
from cvxpy import *
""" Parameters """
N_POPULATION = 50
GROUPSIZES = [3, 6, 12, 12, 17]
assert sum(GROUPSIZES) == N_POPULATION
N_GROUPS = len(GROUPSIZES)
OBJ_FACTORS = [0.4, 0.1, 0.15, 0.35] # age is the most important
""" Create fake data """
age_vector = np.clip(np.random.normal(loc=35.0, scale=10.0, size=N_POPULATION).astype(int), 0, np.inf)
height_vector = np.clip(np.random.normal(loc=180.0, scale=15.0, size=N_POPULATION).astype(int), 0, np.inf)
weight_vector = np.clip(np.random.normal(loc=85, scale=20, size=N_POPULATION).astype(int), 0, np.inf)
skill_vector = np.random.randint(0, 100, N_POPULATION)
""" Calculate a-priori stats """
age_mean, height_mean, weight_mean, skill_mean = np.mean(age_vector), np.mean(height_vector), \
np.mean(weight_vector), np.mean(skill_vector)
""" Build optimization-model """
# Variables
X = Bool(N_POPULATION, N_GROUPS) # 1 if part of group
D = Variable(4, N_GROUPS) # aux-var for deviation-norm
# Constraints
constraints = []
# (1) each person is exactly in one group
for p in range(N_POPULATION):
constraints.append(sum_entries(X[p, :]) == 1)
# (2) each group has exactly n (a-priori known) members
for g_ind, g_size in enumerate(GROUPSIZES):
constraints.append(sum_entries(X[:, g_ind]) == g_size)
# Objective: minimize deviation from global-statistics within each group
# (ugly code; could be improved a lot!)
group_deviations = [[], [], [], []] # age, height, weight, skill
for g_ind, g_size in enumerate(GROUPSIZES):
group_deviations[0].append((sum_entries(mul_elemwise(age_vector, X[:, g_ind])) / g_size) - age_mean)
group_deviations[1].append((sum_entries(mul_elemwise(height_vector, X[:, g_ind])) / g_size) - height_mean)
group_deviations[2].append((sum_entries(mul_elemwise(weight_vector, X[:, g_ind])) / g_size) - weight_mean)
group_deviations[3].append((sum_entries(mul_elemwise(skill_vector, X[:, g_ind])) / g_size) - skill_mean)
for i in range(4):
for g in range(N_GROUPS):
constraints.append(D[i,g] >= abs(group_deviations[i][g]))
obj_parts = [sum_entries(OBJ_FACTORS[i] * D[i, :]) for i in range(4)]
objective = Minimize(sum(obj_parts))
""" Build optimization-problem & solve """
problem = Problem(objective, constraints)
problem.solve(solver=GUROBI, verbose=True, TimeLimit=120) # might need to use non-commercial solver here
print('Min-objective: ', problem.value)
""" Evaluate solution """
filled_groups = [[] for g in range(N_GROUPS)]
for g_ind, g_size in enumerate(GROUPSIZES):
for p in range(N_POPULATION):
if np.isclose(X[p, g_ind].value, 1.0):
filled_groups[g_ind].append(p)
for g_ind, g_size in enumerate(GROUPSIZES):
print('Group: ', g_ind, ' of size: ', g_size)
print(' ' + str(filled_groups[g_ind]))
group_stats = []
for g in range(N_GROUPS):
age_mean_in_group = age_vector[filled_groups[g]].mean()
height_mean_in_group = height_vector[filled_groups[g]].mean()
weight_mean_in_group = weight_vector[filled_groups[g]].mean()
skill_mean_in_group = skill_vector[filled_groups[g]].mean()
group_stats.append((age_mean_in_group, height_mean_in_group, weight_mean_in_group, skill_mean_in_group))
print('group-assignment solution means: ')
for g in range(N_GROUPS):
print(np.round(group_stats[g], 1))
""" Compare with input """
input_data = np.vstack((age_vector, height_vector, weight_vector, skill_vector))
print('input-means')
print(age_mean, height_mean, weight_mean, skill_mean)
print('input-data')
print(input_data)
Output (time-limit of 2 minutes; commercial solver)
Time limit reached
Best objective 9.612058823514e-01, best bound 4.784117647059e-01, gap 50.2280%
('Min-objective: ', 0.961205882351435)
('Group: ', 0, ' of size: ', 3)
[16, 20, 27]
('Group: ', 1, ' of size: ', 6)
[26, 32, 34, 45, 47, 49]
('Group: ', 2, ' of size: ', 12)
[0, 6, 10, 12, 15, 21, 24, 30, 38, 42, 43, 48]
('Group: ', 3, ' of size: ', 12)
[2, 3, 13, 17, 19, 22, 23, 25, 31, 36, 37, 40]
('Group: ', 4, ' of size: ', 17)
[1, 4, 5, 7, 8, 9, 11, 14, 18, 28, 29, 33, 35, 39, 41, 44, 46]
group-assignment solution means:
[ 33.3 179.3 83.7 49. ]
[ 33.8 178.2 84.3 49.2]
[ 33.9 178.7 83.8 49.1]
[ 33.8 179.1 84.1 49.2]
[ 34. 179.6 84.7 49. ]
input-means
(33.859999999999999, 179.06, 84.239999999999995, 49.100000000000001)
input-data
[[ 22. 35. 28. 32. 41. 26. 25. 37. 32. 26. 36. 36.
27. 34. 38. 38. 38. 47. 35. 35. 34. 30. 38. 34.
31. 21. 25. 28. 22. 40. 30. 18. 32. 46. 38. 38.
49. 20. 53. 32. 49. 44. 44. 42. 29. 39. 21. 36.
29. 33.]
[ 161. 158. 177. 195. 197. 206. 169. 182. 182. 198. 165. 185.
171. 175. 176. 176. 172. 196. 186. 172. 184. 198. 172. 162.
171. 175. 178. 182. 163. 176. 192. 182. 187. 161. 158. 191.
182. 164. 178. 174. 197. 156. 176. 196. 170. 197. 192. 171.
191. 178.]
[ 85. 103. 99. 93. 71. 109. 63. 87. 60. 94. 48. 122.
56. 84. 69. 162. 104. 71. 92. 97. 101. 66. 58. 69.
88. 69. 80. 46. 74. 61. 25. 74. 59. 69. 112. 82.
104. 62. 98. 84. 129. 71. 98. 107. 111. 117. 81. 74.
110. 64.]
[ 81. 67. 49. 74. 65. 93. 25. 7. 99. 34. 37. 1.
25. 1. 96. 36. 39. 41. 33. 28. 17. 95. 11. 80.
27. 78. 97. 91. 77. 88. 29. 54. 16. 67. 26. 13.
31. 57. 84. 3. 87. 7. 99. 35. 12. 44. 71. 43.
16. 69.]]
Solution remarks
This solution looks quite nice (regarding mean deviation) and it only took 2 minutes (we decided on the time limit a priori)
We also got bounds: 0.961 is our solution, and we know the optimum can't be lower than 0.478
Reproducibility
The code uses numpy and cvxpy
A commercial solver was used
You might need to use a non-commercial MIP solver (it should support a time limit for early abort, returning the current best solution)
The valid open-source MIP-solvers supported in cvxpy are: cbc (no chance of setting time-limits for now) and glpk (check the docs for time-limit support)
Model decisions
The code uses L1-norm penalization, which results in an MIP-problem
Depending on your problem, it might be wise to use L2-norm penalization (one big deviation hurts more than many smaller ones), which will result in a harder problem (MIQP / MISOCP)
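For the L2 variant, the auxiliary variable D and its abs-constraints can be dropped and the deviations squared directly; a sketch against the same cvxpy API as above:

# replaces the D-variable, the abs-constraints and the L1 objective (MIQP)
obj_parts = [OBJ_FACTORS[i] * sum(dev ** 2 for dev in group_deviations[i])
             for i in range(4)]
objective = Minimize(sum(obj_parts))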
Say you have a vertical game board of length n (the number of spaces), and a three-sided die with the options: go forward one, stay, or go back one. If you go below the first space or beyond the last one, the game is invalid. The only valid move once you reach the end of the board is "stay". Given an exact number of die rolls t, is it possible to algorithmically work out the number of unique roll sequences that result in a winning game?
So far I've tried producing a list of every possible combination of (-1, 0, 1) for the given number of die rolls and going through the list to check which sequences add up to the length of the board while meeting all the requirements for a valid game. But this is impractical for more than about 20 die rolls.
For example:
t=1, n=2; Output=1
t=3, n=2; Output=3
You can use a dynamic programming approach. A sketch of the recurrence is:
M(0, 1) = 1
M(t, n) = M(t-1, n-1) + M(t-1, n) + M(t-1, n+1)
Of course you have to handle the border cases (like going off the board, or not being allowed to leave the end of the board once reached), but that is easy to code.
Here's some Python code:
def solve(N, T):
    M, M2 = [0] * N, [0] * N
    M[0] = 1
    for i in range(T):
        M, M2 = M2, M
        for j in range(N):
            # arrive from j-1 (forward), j (stay), j+1 (back; not from the last space)
            M[j] = (j > 0 and M2[j - 1]) + M2[j] + (j + 1 < N - 1 and M2[j + 1])
    return M[N - 1]

print(solve(3, 2))   # 1
print(solve(2, 1))   # 1
print(solve(2, 3))   # 3
print(solve(5, 20))  # 19535230
Bonus: a fancy "one-liner" with a list comprehension and reduce

from functools import reduce

def solve(N, T):
    return reduce(
        lambda M, _: [(j > 0 and M[j - 1]) + M[j] + (j < N - 2 and M[j + 1]) for j in range(N)],
        range(T), [1] + [0] * N)[-1]
Let M[i, j] be an N by N matrix with M[i, j] = 1 if |i-j| <= 1 and 0 otherwise (plus the special case for the "stay" rule: M[N, N-1] = 0).
This matrix counts paths of length 1 from position i to position j.
To find paths of length t, simply raise M to the t'th power. This can be performed efficiently by linear algebra packages.
The solution can be read off: M^t[1, N].
For example, computing paths of length 20 on a board of size 5 in an interactive Python session:
>>> import numpy
>>> M = numpy.matrix('1 1 0 0 0;1 1 1 0 0; 0 1 1 1 0; 0 0 1 1 1; 0 0 0 0 1')
>>> M
matrix([[1, 1, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 1, 1, 1, 0],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 1]])
>>> M ** 20
matrix([[31628466, 51170460, 51163695, 31617520, 19535230],
[51170460, 82792161, 82787980, 51163695, 31617520],
[51163695, 82787980, 82792161, 51170460, 31628465],
[31617520, 51163695, 51170460, 31628466, 19552940],
[ 0, 0, 0, 0, 1]])
So there's M^20[1, 5], or 19535230 paths of length 20 from start to finish on a board of size 5.
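For a general board, the matrix can be built programmatically; here is a sketch with numpy (note that int64 will overflow for very large t, so treat big results with care):

import numpy as np

def count_paths(n, t):
    # tridiagonal transition matrix: stay/forward/back from each space
    M = np.eye(n, dtype=np.int64)
    for i in range(n - 1):
        M[i, i + 1] = 1   # forward one
        M[i + 1, i] = 1   # back one
    M[n - 1, n - 2] = 0   # only "stay" is allowed from the last space
    return np.linalg.matrix_power(M, t)[0, n - 1]

print(count_paths(5, 20))  # 19535230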
Try a backtracking algorithm. Recursively "dive down" to depth t and only continue with die values that could still lead to a valid state, probably by passing a "remaining budget" around.
For example, n=10, t=20: when you have reached depth 10 of 20 and your budget is still 10 (i.e. the steps forward and backward so far have cancelled out), the remaining recursion steps down to depth t would skip the 0 and -1 possibilities, because they could no longer lead to a valid final state.
A backtracking algorithm for this case is still very heavy (exponential), but better than first blowing up a bubble with all possibilities and then filtering.
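A sketch of that backtracking idea in Python, with the "remaining budget" pruning:

def count_games(n, t):
    def rec(pos, rolls_left):
        if pos == n - 1:
            return 1  # only "stay" is allowed from here on: one continuation
        if pos + rolls_left < n - 1:
            return 0  # the last space can no longer be reached: prune
        total = 0
        for step in (-1, 0, 1):
            if pos + step >= 0:
                total += rec(pos + step, rolls_left - 1)
        return total
    return rec(0, t)

print(count_games(2, 3))  # 3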
Since zeros can be added anywhere, we'll multiply their placements by the number of arrangements of the (-1)'s. The X's below mark the five spaces of a board (with n = 5 in mind), and the gaps between them are numbered:
X (space 1) X (space 2) X (space 3) X (space 4) X
(-1)'s can only appear in spaces 1, 2 or 3, not in space 4, since once the last space is reached the only valid move is "stay". I got help with the mathematical recurrence that counts the number of ways to place the minus ones.
JavaScript code:
function C(n, k){
  if (k == 0 || n == k) return 1;
  var p = n;
  for (var i = 2; i <= k; i++) p *= (n + 1 - i) / i;
  return p;
}
function sumCoefficients(arr,cs){
var s = 0, i = -1;
while (arr[++i]){
s += cs[i] * arr[i];
}
return s;
}
function f(n,t){
  var numMinusOnes = (t - (n-1)) >> 1,
      result = C(t,n-1),
      numPlaces = n - 2,
      cs = [];
for (var i=1; numPlaces-i>=i-1; i++){
cs.push(-Math.pow(-1,i) * C(numPlaces + 1 - i,i));
}
var As = new Array(cs.length),
An;
As[0] = 1;
for (var m=1; m<=numMinusOnes; m++){
var zeros = t - (n-1) - 2*m;
An = sumCoefficients(As,cs);
As.unshift(An);
As.pop();
result += An * C(zeros + 2*m + n-1,zeros);
}
return result;
}
Output:
console.log(f(5,20))
19535230
I need to write an algorithm for a given problem: You have infinite pennies, nickels, dimes, and quarters. Write a class method that will output all combinations of coins such that the total is 99 cents.
It seems like a permutation nPr problem. Any algorithm for it?
I think this problem is most easily answered using recursion with a table of denominations
{5000, 2000, ... 1} // $50's down to one penny
You would start with:
WaysToMakeChange(10000, 0) // i.e. $100; the highest denomination index is 0 ($50)
WaysToMakeChange(amount, maxdenomindex) would calculate the count using 0 or more coins of the maxdenom; the recurrence is something like
WaysToMakeChange(amount - usedbymaxdenom, maxdenomindex + 1)
I programmed this and it can be optimized in many ways:
1) Multithreading.
2) Caching. This is very important: because of the way the algorithm works, WaysToMakeChange(m, n) will be called many times with the same arguments.
For example, changing $100 can be done by:
1 $50 + 0 $20's + 0 $10's + ways to make $50 with highest denomination $5 (i.e. WaysToMakeChange(5000, index for $5))
0 $50 + 2 $20's + 1 $10 + ways to make $50 with highest denomination $5 (i.e. WaysToMakeChange(5000, index for $5))
Clearly WaysToMakeChange(5000, index for $5) can be cached so that the subsequent call does not need to be recomputed.
3) Short-circuiting the lowest recursion.
Suppose static const int denom[] = {5000, 2000, 1000, 500, 200, 100, 50, 25, 10, 5, 1};
The first test for WaysToMakeChange(int total, int coinIndex) should be something like:
if( denom[_countof(denom)-1] == 1 && coinIndex == _countof(denom) - 2){
return total / denom[_countof(denom)-2] + 1;
}
What does this mean? If your lowest denomination is 1, then you only have to recurse as far as the second-lowest denomination (say, a nickel): there are 1 + total/(second-lowest denom) ways left. For example:
49c -> 9 nickels + 4 pennies, 8 nickels + 9 pennies, ..., 0 nickels + 49 pennies: 1 + 49/5 = 10 ways
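Combining the recurrence with remarks 2 and 3, a memoized Python sketch (the names are mine):

from functools import lru_cache

denom = (5000, 2000, 1000, 500, 200, 100, 50, 25, 10, 5, 1)  # $50 down to 1c

@lru_cache(maxsize=None)  # remark 2: cache repeated (total, index) calls
def ways_to_make_change(total, max_denom_index):
    if total == 0:
        return 1
    if max_denom_index == len(denom) - 1:
        return 1  # only pennies remain: exactly one way
    # remark 3: short-circuit at the second-lowest denomination
    if denom[-1] == 1 and max_denom_index == len(denom) - 2:
        return total // denom[max_denom_index] + 1
    count, used = 0, 0
    while used <= total:  # use 0 or more coins of the current denomination
        count += ways_to_make_change(total - used, max_denom_index + 1)
        used += denom[max_denom_index]
    return count

print(ways_to_make_change(99, 7))  # 99 cents from quarters on down: 213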
The easiest way is probably to spend a few moments thinking about the problem. There is a relatively nice, recursive, algorithm that lends itself neatly to either memoization or reworking into a dynamic programming solution.
This is a classic dynamic programming problem. You can read about it here:
http://www.algorithmist.com/index.php/Coin_Change
The Python code is:
S = [0, 1, 5, 10, 25]  # 1-indexed, sorted denominations (S[0] is a placeholder)

def count(n, m):
    if n == 0:
        return 1
    if n < 0:
        return 0
    if m <= 0 and n >= 1:
        return 0
    return count(n, m - 1) + count(n - S[m], m)

print(count(99, len(S) - 1))  # 213

Here S[m] gives the value of the m-th denomination; S is a sorted array of denominations, indexed from 1.
This problem looks like a Diophantine equation: for a*x + b*y + ... = n, find non-negative integer solutions. The simplest, though not the most elegant, solution is an iterative one (shown in Python; note that I skip the variable l because it resembles the number 1):

dioph_combinations = list()
for i in range(0, 100, 25):                     # cents contributed by quarters
    for j in range(0, 100 - i, 10):             # dimes (bounds exclusive, hence 100)
        for k in range(0, 100 - i - j, 5):      # nickels
            for m in range(0, 100 - i - j - k, 1):  # pennies
                if i + j + k + m == 99:
                    dioph_combinations.append((i // 25, j // 10, k // 5, m))
The resulting list dioph_combinations will contain the possible combinations.