Discrete optimization algorithm

I'm trying to decide on the best approach for my problem, which is as follows:
I have a set of objects (about 3k-5k) which I want to uniquely assign to about 10 groups (1 group per object).
Each object has a set of grades corresponding with how well it fits within each group.
Each group has a capacity of objects it can manage (the constraints).
My goal is to maximize the sum of grades my assignments receive.
For example, let's say I have 3 objects (o1, o2, o3) and 2 groups (g1,g2) with a cap. of 1 object each.
Now assume the grades are:
o1: g1=11, g2=8
o2: g1=10, g2=5
o3: g1=5, g2=6
In that case, for the optimal result g1 should receive o2, and g2 should receive o1, yielding a total of 10+8=18 points.
Note that the number of objects can either exceed the sum of quotas (e.g. leaving o3 as a "leftover") or fall short of filling the quotas.
How should I address this problem (Travelling Salesman, some sort of weighted knapsack, etc.)? How long would brute-forcing it take on a regular computer? Are there any standard tools, such as the linprog function in Matlab, that support this sort of problem?

It can be solved with a min cost flow algorithm.
The graph can look the following way:
It should be bipartite. The left part represents objects (one vertex per object). The right part represents groups (one vertex per group). There is an edge from each vertex in the left part to each vertex in the right part with capacity = 1 and cost = -grade for this pair. There is also an edge from the source vertex to each vertex in the left part with capacity = 1 and cost = 0, and there is an edge from each vertex in the right part to the sink vertex (the sink and source are two additional vertices) with capacity = the constraint for this group and cost = 0.
The answer is the negative of the cheapest flow cost from the source to the sink.
It is possible to implement it with O(N^2 * M * log(N + M)) time complexity (using Dijkstra's algorithm with potentials), where N is the number of objects and M is the number of groups.
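If you would rather not implement the flow algorithm yourself, here is a minimal sketch of this construction using Python's networkx (my own illustration, not part of the answer; it assumes networkx is installed, and it relies on the grades being nonnegative so that pushing the maximum flow loses nothing):
import networkx as nx

grades = {("o1", "g1"): 11, ("o1", "g2"): 8,
          ("o2", "g1"): 10, ("o2", "g2"): 5,
          ("o3", "g1"): 5,  ("o3", "g2"): 6}
capacity = {"g1": 1, "g2": 1}

G = nx.DiGraph()
for (obj, grp), grade in grades.items():
    G.add_edge(obj, grp, capacity=1, weight=-grade)   # cost = -grade
for obj in {o for o, _ in grades}:
    G.add_edge("source", obj, capacity=1, weight=0)
for grp, cap in capacity.items():
    G.add_edge(grp, "sink", capacity=cap, weight=0)

flow = nx.max_flow_min_cost(G, "source", "sink")
print(-nx.cost_of_flow(G, flow))   # 18: negate the cheapest flow cost to get the best total grade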

This can be solved with an integer program. Binary variables x_{ij} state if object i is assigned to group j. The objective maximizes \sum_{i,j} s_{ij} x_{ij}, where s_{ij} is the score associated with assigning object i to group j. You have two types of constraints:
\sum_i x_{ij} <= c_j for all j, the capacity constraints for groups
\sum_j x_{ij} <= 1 for all i, limiting objects to be assigned to at most one group
Here's how you would implement it in R -- the lp function from the lpSolve package is quite similar to the linprog function in Matlab.
library(lpSolve)
# Score matrix
S <- matrix(c(11, 10, 5, 8, 5, 6), nrow=3)
# Capacity vector
cvec <- c(1, 1)
# Helper function to construct constraint matrices
unit.vec <- function(pos, n) {
    ret <- rep(0, n)
    ret[pos] <- 1
    ret
}
# Capacity constraints
cap <- t(sapply(1:ncol(S), function(j) rep(unit.vec(j, ncol(S)), nrow(S))))
# Object assignment constraints
obj <- t(sapply(1:nrow(S), function(i) rep(unit.vec(i, nrow(S)), each=ncol(S))))
# Solve the LP
res <- lp(direction="max",
          objective.in=as.vector(t(S)),
          const.mat=rbind(cap, obj),
          const.dir="<=",
          const.rhs=c(cvec, rep(1, nrow(S))),
          all.bin=TRUE)
# Grab assignments and objective
sln <- t(matrix(res$solution, nrow=ncol(S)))
apply(sln, 1, function(x) ifelse(sum(x) > 0.999, which(x == 1), NA))
# [1] 2 1 NA
res$objval
# [1] 18
Although this is modeled with binary variables, it will solve quite efficiently assuming integral capacities.


Queries for minimum fuel needed to travel from U to V

Question:
Given a tree with N nodes.
Each edge of the tree has:
D : the length of the edge
T : the gold you need to pay to go through that edge (the gold is paid before traversing the edge)
When moving through an edge, if you're carrying X gold, you will need X*D fuel.
There are 2 types of queries:
u, v: find the fuel needed to transfer G golds from u to v (G is fixed among all queries)
u, v, x: update T of edge {u,v} to x ({u, v} is guaranteed to be in the tree)
Constraints:
2 ≤ N ≤ 100,000
1 ≤ Q ≤ 100,000
1 ≤ Ai, Bi ≤ N
1 ≤ D, T, G ≤ 10^9
Example:
N = 6, G = 2. (The tree's edges, which also appear in the code example below: (6,4) with D=1, T=4; (5,4) with D=2, T=2; (4,2) with D=2, T=3; (3,2) with D=1, T=2; (1,2) with D=2, T=1.)
Take query 1 with u = 3 and v = 6, for example. First, you start at 3 with 11 gold, pay 2, leaving 9, and go to node 2 with 9*1 = 9 fuel. Next, we pay 3 gold, leaving 6, and go to node 4 with 6*2 = 12 fuel. Finally, we pay 4, leaving 2 gold, and go to node 6 with 2*1 = 2 fuel. So the fuel needed would be 9 + 12 + 2 = 23.
So the answer to the query u = 3, v = 6 would be 23.
The second query is just updating T of the edge so I think there's no need for explanation.
My take
I was only able to solve the problem in O(N*Q). Since it's a tree, there's only 1 path from u to v, so for each query, I do a DFS to find the fuel needed to go from u to v. Here's the code for that subtask: https://ideone.com/SyINTQ
For the special case where all T are 0, we just need to find the length from u to v and multiply it by G. The length from u to v can easily be found using a distance array and LCA. I think this could be a hint for the proper solution.
Is there a way to do the queries in O(log N) or better?
P/S: Please comment if anything needs to be clarified, and sorry for my bad English.
This answer will explain my matrix group comment in detail and then
sketch the standard data structures needed to make it work.
Let’s suppose that we’re carrying Gold and have burned Fuel so far.
If we traverse an edge with parameters Distance, Toll, then the
effect is
Gold -= Toll
Fuel += Gold * Distance,
or as a functional program,
Gold' = Gold - Toll
Fuel' = Fuel + Gold' * Distance.
= Fuel + Gold * Distance - Toll * Distance.
The latter code fragment defines what mathematicians call an action:
each Distance, Toll gives rise to a function from Gold, Fuel to
Gold, Fuel.
Now, whenever we have two functions from a domain to that same domain,
we can compose them (apply one after the other):
Gold' = Gold - Toll1
Fuel' = Fuel + Gold' * Distance1,
Gold'' = Gold' - Toll2
Fuel'' = Fuel' + Gold'' * Distance2.
The point of this math is that we can expand the definitions:
Gold'' = Gold - Toll1 - Toll2
= Gold - (Toll1 + Toll2),
Fuel'' = Fuel' + (Gold - (Toll1 + Toll2)) * Distance2
= Fuel + (Gold - Toll1) * Distance1 + (Gold - (Toll1 + Toll2)) * Distance2
= Fuel + Gold * (Distance1 + Distance2) - (Toll1 * Distance1 + (Toll1 + Toll2 ) * Distance2).
I’ve tried to express Fuel'' in the same form as before: the
composition has “Distance” Distance1 + Distance2 and “Toll”
Toll1 + Toll2, but the last term doesn’t fit the pattern. What we can
do, however, is add another parameter, FuelSaved and define it to be
Toll * Distance for each of the input edges. The generalized update
rule is
Gold' = Gold - Toll
Fuel' = Fuel + Gold * Distance - FuelSaved.
I’ll let you work out the generalized composition rule for
Distance1, Toll1, FuelSaved1 and Distance2, Toll2, FuelSaved2.
Suffice it to say, we can embed Gold, Fuel as a column vector
{1, Gold, Fuel}, and parameters Distance, Toll, FuelSaved as a unit
lower triangular matrix
{{1, 0, 0}, {-Toll, 1, 0}, {-FuelSaved, Distance, 1}}. Then
composition is matrix multiplication.
Now, so far we only have a semigroup. I could take it from here with
data structures, but they’re more complicated when we don’t have an
analog of subtraction (for intuition, compare the problems of finding
the sum of each length-k window in an array with finding the max).
Happily, there is a useful notion of undoing a traversal here (inverse).
We can derive it by solving for Gold, Fuel from Gold', Fuel':
Gold = Gold' + Toll
Fuel = Fuel' - Gold * Distance + FuelSaved,
Fuel = Fuel' + Gold' * (-Distance) - (Toll * Distance - FuelSaved)
and reading off the inverse parameters.
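For what it's worth, here is a small Python sketch (my own, with made-up names) of the parameter triples (Distance, Toll, FuelSaved), one composition rule consistent with the expansion above, and the inverse just derived:
def apply(p, gold, fuel):
    d, t, f = p
    return gold - t, fuel + gold * d - f        # the generalized update rule

def compose(p1, p2):
    d1, t1, f1 = p1
    d2, t2, f2 = p2
    return d1 + d2, t1 + t2, f1 + f2 + t1 * d2  # traverse p1, then p2

def invert(p):
    d, t, f = p
    return -d, -t, t * d - f

edge = lambda d, t: (d, t, t * d)               # a single edge has FuelSaved = Toll * Distance

path = compose(edge(2, 3), edge(5, 1))
gold, fuel = apply(path, 10, 0)
print(gold, fuel)                               # 6 44, same as traversing the two edges one by one
print(apply(invert(path), gold, fuel))          # (10, 0): the inverse undoes the traversal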
I promised a sketch of the data structures, so here we are. Root the
tree anywhere. It suffices to be able to
Given nodes u and v, query the leafmost common ancestor of u and v;
Given a node u, query the parameters to get from u to the root;
Given a node v, query the parameters to get from the root to v;
Update the toll on an edge.
Then to answer a query u, v, we query their leafmost common ancestor w
and return the fuel cost of the composition (u to root) (w to root)⁻¹
(root to w)⁻¹ (root to v) where ⁻¹ means “take the inverse”.
The full-on sledgehammer approach here is to implement dynamic trees,
which will do all of these
things in amortized logarithmic
time per operation. But we don’t need dynamic topology updates and can
probably afford an extra log factor, so a set of more easily digestible
pieces would be leafmost common ancestors, heavy path decomposition, and
segment trees (one per path; Fenwick is potentially another option, but
I’m not sure what complications a noncommutative operation might
create).
I said in the comments that Dijkstra's algorithm was necessary, but thinking about it more, a DFS is really enough: because the graph is a tree there is only one path between each pair of vertices, and we will always have to take it from the starting point to the endpoint.
Using a priority queue instead of a stack would only change the order in which the graph is explored, but in the worst case it would still visit all the vertices.
Using a queue instead of a stack would make the algorithm a breadth-first search; again, it would only change the order in which the graph is explored.
Assuming that the number of nodes within a given distance grows exponentially with that distance, an improvement for the typical case could be achieved by doing two searches that meet in the middle, but that only gains a constant factor.
So I think it is better to go with the simple solution; implementing it in C/C++ will result in a program dozens of times faster.
Solution
Prepare the adjacency lists, and also make the graph undirected:
from collections import defaultdict

def process_edges(rows):
    edges = defaultdict(list)
    for u, v, D, T in rows:
        edges[u].append((v, (D, T)))
        edges[v].append((u, (D, T)))
    return edges
It is interesting to do the search backwards, because the amount of gold is fixed at the destination and unknown at the origin; going backwards we can compute the exact amount of gold and fuel required at each node.
Of course, you can remove the print statement I left in there.
def dfs(edges, a, b, G):
    Q = [((0, G), b)]
    visited = set()
    while len(Q) != 0:
        ((Fu, Gu), current_vertex) = Q.pop()
        visited.add(current_vertex)
        for neighbor, (D, T) in edges[current_vertex]:
            if neighbor in visited:
                continue  # avoid going backwards
            Fv = Fu + Gu * D  # fuel for this edge uses the gold actually carried across it (after the toll)
            Gv = Gu + T  # add the toll of the edge to the gold budget
            print(neighbor, (Fv, Gv))
            if neighbor == a:
                return (Fv, Gv)
            Q.append(((Fv, Gv), neighbor))
Running your example
edges = process_edges([
[6,4,1,4],
[5,4,2,2],
[4,2,2,3],
[3,2,1,2],
[1,2,2,1]
])
dfs(edges,3,6,2)
Will print:
4 (2, 6)
5 (14, 8)
2 (14, 9)
3 (23, 11)
and return (23, 11). It means that for the route from 3 to 6 you need to start with 11 gold, and 23 is the fuel used, matching the worked example in the question.

Conditional sampling of binary vectors (?)

I'm trying to find a name for my problem, so I don't have to re-invent the wheel when coding an algorithm which solves it...
I have say 2,000 binary (row) vectors and I need to pick 500 from them. In the picked sample I do column sums and I want my sample to be as close as possible to a pre-defined distribution of the column sums. I'll be working with 20 to 60 columns.
A tiny example:
Out of the vectors:
110
010
011
110
100
I need to pick 2 to get column sums 2, 1, 0. The solution (exact in this case) would be
110
100
My ideas so far
one could maybe call this a binary multidimensional knapsack, but I did not find any algos for that
Linear Programming could help, but I'd need some step by step explanation as I got no experience with it
as an exact solution is not always feasible, something like simulated annealing or brute force could work well
a hacky way using constraint solvers comes to mind - first set the constraints tight and gradually loosen them until some solution is found - given that CSP should be much faster than ILP...?
My concrete, practical suggestion (if the approximation guarantee works out for you) would be to apply the maximum entropy method (Chapter 7 of Boyd and Vandenberghe's book Convex Optimization; you can probably find several implementations with your favorite search engine) to find the maximum entropy probability distribution on row indexes such that (1) no row index is more likely than 1/500, and (2) the expected value of the chosen row vector is 1/500th of the predefined distribution. Given this distribution, choose each row independently with probability 500 times its likelihood under the distribution, which will give you 500 rows on average. If you need exactly 500, repeat until you get exactly 500 (this shouldn't take too many tries, thanks to concentration bounds).
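As a rough illustration of that formulation (my own sketch; cvxpy, the synthetic data, and the variable names are all my choices, not part of the suggestion above):
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(2000, 30))   # the binary row vectors
M = 500
target = A[:M].sum(axis=0)                # a target that is certainly achievable

p = cp.Variable(A.shape[0])               # probability of picking each row index
constraints = [p >= 0, cp.sum(p) == 1,
               p <= 1.0 / M,              # (1) no row more likely than 1/500
               A.T @ p == target / M]     # (2) expected row vector = target / 500
cp.Problem(cp.Maximize(cp.sum(cp.entr(p))), constraints).solve()

# sample each row independently with probability 500 * p_i (repeat until exactly 500 if needed)
picked = [i for i in range(A.shape[0]) if rng.random() < M * p.value[i]]
print(len(picked), np.abs(A[picked].sum(axis=0) - target).max())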
Firstly I will make some assumptions regarding this problem:
Regardless of whether the column sum of the selected solution is over or under the target, it weighs the same.
The sums of the first, second, and third columns are equally weighted in the solution (i.e. if there's a solution where the first column sum is off by 1, and another where the third column sum is off by 1, the solutions are equally good).
The closest problem I can think of is the subset sum problem, which itself can be thought of as a special case of the knapsack problem.
However, both of these problems are NP-complete, meaning there is no known polynomial-time algorithm that can solve them, even though it is easy to verify a solution.
If I were you, the two arguably most practical approaches to this problem would be linear programming and machine learning.
Depending on how many columns you are optimising over, with linear programming you can control how finely tuned you want the solution to be, in exchange for time. You should read up on this, because it is fairly simple and efficient.
With machine learning, you need a lot of data (sets of vectors and their solutions). You don't even need to specify exactly what you want; many machine learning algorithms can deduce what to optimise based on your data set.
Both approaches have pros and cons; you should decide which one to use based on your circumstances and problem set.
This definitely can be modeled as an (integer!) linear program (many problems can). Once you have it, you can use a program such as lpsolve to solve it.
We model whether vector i is selected as x_i, which can be 0 or 1.
Then for each column c, we have a constraint:
sum of all (x_i * value of i in column c) = target for column c
Taking your example, in lp_solve this could look like:
min: ;
+x1 +x4 +x5 >= 2;
+x1 +x4 +x5 <= 2;
+x1 +x2 +x3 +x4 <= 1;
+x1 +x2 +x3 +x4 >= 1;
+x3 <= 0;
+x3 >= 0;
bin x1, x2, x3, x4, x5;
If you are fine with a heuristic based search approach, here is one.
Go over the list and find the minimum sum of squared digit-wise differences between each bit string and the goal. For example, if we are looking for 2, 1, 0, and we are scoring 0, 1, 1, we would do it in the following way:
Take the digit wise difference:
2, 0, 1
Square the digit wise difference:
4, 0, 1
Sum:
5
As a side note, squaring the difference when scoring is a common method when doing heuristic search. In your case, it makes sense because bit strings that have a 1 as the first digit are a lot more interesting to us. In your case this simple algorithm would pick first 110, then 100, which is the best solution.
In any case, there are some optimizations that could be made to this, I will post them here if this kind of approach is what you are looking for, but this is the core of the algorithm.
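If this is the kind of approach you are looking for, here is a minimal sketch of one way to read it (my own interpretation: after each pick, the target is reduced by the chosen vector before re-scoring):
def pick(vectors, target, k):
    remaining = list(target)
    candidates = [list(map(int, v)) for v in vectors]
    chosen = []
    for _ in range(k):
        # score every unused vector by the squared digit-wise difference to what is still needed
        best = min(range(len(candidates)),
                   key=lambda i: sum((r - b) ** 2 for r, b in zip(remaining, candidates[i])))
        chosen.append(candidates.pop(best))
        remaining = [r - b for r, b in zip(remaining, chosen[-1])]
    return chosen

print(pick(["110", "010", "011", "110", "100"], [2, 1, 0], 2))  # [[1, 1, 0], [1, 0, 0]]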
You have a given target binary vector. You want to select M vectors out of N that have the closest sum to the target. Let's say you use the Euclidean distance to measure whether a selection is better than another.
If you want an exact sum, have a look at the k-sum problem, which is a generalization of the 3SUM problem. The problem is harder than the subset sum problem, because you want an exact number of elements to add up to a target value. There is a solution in O(N^(M/2) * lg N), but that means more than 2000^250 * 7.6 > 10^826 operations in your case (in the favorable case where vector operations have a cost of 1).
First conclusion: do not try to get an exact result unless your vectors have some characteristics that may reduce the complexity.
Here's a hill climbing approach:
sort the vectors by number of 1's: 111... first, 000... last;
use the polynomial time approximate algorithm for the subset sum;
you have an approximate solution with K elements. Because of the order of elements (the big ones come first), K should be as small as possible:
if K >= M, you take the M first vectors of the solution and that's probably near the best you can do.
if K < M, you can remove the first vector and try to replace it with 2 or more vectors from the rest of the N vectors, using the same technique, until you have M vectors. To summarize: split the big vectors into smaller ones until you reach the correct number of vectors.
Here's a proof of concept with numbers, in Python:
import random

def distance(x, y):
    return abs(x - y)

def show(ls):
    if len(ls) < 10:
        return str(ls)
    else:
        return ", ".join(map(str, ls[:5] + ("...",) + ls[-5:]))

def find(is_xs, target):
    # see https://en.wikipedia.org/wiki/Subset_sum_problem#Pseudo-polynomial_time_dynamic_programming_solution
    S = [(0, ())]  # we store indices along with values to get the path
    for i, x in is_xs:
        T = [(x + t, js + (i,)) for t, js in S]
        U = sorted(S + T)
        y, ks = U[0]
        S = [(y, ks)]
        for z, ls in U:
            if z == target:  # use the euclidean distance here if you want an approximation
                return ls
            if z != y and z < target:
                y, ks = z, ls
                S.append((z, ls))
    ls = S[-1][1]  # take the closest element to target
    return ls

N = 2000
M = 500
target = 1000
xs = [random.randint(0, 10) for _ in range(N)]
print("Take {} numbers out of {} to make a sum of {}", M, xs, target)
xs = sorted(xs, reverse=True)
is_xs = list(enumerate(xs))
print("Sorted numbers: {}".format(show(tuple(is_xs))))
ls = find(is_xs, target)
print("FIRST TRY: {} elements ({}) -> {}".format(len(ls), show(ls), sum(x for i, x in is_xs if i in ls)))
splits = 0
while len(ls) < M:
    first_x = xs[ls[0]]
    js_ys = [(i, x) for i, x in is_xs if i not in ls and x != first_x]
    replace = find(js_ys, first_x)
    splits += 1
    if len(replace) < 2 or len(replace) + len(ls) - 1 > M or sum(xs[i] for i in replace) != first_x:
        print("Give up: can't replace {}.\nAdd the lowest elements.")
        ls += tuple([i for i, x in is_xs if i not in ls][len(ls) - M:])
        break
    print("Replace {} (={}) by {} (={})".format(ls[:1], first_x, replace, sum(xs[i] for i in replace)))
    ls = tuple(sorted(ls[1:] + replace))  # use a heap?
    print("{} elements ({}) -> {}".format(len(ls), show(ls), sum(x for i, x in is_xs if i in ls)))
print("AFTER {} splits, {} -> {}".format(splits, ls, sum(x for i, x in is_xs if i in ls)))
The result is obviously not guaranteed to be optimal.
Remarks:
Complexity: find has a polynomial time complexity (see the Wikipedia page) and is called at most M^2 times, hence the complexity remains polynomial. In practice, the process is reasonably fast (split calls have a small target).
Vectors: to ensure that you reach the target with the minimum number of elements, you can improve the ordering of the elements. Your target is (t_1, ..., t_c): if you sort the t_j from max to min, you get the most important columns first. You can then sort the vectors: by number of 1s, and then by the presence of a 1 in the most important columns. E.g. target = 4 8 6 => 1 1 1 > 0 1 1 > 1 1 0 > 1 0 1 > 0 1 0 > 0 0 1 > 1 0 0 > 0 0 0.
find (vectors): if the current sum exceeds the target in all the columns, then you can never get closer to the target (any vector you add to the current sum will take you farther from it): don't add that sum to S (this is the z >= target case for numbers).
I propose a simple ad hoc algorithm, which, broadly speaking, is a kind of gradient descent algorithm. It seems to work relatively well for input vectors which have a distribution of 1s “similar” to the target sum vector, and probably also for all “nice” input vectors, as defined in a comment of yours. The solution is not exact, but the approximation seems good.
The distance between the sum vector of the output vectors and the target vector is taken to be Euclidean. Minimizing it means minimizing the sum of the squared differences between the sum vector and the target vector (the square root is not needed because it is monotonic). The algorithm is not guaranteed to yield the sample that minimizes the distance from the target, but it makes a serious attempt at doing so, by always moving in some locally optimal direction.
The algorithm can be split into 3 parts.
First of all the first M candidate output vectors out of the N input vectors (e.g., N=2000, M=500) are put in a list, and the remaining vectors are put in another.
Then "approximately optimal" swaps between vectors in the two lists are done, until either the distance would not decrease any more, or a predefined maximum number of iterations is reached. An approximately optimal swap is one where removing the first vector from the list of output vectors causes a maximal decrease or minimal increase of the distance, and then, after the removal of the first vector, adding the second vector to the same list causes a maximal decrease of the distance. The whole swap is avoided if the net result is not a decrease of the distance.
Then, as a last phase, "optimal" swaps are done, again stopping on no decrease in distance or maximum number of iterations reached. Optimal swaps cause a maximal decrease of the distance, without requiring the removal of the first vector to be optimal in itself. To find an optimal swap all vector pairs have to be checked. This phase is much more expensive, being O(M(N-M)), while the previous "approximate" phase is O(M+(N-M))=O(N). Luckily, when entering this phase, most of the work has already been done by the previous phase.
from typing import List, Tuple

def get_sample(vects: List[Tuple[int]], target: Tuple[int], n_out: int,
               max_approx_swaps: int = None, max_optimal_swaps: int = None,
               verbose: bool = False) -> List[Tuple[int]]:
    """
    Get a sample of the input vectors having a sum close to the target vector.
    Closeness is measured in Euclidean metrics. The output is not guaranteed to be
    optimal (minimum square distance from target), but a serious attempt is made.
    The max_* parameters can be used to avoid too long execution times,
    tune them to your needs by setting verbose to True, or leave them None (∞).
    :param vects: the list of vectors (tuples) with the same number of "columns"
    :param target: the target vector, with the same number of "columns"
    :param n_out: the requested sample size
    :param max_approx_swaps: the max number of approximately optimal vector swaps,
        None means unlimited (default: None)
    :param max_optimal_swaps: the max number of optimal vector swaps,
        None means unlimited (default: None)
    :param verbose: print some info if True (default: False)
    :return: the sample of n_out vectors having a sum close to the target vector
    """
    def square_distance(v1, v2):
        return sum((e1 - e2) ** 2 for e1, e2 in zip(v1, v2))

    n_vec = len(vects)
    assert n_vec > 0
    assert n_out > 0
    n_rem = n_vec - n_out
    assert n_rem > 0
    output = vects[:n_out]
    remain = vects[n_out:]
    n_col = len(vects[0])
    assert n_col == len(target) > 0
    sumvect = (0,) * n_col
    for outvect in output:
        sumvect = tuple(map(int.__add__, sumvect, outvect))
    sqdist = square_distance(sumvect, target)
    if verbose:
        print(f"sqdist = {sqdist:4} after"
              f" picking the first {n_out} vectors out of {n_vec}")
    if max_approx_swaps is None:
        max_approx_swaps = sqdist
    n_approx_swaps = 0
    while sqdist and n_approx_swaps < max_approx_swaps:
        # find the best vect to subtract (the square distance MAY increase)
        sqdist_0 = None
        index_0 = None
        sumvect_0 = None
        for index in range(n_out):
            tmp_sumvect = tuple(map(int.__sub__, sumvect, output[index]))
            tmp_sqdist = square_distance(tmp_sumvect, target)
            if sqdist_0 is None or sqdist_0 > tmp_sqdist:
                sqdist_0 = tmp_sqdist
                index_0 = index
                sumvect_0 = tmp_sumvect
        # find the best vect to add,
        # but only if there is a net decrease of the square distance
        sqdist_1 = sqdist
        index_1 = None
        sumvect_1 = None
        for index in range(n_rem):
            tmp_sumvect = tuple(map(int.__add__, sumvect_0, remain[index]))
            tmp_sqdist = square_distance(tmp_sumvect, target)
            if sqdist_1 > tmp_sqdist:
                sqdist_1 = tmp_sqdist
                index_1 = index
                sumvect_1 = tmp_sumvect
        if sumvect_1:
            tmp = output[index_0]
            output[index_0] = remain[index_1]
            remain[index_1] = tmp
            sqdist = sqdist_1
            sumvect = sumvect_1
            n_approx_swaps += 1
        else:
            break
    if verbose:
        print(f"sqdist = {sqdist:4} after {n_approx_swaps}"
              f" approximately optimal swap{'s'[n_approx_swaps == 1:]}")
    diffvect = tuple(map(int.__sub__, sumvect, target))
    if max_optimal_swaps is None:
        max_optimal_swaps = sqdist
    n_optimal_swaps = 0
    while sqdist and n_optimal_swaps < max_optimal_swaps:
        # find the best pair to swap,
        # but only if the square distance decreases
        best_sqdist = sqdist
        best_diffvect = diffvect
        best_pair = None
        for i0 in range(n_out):
            tmp_diffvect = tuple(map(int.__sub__, diffvect, output[i0]))
            for i1 in range(n_rem):
                new_diffvect = tuple(map(int.__add__, tmp_diffvect, remain[i1]))
                new_sqdist = sum(d * d for d in new_diffvect)
                if best_sqdist > new_sqdist:
                    best_sqdist = new_sqdist
                    best_diffvect = new_diffvect
                    best_pair = (i0, i1)
        if best_pair:
            tmp = output[best_pair[0]]
            output[best_pair[0]] = remain[best_pair[1]]
            remain[best_pair[1]] = tmp
            sqdist = best_sqdist
            diffvect = best_diffvect
            n_optimal_swaps += 1
        else:
            break
    if verbose:
        print(f"sqdist = {sqdist:4} after {n_optimal_swaps}"
              f" optimal swap{'s'[n_optimal_swaps == 1:]}")
    return output
from random import randrange
C = 30 # number of columns
N = 2000 # total number of vectors
M = 500 # number of output vectors
F = 0.9 # fill factor of the target sum vector
T = int(M * F) # maximum value + 1 that can appear in the target sum vector
A = 10000 # maximum number of approximately optimal swaps, may be None (∞)
B = 10 # maximum number of optimal swaps, may be None (unlimited)
target = tuple(randrange(T) for _ in range(C))
vects = [tuple(int(randrange(M) < t) for t in target) for _ in range(N)]
sample = get_sample(vects, target, M, A, B, True)
Typical output:
sqdist = 2639 after picking the first 500 vectors out of 2000
sqdist = 9 after 27 approximately optimal swaps
sqdist = 1 after 4 optimal swaps
P.S.: As it stands, this algorithm is not limited to binary input vectors, integer vectors would work too. Intuitively I suspect that the quality of the optimization could suffer, though. I suspect that this algorithm is more appropriate for binary vectors.
P.P.S.: Execution times with your kind of data are probably acceptable with standard CPython, but get better (like a couple of seconds, almost a factor of 10) with PyPy. To handle bigger sets of data, the algorithm would have to be translated to C or some other language, which should not be difficult at all.

algorithm to find unique, non-equivalent configurations given the height, the width, and the number of states each element can be in

So recently, I have been attempting to solve a code challenge and cannot find the answer. The issue is not the implementation, but rather what to implement. The prompt can be found here http://pastebin.com/DxQssyKd
The main useful information from the prompt is as follows:
"Write a function answer(w, h, s) that takes 3 integers and returns the number of unique, non-equivalent configurations that can be found on a star grid w blocks wide and h blocks tall where each celestial body has s possible states. Equivalency is defined as above: any two star grids with each celestial body in the same state where the actual order of the rows and columns do not matter (and can thus be freely swapped around). Star grid standardization means that the width and height of the grid will always be between 1 and 12, inclusive. And while there are a variety of celestial bodies in each grid, the number of states of those bodies is between 2 and 20, inclusive. The answer can be over 20 digits long, so return it as a decimal string."
The equivalency is in a way that
00
01
is equivalent to
01
00
and so on.
The problem is, what algorithm(s) should I use? I know this is somewhat related to permutations, combinations, and group theory, but I cannot find anything specific.
The key weapon is Burnside's lemma, which equates the number of orbits of the symmetry group G = S_w × S_h acting on the set of configurations X = ([w] × [h] → [s]) (i.e., the answer) to the sum (1/|G|) ∑_{g ∈ G} |X^g|, where X^g = {x | g.x = x} is the set of configurations fixed by g.
Given g, it's straightforward to compute |X^g|: use g to construct a graph on vertices [w] × [h] where there is an edge between (i, j) and g(i, j) for all (i, j). Count c, the number of connected components, and return s^c. The reasoning is that every vertex in a connected component must have the same state, but vertices in different components are unrelated.
Now, for 12 × 12 grids, there are far too many values of g to do this calculation on. Fortunately, when g and g' are conjugate (i.e., there exists some h such that h.g.h⁻¹ = g') we find that |X^{g'}| = |{x | g'.x = x}| = |{x | h.g.h⁻¹.x = x}| = |{x | g.h⁻¹.x = h⁻¹.x}| = |{h.y | g.y = y}| = |{y | g.y = y}| = |X^g|. We can thus sum over conjugacy classes and multiply each term by the number of group elements in the class.
The last piece is the conjugacy class structure of G = S_w × S_h, which is just the direct product of the conjugacy class structures of S_w and S_h. The conjugacy classes of S_n are in one-to-one correspondence with the integer partitions of n, enumerable by standard recursive methods. To compute the size of a class, divide n! by the product of the partition terms (because circular shifts of each cycle are equivalent) and also by the product of the factorials of the multiplicities of the part sizes (because cycles of the same size can be permuted among themselves). See https://groupprops.subwiki.org/wiki/Conjugacy_class_size_formula_in_symmetric_group.
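For completeness, here is a sketch of the whole computation in Python (my own code following the recipe above; the helper names are made up). It enumerates the integer partitions of w and h, weights each pair of conjugacy classes by its size, and counts the configurations fixed by a class representative as s raised to the sum of gcd(a, b) over all pairs of cycle lengths, which is exactly the number of connected components described above:
from math import factorial, gcd
from fractions import Fraction

def partitions(n, max_part=None):
    # yield the integer partitions of n as non-increasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield (part,) + rest

def class_size(partition, n):
    # number of permutations in S_n with this cycle type
    size = factorial(n)
    for part in set(partition):
        m = partition.count(part)
        size //= (part ** m) * factorial(m)
    return size

def answer(w, h, s):
    total = Fraction(0)
    order = factorial(w) * factorial(h)  # |G| = w! * h!
    for p in partitions(w):              # conjugacy classes of S_w
        for q in partitions(h):          # conjugacy classes of S_h
            fixed = s ** sum(gcd(a, b) for a in p for b in q)
            total += Fraction(class_size(p, w) * class_size(q, h) * fixed, order)
    assert total.denominator == 1        # Burnside guarantees an integer
    return str(total.numerator)

print(answer(2, 2, 2))  # 7 distinct 2x2 grids with 2 states, as a quick sanity check
Since there are only 77 partitions of 12, the double loop stays small even for the largest allowed grids.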

Towers of Hanoi - Bellman equation solution

I have to implement an algorithm that solves the Towers of Hanoi game for k pods and d rings in a limited number of moves (let's say 4 pods, 10 rings, 50 moves for example) using the Bellman dynamic programming equation (if the problem is solvable, of course).
Now, I understand the logic behind the equation:
where V^T is the objective function at time T, a^0 is the action at time 0, x^0 is the starting configuration, H_0 is the cumulative gain, and f(x^0, a^0) = x^1 is the transition function.
The cardinality of the state space is $k^d$ and I get that a good representation for a state is a number in base k with d digits. Each digit represents a ring, and its value, from 0 to k-1, is the label of the pod the ring sits on.
I want to minimize the number of moves for going from the initial configuration (10 rings on the first pod) to the end one (10 rings on the last pod).
What I don't get is: how do I write my objective function?
The first thing you need to do is choose a reward function H_t(x, a) which will define your goal. Once this function is chosen, the (optimal) value function is defined and all you have to do is compute it.
The idea of dynamic programming for the Bellman equation is that you should compute V_t(s) bottom-up: you start with t=T, then t=T-1 and so on until t=0.
The initial case is simply given by:
V_T(x) = 0, ∀x
You can compute V_{T-1}(x) ∀x from V_T:
V_{T-1}(x) = max_a [ H_{T-1}(x,a) ]
Then you can compute V_{T-2}(x) ∀x from V_{T-1}:
V_{T-2}(x) = max_a [ H_{T-2}(x,a) + V_{T-1}(f(x,a)) ]
And you keep on computing V_{t-1}(x) ∀x from V_{t}:
V_{t-1}(x) = max_a [ H_{t-1}(x,a) + V_{t}(f(x,a)) ]
until you reach V_0.
Which gives the algorithm:
forall x:
    V[T](x) ← 0
for t from T-1 down to 0:
    forall x:
        V[t](x) ← max_a { H[t](x,a) + V[t+1](f(x,a)) }
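As a generic illustration of that pseudocode (my own sketch; states, actions, f and H stand for whatever you define for your Hanoi instance), backward induction could look like this:
def backward_induction(states, actions, f, H, T):
    V = {T: {x: 0 for x in states}}              # V_T(x) = 0 for all x
    policy = {}
    for t in range(T - 1, -1, -1):               # t = T-1, T-2, ..., 0
        V[t], policy[t] = {}, {}
        for x in states:
            best_a = max(actions(x), key=lambda a: H(t, x, a) + V[t + 1][f(x, a)])
            V[t][x] = H(t, x, best_a) + V[t + 1][f(x, best_a)]
            policy[t][x] = best_a
    return V, policy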
What actually was requested was this:
def k_hanoi(npods, nrings):
    if nrings == 1 and npods > 1:  # one remaining ring: just one move
        return 1
    if npods == 3:
        return 2 ** nrings - 1  # optimal solution with 3 pods takes 2^d - 1 moves
    if npods > 3 and nrings > 0:
        sol = []
        for pivot in range(1, nrings):  # loop on all possible pivots
            sol.append(2 * k_hanoi(npods, pivot) + k_hanoi(npods - 1, nrings - pivot))
        return min(sol)  # minimization on pivot

k = 4
d = 10
print(k_hanoi(k, d))
I think it is the Frame-Stewart algorithm, with optimization on the pivot chosen to divide the disks into two subgroups. I also think someone proved it is optimal for 4 pegs (in 2014 or something like that? Not sure, btw) and it is conjectured to be optimal for more than 4 pegs. The limitation on the number of moves can be implemented easily.
The value function in this case was the number of steps needed to go from the initial configuration to the ending one, and it needed to be minimized. Thank you all for the contribution.

implementing stochastic ACO algorithm

I am trying to implement a stochastic ant colony optimisation algorithm, and I'm having trouble working out how to implement movement choices based on probabilities.
The standard (greedy) version that I have implemented so far is that an ant m at a vertex i on a graph G = (V,E), where E is the set of edges (i, j), will choose the next vertex j based on the following criterion:
j = argmax(<fitness function for j>)
such that j is connected to i
The problem I am having is in trying to implement a stochastic version of this, so that now the criterion for choosing a new vertex j is:
P(j) = <fitness function for j>/sum(<fitness function for J>)
where P(j) is the probability of choosing vertex j,
such that j is connected to i,
and J is the set of all vertices connected to i
I understand the mathematics behind it, I am just having trouble working out how I should actually implement it.
If, say, I have 3 vertices connected to i, with probabilities of 0.2, 0.3, and 0.5 - what is the best way to make the selection? Should I just randomly select a vertex j, then generate a random number r in the range (0,1), and if r >= P(j), select vertex j? Or is there a better way?
Looking at the problem statement, I think you are not trying to visit all nodes (connected to i, say), but some of the nodes based on some probability distribution. Let's take an example:
You have a node i and connected to it are 5 nodes, a1...a5, with probabilities p1...p5, such that sum(p_i) = 1. Now, say the precision of the probabilities you consider is 2 places after the decimal. Also, you don't want to visit all 5 nodes, but only k of them. Let's say, in this example, k = 2. Since 2 decimal places is your probability precision, add 3 to it so the random draw has finer granularity. (You can change this 3 to any number of your choice, as far as performance is concerned.) Since you have not tagged any language, I'll use Java's nextInt() function as the example random number generator.
Let's give some values:
p1...p5 = {0.17, 0.11, 0.45, 0.03, 0.24}
Now, in a loop from 1 to k, generate a random number from (0...10^5) {5 = 2 + 3, i.e. precision + 3}. If the generated number is from 0 to 16999, go with node a1; 17000 to 27999, go with a2; 28000 to 72999, go with a3... and so on. You get the idea.
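The same idea as a small Python sketch (my own; it just scales the probabilities to the integer buckets described above):
import random

probs = [0.17, 0.11, 0.45, 0.03, 0.24]   # p1...p5
scale = 10 ** 5                          # precision (2) + 3 extra digits
cutoffs, running = [], 0
for p in probs:
    running += round(p * scale)
    cutoffs.append(running)              # [17000, 28000, 73000, 76000, 100000]

r = random.randrange(scale)              # 0 ... 99999
chosen = next(i for i, c in enumerate(cutoffs) if r < c)
print("go with node a{}".format(chosen + 1))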
What you're trying to implement is a weighted random choice depending on the probabilities for the components of the solution, or a random proportional selection rule in ACO terms. Here is a snippet of the implementation of this rule in the Isula Framework:
double value = random.nextDouble();
while (componentWithProbabilitiesIterator.hasNext()) {
Map.Entry<C, Double> componentWithProbability = componentWithProbabilitiesIterator
.next();
Double probability = componentWithProbability.getValue();
total += probability;
if (total >= value) {
nextNode = componentWithProbability.getKey();
getAnt().visitNode(nextNode);
return true;
}
}
You just need to generate a random value between 0 and 1 (stored in value), and start accumulating the probabilities of the components (in the total variable). When the total exceeds the threshold defined in value, we have found the component to add to the solution.
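The same rule in Python, as a sketch of my own (it assumes the fitness values are nonnegative; random.choices could replace the manual loop entirely):
import random

def pick_next(neighbors, fitness):
    # random proportional rule: P(j) = fitness(j) / sum of fitness over all candidate vertices
    weights = [fitness(j) for j in neighbors]
    threshold = random.random() * sum(weights)
    total = 0.0
    for j, w in zip(neighbors, weights):
        total += w
        if total >= threshold:
            return j
    return neighbors[-1]  # guard against floating-point rounding

# one-liner alternative:
# random.choices(neighbors, weights=[fitness(j) for j in neighbors], k=1)[0]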
