Finding a perfect matching in graphs - algorithm

I have this question :
An airline company has N different planes and T pilots. Every pilot has a list of planes he can fly. Every flight needs 2 pilots. The company wants to have as many flights simultaneously as possible. Find an algorithm that determines whether you can have all the flights simultaneously.
The solution I thought of is to find a max flow on this graph:
I am just not sure what the capacity should be. Can you help me with that?

Great idea to find the max flow.
For each edge from source --> pilot, assign a capacity of 1. Each pilot can only fly one plane at a time since they are running simultaneously.
For each edge from pilot --> plane, assign a capacity of 1. If this edge is filled with flow of 1, it represents that the given pilot is flying that plane.
For each edge from plane --> sink, assign a capacity of 2. This represents that each plane must be supplied by exactly 2 pilots.
Now, find a maximum flow. If the resulting maximum flow is two times the number of planes, then it's possible to satisfy the constraints. In this case, the edges between planes and pilots that are at capacity represent the matching.
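For illustration, here is a minimal sketch of this construction using networkx (the vertex names, helper function, and example data are made up, not part of the original question):
import networkx as nx

def all_flights_possible(planes, pilots, can_fly):
    # can_fly maps each pilot to the list of planes they can fly (example interface)
    g = nx.DiGraph()
    for p in pilots:
        g.add_edge('source', ('pilot', p), capacity=1)    # each pilot used at most once
        for plane in can_fly[p]:
            g.add_edge(('pilot', p), ('plane', plane), capacity=1)
    for plane in planes:
        g.add_edge(('plane', plane), 'sink', capacity=2)  # each plane needs 2 pilots
    value = nx.maximum_flow_value(g, 'source', 'sink')
    return value == 2 * len(planes)

# Tiny example: 2 planes, 4 pilots, two pilots per plane -> True
print(all_flights_possible([0, 1], ['a', 'b', 'c', 'd'],
                           {'a': [0], 'b': [0], 'c': [1], 'd': [1]}))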

The other answer is fine but you don't really need to involve flow as this can be reduced just as well to ordinary maximum bipartite matching:
For each plane, add another auxiliary plane to the plane partition with edges to the same pilots as the first plane.
Find a maximum bipartite matching M.
The answer is now true if and only if |M| = 2N.
If you like, you can think of this as saying that each plane needs a pilot and a co-pilot, and the two vertices associated to each plane now represents those two roles.
The reduction to maximum bipartite matching is linear time, so using e.g. the Hopcroft–Karp algorithm to find the matching, you can solve the problem in O(|E| √|V|) where E is the number of edges between the partitions, and V = T + N.
In practice, the improvement over using a maximum flow based approach should depend on the quality of your implementations as well as the particular choice of representation of the graph, but chances are that you're better off this way.
Implementation example
To illustrate the last point, let's give an idea of how the two reductions could look in practice. One representation of a graph that's often useful due to its built-in memory locality is that of a CSR matrix, so let us assume that the input is such a matrix, whose rows correspond to the planes, and whose columns correspond to the pilots.
We will use the Python library SciPy which comes with algorithms for both maximum bipartite matching and maximum flow, and which works with CSR matrix representations for graphs under the hood.
In the algorithm given above, we will then need to construct the biadjacency matrix of the graph with the additional vertices added. This is nothing but the result of stacking the input matrix on top of itself, which is straightforward to phrase in terms of the CSR data structures: Following Wikipedia's notation, COL_INDEX should just be repeated, and ROW_INDEX should be replaced with ROW_INDEX concatenated with a copy of ROW_INDEX in which all elements are increased by the final element of ROW_INDEX.
In SciPy, a complete implementation which answers yes or no to the problem in OP would look as follows:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def reduce_to_max_matching(a):
    i, j = a.shape
    data = np.ones(a.nnz * 2, dtype=bool)
    indices = np.concatenate([a.indices, a.indices])
    indptr = np.concatenate([a.indptr, a.indptr[1:] + a.indptr[-1]])
    graph = csr_matrix((data, indices, indptr), shape=(2*i, j))
    return (maximum_bipartite_matching(graph) != -1).sum() == 2 * i
In the maximum flow approach given by @HeatherGuarnera's answer, we will need to set up the full adjacency matrix of the new graph. This is also relatively straightforward; the input matrix will appear as a certain submatrix of the adjacency matrix, and we need to add a row for the source vertex and a column for the target. The example section of the documentation for SciPy's max flow solver actually contains an illustration of what this looks like in practice. Adopting this, a complete solution looks as follows:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

def reduce_to_max_flow(a):
    i, j = a.shape
    n = a.nnz
    data = np.concatenate([2*np.ones(i, dtype=int), np.ones(n + j, dtype=int)])
    indices = np.concatenate([np.arange(1, i + 1),
                              a.indices + i + 1,
                              np.repeat(i + j + 1, j)])
    indptr = np.concatenate([[0],
                             a.indptr + i,
                             np.arange(n + i + 1, n + i + j + 1),
                             [n + i + j]])
    graph = csr_matrix((data, indices, indptr), shape=(2+i+j, 2+i+j))
    flow = maximum_flow(graph, 0, graph.shape[0]-1)
    return flow.flow_value == 2*i
Let us compare the timings of the two approaches on a single example consisting of 40 planes and 100 pilots, on a graph whose edge density is 0.1:
from scipy.sparse import random
inp = random(40, 100, density=.1, format='csr', dtype=bool)
%timeit reduce_to_max_matching(inp) # 191 µs ± 3.57 µs per loop
%timeit reduce_to_max_flow(inp) # 1.29 ms ± 20.1 µs per loop
The matching-based approach is faster, but not by a crazy amount. On larger problems, we'll start to see the advantages of using matching instead; with 400 planes and 1000 pilots:
inp = random(400, 1000, density=.1, format='csr', dtype=bool)
%timeit reduce_to_max_matching(inp) # 473 µs ± 5.52 µs per loop
%timeit reduce_to_max_flow(inp) # 68.9 ms ± 555 µs per loop
Again, this exact comparison relies on the use of specific predefined solvers from SciPy and how those are implemented, but if nothing else, this hints that simpler is better.

Related

Optimal filling of grid figure with squares

Recently I designed a puzzle for children to solve. However, I would like to know the optimal solution.
The problem is as follows: You have this figure made up of small squares
You have to fill it in with larger squares and it is scored with the following table:
| Square Size | 1x1 | 2x2 | 3x3 | 4x4 | 5x5 | 6x6 | 7x7 | 8x8 |
|-------------|-----|-----|-----|-----|-----|-----|-----|-----|
| Points | 0 | 4 | 10 | 20 | 35 | 60 | 84 | 120 |
There are simply too many possible solutions to check them all. Some other people suggested dynamic programming, but I don't know how to divide the figure into smaller parts which, put together, have the same optimal solution.
I would like to find a way to find the optimal solutions to these kinds of problems in reasonable time (like a couple of days max on a regular desktop). The highest score found so far with a guessing algorithm and some manual work is 1112.
Solutions to similar problems with combining sub-problems are also appreciated.
I don't need all the code written out. An outline or idea for an algorithm would be enough.
Note: The biggest square that can fit is 8x8 so scores for bigger squares are not included.
[[1,1,0,0,0,1,0,0,0,0,0,0,1,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1],
[1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,1,1,0,0,0,1,1],
[1,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1],
[0,0,0,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,1,0,0,0,0,1,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1],
[0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1],
[1,0,0,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1],
[1,1,0,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,0,0,1,0,0,0,0,1],
[1,1,1,0,0,0,0,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0],
[0,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,1,0,0,0],
[0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0],
[0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0],
[0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,1],
[0,0,0,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,1],
[0,0,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[0,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0],
[0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
[0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
[0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1],
[0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1],
[0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1],
[0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1],
[0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1],
[1,1,1,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1],
[1,1,1,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1],
[1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1],
[1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1],
[1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1],
[1,1,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1],
[1,0,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1],
[1,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,1,1,1],
[1,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,1,1,1,0,0,1,1,1,1,1],
[0,0,0,0,0,1,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1],
[0,0,0,0,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1],
[0,0,0,0,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1]];
Here is a quite general prototype using mixed-integer programming (MIP) which solves your instance optimally (I obtained the value of 1112, like you deduced yourself) and might solve others too.
In general, your problem is NP-hard, and this makes it hard (there are some instances which will be trouble).
While I suspect that SAT-solver and CP-solver based approaches might be more powerful (because of the combinatoric nature; I was even surprised that MIP works here), the MIP approach also has some advantages:
MIP solvers are complete (as are SAT and CP; many random-based heuristics are not)
There are many commercial-grade solvers available if needed
The formulation is quite easy (especially compared to SAT; SAT formulations need advanced "at most k out of n" encodings for the scoring, which grow sub-quadratically, while the naive encoding grows exponentially; such encodings exist, but are non-trivial)
The optimization objective is natural to express (SAT and CP would need iterative refinement: solve with some lower bound, then increment the bound and re-solve)
MIP solvers can also be quite powerful for obtaining approximations of the optimal solution, and they provide proven bounds (e.g. the optimum is at most x)
The following code is implemented in Python using common scientific tools (all of them open source). It allows setting the tile range (e.g. adding 9x9 tiles) and different cost functions. The comments should be enough to understand the ideas. It will use some (probably the best) open-source MIP solver, but can also use commercial ones (the commented-out line shows the usage).
Code
import numpy as np
import itertools
from collections import defaultdict
import matplotlib.pyplot as plt # visualization only
import seaborn as sns # ""
from pulp import * # MIP-modelling & solver
""" INSTANCE """
instance = np.asarray([[1,1,0,0,0,1,0,0,0,0,0,0,1,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1],
[1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,1,1,0,0,0,1,1],
[1,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1],
[0,0,0,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],
[0,0,0,0,1,1,0,0,0,0,1,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1],
[0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1],
[1,0,0,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1],
[1,1,0,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,0,0,1,0,0,0,0,1],
[1,1,1,0,0,0,0,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0],
[0,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,0,0,0,0,1,1,1,1,0,0,0],
[0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0],
[0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0],
[0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,1],
[0,0,0,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,1],
[0,0,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[0,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0],
[1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0],
[0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
[0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
[0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1],
[0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1],
[0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1],
[0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1],
[0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1],
[1,1,1,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1],
[1,1,1,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1],
[1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1],
[1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1],
[1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1],
[1,1,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1],
[1,0,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1],
[1,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,1,1,1],
[1,0,0,0,0,1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,1,1,1,0,0,1,1,1,1,1],
[0,0,0,0,0,1,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,1,1,1,1],
[0,0,0,0,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1],
[0,0,0,0,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,1,1,1]], dtype=bool)
def plot_compare(instance, solution, subgrids):
    f, (ax1, ax2) = plt.subplots(2, sharex=True, sharey=True)
    sns.heatmap(instance, ax=ax1, cbar=False, annot=True)
    sns.heatmap(solution, ax=ax2, cbar=False, annot=True)
    plt.show()
""" PARAMETERS """
SUBGRIDS = 8 # 1x1 - 8x8
SUGBRID_SCORES = {1:0, 2:4, 3:10, 4:20, 5:35, 6:60, 7:84, 8:120}
N, M = instance.shape # free / to-fill = zeros!
""" HELPER FUNCTIONS """
def get_square_covered_indices(instance, pos_x, pos_y, sg):
    """ Calculate all covered tiles when given a top-left position & size
        -> returns the base-index too! """
    N, M = instance.shape
    neighbor_indices = []
    valid = True
    for sX in range(sg):
        for sY in range(sg):
            if pos_x + sX < N:
                if pos_y + sY < M:
                    if instance[pos_x + sX, pos_y + sY] == 0:
                        neighbor_indices.append((pos_x + sX, pos_y + sY))
                    else:
                        valid = False
                        break
                else:
                    valid = False
                    break
            else:
                valid = False
                break
    return valid, neighbor_indices
def preprocessing(instance, SUBGRIDS):
    """ Calculate all valid placement / tile-selection combinations """
    placements = {}
    index2placement = {}
    placement2index = {}
    placement2type = {}
    type2placement = defaultdict(list)
    cover2index = defaultdict(list)  # cell covered by placement-index
    index_gen = itertools.count()
    for sg in range(1, SUBGRIDS+1):  # sg = subgrid size
        for pos_x in range(N):
            for pos_y in range(M):
                if instance[pos_x, pos_y] == 0:  # free
                    feasible, covering = get_square_covered_indices(instance, pos_x, pos_y, sg)
                    if feasible:
                        new_index = next(index_gen)
                        placements[(sg, pos_x, pos_y)] = covering
                        index2placement[new_index] = (sg, pos_x, pos_y)
                        placement2index[(sg, pos_x, pos_y)] = new_index
                        placement2type[new_index] = sg
                        type2placement[sg].append(new_index)
                        cover2index[(pos_x, pos_y)].append(new_index)
    return placements, index2placement, placement2index, placement2type, type2placement, cover2index
def calculate_collisions(placements, index2placement):
    """ Calculate collisions between tile-placements (position + tile-selection)
        -> only upper triangle is used: a < b! """
    n_p = len(placements)
    coll_mat = np.zeros((n_p, n_p), dtype=bool)  # only upper triangle is used
    for pA in range(n_p):
        for pB in range(n_p):
            if pA < pB:
                covered_A = placements[index2placement[pA]]
                covered_B = placements[index2placement[pB]]
                if len(set(covered_A).intersection(set(covered_B))) > 0:
                    coll_mat[pA, pB] = True
    return coll_mat
""" PREPROCESSING """
placements, index2placement, placement2index, placement2type, type2placement, cover2index = preprocessing(instance, SUBGRIDS)
N_P = len(placements)
coll_mat = calculate_collisions(placements, index2placement)
""" MIP-MODEL """
prob = LpProblem("GridFill", LpMaximize)
# Variables
X = np.empty(N_P, dtype=object)
for x in range(N_P):
    X[x] = LpVariable('x'+str(x), 0, 1, cat='Binary')
# Objective
placement_scores = [SUGBRID_SCORES[index2placement[p][0]] for p in range(N_P)]
prob += lpDot(placement_scores, X), "Score"
# Constraints
# C1: Forbid collisions of placements
for a in range(N_P):
    for b in range(N_P):
        if a < b:  # symmetry-reduction
            if coll_mat[a, b]:
                prob += X[a] + X[b] <= 1  # not both!
""" SOLVE """
print('solve')
#prob.solve(GUROBI()) # much faster commercial solver; if available
prob.solve(PULP_CBC_CMD(msg=1, presolve=True, cuts=True))
print("Status:", LpStatus[prob.status])
""" INTERPRET AND COMPLETE SOLUTION """
solution = np.zeros((N, M), dtype=int)
for x in range(N_P):
    if X[x].value() > 0.99:
        sg, pos_x, pos_y = index2placement[x]
        _, positions = get_square_covered_indices(instance, pos_x, pos_y, sg)
        for pos in positions:
            solution[pos[0], pos[1]] = sg
fill_with_ones = np.logical_and((solution == 0), (instance == 0))
solution[fill_with_ones] = 1
""" VISUALIZE """
plot_compare(instance, solution, SUBGRIDS)
Assumptions / Nature of algorithm
There are no constraints describing the need for every free cell to be covered
This works as long as there are no negative scores
A placement with positive score will be used if it improves the objective
A zero-score placement (like the 1x1 squares in your example) might leave some cells uncovered, but those are provably covered by 1x1 squares, which are added after optimizing
Performance
This is a good example of the discrepancy between open-source and commercial solvers. The two solvers tried were cbc and Gurobi.
cbc example output (just some final parts)
Result - Optimal solution found
Objective value: 1112.00000000
Enumerated nodes: 0
Total iterations: 307854
Time (CPU seconds): 2621.19
Time (Wallclock seconds): 2627.82
Option for printingOptions changed from normal to all
Total time (CPU seconds): 2621.57 (Wallclock seconds): 2628.24
Needed: ~45 mins
Gurobi example output
Explored 0 nodes (7004 simplex iterations) in 5.30 seconds
Thread count was 4 (of 4 available processors)
Optimal solution found (tolerance 1.00e-04)
Best objective 1.112000000000e+03, best bound 1.112000000000e+03, gap 0.0%
Needed: 6 seconds
General remarks about solver-performance
Gurobi should have much more functionality recognizing the nature of the problem and using appropriate hyper-parameters internally
I also think there are some SAT-based approaches used internally (as one of the core-developers wrote his dissertation mostly about combining these very different algorithmic techniques)
There are much better heuristics used, which could provide non-optimal solutions fast (which will help the steps after)
Example output: optimal solution with score 1112
It is possible to reformulate the problem into another NP-hard problem :-)
Create a weighted graph where the vertices are all possible squares that can be put on the board, with weights according to their size, and with edges between intersecting squares. There is no need to represent 1x1 squares, since their weight is zero.
E.g. for a simple empty 3x3 board, there are:
- 5 vertices: one 3x3 and four 2x2,
- 10 edges: four between the 3x3 square and each 2x2 square, and six between each pair of 2x2 squares.
Now the problem is to find a maximum weight independent set.
I am not experienced with the topic, but from the Wikipedia description it seems that a fast enough algorithm could exist. This graph is not in one of the classes with a known polynomial-time algorithm, but it is quite close to a P5-free graph. It seems to me that the only way to have a P5 in this graph is between 2x2 squares, which means having a stripe of width 2 and length 5. There is one in the lower left corner. These regions can be covered (removed) before finding the independent set, losing nothing or very little of the optimal solution.
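To make the construction concrete, here is a rough sketch in Python with networkx (the helper names are made up, and the brute-force search at the end is exponential, so it is only usable on tiny boards; the real puzzle would need a proper solver):
import itertools
import networkx as nx

SCORES = {2: 4, 3: 10, 4: 20, 5: 35, 6: 60, 7: 84, 8: 120}

def placements(grid):
    # all (size, row, col) squares of size >= 2 that fit entirely on free (0) cells
    n, m = len(grid), len(grid[0])
    for s in range(2, 9):
        for r in range(n - s + 1):
            for c in range(m - s + 1):
                if all(grid[r + dr][c + dc] == 0 for dr in range(s) for dc in range(s)):
                    yield (s, r, c)

def conflict_graph(grid):
    g = nx.Graph()
    ps = list(placements(grid))
    g.add_nodes_from(ps)
    for (sa, ra, ca), (sb, rb, cb) in itertools.combinations(ps, 2):
        # axis-aligned squares intersect iff they overlap on both axes
        if ra < rb + sb and rb < ra + sa and ca < cb + sb and cb < ca + sa:
            g.add_edge((sa, ra, ca), (sb, rb, cb))
    return g

def best_score_bruteforce(grid):
    # exact maximum-weight independent set by enumeration; exponential, tiny boards only
    g = conflict_graph(grid)
    nodes = list(g.nodes)
    best = 0
    for mask in range(1 << len(nodes)):
        chosen = [nodes[i] for i in range(len(nodes)) if mask >> i & 1]
        if all(not g.has_edge(u, v) for u, v in itertools.combinations(chosen, 2)):
            best = max(best, sum(SCORES[s] for s, _, _ in chosen))
    return best

print(best_score_bruteforce([[0, 0, 0]] * 3))  # empty 3x3 board: 5 vertices, best score 10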
(This is not meant to be a full answer; I'm just sharing what I'm working on so that we can collaborate.)
I think a good first step is to transform the binary grid by giving every cell the value of the maximum size of square that the cell can be the top-left corner of, like this:
0,0,3,2,1,0,3,2,2,2,2,1,0,0,0,0,0,0,2,1,0,0,0,0,0,2,1,0,0,0
0,0,2,2,2,3,3,2,1,1,1,1,0,0,0,3,3,3,3,3,3,2,1,0,0,1,2,1,0,0
0,2,1,1,1,2,3,2,1,0,0,0,0,3,2,2,2,2,2,2,3,3,2,1,0,0,3,2,1,0
3,2,1,0,0,1,3,2,1,0,0,0,3,2,2,1,1,1,1,1,2,3,3,2,1,0,2,2,2,1
3,3,2,1,0,0,2,2,2,1,0,3,2,2,1,1,0,0,0,0,1,2,4,3,2,2,1,1,1,1
2,3,3,2,1,0,2,1,1,1,2,3,2,1,1,0,0,0,0,0,0,1,3,3,2,1,1,0,0,0
1,2,3,4,3,2,1,1,0,0,1,3,2,1,0,0,0,0,0,0,0,0,2,2,2,1,0,0,0,0
0,1,2,3,3,2,1,0,0,0,0,2,2,1,0,0,0,0,0,0,0,0,2,1,1,2,2,2,1,0
0,0,1,2,3,2,1,0,0,0,0,1,2,1,0,0,0,0,0,0,0,0,2,1,0,1,1,2,1,0
0,0,0,1,2,2,1,0,0,0,0,0,2,1,0,0,0,0,0,0,0,2,1,1,0,0,0,3,2,1
1,0,0,0,1,2,1,0,0,0,0,0,4,3,2,1,0,0,0,4,3,2,1,0,0,0,0,2,2,1
2,1,0,0,0,1,2,1,0,0,5,5,4,4,4,4,4,4,4,5,5,4,3,2,1,0,0,1,2,1
3,2,1,0,0,0,1,6,6,5,4,4,4,3,3,3,3,3,3,4,4,5,4,3,2,1,0,0,1,1
3,2,1,0,0,0,0,6,5,5,4,3,3,3,2,2,2,2,2,3,3,4,5,4,3,2,1,0,0,0
3,2,2,2,2,7,6,6,5,4,4,3,2,2,2,1,1,1,1,2,2,3,5,5,4,3,2,1,0,0
2,2,1,1,1,7,6,5,5,4,3,3,2,1,1,1,0,0,0,1,1,2,4,6,5,4,3,2,1,0
2,1,1,0,0,7,6,5,4,4,3,2,2,1,0,0,0,0,0,0,0,1,3,6,5,4,3,2,1,0
1,1,0,0,8,7,6,5,4,3,3,2,1,1,0,0,0,0,0,0,0,0,2,7,6,5,4,3,2,1
1,0,0,0,8,7,6,5,4,3,2,2,1,0,0,0,0,0,0,0,0,0,1,7,6,5,4,3,2,1
0,0,0,7,8,7,6,5,4,3,2,1,1,0,0,0,0,0,0,0,0,0,0,6,6,5,4,3,2,1
0,0,0,6,8,7,6,5,4,3,2,1,0,0,0,0,0,0,0,0,0,0,0,6,5,5,4,3,2,1
0,0,0,5,7,7,6,5,4,3,2,1,0,0,0,0,0,0,0,0,0,0,0,6,5,4,4,3,2,1
0,0,0,4,6,7,7,6,5,4,3,2,1,0,0,0,0,0,0,0,0,0,6,5,5,4,3,3,2,1
0,0,0,3,5,6,7,7,6,5,4,3,2,1,0,0,0,0,0,0,0,6,6,5,4,4,3,2,2,1
1,0,0,2,4,5,6,7,8,7,6,5,4,3,2,1,0,0,0,7,6,6,5,5,4,3,3,2,1,1
1,0,0,1,3,4,5,6,7,7,8,8,8,8,8,8,7,7,6,6,6,5,5,4,4,3,2,2,1,0
2,1,0,0,2,3,4,5,6,6,7,7,8,7,7,7,7,6,6,5,5,5,4,4,3,3,2,1,1,0
2,1,0,0,1,2,3,4,5,5,6,6,8,7,6,6,6,6,5,5,4,4,4,3,3,2,2,1,0,0
3,2,1,0,0,1,2,3,4,4,5,5,8,7,6,5,5,5,5,4,4,3,3,3,2,2,1,1,0,0
3,2,1,0,0,0,1,2,3,3,4,4,8,7,6,5,4,4,4,4,3,3,2,2,2,1,1,0,0,0
4,3,2,1,0,0,0,1,2,2,3,3,8,7,6,5,4,3,3,3,3,2,2,1,1,1,0,0,0,0
3,3,2,1,0,0,0,0,1,1,2,2,8,7,6,5,4,3,2,2,2,2,1,1,0,0,0,0,0,0
2,2,2,2,1,0,0,0,0,0,1,1,8,7,6,5,4,3,2,1,1,1,1,0,0,0,0,0,0,0
1,1,1,2,1,0,0,0,0,0,0,0,8,7,6,5,4,3,2,1,0,0,0,0,0,0,0,0,0,0
0,0,0,2,1,0,0,0,0,0,0,0,8,8,7,6,5,4,3,2,1,0,0,0,0,0,0,0,0,0
0,0,0,2,1,0,0,0,0,0,0,6,8,7,7,6,6,5,4,3,2,1,0,0,0,0,0,0,0,0
0,0,0,2,2,2,3,3,3,3,3,5,7,7,6,6,5,5,4,3,3,3,3,2,1,0,0,0,0,0
0,0,3,2,1,1,3,2,2,2,2,4,6,6,6,5,5,4,4,3,2,2,2,2,1,0,0,0,0,0
0,0,3,2,1,0,3,2,1,1,1,3,5,5,5,5,4,4,3,3,2,1,1,2,1,0,0,0,0,0
0,0,3,2,1,0,3,2,1,0,0,2,4,4,4,4,4,3,3,2,2,1,0,2,1,0,0,0,0,0
0,4,3,2,1,0,3,2,1,0,0,1,3,3,3,4,3,3,2,2,1,1,0,2,1,0,0,0,0,0
0,4,3,2,1,0,3,2,1,0,0,0,2,2,2,3,3,2,2,1,1,0,0,2,1,0,0,0,0,0
0,4,3,2,1,0,3,2,1,0,0,0,1,1,1,2,2,2,1,1,0,0,0,2,1,0,0,0,0,0
3,3,3,2,1,0,3,2,1,0,0,0,0,0,0,1,1,1,1,0,0,0,0,3,2,1,0,0,0,0
2,2,2,2,1,0,2,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,1,0,0,0,0
1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0
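For reference, a short sketch of how such a grid could be computed (free cells are 0, blocked cells are 1, values capped at 8 as in the question; the function name is made up):
def corner_sizes(grid):
    # dynamic programming from bottom-right to top-left:
    # the largest free square with its top-left corner at (r, c) extends the
    # squares at (r+1, c), (r, c+1) and (r+1, c+1) by one, capped at 8.
    n, m = len(grid), len(grid[0])
    out = [[0] * m for _ in range(n)]
    for r in range(n - 1, -1, -1):
        for c in range(m - 1, -1, -1):
            if grid[r][c] == 0:
                if r == n - 1 or c == m - 1:
                    out[r][c] = 1
                else:
                    out[r][c] = min(8, 1 + min(out[r + 1][c], out[r][c + 1], out[r + 1][c + 1]))
    return out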
If you wanted to go through every option using brute force, you'd try every size of square that a cell could be the corner of (including 1x1), mark the square with zeros, adjust the values of the cells up to 7 places left/above the square, and recurse with the new grid.
If you iterated over the cells top-to-bottom and left-to-right, you'd only have to copy the grid starting from the current row to the bottom row, and you'd only have to adjust the values of cells up to 7 places to the left of the square.
The JS code I tested this with is fast for the top 2 or 3 rows of the grid (result: 24 and 44), takes 8 seconds to finish the top 4 rows (result: 70), and 30 minutes for 5 rows (result: 86). I'm not trying 6 rows.
But, as you can see from this grid, the number of possibilities is so huge that brute force will never be an option. On the other hand, trying something like adding large squares first, and then filling up the leftover space with smaller squares, is never going to guarantee the optimal result, I fear. It's too easy to come up with examples that would thwart such a strategy.
7,6,5,4,3,2,1,0,0,0,0,0,0,7,6,5,4,3,2,1
6,6,5,4,3,2,1,0,0,0,0,0,0,6,6,5,4,3,2,1
5,5,5,4,3,2,1,0,0,0,0,0,0,5,5,5,4,3,2,1
4,4,4,4,3,2,1,0,0,0,0,0,0,4,4,4,4,3,2,1
3,3,3,3,3,2,1,0,0,0,0,0,0,3,3,3,3,3,2,1
2,2,2,2,2,2,1,0,0,0,0,0,0,2,2,2,2,2,2,1
1,1,1,1,1,1,8,7,6,5,4,3,2,1,1,1,1,1,1,1
0,0,0,0,0,0,7,7,6,5,4,3,2,1,0,0,0,0,0,0
0,0,0,0,0,0,6,6,6,5,4,3,2,1,0,0,0,0,0,0
0,0,0,0,0,0,5,5,5,5,4,3,2,1,0,0,0,0,0,0
0,0,0,0,0,0,4,4,4,4,4,3,2,1,0,0,0,0,0,0
0,0,0,0,0,0,3,3,3,3,3,3,2,1,0,0,0,0,0,0
0,0,0,0,0,0,2,2,2,2,2,2,2,1,0,0,0,0,0,0
7,6,5,4,3,2,1,1,1,1,1,1,1,7,6,5,4,3,2,1
6,6,5,4,3,2,1,0,0,0,0,0,0,6,6,5,4,3,2,1
5,5,5,4,3,2,1,0,0,0,0,0,0,5,5,5,4,3,2,1
4,4,4,4,3,2,1,0,0,0,0,0,0,4,4,4,4,3,2,1
3,3,3,3,3,2,1,0,0,0,0,0,0,3,3,3,3,3,2,1
2,2,2,2,2,2,1,0,0,0,0,0,0,2,2,2,2,2,2,1
1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,1
In the above example, putting an 8x8 square in the center and four 6x6 squares in the corners gives a lower score (120 + 4*60 = 360) than putting a 6x6 square in the center and four 7x7 squares in the corners (60 + 4*84 = 396); so a greedy approach based on always using the largest possible square will not give the optimal result.
This is how far I got by isolating zones connected by corridors of maximum width 3, and running the brute-force algorithm on the smaller grids. Where the border has no orange zone, adding another 2 cells doesn't increase the score of the isolated zone, so those cells can be used by the main zone unconditionally.

implementing stochastic ACO algorithm

I am trying to implement a stochastic ant colony optimisation algorithm, and I'm having trouble working out how to implement movement choices based on probabilities.
The standard (greedy) version that I have implemented so far is that an ant m at a vertex i on a graph G = (V,E), where E is the set of edges (i, j), will choose the next vertex j based on the following criterion:
j = argmax(<fitness function for j>)
such that j is connected to i
The problem I am having is in trying to implement a stochastic version of this, so that now the criterion for choosing a new vertex j is:
P(j) = <fitness function for j>/sum(<fitness function for J>)
where P(j) is the probability of choosing vertex j,
such that j is connected to i,
and J is the set of all vertices connected to i
I understand the mathematics behind it, I am just having trouble working out how I should actually implement it.
If, say, I have 3 vertices connected to i, with probabilities 0.2, 0.3 and 0.5, what is the best way to make the selection? Should I just randomly select a vertex j, then generate a random number r in the range (0,1) and, if r >= P(j), select vertex j? Or is there a better way?
Looking at the problem statement, I think you are not trying to visit all nodes (connected to i, say), but some of the nodes based on some probability distribution. Let's take an example:
You have a node i and connected to it are 5 nodes, a1...a5, with probabilities p1...p5, such that sum(p_i) = 1. Now, say the precision of the probabilities that you consider is 2 places after the decimal point. Also, you don't want to visit all 5 nodes, but only k of them. Let's say, in this example, k = 2. So, since 2 decimal places is your probability precision, add 3 to it to increase the resolution of the random draws. (You can change this 3 to any number of your choice, as far as performance is concerned.) (Since you have not tagged any language, I'll use Java's nextInt() function as the example random number generator.)
Let's give some values:
p1...p5 = {0.17, 0.11, 0.45, 0.03, 0.24}
Now, in a loop from 1 to k, generate a random number in (0...10^5) {5 = 2 + 3, i.e. precision + 3}. If the generated number is from 0 to 16999, go with node a1; from 17000 to 27999, go with a2; from 28000 to 72999, go with a3... and so on. You get the idea.
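For illustration, a rough Python sketch of this bucketing idea (the helper name and scaling are made up to match the example above):
import random

probs = [0.17, 0.11, 0.45, 0.03, 0.24]    # p1..p5 from the example
SCALE = 10 ** (2 + 3)                      # precision (2 decimal places) + 3 extra digits

def pick(probs):
    r = random.randrange(SCALE)            # analogous to Java's nextInt(100000)
    upper = 0
    for node, p in enumerate(probs):
        upper += int(p * SCALE)             # 0..16999 -> a1, 17000..27999 -> a2, ...
        if r < upper:
            return node
    return len(probs) - 1                   # guard against rounding leftovers

k = 2
chosen = [pick(probs) for _ in range(k)]    # may repeat; re-draw if you need k distinct nodes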
What you're trying to implement is a weighted random choice depending on the probabilities for the components of the solution, or a random proportional selection rule in ACO terms. Here is a snippet of the implementation of this rule in the Isula Framework:
double value = random.nextDouble();
while (componentWithProbabilitiesIterator.hasNext()) {
    Map.Entry<C, Double> componentWithProbability = componentWithProbabilitiesIterator.next();
    Double probability = componentWithProbability.getValue();
    total += probability;
    if (total >= value) {
        nextNode = componentWithProbability.getKey();
        getAnt().visitNode(nextNode);
        return true;
    }
}
You just need to generate a random value between 0 and 1 (stored in value), and start accumulating the probabilities of the components (on the total variable). When the total exceeds the threshold defined in value, we have found the component to add to the solution.
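For what it's worth, the same proportional rule is essentially a one-liner in Python 3.6+ via random.choices; a minimal sketch (the dict-based interface is just an assumption for illustration):
import random

def pick_next(fitness_by_node):
    # fitness_by_node: dict mapping each candidate vertex j to its fitness value
    nodes = list(fitness_by_node)
    weights = [fitness_by_node[n] for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

print(pick_next({'a': 0.2, 'b': 0.3, 'c': 0.5}))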

A variation of weighted interval scheduling

I am trying to solve an optimization problem and it ultimately boils down to this: Given a set of weighted intervals S, I am trying to find the minimum number of intervals in S such that they span a predefined, presumably larger, interval I and have the maximum weight. Firstly, it looks like we can reduce the set cover problem to this problem, but the continuity of the intervals means the sets also have to be 'continuous'? Or is the problem multi-objective?
This problem at least allows for a simple dynamic programming solution.
Consider all endpoints of your segments. Order them by the coordinate.
For each endpoint X, calculate the minimum number of segments needed to cover the whole interval from the start of I (denote this start by S) to this point X. Do it as follows: iterate over all segments that cover the point X. For each such segment, try taking it into your solution. Then you have to cover the interval from S to the start of this segment with the minimal number of segments and, among all such solutions, with maximal weight; you have calculated this answer earlier in your dynamic programming. So after you have iterated over all segments, just pick the best solution.
Something like (pseudocode)
crop all given segments if they stretch outside of I, now each segment lies in I
sort all endpoints of your segments, assume X is the sorted array and L its length
if (X[0]!=I.left) or (X[L]!=I.right)
    no solution is available
answer[0] = 0
weight[0] = 0
for i=1..L
    answer[i] = infinity
    for j=0..N (the number of segments)
        if (X[i]>=segment[j].left) and (X[i]<=segment[j].right)
            p = position of segment[j].left in X (can be precalculated)
            candidate_answer = answer[p] + 1
            candidate_weight = weight[p] + segment[j].weight
            if (candidate_answer<answer[i]) or (
               (candidate_answer==answer[i]) and (candidate_weight>weight[i]))
                answer[i] = candidate_answer
                weight[i] = candidate_weight
answer[L] and weight[L] is your answer
This is O(N^2). Maybe a faster solution exists, but I can not think of one now.
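For concreteness, a rough Python sketch of this DP (the function name is made up; it follows the pseudocode above and is O(N^2)):
def min_cover_max_weight(I_left, I_right, segments):
    # segments: list of (left, right, weight); returns (min count, max weight) or None
    segs = []
    for l, r, w in segments:                      # crop everything to I
        l, r = max(l, I_left), min(r, I_right)
        if l <= r:
            segs.append((l, r, w))
    xs = sorted({p for l, r, _ in segs for p in (l, r)})
    if not xs or xs[0] != I_left or xs[-1] != I_right:
        return None
    pos = {x: k for k, x in enumerate(xs)}
    INF = float('inf')
    answer = [INF] * len(xs)
    weight = [0.0] * len(xs)
    answer[0] = 0
    for k in range(1, len(xs)):                   # endpoints left to right
        for l, r, w in segs:                      # try every segment covering xs[k]
            if l <= xs[k] <= r:
                p = pos[l]                        # best cover of [S, segment start]
                cand, cand_w = answer[p] + 1, weight[p] + w
                if cand < answer[k] or (cand == answer[k] and cand_w > weight[k]):
                    answer[k], weight[k] = cand, cand_w
    if answer[-1] == INF:
        return None
    return answer[-1], weight[-1]

print(min_cover_max_weight(0, 10, [(0, 5, 1.0), (4, 10, 1.0), (0, 3, 5.0)]))  # (2, 2.0)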

What data structure is conducive to discrete sampling? [duplicate]

Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the reservoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. I also wanted to avoid the reservoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory.
Does anyone have any suggestions on the best approach in this situation? I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.
One of the fastest ways to draw many samples with replacement from an unchanging list is the alias method. The core intuition is that we can create a set of equal-sized bins for the weighted list that can be indexed very efficiently through bit operations, to avoid a binary search. It will turn out that, done correctly, we will only need to store two items from the original list per bin, and thus can represent the split with a single percentage.
Let us take the example of five equally weighted choices, (a:1, b:1, c:1, d:1, e:1)
To create the alias lookup:
Normalize the weights such that they sum to 1.0. (a:0.2 b:0.2 c:0.2 d:0.2 e:0.2) This is the probability of choosing each weight.
Find the smallest power of 2 greater than or equal to the number of variables, and create this number of partitions, |p|. Each partition represents a probability mass of 1/|p|. In this case, we create 8 partitions, each able to contain 0.125.
Take the variable with the least remaining weight, and place as much of its mass as possible in an empty partition. In this example, we see that a fills the first partition. (p1{a|null,1.0},p2,p3,p4,p5,p6,p7,p8) with (a:0.075, b:0.2 c:0.2 d:0.2 e:0.2)
If the partition is not filled, take the variable with the most weight, and fill the partition with that variable.
Repeat steps 3 and 4 until all of the original weight has been assigned to partitions.
For example, if we run another iteration of 3 and 4, we see
(p1{a|null,1.0},p2{a|b,0.6},p3,p4,p5,p6,p7,p8) with (a:0, b:0.15 c:0.2 d:0.2 e:0.2) left to be assigned
At runtime:
Get a U(0,1) random number, say binary 0.001100000
Bit-shift it by lg2(|p|) bits, finding the index of the partition. Thus, we shift it by 3, yielding 001.1, or position 1, and thus partition 2.
If the partition is split, use the decimal portion of the shifted random number to decide the split. In this case, the value is 0.5, and 0.5 < 0.6, so return a.
Here is some code and another explanation, but unfortunately it doesn't use the bitshifting technique, nor have I actually verified it.
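Since the linked code doesn't show the bit-shift variant, here is a rough, unverified Python sketch of the construction and lookup described above (all names are made up):
import random

def build_alias_table(weights):
    # weights: dict mapping item -> (unnormalized) weight
    items = list(weights.items())
    total = float(sum(w for _, w in items))
    remaining = {k: w / total for k, w in items}   # step 1: normalize
    p = 1
    while p < len(items):                          # step 2: smallest power of two >= #items
        p *= 2
    cap = 1.0 / p                                  # probability mass per partition
    table = []                                     # (first, second, split) per partition
    for _ in range(p):
        if not remaining:                          # guard against floating-point leftovers
            table.append((items[0][0], None, 1.0))
            continue
        small = min(remaining, key=remaining.get)  # step 3: least remaining weight
        amount = min(remaining[small], cap)
        first, split = small, amount / cap
        remaining[small] -= amount
        if remaining[small] <= 1e-12:
            del remaining[small]
        second = None
        if amount < cap and remaining:             # step 4: top up from the largest weight
            big = max(remaining, key=remaining.get)
            remaining[big] -= cap - amount
            second = big
        table.append((first, second, split))
    return table

def sample_alias(table):
    u = random.random() * len(table)   # high bits (the "bit shift") pick the partition...
    idx = int(u)
    first, second, split = table[idx]
    return first if (u - idx) < split else second  # ...low bits decide the split

table = build_alias_table({'a': 1, 'b': 1, 'c': 1, 'd': 1, 'e': 1})
print(sample_alias(table))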
A simple approach that hasn't been mentioned here is one proposed in Efraimidis and Spirakis. In python you could select m items from n >= m weighted items with strictly positive weights stored in weights, returning the selected indices, with:
import heapq
import math
import random
def WeightedSelectionWithoutReplacement(weights, m):
    elt = [(math.log(random.random()) / weights[i], i) for i in range(len(weights))]
    return [x[1] for x in heapq.nlargest(m, elt)]
This is very similar in structure to the first approach proposed by Nick Johnson. Unfortunately, that approach is biased in selecting the elements (see the comments on the method). Efraimidis and Spirakis proved that their approach is equivalent to random sampling without replacement in the linked paper.
Here's what I came up with for weighted selection without replacement:
import random

def WeightedSelectionWithoutReplacement(l, n):
    """Selects without replacement n random elements from a list of (weight, item) tuples."""
    l = sorted((random.random() * x[0], x[1]) for x in l)
    return l[-n:]
This is O(m log m) on the number of items in the list to be selected from. I'm fairly certain this will weight items correctly, though I haven't verified it in any formal sense.
Here's what I came up with for weighted selection with replacement:
import bisect
import random

def WeightedSelectionWithReplacement(l, n):
    """Selects with replacement n random elements from a list of (weight, item) tuples."""
    cum_weights = []
    items = []
    total_weight = 0.0
    for weight, item in l:
        total_weight += weight
        cum_weights.append(total_weight)
        items.append(item)
    return [items[bisect.bisect(cum_weights, random.random() * total_weight)]
            for _ in range(n)]
This is O(m + n log m), where m is the number of items in the input list, and n is the number of items to be selected.
I'd recommend you start by looking at section 3.4.2 of Donald Knuth's Seminumerical Algorithms.
If your arrays are large, there are more efficient algorithms in chapter 3 of Principles of Random Variate Generation by John Dagpunar. If your arrays are not terribly large or you're not concerned with squeezing out as much efficiency as possible, the simpler algorithms in Knuth are probably fine.
It is possible to do Weighted Random Selection with replacement in O(1) time, after first creating an additional O(N)-sized data structure in O(N) time. The algorithm is based on the Alias Method developed by Walker and Vose, which is well described here.
The essential idea is that each bin in a histogram would be chosen with probability 1/N by a uniform RNG. So we walk through the bins and, for any underpopulated bin which would receive excess hits, assign the excess to an overpopulated bin. For each bin, we store the percentage of hits which belong to it, and the partner bin for the excess. This version tracks small and large bins in place, removing the need for an additional stack. It uses the index of the partner (stored in bucket[1]) as an indicator that a bucket has already been processed.
Here is a minimal python implementation, based on the C implementation here
import random

def prep(weights):
    data_sz = len(weights)
    factor = data_sz/float(sum(weights))
    data = [[w*factor, i] for i, w in enumerate(weights)]
    big = 0
    while big < data_sz and data[big][0] <= 1.0: big += 1
    for small, bucket in enumerate(data):
        if bucket[1] != small: continue  # already processed inside the inner loop
        excess = 1.0 - bucket[0]
        while excess > 0:
            if big == data_sz: break
            bucket[1] = big
            bucket = data[big]
            bucket[0] -= excess
            excess = 1.0 - bucket[0]
            if excess >= 0:
                big += 1
                while big < data_sz and data[big][0] <= 1: big += 1
    return data

def sample(data):
    r = random.random()*len(data)
    idx = int(r)
    return data[idx][1] if r-idx > data[idx][0] else idx
Example usage:
TRIALS=1000
weights = [20,1.5,9.8,10,15,10,15.5,10,8,.2]
samples = [0]*len(weights)
data = prep(weights)
for _ in range(int(sum(weights)*TRIALS)):
    samples[sample(data)]+=1
result = [float(s)/TRIALS for s in samples]
err = [a-b for a,b in zip(result,weights)]
print(result)
print([round(e,5) for e in err])
print(sum([e*e for e in err]))
The following is a description of random weighted selection of an element of a
set (or multiset, if repeats are allowed), both with and without replacement in O(n) space
and O(log n) time.
It consists of implementing a binary search tree, sorted by the elements to be
selected, where each node of the tree contains:
the element itself (element)
the un-normalized weight of the element (elementweight), and
the sum of all the un-normalized weights of the left-child node and all of
its children (leftbranchweight).
the sum of all the un-normalized weights of the right-child node and all of
its children (rightbranchweight).
Then we randomly select an element from the BST by descending down the tree. A
rough description of the algorithm follows. The algorithm is given a node of
the tree. Then the values of leftbranchweight, rightbranchweight,
and elementweight of the node are summed, and the weights are divided by this
sum, resulting in the values leftbranchprobability,
rightbranchprobability, and elementprobability, respectively. Then a
random number between 0 and 1 (randomnumber) is obtained.
if the number is less than elementprobability:
    remove the element from the BST as normal, updating leftbranchweight
    and rightbranchweight of all the necessary nodes, and return the element
else if the number is less than (elementprobability + leftbranchprobability):
    recurse on leftchild (run the algorithm using leftchild as the node)
else:
    recurse on rightchild
When we finally find, using these weights, which element is to be returned, we either simply return it (with replacement) or we remove it and update relevant weights in the tree (without replacement).
DISCLAIMER: The algorithm is rough, and a treatise on the proper implementation
of a BST is not attempted here; rather, it is hoped that this answer will help
those who really need fast weighted selection without replacement (like I do).
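For those who want something concrete, here is a rough sketch of the same idea using a Fenwick (binary indexed) tree over a fixed ordering of the elements instead of a BST; it is not the poster's exact algorithm, just one way to get the O(log n) selection and O(n) space described above (all names are made up):
import random

class WeightedSampler:
    # Fenwick tree over element weights: prefix sums in O(log n), so each weighted
    # draw is O(log n); zeroing the drawn weight gives selection without replacement.
    def __init__(self, weights):
        self.n = len(weights)
        self.weights = list(weights)
        self.tree = [0.0] * (self.n + 1)
        for i, w in enumerate(weights):
            self._add(i, w)

    def _add(self, i, delta):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def _total(self):
        s, i = 0.0, self.n
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

    def draw(self, replacement=True):
        target = random.random() * self._total()
        idx, step = 0, 1 << self.n.bit_length()
        while step:                      # descend: first index whose prefix sum exceeds target
            nxt = idx + step
            if nxt <= self.n and self.tree[nxt] <= target:
                target -= self.tree[nxt]
                idx = nxt
            step >>= 1
        if not replacement:              # remove the chosen element's weight
            self._add(idx, -self.weights[idx])
            self.weights[idx] = 0.0
        return idx

sampler = WeightedSampler([5.0, 1.0, 1.0, 3.0])
print([sampler.draw(replacement=False) for _ in range(4)])  # all four indices, heavy ones early on average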
This is an old question for which numpy now offers an easy solution, so I thought I would mention it: numpy.random.choice allows the sampling to be done with or without replacement and with given weights.
Suppose you want to sample 3 elements without replacement from the list ['white','blue','black','yellow','green'] with the probability distribution [0.1, 0.2, 0.4, 0.1, 0.2]. Using the numpy.random module, it is as easy as this:
import numpy.random as rnd
sampling_size = 3
domain = ['white','blue','black','yellow','green']
probs = [.1, .2, .4, .1, .2]
sample = rnd.choice(domain, size=sampling_size, replace=False, p=probs)
# in short: rnd.choice(domain, sampling_size, False, probs)
print(sample)
# Possible output: ['white' 'black' 'blue']
Setting the replace flag to True, you have a sampling with replacement.
More info here:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html#numpy.random.choice
We faced the problem of randomly selecting K validators out of N candidates once per epoch, proportionally to their stakes. But this gives us the following problem:
Imagine the probabilities of each candidate:
0.1
0.1
0.8
After 1,000,000 selections of 2 out of 3 without replacement, the observed probabilities of each candidate became:
0.254315
0.256755
0.488930
You should know that those original probabilities are not achievable for 2-of-3 selection without replacement.
But we want the initial probabilities to be the reward distribution probabilities; otherwise small candidate pools become more profitable. So we realized that random selection with replacement would help us: randomly select more than K of N and also store the weight of each validator for reward distribution:
std::vector<int> validators;
std::vector<int> weights(n);
int totalWeights = 0;
for (int j = 0; validators.size() < m; j++) {
    int value = rand() % likehoodsSum;
    for (int i = 0; i < n; i++) {
        if (value < likehoods[i]) {
            if (weights[i] == 0) {
                validators.push_back(i);
            }
            weights[i]++;
            totalWeights++;
            break;
        }
        value -= likehoods[i];
    }
}
This gives almost the original distribution of rewards over millions of samples:
0.101230
0.099113
0.799657
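As a sanity check of the effect described at the top of this answer, here is a small Monte-Carlo sketch in Python (not the original C++ code; random.choices needs Python 3.6+):
import random

def select_without_replacement(probs, k):
    # successive weighted draws, removing each chosen candidate
    remaining = list(range(len(probs)))
    weights = [probs[i] for i in remaining]
    chosen = []
    for _ in range(k):
        pick = random.choices(range(len(remaining)), weights=weights, k=1)[0]
        chosen.append(remaining.pop(pick))
        weights.pop(pick)
    return chosen

probs = [0.1, 0.1, 0.8]
counts = [0, 0, 0]
trials = 100_000
for _ in range(trials):
    for i in select_without_replacement(probs, 2):
        counts[i] += 1
print([c / (2 * trials) for c in counts])  # roughly [0.26, 0.26, 0.49], not [0.1, 0.1, 0.8]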

matlab: optimum amount of points for linear fit

I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say, 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to wrap that into a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution for.
One approach would be to compute all the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intersepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intersepts, 3);
x([idx; 3] == 2) % the points with the intersepts closest to the linear one.
This requires the statistics toolbox (for kmeans). This is the best of all methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes of two points in the start and end range lie close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and see if the sum of the squared error remains small. This I gave up very quickly, because defining what "small" is is very subjective and must be done in some heuristic way.
I tried a more systematic and robust version of the above:
function test
    %% example data
    slope = 2;
    intercept = 1.5;
    x = linspace(0.1, 5, 100).';
    y = slope*x + intercept;
    y(1:12) = log(x(1:12)) + y(12)-log(x(12));
    y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
    y = y + 0.2*randn(size(y));

    %% simple algorithm
    [X,fn] = fminsearch(@(ii)P(ii, x,y,intercept), [0.5 0.5])
    [~,inds] = P(X, x,y,intercept)
end

function [C, inds] = P(ii, x,y,intercept)
    % ii represents fraction of range from center to end,
    % so ii lies between 0 and 1.
    N = numel(x);
    n = round(N/2);
    ii = round(ii*n);
    inds = min(max(1, n+(-ii(1):ii(2))), N);
    % Solve linear system with fixed intercept
    A = x(inds);
    b = y(inds) - intercept;
    % and return the sum of squared errors, divided by
    % the number of points included in the set. This
    % last step is required to prevent fminsearch from
    % reducing the set to 1 point (= minimum possible
    % squared error).
    C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient which yields the smallest MSE.
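A rough sketch of this idea (in Python/NumPy rather than MATLAB, purely for illustration; the segment count and overlap values are arbitrary choices):
import numpy as np

def best_segment_slope(x, y, intercept, n_segments=6, overlap=0.5):
    # fit a fixed-intercept slope on each overlapping segment, then keep the
    # slope whose line gives the smallest MSE over the entire dataset
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    seg_len = max(2, int(np.ceil(n / n_segments * (1 + overlap))))
    step = max(1, int(seg_len * (1 - overlap)))
    best_slope, best_mse = None, np.inf
    for start in range(0, n - 1, step):
        xs = x[start:start + seg_len]
        ys = y[start:start + seg_len] - intercept
        slope = xs.dot(ys) / xs.dot(xs)          # least squares with fixed intercept
        mse = np.mean((slope * x + intercept - y) ** 2)
        if mse < best_mse:
            best_slope, best_mse = slope, mse
    return best_slope

# e.g. with the sample data from the question: best_segment_slope(x, y, 0.043)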

Resources