For any pair of distinct vertices in a given undirected graph G = (V, E), I want to find the number of all shortest paths ("SP" for short); it is not required to find/print the exact vertices on a path. For example, for the following graph given in edge-list format, there are two SPs between vertices 1 and 2: (1,3,2) and (1,4,2).
vertex =
1 3
2 4
1 4
2 3
1 8
4 7
3 6
5 2
I want to implement this based on the Floyd-Warshall algorithm, the well-known dynamic-programming algorithm that finds the shortest-path distance for every pair of vertices in O(n^3); say the result is a 2D array a[n][n], where n is the number of vertices. For the above graph, it is:
0 2 1 1 3 2 2 1
2 0 1 1 1 2 2 3
1 1 0 2 2 1 3 2
1 1 2 0 2 3 1 2
3 1 2 2 0 3 3 4
2 2 1 3 3 0 4 3
2 2 3 1 3 4 0 3
1 3 2 2 4 3 3 0
The code for constructing graph matrix G and solving for matrix a is as follows:
v = vertex(:,1);
t = vertex(:,2);
G = zeros(max(max(v), max(t)));
% Build the adjacency matrix of the graph:
for i = 1:length(v)
    G(v(i), t(i)) = G(v(i), t(i)) + 1;
    G(t(i), v(i)) = G(v(i), t(i)); % mirror the edge (comment this out if the input already lists both directions)
end
a = G;
n = length(a);
a(a==0) = Inf;
a(1:n+1:n^2) = 0; % set diagonal elements to zero
for k = 1:n
    for i = 1:n
        for j = 1:n % for j = i+1:n
            if a(i,j) > a(i,k) + a(k,j)
                a(i,j) = a(i,k) + a(k,j);
                % a(j,i) = a(i,j);
            end
        end
    end
end
Now, let's define a 2D array b[n][n] as the number of ALL the SPs for each pair of vertices. For example, we expect b[1][2] = 2.
I wrote the following code in MATLAB (if you are not familiar with MATLAB, just treat it as pseudo-code). It gives almost correct values for all the pairs except several wrong values for certain pairs. For example, after running the code, b(5,8) = 0, which is wrong (the correct answer should be 2).
%%
% Find the number of ALL SPs for ALL pairs, based on the "a" array:
% b is a two-dimensional array; b(i,j) is the total number of SPs for pair (i,j)
b = G;
for k = 1:n
    for i = 1:n
        for j = i+1:n
            if (i == j)
                continue; % b(i,i) = 0
            end
            if (k == j) % the same as: G(k,j) == 0
                continue;
            end
            if (k == i && G(k,j) ~= 0)
                b(i,j) = 1;
                continue;
            end
            if (a(i,j) ~= a(i,k) + G(k,j)) % w(u,v) = G(u,v) in an unweighted graph
                continue;
            end
            % sigma(s,v) = sigma(s,v) + sigma(s,u);
            b(i,j) = b(i,j) + b(k,i);
        end
    end
end
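For reference, the textbook way to count shortest paths with Floyd-Warshall is to maintain a path-count matrix and update it inside the same triple loop, rather than in a separate pass over a. A minimal Python sketch of that recurrence (an illustration of the standard approach under that assumption, not a patch for the MATLAB code above):

import math

def count_shortest_paths(adj):
    """adj[i][j] = 1 if there is an edge i-j, else 0 (unweighted, undirected)."""
    n = len(adj)
    dist = [[0 if i == j else (1 if adj[i][j] else math.inf) for j in range(n)] for i in range(n)]
    cnt = [[1 if i == j or adj[i][j] else 0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    cnt[i][j] = cnt[i][k] * cnt[k][j]   # strictly shorter: replace the count
                elif k != i and k != j and dist[i][k] + dist[k][j] == dist[i][j]:
                    cnt[i][j] += cnt[i][k] * cnt[k][j]  # equally short: accumulate
    return dist, cnt

# Example graph from the question (vertices 1..8 mapped to indices 0..7)
edges = [(1,3), (2,4), (1,4), (2,3), (1,8), (4,7), (3,6), (5,2)]
adj = [[0]*8 for _ in range(8)]
for u, v in edges:
    adj[u-1][v-1] = adj[v-1][u-1] = 1
dist, cnt = count_shortest_paths(adj)
print(cnt[0][1])  # 2, the expected b(1,2)
print(cnt[4][7])  # 2, the expected b(5,8)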
I have an n × m grid and a collection of polyominos. I would like to know if it is possible to pack them into the grid: no overlapping or rotation is allowed.
I expect that like most packing problems this version is NP-hard and difficult to approximate, so I'm not expecting anything crazy, but an algorithm that could find reasonable packings on a grid around 25 × 25 and be fairly comprehensive around 10 × 10 would be great. (My tiles are mostly tetrominos -- four blocks -- but they could have 5–9+ blocks.)
I'll take whatever anyone has to offer: an algorithm, a paper, an existing program which can be adapted.
Here is a prototype-like SAT-solver approach, which tackles:
a-priori fixed polyomino patterns (see Constants / Input in code)
if rotations should be allowed, rotated pieces have to be added to the set
every polyomino can be placed 0-inf times
there is no scoring-mechanic besides:
the number of non-covered tiles is minimized!
Considering classic off-the-shelf methods for combinatorial optimization (SAT, CP, MIP), this one will probably scale best (educated guess). It will also be very hard to beat even with customized heuristics!
If needed, these slides provide a practical introduction to SAT solvers. Here we are using CDCL-based solvers, which are complete (they will always find a solution in finite time if there is one, and will always be able to prove there is no solution in finite time if there is none; memory of course also plays a role!).
More complex (linear) per-tile scoring-functions are hard to incorporate in general. This is where a (M)IP-approach can be better. But in terms of pure search SAT-solving is much faster in general.
The N=25 problem with my polyomino set takes ~1 second (and one could easily parallelize this on multiple granularity levels: the SAT solver's threading parameter vs. the outer loop; the latter will be explained later).
Of course the following holds:
as this is an NP-hard problem, there will be easy and non-easy instances
I did not do scientific benchmarks with many different sets of polyominos
it's to be expected that some sets are easier to solve than others
this is one possible SAT formulation (not the most trivial one!) out of infinitely many
each formulation has advantages and disadvantages
Idea
The general approach is to create a decision problem and transform it into CNF, which is then solved by highly efficient SAT solvers (here: cryptominisat; the CNF will be in DIMACS-CNF format), used as black-box solvers (no parameter tuning!).
As the goal is to optimize the number of filled tiles and we are using a decision problem, we need an outer loop that adds a minimum-tiles-used constraint and tries to solve it. If that is not successful, decrease this number. So in general we are calling the SAT solver multiple times (from scratch!).
There are many different formulations / transformations to CNF possible. Here we use (binary) decision variables X which indicate a placement. A placement is a tuple (polyomino, x_index, y_index) (the index marks the top-left field of the pattern). There is a one-to-one mapping between these variables and the possible placements of all polyominos.
The core idea is: search the space of all possible placement combinations for one solution which does not violate any constraint.
Additionally, we have decision-variables Y, which indicate a tile being filled. There are M*N such variables.
When having access to all possible placements, it's easy to calculate a collision set for each tile index (M*N of them). Given some fixed tile, we can check which placements can fill it and constrain the problem to select <= 1 of those. This constraint acts on X. In the (M)IP world this would probably be called the convex hull of the collisions.
n<=k constraints are ubiquitous in SAT solving and many different formulations are possible. A naive encoding needs an exponential number of clauses in general, which easily becomes infeasible. Using new variables, many variable-clause trade-offs are possible (see Tseitin encoding). I'm reusing one (old code; the only reason my code is Python-2-only) which has worked well for me in the past. It's based on translating hardware-style counter logic into CNF and provides good empirical and theoretical performance (see the paper). Of course there are many alternatives.
Additionally, we need to force the SAT solver not to simply set all variables to false. We have to add constraints describing the following (that's one approach):
if some field is used: there has to be at least one placement active (poly + x + y), which results in covering this field!
this is a basic logical implication easily formulated as one potentially big logical or
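As a small (hypothetical) illustration: if field (x, y) can only be covered by the placements behind variables X_a, X_b and X_c, the implication Y_(x,y) -> (X_a or X_b or X_c) becomes the single clause (not Y_(x,y)) or X_a or X_b or X_c, which in DIMACS form is just the line "-y a b c 0", with y, a, b, c the integer indices of those variables. This is exactly the per-field clause the code below emits.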
Then only the core loop is missing: try to fill N fields, then N-1, and so on until successful. This again uses the n<=k formulation mentioned earlier.
Code
This is python2-code, which needs the SAT-solver cryptominisat 5 in the directory the script is run from.
I'm also using tools from python's excellent scientific-stack.
# PYTHON 2!
import math
import copy
import subprocess
import numpy as np
import matplotlib.pyplot as plt # plotting-only
import seaborn as sns # plotting-only
np.set_printoptions(linewidth=120) # more nice console-output
""" Constants / Input
Example: 5 tetrominoes; no rotation """
M, N = 25, 25
polyominos = [np.array([[1,1,1,1]]),
np.array([[1,1],[1,1]]),
np.array([[1,0],[1,0], [1,1]]),
np.array([[1,0],[1,1],[0,1]]),
np.array([[1,1,1],[0,1,0]])]
""" Preprocessing
Calculate:
A: possible placements
B: covered positions
C: collisions between placements
"""
placements = []
covered = []
for p_ind, p in enumerate(polyominos):
    mP, nP = p.shape
    for x in range(M):
        for y in range(N):
            if x + mP <= M:      # assumption: no zero rows / cols in each p
                if y + nP <= N:  # could be more efficient
                    placements.append((p_ind, x, y))
                    cover = np.zeros((M,N), dtype=bool)
                    cover[x:x+mP, y:y+nP] = p
                    covered.append(cover)
covered = np.array(covered)
collisions = []
for m in range(M):
    for n in range(N):
        collision_set = np.flatnonzero(covered[:, m, n])
        collisions.append(collision_set)
""" Helper-function: Cardinality constraints """
# K-ARY CONSTRAINT GENERATION
# ###########################
# SINZ, Carsten. Towards an optimal CNF encoding of boolean cardinality constraints.
# CP, 2005, 3709. Jg., S. 827-831.
def next_var_index(start):
    next_var = start
    while(True):
        yield next_var
        next_var += 1

class s_index():
    def __init__(self, start_index):
        self.firstEnvVar = start_index

    def next(self, i, j, k):
        return self.firstEnvVar + i*k + j

def gen_seq_circuit(k, input_indices, next_var_index_gen):
    cnf_string = ''
    s_index_gen = s_index(next_var_index_gen.next())

    # write clauses of first partial sum (i.e. i=0)
    cnf_string += (str(-input_indices[0]) + ' ' + str(s_index_gen.next(0, 0, k)) + ' 0\n')
    for i in range(1, k):
        cnf_string += (str(-s_index_gen.next(0, i, k)) + ' 0\n')

    # write clauses for general case (i.e. 0 < i < n-1)
    for i in range(1, len(input_indices)-1):
        cnf_string += (str(-input_indices[i]) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
        cnf_string += (str(-s_index_gen.next(i-1, 0, k)) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
        for u in range(1, k):
            cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, u-1, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
            cnf_string += (str(-s_index_gen.next(i-1, u, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
        cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, k-1, k)) + ' 0\n')

    # last clause for last variable
    cnf_string += (str(-input_indices[-1]) + ' ' + str(-s_index_gen.next(len(input_indices)-2, k-1, k)) + ' 0\n')

    return (cnf_string, (len(input_indices)-1)*k, 2*len(input_indices)*k + len(input_indices) - 3*k - 1)

def gen_at_most_n_constraints(vars, start_var, n):
    constraint_string = ''
    used_clauses = 0
    used_vars = 0
    index_gen = next_var_index(start_var)
    circuit = gen_seq_circuit(n, vars, index_gen)
    constraint_string += circuit[0]
    used_clauses += circuit[2]
    used_vars += circuit[1]
    start_var += circuit[1]

    return [constraint_string, used_clauses, used_vars, start_var]

def parse_solution(output):
    # assumes there is one
    vars = []
    for line in output.split("\n"):
        if line:
            if line[0] == 'v':
                line_vars = list(map(lambda x: int(x), line.split()[1:]))
                vars.extend(line_vars)
    return vars

def solve(CNF):
    p = subprocess.Popen(["cryptominisat5.exe"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    result = p.communicate(input=CNF)[0]
    sat_line = result.find('s SATISFIABLE')
    if sat_line != -1:
        # solution found!
        vars = parse_solution(result)
        return True, vars
    else:
        return False, None
""" SAT-CNF: BASE """
X = np.arange(1, len(placements)+1) # decision-vars
# 1-index for CNF
Y = np.arange(len(placements)+1, len(placements)+1 + M*N).reshape(M,N)
next_var = len(placements)+1 + M*N # aux-var gen
n_clauses = 0
cnf = '' # slow string appends
# int-based would be better
# <= 1 for each collision-set
for cset in collisions:
    constraint_string, used_clauses, used_vars, next_var = \
        gen_at_most_n_constraints(X[cset].tolist(), next_var, 1)
    n_clauses += used_clauses
    cnf += constraint_string

# if field marked: one of covering placements active
for x in range(M):
    for y in range(N):
        covering_placements = X[np.flatnonzero(covered[:, x, y])]  # could reuse collisions
        clause = str(-Y[x,y])
        for i in covering_placements:
            clause += ' ' + str(i)
        clause += ' 0\n'
        cnf += clause
        n_clauses += 1
print('BASE CNF size')
print('clauses: ', n_clauses)
print('vars: ', next_var - 1)
""" SOLVE in loop -> decrease number of placed-fields until SAT """
print('CORE LOOP')
N_FIELD_HIT = M*N
while True:
    print(' N_FIELDS >= ', N_FIELD_HIT)

    # sum(y) >= N_FIELD_HIT
    # == sum(not y) <= M*N - N_FIELD_HIT
    cnf_final = copy.copy(cnf)
    n_clauses_final = n_clauses

    if N_FIELD_HIT == M*N:  # awkward special case
        constraint_string = ''.join([str(y) + ' 0\n' for y in Y.ravel()])
        n_clauses_final += N_FIELD_HIT
    else:
        constraint_string, used_clauses, used_vars, next_var = \
            gen_at_most_n_constraints((-Y).ravel().tolist(), next_var, M*N - N_FIELD_HIT)
        n_clauses_final += used_clauses

    n_vars_final = next_var - 1
    cnf_final += constraint_string
    cnf_final = 'p cnf ' + str(n_vars_final) + ' ' + str(n_clauses) + \
                ' \n' + cnf_final  # header

    status, sol = solve(cnf_final)
    if status:
        print(' SOL found: ', N_FIELD_HIT)

        """ Print sol """
        res = np.zeros((M, N), dtype=int)
        counter = 1
        for v in sol[:X.shape[0]]:
            if v > 0:
                p, x, y = placements[v-1]
                pM, pN = polyominos[p].shape
                poly_nnz = np.where(polyominos[p] != 0)
                x_inds, y_inds = x+poly_nnz[0], y+poly_nnz[1]
                res[x_inds, y_inds] = p+1
                counter += 1
        print(res)

        """ Plot """
        # very very ugly code; too lazy
        ax1 = plt.subplot2grid((5, 12), (0, 0), colspan=11, rowspan=5)
        ax_p0 = plt.subplot2grid((5, 12), (0, 11))
        ax_p1 = plt.subplot2grid((5, 12), (1, 11))
        ax_p2 = plt.subplot2grid((5, 12), (2, 11))
        ax_p3 = plt.subplot2grid((5, 12), (3, 11))
        ax_p4 = plt.subplot2grid((5, 12), (4, 11))

        ax_p0.imshow(polyominos[0] * 1, vmin=0, vmax=5)
        ax_p1.imshow(polyominos[1] * 2, vmin=0, vmax=5)
        ax_p2.imshow(polyominos[2] * 3, vmin=0, vmax=5)
        ax_p3.imshow(polyominos[3] * 4, vmin=0, vmax=5)
        ax_p4.imshow(polyominos[4] * 5, vmin=0, vmax=5)

        ax_p0.xaxis.set_major_formatter(plt.NullFormatter())
        ax_p1.xaxis.set_major_formatter(plt.NullFormatter())
        ax_p2.xaxis.set_major_formatter(plt.NullFormatter())
        ax_p3.xaxis.set_major_formatter(plt.NullFormatter())
        ax_p4.xaxis.set_major_formatter(plt.NullFormatter())
        ax_p0.yaxis.set_major_formatter(plt.NullFormatter())
        ax_p1.yaxis.set_major_formatter(plt.NullFormatter())
        ax_p2.yaxis.set_major_formatter(plt.NullFormatter())
        ax_p3.yaxis.set_major_formatter(plt.NullFormatter())
        ax_p4.yaxis.set_major_formatter(plt.NullFormatter())

        mask = (res == 0)
        sns.heatmap(res, cmap='viridis', mask=mask, cbar=False, square=True, linewidths=.1, ax=ax1)
        plt.tight_layout()
        plt.show()
        break

    N_FIELD_HIT -= 1  # binary-search could be viable in some cases
                      # but beware the empirical asymmetry in SAT-solvers:
                      # finding a solution vs. proving there is none!
Output console
BASE CNF size
('clauses: ', 31509)
('vars: ', 13910)
CORE LOOP
(' N_FIELDS >= ', 625)
(' N_FIELDS >= ', 624)
(' SOL found: ', 624)
[[3 2 2 2 2 1 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 2 2]
[3 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2]
[3 3 3 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2]
[2 2 3 1 1 1 1 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[2 2 3 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[1 1 1 1 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2]
[1 1 1 1 3 3 3 2 2 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1]
[2 2 1 1 1 1 3 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[2 2 2 2 2 2 3 3 3 2 2 2 2 1 1 1 1 2 2 2 2 2 2 2 2]
[2 2 2 2 2 2 2 2 3 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2]
[2 2 1 1 1 1 2 2 3 3 3 2 2 2 2 2 2 1 1 1 1 2 2 2 2]
[1 1 1 1 1 1 1 1 2 2 3 2 2 1 1 1 1 1 1 1 1 1 1 1 1]
[2 2 3 1 1 1 1 3 2 2 3 3 4 1 1 1 1 2 2 1 1 1 1 2 2]
[2 2 3 1 1 1 1 3 1 1 1 1 4 4 3 2 2 2 2 1 1 1 1 2 2]
[2 2 3 3 5 5 5 3 3 1 1 1 1 4 3 2 2 1 1 1 1 1 1 1 1]
[2 2 2 2 4 5 1 1 1 1 1 1 1 1 3 3 3 2 2 1 1 1 1 2 2]
[2 2 2 2 4 4 2 2 1 1 1 1 1 1 1 1 3 2 2 1 1 1 1 2 2]
[2 2 2 2 3 4 2 2 2 2 2 2 1 1 1 1 3 3 3 2 2 2 2 2 2]
[3 4 2 2 3 5 5 5 2 2 2 2 1 1 1 1 2 2 3 2 2 2 2 2 2]
[3 4 4 3 3 3 5 5 5 5 1 1 1 1 2 2 2 2 3 3 3 2 2 2 2]
[3 3 4 3 1 1 1 1 5 1 1 1 1 4 2 2 2 2 2 2 3 2 2 2 2]
[2 2 3 3 3 1 1 1 1 1 1 1 1 4 4 4 2 2 2 2 3 3 0 2 2]
[2 2 3 1 1 1 1 1 1 1 1 5 5 5 4 4 4 1 1 1 1 2 2 2 2]
[2 2 3 3 1 1 1 1 1 1 1 1 5 5 5 5 4 1 1 1 1 2 2 2 2]
[2 2 1 1 1 1 1 1 1 1 1 1 1 1 5 1 1 1 1 1 1 1 1 2 2]]
Output plot
One field cannot be covered in this parameterization!
Some other examples with a bigger set of patterns
Square M=N=61 (prime -> intuition: harder), where the base CNF has 450,723 clauses and 185,462 variables. There is an optimal packing!
Non-square M,N = 83,131 (both prime), where the base CNF has 1,346,511 clauses and 553,748 variables. There is an optimal packing!
One approach could be using integer programming. I'll implement this using the python pulp package, though packages are available for pretty much any programming language.
The basic idea is to define a decision variable for every possible placement location for every tile. If a decision variable takes value 1, then its associated tile is placed there. If it takes value 0, then it is not placed there. The objective is therefore to maximize the sum of the decision variables times the number of squares in the variable's tile --- this corresponds to placing the maximum number of squares possible on the board.
My code implements two constraints:
Each tile can only be placed once (below we will relax this constraint)
Each square can have at most one tile on it
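In symbols, writing x_p for the binary variable of a feasible placement p, tile(p) for the tile it uses and cells(p) for the squares it covers (notation introduced just for this summary), the model is:

maximize    sum_p |cells(p)| * x_p
subject to  sum over {p : tile(p) = t} of x_p <= 1     for every tile t
            sum over {p : s in cells(p)} of x_p <= 1   for every square s
            x_p in {0, 1}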
Here's the code for a set of five fixed tetrominoes; the small example outputs below come from a 4x5 grid (rows = 4, cols = 5), while the listing shows rows = cols = 25 as used for the larger run at the end:
import itertools
import pulp
import string
def covered(tile, base):
    return {(base[0] + t[0], base[1] + t[1]): True for t in tile}

tiles = [[(0,0), (1,0), (0,1), (0,2)],
         [(0,0), (1,0), (2,0), (3,0)],
         [(1,0), (0,1), (1,1), (2,0)],
         [(0,0), (1,0), (0,1), (1,1)],
         [(1,0), (0,1), (1,1), (2,1)]]
rows = 25
cols = 25
squares = {x: True for x in itertools.product(range(rows), range(cols))}
vars = list(itertools.product(range(rows), range(cols), range(len(tiles))))
vars = [x for x in vars if all([y in squares for y in covered(tiles[x[2]], (x[0], x[1])).keys()])]
x = pulp.LpVariable.dicts('tiles', vars, lowBound=0, upBound=1, cat=pulp.LpInteger)
mod = pulp.LpProblem('polyominoes', pulp.LpMaximize)
# Objective value is number of squares in tile
mod += sum([len(tiles[p[2]]) * x[p] for p in vars])
# Don't use any shape more than once
for tnum in range(len(tiles)):
    mod += sum([x[p] for p in vars if p[2] == tnum]) <= 1

# Each square can be covered by at most one shape
for s in squares:
    mod += sum([x[p] for p in vars if s in covered(tiles[p[2]], (p[0], p[1]))]) <= 1
# Solve and output
mod.solve()
out = [['-'] * cols for rep in range(rows)]
chars = string.ascii_uppercase + string.ascii_lowercase
numset = 0
for p in vars:
    if x[p].value() == 1.0:
        for off in tiles[p[2]]:
            out[p[0] + off[0]][p[1] + off[1]] = chars[numset]
        numset += 1

for row in out:
    print(''.join(row))
It obtains the following optimal solution:
AAAB-
A-BBC
DDBCC
DD--C
If we allow repeats (comment out the constraint limiting to one copy of each shape), then we can completely tile the grid:
ABCDD
ABCDD
ABCEE
ABCEE
It worked near-instantaneously for a 10x10 grid:
ABCCDDEEFF
ABCCDDEEFF
ABGHHIJJKK
ABGHHIJJKK
LLGMMINOPP
LLGMMINOPP
QQRRSTNOUV
QQRRSTNOUV
WWXXSTYYUV
WWXXSTYYUV
The code obtains an optimal solution for the 25x25 grid in 100 seconds of runtime, though unfortunately there aren't enough letters and numbers for my output code to print the solution.
I don't know if it's of any use to you, but I coded up a small, sketchy frame in Python. It doesn't place polyominos yet, but the functions are there - checking for dead empty spaces is still primitive, though, and needs a better approach. Then again, maybe it is all rubbish...
import functools
import itertools

M = 4  # x
N = 5  # y

field = [[9999]*(N+1)] + [[9999]+[0]*N+[9999] for _ in range(M)] + [[9999]*(N+1)]

def field_rd(p2d):
    return field[p2d[0]+1][p2d[1]+1]

def field_add(p2d, val):
    field[p2d[0]+1][p2d[1]+1] += val

def add2d(p, k):
    return p[0]+k[0], p[1]+k[1]

def norm(polymino_2d):
    x0, y0 = min(x for x, y in polymino_2d), min(y for x, y in polymino_2d)
    return tuple(sorted(map(lambda p: add2d(p, (-x0, -y0)), polymino_2d)))

def create_cutoff(occupied):
    """Receive a polymino and create the outer area of squares which could be cut off by a placement of this polymino"""
    cutoff = set(itertools.chain.from_iterable(map(lambda p: add2d(p, (x, y)), occupied) for (x, y) in [(-1,0),(1,0),(0,-1),(0,1)]))  # (-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1)
    return tuple(cutoff.difference(occupied))

def is_occupied(p2d):
    return field_rd(p2d) == 0

def is_cutoff(p2d):
    return not is_occupied(p2d) and all(map(is_occupied, map(lambda p: add2d(p, p2d), [(-1,0),(1,0),(0,-1),(0,1)])))

def polym_colliding(p2d, occupied):
    return any(map(is_occupied, map(lambda p: add2d(p, p2d), occupied)))

def polym_cutoff(p2d, cutoff):
    return any(map(is_cutoff, map(lambda p: add2d(p, p2d), cutoff)))

def put(p2d, occupied, polym_nr):
    for p in occupied:
        field_add(add2d(p2d, p), polym_nr)

def remove(p2d, occupied, polym_nr):
    for p in occupied:
        field_add(add2d(p2d, p), -polym_nr)

def place(p2d, polym_nr):
    """Try to place a polymino at point p2d. If it fits without cutting off unreachable single cells return True else False"""
    occupied = polym[polym_nr][0]
    if polym_colliding(p2d, occupied):
        return False
    put(p2d, occupied, polym_nr)
    cutoff = polym[polym_nr][1]
    if polym_cutoff(p2d, cutoff):
        remove(p2d, occupied, polym_nr)
        return False
    return True

def NxM_array(N, M):
    return [[0]*N for _ in range(M)]

def generate_all_polyminos(n):
    """Create all polyminos with size n"""
    def gen_recur(polymino, i, result):
        if i > 1:
            new_pts = set(itertools.starmap(add2d, itertools.product(polymino, [(-1,0),(1,0),(0,-1),(0,1)])))
            new_pts = new_pts.difference(polymino)
            for p in new_pts:
                gen_recur(polymino.union({p}), i-1, result)
        else:
            result.add(norm(polymino))
    # ---------------------------------------
    all_polyminos = set()
    gen_recur({(0,0)}, n, all_polyminos)
    return all_polyminos

print("All possible Tetris blocks (all orientations): ", generate_all_polyminos(4))
It's a problem on HackerRank. The link is here: fibonacci-finding-easy
It gives two initial values F(0) and F(1) of the recursive sequence F(n+2) = F(n+1) + F(n), assigns them to A and B respectively, and asks for the Nth item, output modulo (10^9 + 7). I know the classic way to solve this is fast matrix exponentiation, and I wrote it in Python 3. The tests in my IDE show no problems, but I don't know why my code always gets a timeout. Here is my code:
def mul22(a, b):
    r = [[0, 0], [0, 0]]
    r[0][0] = (a[0][0] * b[0][0] + a[0][1] * b[1][0]) % 1000000007
    r[0][1] = (a[0][0] * b[0][1] + a[0][1] * b[1][1]) % 1000000007
    r[1][0] = (a[1][0] * b[0][0] + a[1][1] * b[1][0]) % 1000000007
    r[1][1] = (a[1][0] * b[0][1] + a[1][1] * b[1][1]) % 1000000007
    return r

def MatrixPow(A, n):
    if n == 1:
        return A
    if n % 2 == 1:
        return mul22(mul22(MatrixPow(A, n // 2), MatrixPow(A, n // 2)), A)
    return mul22(MatrixPow(A, n // 2), MatrixPow(A, n // 2))

for i in range(int(input())):
    A, B, N = map(int, input().split())
    if N == 1:
        print(B % 1000000007)
    else:
        print(mul22(MatrixPow([[1, 1], [1, 0]], N - 1), [[B, 1], [A, 1]])[0][0] % 1000000007)
At first I thought the problem was that taking everything modulo 10 ** 9 + 7 makes the whole recursive process slow, but I tested many times in my IDE and everything is okay; there are no TLEs. Is there anything I missed?
The way you have written the MatrixPow function, it is not actually running in O(log n); its running time is O(n).
Consider this power function:
MOD = 10**9 + 7

def power_n(a, b):
    print(1)
    if b == 0:
        return 1
    if b % 2 == 1:
        return (((power_n(a, b//2) * power_n(a, b//2)) % MOD) * a) % MOD
    return (power_n(a, b//2) * power_n(a, b//2)) % MOD
and this:
def power_log(a, b):
    print(2)
    if b == 0:
        return 1
    k = power_log(a, b//2)
    if b % 2 == 1:
        return (((k*k) % MOD) * a) % MOD
    return (k*k) % MOD
The difference between the first and the second is that in the second case we go through the recursion tree only once (once we have a value, we save and reuse it), while in the first case we calculate it again and again.
Though they look similar, the first one behaves like a traditional loop and runs in O(n), while the second one is the real fast-power function and runs in O(log n).
PS: n here means b, i.e. the exponent.
EDIT:
Analysis:
(I just added a print statement to both functions and ran them; here is the result)
power_n(6,20)
1
1
1
... ("1" is printed 63 times in total, once per call)
Out[22]: 414469870
power_log(6,20)
2
2
2
2
2
2
Out[25]: 414469870
See the difference in the number of times each function is called.
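For completeness, the same fix applied to the MatrixPow function from the question (compute the half power once, then square it) could look like this minimal Python 3 sketch; it reuses mul22 from the question, and matrix_pow is just an illustrative name:

def matrix_pow(A, n):
    # Fast exponentiation: one recursive call per halving, O(log n) multiplications.
    if n == 1:
        return A
    half = matrix_pow(A, n // 2)
    result = mul22(half, half)   # reuse the half power instead of recomputing it
    if n % 2 == 1:
        result = mul22(result, A)
    return result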
I'm trying to come up with an algorithm that will print out all possible ways to sum N integers so that they total a given value.
Example. Print all ways to sum 4 integers so that they sum up to be 5.
Result should be something like:
5 0 0 0
4 1 0 0
3 2 0 0
3 1 1 0
2 3 0 0
2 2 1 0
2 1 2 0
2 1 1 1
1 4 0 0
1 3 1 0
1 2 2 0
1 2 1 1
1 1 3 0
1 1 2 1
1 1 1 2
This is based off Alinium's code.
I modified it so it prints out all the possible combinations, since his already does all the permutations.
Also, I don't think you need the for loop when n=1, because in that case, only one number should cause the sum to equal value.
Various other modifications to get boundary cases to work.
def sum(n, value):
    arr = [0]*n  # create an array of size n, filled with zeroes
    sumRecursive(n, value, 0, n, arr);

def sumRecursive(n, value, sumSoFar, topLevel, arr):
    if n == 1:
        if sumSoFar <= value:
            # Make sure it's in ascending order (or only level)
            if topLevel == 1 or (value - sumSoFar >= arr[-2]):
                arr[-1] = value - sumSoFar  # put it in the last index of arr
                print arr
    elif n > 0:
        # Make sure it's in ascending order
        start = 0
        if (n != topLevel):
            start = arr[(-1*n)-1]  # the value before this element
        for i in range(start, value+1):  # i = start...value
            arr[(-1*n)] = i  # put i in the n_th last index of arr
            sumRecursive(n-1, value, sumSoFar + i, topLevel, arr)
Running sum(4, 5) prints:
[0, 0, 0, 5]
[0, 0, 1, 4]
[0, 0, 2, 3]
[0, 1, 1, 3]
[1, 1, 1, 2]
In pure math, a way of summing integers to get a given total is called a partition. There is a lot of information around if you google for "integer partition". You are looking for integer partitions where there are a specific number of elements. I'm sure you could take one of the known generating mechanisms and adapt for this extra condition. Wikipedia has a good overview of the topic Partition_(number_theory). Mathematica even has a function to do what you want: IntegerPartitions[5, 4].
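For illustration, here is a small Python sketch of this idea: it generates the partitions of a total into a fixed number of non-increasing parts, with zeros allowed so that it matches the 4-slot examples above (the function name is just for this sketch):

def partitions_fixed_parts(total, parts, max_part=None):
    """Yield non-increasing tuples of `parts` non-negative integers summing to `total`."""
    if max_part is None:
        max_part = total
    if parts == 1:
        if total <= max_part:
            yield (total,)
        return
    for first in range(min(total, max_part), -1, -1):
        for rest in partitions_fixed_parts(total - first, parts - 1, first):
            yield (first,) + rest

for p in partitions_fixed_parts(5, 4):
    print(p)   # the 6 partitions of 5 into at most 4 parts, padded with zeros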
The key to solving the problem is recursion. Here's a working implementation in python. It prints out all possible permutations that sum up to the total. You'll probably want to get rid of the duplicate combinations, possibly by using some Set or hashing mechanism to filter them out.
def sum(n, value):
    arr = [0]*n  # create an array of size n, filled with zeroes
    sumRecursive(n, value, 0, n, arr);

def sumRecursive(n, value, sumSoFar, topLevel, arr):
    if n == 1:
        if sumSoFar > value:
            return False
        else:
            for i in range(value+1):  # i = 0...value
                if (sumSoFar + i) == value:
                    arr[(-1*n)] = i  # put i in the n_th last index of arr
                    print arr;
                    return True
    else:
        for i in range(value+1):  # i = 0...value
            arr[(-1*n)] = i  # put i in the n_th last index of arr
            if sumRecursive(n-1, value, sumSoFar + i, topLevel, arr):
                if (n == topLevel):
                    print "\n"
With some extra effort, this can probably be simplified to get rid of some of the parameters I am passing to the recursive function. As suggested by redcayuga's pseudo code, using a stack, instead of manually managing an array, would be a better idea too.
I haven't tested this:
procedure allSum (int tot, int n, int desiredTotal)
    if n > 0
        for (int i = tot; i >= 0; i--) {
            push i onto stack;
            allSum(tot-i, n-1, desiredTotal);
            pop top of stack;
        }
    else if n == 0
        if stack sums to desiredTotal then print the stack end if
    end if
I'm sure there's a better way to do this.
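For what it's worth, a direct Python translation of that pseudocode (assuming it is called as allSum(desiredTotal, n, desiredTotal), so tot starts out as the target sum) might look like this:

def all_sum(tot, n, desired_total, stack=None):
    # Straight translation of the pseudocode above.
    if stack is None:
        stack = []
    if n > 0:
        for i in range(tot, -1, -1):     # i = tot ... 0
            stack.append(i)              # push i onto the stack
            all_sum(tot - i, n - 1, desired_total, stack)
            stack.pop()                  # pop top of the stack
    elif n == 0:
        if sum(stack) == desired_total:  # stack sums to the desired total
            print(stack)

all_sum(5, 4, 5)   # compositions of 5 into 4 non-negative parts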
I've found a Ruby way with domain specification, based on Alinium's code:
class Domain_partition
  attr_reader :results,
              :domain,
              :sum,
              :size

  def initialize(_dom, _size, _sum)
    @domain = _dom.is_a?(Array) ? _dom.sort : _dom.to_a
    @results, @sum, @size = [], _sum, _size
    arr = [0]*size  # create an array of size n, filled with zeroes
    sumRecursive(size, 0, arr)
  end

  def sumRecursive(n, sumSoFar, arr)
    if n == 1
      # Make sure it's in ascending order (or only level)
      if sum - sumSoFar >= arr[-2] and @domain.include?(sum - sumSoFar)
        final_arr = Array.new(arr)
        final_arr[-1] = sum - sumSoFar  # put it in the n_th last index of arr
        @results << final_arr
      end
    elsif n > 1
      # ********* dom_selector ********
      start = (n != size ? arr[(-1*n)-1] : domain[0])
      dom_bounds = (start*(n-1)..domain.last*(n-1))
      restricted_dom = domain.select do |x|
        if x < start
          false; next
        end
        if size-n > 0
          if dom_bounds.cover? sum-(arr.first(size-n).inject(:+)+x) then true
          else false end
        else
          dom_bounds.cover?(sum+x) ? true : false
        end
      end  # ***************************
      for i in restricted_dom
        _arr = Array.new(arr)
        _arr[(-1*n)] = i
        sumRecursive(n-1, sumSoFar + i, _arr)
      end
    end
  end
end
a=Domain_partition.new (-6..6),10,0
p a
b=Domain_partition.new [-4,-2,-1,1,2,3],10,0
p b
If you're interested in generating (lexically) ordered integer partitions, i.e. unique unordered sets of S positive integers (no 0's) that sum to N, then try the following. (unordered simply means that [1,2,1] and [1,1,2] are the same partition)
The problem doesn't need recursion and is quickly handled because the concept of finding the next lexical restricted partition is actually very simple...
In concept: Starting from the last addend (integer), find the first instance where the difference between two addends is greater than 1. Split the partition in two at that point. Remove 1 from the higher integer (which will be the last integer in one part) and add 1 to the lower integer (the first integer of the latter part). Then find the first lexically ordered partition for the latter part, having the new largest integer as the maximum addend value. I use Sage to find the first lexical partition because it's lightning fast, but it's easily done without it. Finally, join the two portions and voila! You have the next lexical partition of N having S parts.
e.g. [6,5,3,2,2] -> [6,5],[3,2,2] -> [6,4],[4,2,2] -> [6,4],[4,3,1] -> [6,4,4,3,1]
So, in Python and calling Sage for the minor task of finding the first lexical partition given n and s parts...
from sage.all import *

# The main function will need to recognize the most even partition possible (i.e. the last
# lexical partition) so it can loop back to the first lexical partition if need be.
def most_even_partition(n, s):
    most_even = [int(floor(float(n)/float(s)))]*s
    _remainder = int(n%s)
    j = 0
    while _remainder > 0:
        most_even[j] += 1
        _remainder -= 1
        j += 1
    return most_even

def portion(alist, indices):
    return [alist[i:j] for i, j in zip([0]+indices, indices+[None])]

def next_restricted_part(p, n, s):
    if p == most_even_partition(n, s): return Partitions(n, length=s).first()
    for i in enumerate(reversed(p)):
        if i[1] - p[-1] > 1:
            if i[0] == (s-1):
                return Partitions(n, length=s, max_part=(i[1]-1)).first()
            else:
                parts = portion(p, [s-i[0]-1])  # split p (soup?)
                h1 = parts[0]
                h2 = parts[1]
                next = list(Partitions(sum(h2), length=len(h2), max_part=(h2[0]-1)).first())
                return h1 + next
If you want zeros (not actual integer partitions), then the functions only need small modifications.
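As a quick check (assuming Sage is available), feeding in the example partition from the walkthrough above should reproduce its result:

p = [6, 5, 3, 2, 2]
print(next_restricted_part(p, sum(p), len(p)))   # expected: [6, 4, 4, 3, 1]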
Try this code; I hope it is easier to understand. I tested it, and it generates the correct sequence.
#include <stdio.h>

int list[100];  // holds the current partition; assumed large enough for the inputs used here

// note: the default argument makes this C++; call it as partition(n)
void partition(int n, int m = 0)
{
    int i;
    // if the partition is done
    if(n == 0){
        // Output the result
        for(i = 0; i < m; ++i)
            printf("%d ", list[i]);
        printf("\n");
        return;
    }
    // Do the split from large to small int
    for(i = n; i > 0; --i){
        // only use i if nothing has been partitioned yet, or
        // if i is no larger than the previous partition number
        if(m == 0 || i <= list[m - 1]){
            // store the partition int
            list[m] = i;
            // partition the rest
            partition(n - i, m + 1);
        }
    }
}
Ask for clarification, if required.
Here is the output for partition(6) and partition(10):
6
5 1
4 2
4 1 1
3 3
3 2 1
3 1 1 1
2 2 2
2 2 1 1
2 1 1 1 1
1 1 1 1 1 1
10
9 1
8 2
8 1 1
7 3
7 2 1
7 1 1 1
6 4
6 3 1
6 2 2
6 2 1 1
6 1 1 1 1
5 5
5 4 1
5 3 2
5 3 1 1
5 2 2 1
5 2 1 1 1
5 1 1 1 1 1
4 4 2
4 4 1 1
4 3 3
4 3 2 1
4 3 1 1 1
4 2 2 2
4 2 2 1 1
4 2 1 1 1 1
4 1 1 1 1 1 1
3 3 3 1
3 3 2 2
3 3 2 1 1
3 3 1 1 1 1
3 2 2 2 1
3 2 2 1 1 1
3 2 1 1 1 1 1
3 1 1 1 1 1 1 1
2 2 2 2 2
2 2 2 2 1 1
2 2 2 1 1 1 1
2 2 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1