How to implement a wrap-around index?

I want a wrap-around index like 1 2 3 2 1 2 3 ..., where the frame size is 3. How do I implement it? Is there a term for this?
for i in 1..100 {
    let idx = loop_index(i);
    print!("{} ", idx);
}
Expected output for frame 3:
1 2 3 2 1 2 3 2 1...
Expected output for frame 4:
1 2 3 4 3 2 1 2 3 4 3 2 1...

For a size of 3, notice that the sequence 1 2 3 2 has length 4, and then it repeats. In general, for size n, the period has length 2*(n-1). If we take i % (2*(n-1)), the task becomes simpler: turn the sequence 0, 1, 2, ..., 2*(n-1)-1 into 1, 2, ..., (n-1), n, (n-1), ..., 2. And this can be done using abs and basic arithmetic:
n = 3
r = 2 * (n - 1)
for i in range(20):
    print(n - abs(n - 1 - (i % r)))

When you reach the top number, you start decreasing; when you reach the bottom number, you start increasing.
Don't change direction until you reach the top or bottom number.
def count_up_and_down(bottom, top, length):
    assert(bottom < top)
    direction = +1
    x = bottom
    for _ in range(length):
        yield x
        direction = -1 if x == top else +1 if x == bottom else direction
        x = x + direction

for i in count_up_and_down(1, 4, 10):
    print(i, end=' ')
# 1 2 3 4 3 2 1 2 3 4
Alternatively, combining two ranges with itertools:
from itertools import chain, cycle, islice

def count_up_and_down(bottom, top, length):
    return islice(cycle(chain(range(bottom, top), range(top, bottom, -1))), length)

for i in count_up_and_down(1, 4, 10):
    print(i, end=' ')
# 1 2 3 4 3 2 1 2 3 4

Implementation: Algorithm for a special distribution Problem

We are given a number x, and a set of n coins with denominations v1, v2, …, vn.
The coins are to be divided between Alice and Bob, with the restriction that each person's coins must add up to at least x.
For example, if x = 1, n = 2, and v1 = v2 = 2, then there are two possible distributions: one where Alice gets coin #1 and Bob gets coin #2, and one with the reverse. (These distributions are considered distinct even though both coins have the same denomination.)
I'm interested in counting the possible distributions. I'm pretty sure this can be done in O(nx) time and O(n+x) space using dynamic programming; but I don't see how.
Count the ways for one person to get less than x, double it, and subtract that from double the total number of ways to divide the collection in two (the Stirling number of the second kind {n, 2}).
For example, {2, 3, 3, 5}, x = 5. Track the number of ways to reach each sum below x as the coins are added:

i   coin
0   2: 1      (one way to make 2)
1   3: 1      (one way to make 3; adding it to the 2 is too much)
2   3: 2      (a second way to make 3)
3   5: N/A    (≥ x)

That gives 3 ways for one person to get less than 5.
The total number of ways to partition a set of 4 items in two is {4, 2} = 7.

2 * 7 - 2 * 3 = 8
The Python code below uses MBo's routine. If you like this answer, please consider up-voting that answer.
# Stirling Algorithm
# Cod3d by EXTR3ME
# https://extr3metech.wordpress.com
def stirling(n,k):
    n1=n
    k1=k
    if n<=0:
        return 1
    elif k<=0:
        return 0
    elif (n==0 and k==0):
        return -1
    elif n!=0 and n==k:
        return 1
    elif n<k:
        return 0
    else:
        temp1=stirling(n1-1,k1)
        temp1=k1*temp1
        return (k1*(stirling(n1-1,k1)))+stirling(n1-1,k1-1)

def f(coins, x):
    a = [1] + (x-1) * [0]
    # Code by MBo
    # https://stackoverflow.com/a/53418438/2034787
    for c in coins:
        for i in xrange(x - 1, c - 1, -1):
            if a[i - c] > 0:
                a[i] = a[i] + a[i - c]
    return 2 * (stirling(len(coins), 2) - sum(a) + 1)

print f([2,3,3,5], 5) # 8
print f([1,2,3,4,4], 5) # 16
If the sum of all coins is S, then the first person can get any amount of money from x to S-x.
Make an array A of length S-x+1 and fill it with the number of ways to compose each value i from the given coins (a kind of Coin Change problem).
To provide uniqueness (so that C1+C2 and C2+C1 are not counted as two variants), fill the array in reverse direction:
A[0] = 1
for C in Coins:
    for i = S-x downto C:
        if A[i - C] > 0:
            A[i] = A[i] + A[i - C]   //we can compose value i as (i-C) and C

then sum the A entries in the range x..S-x
Example for coins 2, 3, 3, 5 and x = 5.
S = 13, S-x = 8
Array state after using the coins in order (dots are zero entries):

idx:     0  1  2  3  4  5  6  7  8
coin 2:  1  .  1  .  .  .  .  .  .
coin 3:  1  .  1  1  .  1  .  .  .
coin 3:  1  .  1  2  .  2  1  .  1
coin 5:  1  .  1  2  .  3  1  1  3
So there are 8 variants to distribute these coins. Quick check (3' denotes the second coin 3; the bar separates the two shares):

2 3     | 3' 5
2 3'    | 3 5
2 3 3'  | 5
2 5     | 3 3'
3 3'    | 2 5
3 5     | 2 3'
3' 5    | 2 3
5       | 2 3 3'
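
For reference, a direct Python transcription of this routine might look like the following (the function name is mine); it simply counts the subsets whose sum lands in the range x..S-x:

def count_distributions(coins, x):
    S = sum(coins)
    if S < 2 * x:                          # impossible to give both persons at least x
        return 0
    A = [0] * (S - x + 1)
    A[0] = 1                               # one way to select nothing (sum 0)
    for c in coins:
        for i in range(S - x, c - 1, -1):  # reverse direction: each coin used at most once
            if A[i - c] > 0:
                A[i] += A[i - c]
    return sum(A[x:])                      # sums x..S-x; the complement then also has >= x

print(count_distributions([2, 3, 3, 5], 5))     # 8
print(count_distributions([1, 2, 3, 4, 4], 5))  # 16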
You can also solve it in O(|A| * x^2) time and memory by adding memoization to this DP:

solve(A, pos, sum1, sum2):
    if (pos == A.length) return sum1 == x && sum2 == x
    return solve(A, pos + 1, min(sum1 + A[pos], x), sum2) +
           solve(A, pos + 1, sum1, min(sum2 + A[pos], x))

print(solve(A, 0, 0, 0))

So, in terms of time complexity, depending on whether x^2 < sum you could use this or the answer provided by @MBo. If you care more about space, this one is better only when |A| * x^2 < sum - x.
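
A Python sketch of that memoized recursion might look like this (the function name and use of functools are mine, not from the answer); capping both running sums at x keeps the number of states at O(|A| * x^2):

from functools import lru_cache

def count_distributions_dp(coins, x):
    n = len(coins)

    @lru_cache(maxsize=None)
    def solve(pos, sum1, sum2):
        # each coin goes either to person 1 or to person 2; sums are capped at x
        if pos == n:
            return 1 if (sum1 == x and sum2 == x) else 0
        c = coins[pos]
        return (solve(pos + 1, min(sum1 + c, x), sum2) +
                solve(pos + 1, sum1, min(sum2 + c, x)))

    return solve(0, 0, 0)

print(count_distributions_dp([2, 3, 3, 5], 5))     # 8
print(count_distributions_dp([1, 2, 3, 4, 4], 5))  # 16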

2D bin packing on a grid

I have an n × m grid and a collection of polyominos. I would like to know if it is possible to pack them into the grid: no overlapping or rotation is allowed.
I expect that like most packing problems this version is NP-hard and difficult to approximate, so I'm not expecting anything crazy, but an algorithm that could find reasonable packings on a grid around 25 × 25 and be fairly comprehensive around 10 × 10 would be great. (My tiles are mostly tetrominos -- four blocks -- but they could have 5–9+ blocks.)
I'll take whatever anyone has to offer: an algorithm, a paper, an existing program which can be adapted.
Here is a prototype-like SAT-solver approach, which tackles:
a-priori fixed polyomino patterns (see Constants / Input in code)
if rotations should be allowed, rotated pieces have to be added to the set
every polyomino can be placed 0-inf times
there is no scoring-mechanic besides:
the number of non-covered tiles is minimized!
Considering classic off-the-shelf methods for combinatorial optimization (SAT, CP, MIP), this one will probably scale best (educated guess). It will also be very hard to beat even with customized heuristics!
If needed, these slides provide a practical introduction to SAT-solvers. Here we are using CDCL-based solvers, which are complete (they will always find a solution in finite time if there is one, and will always be able to prove there is none in finite time otherwise; memory of course also plays a role!).
More complex (linear) per-tile scoring functions are hard to incorporate in general. This is where a (M)IP approach can be better. But in terms of pure search, SAT-solving is much faster in general.
The N=25 problem with my polyomino set takes ~1 second (and one could easily parallelize this on multiple granularity levels -> SAT-solver (threading param) vs. outer loop; the latter will be explained later).
Of course the following holds:
as this is an NP-hard problem, there will be easy and non-easy instances
I did not do scientific benchmarks with many different sets of polyominos
it's to be expected that some sets are easier to solve than others
this is one possible SAT formulation (not the most trivial!) of infinitely many
each formulation has advantages and disadvantages
Idea
The general approach is to create a decision problem and transform it into CNF, which is then solved by highly efficient SAT-solvers (here: cryptominisat; the CNF will be in DIMACS CNF format), used as black-box solvers (no parameter-tuning!).
As the goal is to optimize the number of filled tiles and we are using a decision problem, we need an outer loop that adds a minimum-tiles-used constraint and tries to solve it. If that is not successful, decrease this number. So in general we are calling the SAT-solver multiple times (from scratch!).
There are many different formulations / transformations to CNF possible. Here we use (binary) decision-variables X which indicate a placement. A placement is a tuple like polyomino, x_index, y_index (this index marks the top-left field of some pattern). There is a one-to-one mapping between the number of variables and the number of possible placements of all polyominos.
The core idea is: search in the space of all possible placement-combinations for one solution, which is not invalidating some constraints.
Additionally, we have decision-variables Y, which indicate a tile being filled. There are M*N such variables.
When having access to all possible placements, it's easy to calculate a collision-set for each tile-index (M*N). Given some fixed tile, we can check which placements can fill this one and constrain the problem to only select <=1 of those. This is active on X. In the (M)IP world this probably would be called convex-hull for the collisions.
n<=k constraints are ubiquitous in SAT-solving and many different formulations are possible. A naive encoding would in general need an exponential number of clauses, which easily becomes infeasible. Using new variables, many variable-clause trade-offs (see Tseitin-encoding) are possible. I'm reusing one (old code; the only reason why my code is python2-only) which has worked well for me in the past. It's based on describing hardware-based counter logic in CNF and provides good empirical and theoretical performance (see paper). Of course there are many alternatives.
Additionally, we need to force the SAT-solver not to make all variables negative. We have to add constraints describing the following (that's one approach):
if some field is used: there has to be at least one placement active (poly + x + y), which results in covering this field!
this is a basic logical implication easily formulated as one potentially big logical or
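As a concrete illustration (using the variable names of the code below): if tile t can be covered by placements p1, ..., pm, the implication Y_t -> (X_p1 or X_p2 or ... or X_pm) is a single clause (not Y_t) or X_p1 or ... or X_pm, i.e. one DIMACS line per tile of the form "-Yt Xp1 Xp2 ... Xpm 0" built from the integer indices of those variables.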
Then only the core loop is missing: try to fill N fields, then N-1, and so on until successful. This again uses the n<=k formulation mentioned earlier.
Code
This is python2-code, which needs the SAT-solver cryptominisat 5 in the directory the script is run from.
I'm also using tools from python's excellent scientific-stack.
# PYTHON 2!
import math
import copy
import subprocess
import numpy as np
import matplotlib.pyplot as plt  # plotting-only
import seaborn as sns            # plotting-only

np.set_printoptions(linewidth=120)  # more nice console-output

""" Constants / Input
    Example: 5 tetrominoes; no rotation """
M, N = 25, 25
polyominos = [np.array([[1,1,1,1]]),
              np.array([[1,1],[1,1]]),
              np.array([[1,0],[1,0],[1,1]]),
              np.array([[1,0],[1,1],[0,1]]),
              np.array([[1,1,1],[0,1,0]])]

""" Preprocessing
    Calculate:
      A: possible placements
      B: covered positions
      C: collisions between placements
"""
placements = []
covered = []
for p_ind, p in enumerate(polyominos):
    mP, nP = p.shape
    for x in range(M):
        for y in range(N):
            if x + mP <= M:      # assumption: no zero rows / cols in each p
                if y + nP <= N:  # could be more efficient
                    placements.append((p_ind, x, y))
                    cover = np.zeros((M,N), dtype=bool)
                    cover[x:x+mP, y:y+nP] = p
                    covered.append(cover)
covered = np.array(covered)

collisions = []
for m in range(M):
    for n in range(N):
        collision_set = np.flatnonzero(covered[:, m, n])
        collisions.append(collision_set)
""" Helper-function: Cardinality constraints """
# K-ARY CONSTRAINT GENERATION
# ###########################
# SINZ, Carsten. Towards an optimal CNF encoding of boolean cardinality constraints.
# CP, 2005, 3709. Jg., S. 827-831.
def next_var_index(start):
next_var = start
while(True):
yield next_var
next_var += 1
class s_index():
def __init__(self, start_index):
self.firstEnvVar = start_index
def next(self,i,j,k):
return self.firstEnvVar + i*k +j
def gen_seq_circuit(k, input_indices, next_var_index_gen):
cnf_string = ''
s_index_gen = s_index(next_var_index_gen.next())
# write clauses of first partial sum (i.e. i=0)
cnf_string += (str(-input_indices[0]) + ' ' + str(s_index_gen.next(0,0,k)) + ' 0\n')
for i in range(1, k):
cnf_string += (str(-s_index_gen.next(0, i, k)) + ' 0\n')
# write clauses for general case (i.e. 0 < i < n-1)
for i in range(1, len(input_indices)-1):
cnf_string += (str(-input_indices[i]) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
cnf_string += (str(-s_index_gen.next(i-1, 0, k)) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
for u in range(1, k):
cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, u-1, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
cnf_string += (str(-s_index_gen.next(i-1, u, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, k-1, k)) + ' 0\n')
# last clause for last variable
cnf_string += (str(-input_indices[-1]) + ' ' + str(-s_index_gen.next(len(input_indices)-2, k-1, k)) + ' 0\n')
return (cnf_string, (len(input_indices)-1)*k, 2*len(input_indices)*k + len(input_indices) - 3*k - 1)
def gen_at_most_n_constraints(vars, start_var, n):
constraint_string = ''
used_clauses = 0
used_vars = 0
index_gen = next_var_index(start_var)
circuit = gen_seq_circuit(n, vars, index_gen)
constraint_string += circuit[0]
used_clauses += circuit[2]
used_vars += circuit[1]
start_var += circuit[1]
return [constraint_string, used_clauses, used_vars, start_var]
def parse_solution(output):
# assumes there is one
vars = []
for line in output.split("\n"):
if line:
if line[0] == 'v':
line_vars = list(map(lambda x: int(x), line.split()[1:]))
vars.extend(line_vars)
return vars
def solve(CNF):
p = subprocess.Popen(["cryptominisat5.exe"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
result = p.communicate(input=CNF)[0]
sat_line = result.find('s SATISFIABLE')
if sat_line != -1:
# solution found!
vars = parse_solution(result)
return True, vars
else:
return False, None
""" SAT-CNF: BASE """
X = np.arange(1, len(placements)+1) # decision-vars
# 1-index for CNF
Y = np.arange(len(placements)+1, len(placements)+1 + M*N).reshape(M,N)
next_var = len(placements)+1 + M*N # aux-var gen
n_clauses = 0
cnf = '' # slow string appends
# int-based would be better
# <= 1 for each collision-set
for cset in collisions:
constraint_string, used_clauses, used_vars, next_var = \
gen_at_most_n_constraints(X[cset].tolist(), next_var, 1)
n_clauses += used_clauses
cnf += constraint_string
# if field marked: one of covering placements active
for x in range(M):
for y in range(N):
covering_placements = X[np.flatnonzero(covered[:, x, y])] # could reuse collisions
clause = str(-Y[x,y])
for i in covering_placements:
clause += ' ' + str(i)
clause += ' 0\n'
cnf += clause
n_clauses += 1
print('BASE CNF size')
print('clauses: ', n_clauses)
print('vars: ', next_var - 1)
""" SOLVE in loop -> decrease number of placed-fields until SAT """
print('CORE LOOP')
N_FIELD_HIT = M*N
while True:
print(' N_FIELDS >= ', N_FIELD_HIT)
# sum(y) >= N_FIELD_HIT
# == sum(not y) <= M*N - N_FIELD_HIT
cnf_final = copy.copy(cnf)
n_clauses_final = n_clauses
if N_FIELD_HIT == M*N: # awkward special case
constraint_string = ''.join([str(y) + ' 0\n' for y in Y.ravel()])
n_clauses_final += N_FIELD_HIT
else:
constraint_string, used_clauses, used_vars, next_var = \
gen_at_most_n_constraints((-Y).ravel().tolist(), next_var, M*N - N_FIELD_HIT)
n_clauses_final += used_clauses
n_vars_final = next_var - 1
cnf_final += constraint_string
cnf_final = 'p cnf ' + str(n_vars_final) + ' ' + str(n_clauses) + \
' \n' + cnf_final # header
status, sol = solve(cnf_final)
if status:
print(' SOL found: ', N_FIELD_HIT)
""" Print sol """
res = np.zeros((M, N), dtype=int)
counter = 1
for v in sol[:X.shape[0]]:
if v>0:
p, x, y = placements[v-1]
pM, pN = polyominos[p].shape
poly_nnz = np.where(polyominos[p] != 0)
x_inds, y_inds = x+poly_nnz[0], y+poly_nnz[1]
res[x_inds, y_inds] = p+1
counter += 1
print(res)
""" Plot """
# very very ugly code; too lazy
ax1 = plt.subplot2grid((5, 12), (0, 0), colspan=11, rowspan=5)
ax_p0 = plt.subplot2grid((5, 12), (0, 11))
ax_p1 = plt.subplot2grid((5, 12), (1, 11))
ax_p2 = plt.subplot2grid((5, 12), (2, 11))
ax_p3 = plt.subplot2grid((5, 12), (3, 11))
ax_p4 = plt.subplot2grid((5, 12), (4, 11))
ax_p0.imshow(polyominos[0] * 1, vmin=0, vmax=5)
ax_p1.imshow(polyominos[1] * 2, vmin=0, vmax=5)
ax_p2.imshow(polyominos[2] * 3, vmin=0, vmax=5)
ax_p3.imshow(polyominos[3] * 4, vmin=0, vmax=5)
ax_p4.imshow(polyominos[4] * 5, vmin=0, vmax=5)
ax_p0.xaxis.set_major_formatter(plt.NullFormatter())
ax_p1.xaxis.set_major_formatter(plt.NullFormatter())
ax_p2.xaxis.set_major_formatter(plt.NullFormatter())
ax_p3.xaxis.set_major_formatter(plt.NullFormatter())
ax_p4.xaxis.set_major_formatter(plt.NullFormatter())
ax_p0.yaxis.set_major_formatter(plt.NullFormatter())
ax_p1.yaxis.set_major_formatter(plt.NullFormatter())
ax_p2.yaxis.set_major_formatter(plt.NullFormatter())
ax_p3.yaxis.set_major_formatter(plt.NullFormatter())
ax_p4.yaxis.set_major_formatter(plt.NullFormatter())
mask = (res==0)
sns.heatmap(res, cmap='viridis', mask=mask, cbar=False, square=True, linewidths=.1, ax=ax1)
plt.tight_layout()
plt.show()
break
N_FIELD_HIT -= 1 # binary-search could be viable in some cases
# but beware the empirical asymmetry in SAT-solvers:
# finding solution vs. proving there is none!
Output console
BASE CNF size
('clauses: ', 31509)
('vars: ', 13910)
CORE LOOP
(' N_FIELDS >= ', 625)
(' N_FIELDS >= ', 624)
(' SOL found: ', 624)
[[3 2 2 2 2 1 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 2 2]
[3 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2]
[3 3 3 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2]
[2 2 3 1 1 1 1 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[2 2 3 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[1 1 1 1 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2]
[1 1 1 1 3 3 3 2 2 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1]
[2 2 1 1 1 1 3 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[2 2 2 2 2 2 3 3 3 2 2 2 2 1 1 1 1 2 2 2 2 2 2 2 2]
[2 2 2 2 2 2 2 2 3 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2]
[2 2 1 1 1 1 2 2 3 3 3 2 2 2 2 2 2 1 1 1 1 2 2 2 2]
[1 1 1 1 1 1 1 1 2 2 3 2 2 1 1 1 1 1 1 1 1 1 1 1 1]
[2 2 3 1 1 1 1 3 2 2 3 3 4 1 1 1 1 2 2 1 1 1 1 2 2]
[2 2 3 1 1 1 1 3 1 1 1 1 4 4 3 2 2 2 2 1 1 1 1 2 2]
[2 2 3 3 5 5 5 3 3 1 1 1 1 4 3 2 2 1 1 1 1 1 1 1 1]
[2 2 2 2 4 5 1 1 1 1 1 1 1 1 3 3 3 2 2 1 1 1 1 2 2]
[2 2 2 2 4 4 2 2 1 1 1 1 1 1 1 1 3 2 2 1 1 1 1 2 2]
[2 2 2 2 3 4 2 2 2 2 2 2 1 1 1 1 3 3 3 2 2 2 2 2 2]
[3 4 2 2 3 5 5 5 2 2 2 2 1 1 1 1 2 2 3 2 2 2 2 2 2]
[3 4 4 3 3 3 5 5 5 5 1 1 1 1 2 2 2 2 3 3 3 2 2 2 2]
[3 3 4 3 1 1 1 1 5 1 1 1 1 4 2 2 2 2 2 2 3 2 2 2 2]
[2 2 3 3 3 1 1 1 1 1 1 1 1 4 4 4 2 2 2 2 3 3 0 2 2]
[2 2 3 1 1 1 1 1 1 1 1 5 5 5 4 4 4 1 1 1 1 2 2 2 2]
[2 2 3 3 1 1 1 1 1 1 1 1 5 5 5 5 4 1 1 1 1 2 2 2 2]
[2 2 1 1 1 1 1 1 1 1 1 1 1 1 5 1 1 1 1 1 1 1 1 2 2]]
Output plot
One field cannot be covered in this parameterization!
Some other examples with a bigger set of patterns
Square M=N=61 (prime -> intuition: harder), where the base CNF has 450,723 clauses and 185,462 variables. There is an optimal packing!
Non-square M,N = 83,131 (both prime), where the base CNF has 1,346,511 clauses and 553,748 variables. There is an optimal packing!
One approach could be using integer programming. I'll implement this using the python pulp package, though packages are available for pretty much any programming language.
The basic idea is to define a decision variable for every possible placement location for every tile. If a decision variable takes value 1, then its associated tile is placed there. If it takes value 0, then it is not placed there. The objective is therefore to maximize the sum of the decision variables times the number of squares in the variable's tile --- this corresponds to placing the maximum number of squares possible on the board.
My code implements two constraints:
Each tile can only be placed once (below we will relax this constraint)
Each square can have at most one tile on it
Here's the code for a set of five fixed tetrominoes (the sample solutions shown below are for a 4x5 grid; in the listing, rows and cols are set to 25):
import itertools
import pulp
import string

def covered(tile, base):
    return {(base[0] + t[0], base[1] + t[1]): True for t in tile}

tiles = [[(0,0), (1,0), (0,1), (0,2)],
         [(0,0), (1,0), (2,0), (3,0)],
         [(1,0), (0,1), (1,1), (2,0)],
         [(0,0), (1,0), (0,1), (1,1)],
         [(1,0), (0,1), (1,1), (2,1)]]

rows = 25
cols = 25
squares = {x: True for x in itertools.product(range(rows), range(cols))}
vars = list(itertools.product(range(rows), range(cols), range(len(tiles))))
vars = [x for x in vars if all([y in squares for y in covered(tiles[x[2]], (x[0], x[1])).keys()])]
x = pulp.LpVariable.dicts('tiles', vars, lowBound=0, upBound=1, cat=pulp.LpInteger)
mod = pulp.LpProblem('polyominoes', pulp.LpMaximize)

# Objective value is number of squares in tile
mod += sum([len(tiles[p[2]]) * x[p] for p in vars])

# Don't use any shape more than once
for tnum in range(len(tiles)):
    mod += sum([x[p] for p in vars if p[2] == tnum]) <= 1

# Each square can be covered by at most one shape
for s in squares:
    mod += sum([x[p] for p in vars if s in covered(tiles[p[2]], (p[0], p[1]))]) <= 1

# Solve and output
mod.solve()
out = [['-'] * cols for rep in range(rows)]
chars = string.ascii_uppercase + string.ascii_lowercase
numset = 0
for p in vars:
    if x[p].value() == 1.0:
        for off in tiles[p[2]]:
            out[p[0] + off[0]][p[1] + off[1]] = chars[numset]
        numset += 1
for row in out:
    print(''.join(row))
It obtains the following optimal solution:
AAAB-
A-BBC
DDBCC
DD--C
If we allow repeats (comment out the constraint limiting to one copy of each shape), then we can completely tile the grid:
ABCDD
ABCDD
ABCEE
ABCEE
It worked near-instantaneously for a 10x10 grid:
ABCCDDEEFF
ABCCDDEEFF
ABGHHIJJKK
ABGHHIJJKK
LLGMMINOPP
LLGMMINOPP
QQRRSTNOUV
QQRRSTNOUV
WWXXSTYYUV
WWXXSTYYUV
The code obtains an optimal solution for the 25x25 grid in 100 seconds of runtime, though unfortunately there aren't enough letters and numbers for my output code to print the solution.
I don't know if it's of any use to you, but I coded up a small, sketchy frame in Python. It doesn't place polyominos yet, but the functions are there - checking for dead empty spaces is primitive, though, and needs a better approach. Then again, maybe it is all rubbish...
import functools
import itertools

M = 4  # x
N = 5  # y
field = [[9999]*(N+1)] + [[9999]+[0]*N+[9999] for _ in range(M)] + [[9999]*(N+1)]

def field_rd(p2d):
    return field[p2d[0]+1][p2d[1]+1]

def field_add(p2d,val):
    field[p2d[0]+1][p2d[1]+1] += val

def add2d(p,k):
    return p[0]+k[0],p[1]+k[1]

def norm(polymino_2d):
    x0,y0 = min(x for x,y in polymino_2d),min(y for x,y in polymino_2d)
    return tuple(sorted(map(lambda p: add2d(p,(-x0,-y0)), polymino_2d)))

def create_cutoff(occupied):
    """Receive a polymino and create the outer area of squares which could be cut off by a placement of this polymino"""
    cutoff = set(itertools.chain.from_iterable(map(lambda p: add2d(p,(x,y)),occupied) for (x,y) in [(-1,0),(1,0),(0,-1),(0,1)]))  #(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1)]))
    return tuple(cutoff.difference(occupied))

def is_occupied(p2d):
    return field_rd(p2d) == 0

def is_cutoff(p2d):
    return not is_occupied(p2d) and all(map(is_occupied,map(lambda p: add2d(p,p2d),[(-1,0),(1,0),(0,-1),(0,1)])))

def polym_colliding(p2d,occupied):
    return any(map(is_occupied,map(lambda p: add2d(p,p2d),occupied)))

def polym_cutoff(p2d,cutoff):
    return any(map(is_cutoff,map(lambda p: add2d(p,p2d),cutoff)))

def put(p2d,occupied,polym_nr):
    for p in occupied:
        field_add(add2d(p2d,p),polym_nr)
def remove(p2d,occupied,polym_nr):
    for p in occupied:  # mirror put(): clear exactly the cells that were filled
        field_add(add2d(p2d,p),-polym_nr)
def place(p2d,polym_nr):
    """Try to place a polymino at point p2d. If it fits without cutting off unreachable single cells return True else False"""
    occupied = polym[polym_nr][0]
    if polym_colliding(p2d,occupied):
        return False
    put(p2d,occupied,polym_nr)
    cutoff = polym[polym_nr][1]
    if polym_cutoff(p2d,cutoff):
        remove(p2d,occupied,polym_nr)
        return False
    return True

def NxM_array(N,M):
    return [[0]*N for _ in range(M)]

def generate_all_polyminos(n):
    """Create all polyminos with size n"""
    def gen_recur(polymino,i,result):
        if i > 1:
            new_pts = set(itertools.starmap(add2d,itertools.product(polymino,[(-1,0),(1,0),(0,-1),(0,1)])))
            new_pts = new_pts.difference(polymino)
            for p in new_pts:
                gen_recur(polymino.union({p}),i-1,result)
        else:
            result.add(norm(polymino))
    #---------------------------------------
    all_polyminos = set()
    gen_recur({(0,0)},n,all_polyminos)
    return all_polyminos

print("All possible Tetris blocks (all orientations): ",generate_all_polyminos(4))

Can someone please explain the use of modulus in this code?

I know that modulus gives the remainder and that this code will give the survivor of the Josephus Problem. I have noticed a pattern that when n mod k = 0, the starting count point begins at the very beginning of the circle and that when n mod k = 1, the person immediately before the beginning of the circle survived that execution round through the circle.
I just don't understand how this recursion uses modulus to find the last man standing and what josephus(n-1,k) is actually referring to. Is it referring to the last person to get executed or the last survivor of a specific round through the circle?
def josephus(n, k):
    if n == 1:
        return 1
    else:
        return ((josephus(n-1, k) + k - 1) % n) + 1
This answer is both a summary of the Josephus Problem and an answer to your questions of:
What is josephus(n-1,k) referring to?
What is the modulus operator being used for?
When calling josephus(n-1,k) that means that you've executed every kth person up to a total of n-1 times. (Changed to match George Tomlinson's comment)
The recursion keeps going until there is 1 person standing, and when the function returns itself to the top, it will return the position that you will have to be in to survive. The modulus operator is being used to help stay within the circle (just as GuyGreer explained in the comments). Here is a picture to help explain:
1 2
6 3
5 4
Let n = 6 and k = 2 (execute every 2nd person in the circle). Run through the function once and you have executed the 2nd person; the circle becomes:
1 X
6 3
5 4
Continuing through the recursion until only the last person remains gives the following sequence:
1 2    1 X    1 X    1 X    1 X    X X
6 3 -> 6 3 -> 6 3 -> X 3 -> X X -> X X
5 4    5 4    5 X    5 X    5 X    5 X
When we check the values returned from josephus at n we get the following values:
n = 1 return 1
n = 2 return (1 + 2 - 1) % 2 + 1 = 1
n = 3 return (1 + 2 - 1) % 3 + 1 = 3
n = 4 return (3 + 2 - 1) % 4 + 1 = 1
n = 5 return (1 + 2 - 1) % 5 + 1 = 3
n = 6 return (3 + 2 - 1) % 6 + 1 = 5
Which shows that josephus(n-1,k) refers to the position of the last survivor. (1)
If we removed the modulus operator then you will see that this will return the 11th position but there is only 6 here so the modulus operator helps keep the counting within the bounds of the circle. (2)
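For comparison, here is a sketch of the same recurrence written iteratively (the function name is mine): it tracks the survivor's position 0-based and converts back to 1-based at the end, which makes the role of the modulus even more visible.

def josephus_iterative(n, k):
    pos = 0                    # survivor's 0-based position in a circle of 1 person
    for m in range(2, n + 1):  # rebuild the circle from 2 up to n people
        pos = (pos + k) % m    # 0-based form of ((j + k - 1) % m) + 1
    return pos + 1             # convert back to 1-based numbering

print(josephus_iterative(6, 2))  # 5, matching the table above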
Your first question has been answered above in the comments.
To answer your second question, it's referring to the position of the last survivor.
Consider j(4,2).
Using the algorithm gives
j(4,2)=((j(3,2)+1)%4)+1
j(3,2)=((j(2,2)+1)%3)+1
j(2,2)=((j(1,2)+1)%2)+1
j(1,2)=1
and so
j(2,2)=((1+1)%2)+1=1
j(3,2)=((1+1)%3)+1=3
j(4,2)=((3+1)%4)+1=1
Now the table of j(2,2) is
1 2
1 x
so j(2,2) is indeed 1.
For j(3,2) we have
1 2 3
1 x 3
x x 3
so j(3,2) is 3 as required.
Finally, j(4,2) is
1 2 3 4
1 x 3 4
1 x 3 x
1 x x x
which tells us that j(4,2)=1 as required.

Print (or output to file) table of number of steps for Euclid's algorithm

I'd like to print (or send to a file in a human-readable format like below) arbitrary size square tables where each table cell contains the number of steps required to solve Euclid's algorithm for the two integers in the row/column headings like this (table written by hand, but I think the numbers are all correct):
1 2 3 4 5 6
1 1 1 1 1 1 1
2 1 1 2 1 2 1
3 1 2 1 2 3 1
4 1 1 2 1 2 2
5 1 2 3 2 1 2
6 1 1 1 2 2 1
The script would ideally allow me to choose the start integer (1 as above or 11 as below or something else arbitrary) and end integer (6 as above or 16 as below or something else arbitrary and larger than the start integer), so that I could do this too:
11 12 13 14 15 16
11 1 2 3 4 4 3
12 2 1 2 2 2 2
13 3 2 1 2 3 3
14 4 2 2 1 2 2
15 4 2 3 2 1 2
16 3 2 3 2 2 1
I realize that the table is symmetric about the diagonal and so only half of the table contains unique information, and that the diagonal itself is always a 1-step algorithm.
See this for a graphical representation of what I'm after, but I'd like to know the actual number of steps for any two integers, which the image doesn't show me.
I have the algorithms (there's probably better implementations, but I think these work):
The step counter:
def gcd(a,b):
    """Step counter."""
    if b > a:
        x = a
        a = b
        b = x
    counter = 0
    while b:
        c = a % b
        a = b
        b = c
        counter += 1
    return counter
The list builder:
def gcd_steps(n):
    """List builder."""
    print("Table of size", n - 1, "x", n - 1)
    list_of_steps = []
    for i in range(1, n):
        for j in range(1, n):
            list_of_steps.append(gcd(i,j))
    print(list_of_steps)
    return list_of_steps
but I'm totally hung up on how to write the table. I thought about a double nested for loop with i and j and stuff, but I'm new to Python and haven't a clue about the best way (or any way) to go about writing the table. I don't need special formatting like something to offset the row/column heads from the table cells as I can do that by eye, but just getting everything to line up so that I can read it easily is proving too difficult for me at my current skill level, I'm afraid. I'm thinking that it probably makes sense to print/output within the two nested for loops as I'm calculating the numbers I need which is why the list builder has some print statements as well as returning the list, but I don't know how to work the print magic to do what I'm after.
Try this. The program computes the data row by row and prints each row as soon as it is available, in order to limit memory usage.
import sys, os

def gcd(a,b):
    k = 0
    if b > a:
        a, b = b, a
    while b > 0:
        a, b = b, a%b
        k += 1
    return k

def printgcd(name, a, b):
    f = open(name, "wt")
    s = ""
    for i in range(a, b + 1):
        s = "{}\t{}".format(s, i)
    f.write("{}\n".format(s))
    for i in range(a, b + 1):
        s = "{}".format(i)
        for j in range (a, b + 1):
            s = "{}\t{}".format(s, gcd(i, j))
        f.write("{}\n".format(s))
    f.close()

printgcd("gcd-1-6.txt", 1, 6)
The preceding won't return a list with all computed values, since they are destroyed on purpose. It's easy to do however. Here is a solution with a hash table
def printgcd2(name, a, b):
    f = open(name, "wt")
    s = ""
    h = { }
    for i in range(a, b + 1):
        s = "{}\t{}".format(s, i)
    f.write("{}\n".format(s))
    for i in range(a, b + 1):
        s = "{}".format(i)
        for j in range (a, b + 1):
            k = gcd(i, j)
            s = "{}\t{}".format(s, k)
            h[i, j] = k
        f.write("{}\n".format(s))
    f.close()
    return h
And here is another with a list of lists
def printgcd3(name, a, b):
    f = open(name, "wt")
    s = ""
    u = [ ]
    for i in range(a, b + 1):
        s = "{}\t{}".format(s, i)
    f.write("{}\n".format(s))
    for i in range(a, b + 1):
        v = [ ]
        s = "{}".format(i)
        for j in range (a, b + 1):
            k = gcd(i, j)
            s = "{}\t{}".format(s, k)
            v.append(k)
        f.write("{}\n".format(s))
        u.append(v)
    f.close()
    return u
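
If you only want the table on the console rather than in a file, a minimal sketch along the same lines (the helper name print_table is mine) can reuse the step-counting gcd() above and right-align the columns with str.format:

def print_table(start, end):
    w = len(str(end)) + 2  # column width: widest label plus some padding
    print(' ' * w + ''.join('{:>{w}}'.format(j, w=w) for j in range(start, end + 1)))
    for i in range(start, end + 1):
        row = ''.join('{:>{w}}'.format(gcd(i, j), w=w) for j in range(start, end + 1))
        print('{:>{w}}'.format(i, w=w) + row)

print_table(11, 16)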

Print all ways to sum n integers so that they total a given sum.

I'm trying to come up with an algorithm that will print out all possible ways to sum N integers so that they total a given value.
Example. Print all ways to sum 4 integers so that they sum up to be 5.
Result should be something like:
5 0 0 0
4 1 0 0
3 2 0 0
3 1 1 0
2 3 0 0
2 2 1 0
2 1 2 0
2 1 1 1
1 4 0 0
1 3 1 0
1 2 2 0
1 2 1 1
1 1 3 0
1 1 2 1
1 1 1 2
This is based off Alinium's code.
I modified it so it prints out all the possible combinations, since his already does all the permutations.
Also, I don't think you need the for loop when n=1, because in that case, only one number should cause the sum to equal value.
Various other modifications to get boundary cases to work.
def sum(n, value):
    arr = [0]*n  # create an array of size n, filled with zeroes
    sumRecursive(n, value, 0, n, arr);

def sumRecursive(n, value, sumSoFar, topLevel, arr):
    if n == 1:
        if sumSoFar <= value:
            # Make sure it's in ascending order (or only level)
            if topLevel == 1 or (value - sumSoFar >= arr[-2]):
                arr[(-1)] = value - sumSoFar  # put it in the n_th last index of arr
                print arr
    elif n > 0:
        # Make sure it's in ascending order
        start = 0
        if (n != topLevel):
            start = arr[(-1*n)-1]  # the value before this element
        for i in range(start, value+1):  # i = start...value
            arr[(-1*n)] = i  # put i in the n_th last index of arr
            sumRecursive(n-1, value, sumSoFar + i, topLevel, arr)
Running sum(4, 5) prints:
[0, 0, 0, 5]
[0, 0, 1, 4]
[0, 0, 2, 3]
[0, 1, 1, 3]
[0, 1, 2, 2]
[1, 1, 1, 2]
In pure math, a way of summing integers to get a given total is called a partition. There is a lot of information around if you google for "integer partition". You are looking for integer partitions where there are a specific number of elements. I'm sure you could take one of the known generating mechanisms and adapt for this extra condition. Wikipedia has a good overview of the topic Partition_(number_theory). Mathematica even has a function to do what you want: IntegerPartitions[5, 4].
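If you want to stay in Python, a small hand-rolled generator along those lines might look like this (the function name is mine, not a library call): it yields the partitions of a total into at most the requested number of parts, in non-increasing order, padded with zeros, which matches the format the question asks for.

def partitions_fixed_length(total, parts, max_part=None):
    # Partitions of `total` into exactly `parts` non-increasing non-negative integers.
    if max_part is None:
        max_part = total
    if parts == 1:
        if total <= max_part:
            yield [total]
        return
    for first in range(min(total, max_part), -1, -1):
        for rest in partitions_fixed_length(total - first, parts - 1, first):
            yield [first] + rest

for p in partitions_fixed_length(5, 4):
    print(p)
# [5, 0, 0, 0], [4, 1, 0, 0], [3, 2, 0, 0], [3, 1, 1, 0], [2, 2, 1, 0], [2, 1, 1, 1]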
The key to solving the problem is recursion. Here's a working implementation in python. It prints out all possible permutations that sum up to the total. You'll probably want to get rid of the duplicate combinations, possibly by using some Set or hashing mechanism to filter them out.
def sum(n, value):
    arr = [0]*n  # create an array of size n, filled with zeroes
    sumRecursive(n, value, 0, n, arr);

def sumRecursive(n, value, sumSoFar, topLevel, arr):
    if n == 1:
        if sumSoFar > value:
            return False
        else:
            for i in range(value+1):  # i = 0...value
                if (sumSoFar + i) == value:
                    arr[(-1*n)] = i  # put i in the n_th last index of arr
                    print arr;
                    return True
    else:
        for i in range(value+1):  # i = 0...value
            arr[(-1*n)] = i  # put i in the n_th last index of arr
            if sumRecursive(n-1, value, sumSoFar + i, topLevel, arr):
                if (n == topLevel):
                    print "\n"
With some extra effort, this can probably be simplified to get rid of some of the parameters I am passing to the recursive function. As suggested by redcayuga's pseudo code, using a stack, instead of manually managing an array, would be a better idea too.
I haven't tested this:
procedure allSum (int tot, int n, int desiredTotal) return int
    if n > 0
        for (int i = tot; i >= 0; i--) {
            push i onto stack;
            allSum(tot-i, n-1, desiredTotal);
            pop top of stack
        }
    else if n == 0
        if stack sums to desiredTotal then print the stack end if
    end if
I'm sure there's a better way to do this.
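
Here is a small Python sketch of that pseudocode (the names are mine); it keeps an explicit stack and prints the ordered compositions of desired_total into n non-negative parts:

def all_sum(tot, n, desired_total, stack=None):
    if stack is None:
        stack = []
    if n == 0:
        if sum(stack) == desired_total:
            print(' '.join(map(str, stack)))
        return
    for i in range(tot, -1, -1):   # try the largest remainder first, down to 0
        stack.append(i)
        all_sum(tot - i, n - 1, desired_total, stack)
        stack.pop()

all_sum(5, 4, 5)   # prints 5 0 0 0, 4 1 0 0, 4 0 1 0, ... (all 56 ordered compositions)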
I've found a Ruby way with domain specification, based on Alinium's code:
class Domain_partition
  attr_reader :results,
              :domain,
              :sum,
              :size

  def initialize(_dom, _size, _sum)
    _dom.is_a?(Array) ? @domain = _dom.sort : @domain = _dom.to_a
    @results, @sum, @size = [], _sum, _size
    arr = [0]*size  # create an array of size n, filled with zeroes
    sumRecursive(size, 0, arr)
  end

  def sumRecursive(n, sumSoFar, arr)
    if n == 1
      # Make sure it's in ascending order (or only level)
      if sum - sumSoFar >= arr[-2] and @domain.include?(sum - sumSoFar)
        final_arr = Array.new(arr)
        final_arr[(-1)] = sum - sumSoFar  # put it in the n_th last index of arr
        @results << final_arr
      end
    elsif n > 1
      # ********* dom_selector ********
      n != size ? start = arr[(-1*n)-1] : start = domain[0]
      dom_bounds = (start*(n-1)..domain.last*(n-1))
      restricted_dom = domain.select do |x|
        if x < start
          false; next
        end
        if size-n > 0
          if dom_bounds.cover? sum-(arr.first(size-n).inject(:+)+x) then true
          else false end
        else
          dom_bounds.cover?(sum+x) ? true : false
        end
      end  # ***************************
      for i in restricted_dom
        _arr = Array.new(arr)
        _arr[(-1*n)] = i
        sumRecursive(n-1, sumSoFar + i, _arr)
      end
    end
  end
end

a=Domain_partition.new (-6..6),10,0
p a

b=Domain_partition.new [-4,-2,-1,1,2,3],10,0
p b
If you're interested in generating (lexically) ordered integer partitions, i.e. unique unordered sets of S positive integers (no 0's) that sum to N, then try the following. (unordered simply means that [1,2,1] and [1,1,2] are the same partition)
The problem doesn't need recursion and is quickly handled because the concept of finding the next lexical restricted partition is actually very simple...
In concept: Starting from the last addend (integer), find the first instance where the difference between two addends is greater than 1. Split the partition in two at that point. Remove 1 from the higher integer (which will be the last integer in one part) and add 1 to the lower integer (the first integer of the latter part). Then find the first lexically ordered partition for the latter part having the new largest integer as the maximum addend value. I use Sage to find the first lexical partition because it's lightning fast, but it's easily done without it. Finally, join the two portions and voila! You have the next lexical partition of N having S parts.
e.g. [6,5,3,2,2] -> [6,5],[3,2,2] -> [6,4],[4,2,2] -> [6,4],[4,3,1] -> [6,4,4,3,1]
So, in Python and calling Sage for the minor task of finding the first lexical partition given n and s parts...
from sage.all import *

def most_even_partition(n,s):  # The main function will need to recognize the most even partition possible (i.e. last lexical partition) so it can loop back to the first lexical partition if need be
    most_even = [int(floor(float(n)/float(s)))]*s
    _remainder = int(n%s)
    j = 0
    while _remainder > 0:
        most_even[j] += 1
        _remainder -= 1
        j += 1
    return most_even

def portion(alist, indices):
    return [alist[i:j] for i, j in zip([0]+indices, indices+[None])]

def next_restricted_part(p,n,s):
    if p == most_even_partition(n,s): return Partitions(n,length=s).first()
    for i in enumerate(reversed(p)):
        if i[1] - p[-1] > 1:
            if i[0] == (s-1):
                return Partitions(n,length=s,max_part=(i[1]-1)).first()
            else:
                parts = portion(p,[s-i[0]-1])  # split p (soup?)
                h1 = parts[0]
                h2 = parts[1]
                next = list(Partitions(sum(h2),length=len(h2),max_part=(h2[0]-1)).first())
                return h1+next
If you want zeros (not actual integer partitions), then the functions only need small modifications.
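
As the answer says, the "first lexical partition" step is easy to do without Sage; here is one possible plain-Python stand-in for Partitions(n, length=s, max_part=m).first() (the function name is mine, and it assumes a feasible input, i.e. s <= n <= s*m):

def first_restricted_partition(n, s, max_part=None):
    # Greedy largest-first fill: the lexically first partition of n into exactly s
    # positive parts, each at most max_part.
    if max_part is None:
        max_part = n
    part, remaining = [], n
    for slots_left in range(s - 1, -1, -1):
        take = min(max_part, remaining - slots_left)  # leave room for 1 in each later slot
        part.append(take)
        remaining -= take
    return part

print(first_restricted_partition(8, 3, 4))  # [4, 3, 1], as in the [6,5,3,2,2] walk-through above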
Try this code. I hope it is easier to understand. I tested it; it generates the correct sequence.
#include <cstdio>

int list[256];  // assumed: global buffer for the current partition (not shown in the original)

void partition(int n, int m = 0)
{
    int i;
    // if the partition is done
    if(n == 0){
        // Output the result
        for(i = 0; i < m; ++i)
            printf("%d ", list[i]);
        printf("\n");
        return;
    }
    // Do the split from large to small int
    for(i = n; i > 0; --i){
        // if the number is not yet partitioned, or
        // will be partitioned no larger than the
        // previous partition number
        if(m == 0 || i <= list[m - 1]){
            // store the partition int
            list[m] = i;
            // partition the rest
            partition(n - i, m + 1);
        }
    }
}
Ask for clarification if required.
Here is the output for n = 6 and for n = 10:
6
5 1
4 2
4 1 1
3 3
3 2 1
3 1 1 1
2 2 2
2 2 1 1
2 1 1 1 1
1 1 1 1 1 1
10
9 1
8 2
8 1 1
7 3
7 2 1
7 1 1 1
6 4
6 3 1
6 2 2
6 2 1 1
6 1 1 1 1
5 5
5 4 1
5 3 2
5 3 1 1
5 2 2 1
5 2 1 1 1
5 1 1 1 1 1
4 4 2
4 4 1 1
4 3 3
4 3 2 1
4 3 1 1 1
4 2 2 2
4 2 2 1 1
4 2 1 1 1 1
4 1 1 1 1 1 1
3 3 3 1
3 3 2 2
3 3 2 1 1
3 3 1 1 1 1
3 2 2 2 1
3 2 2 1 1 1
3 2 1 1 1 1 1
3 1 1 1 1 1 1 1
2 2 2 2 2
2 2 2 2 1 1
2 2 2 1 1 1 1
2 2 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
