How to figure out whether a number is congruent in ruby? - ruby

I came across this snippet online while researching for ways to test that a number is a perfect square:
if n is a perfect square then it is congruent to 0, 1 or 2 mod 4
I tried translating it into Ruby with a ternary expression (see below), but it doesn't work correctly.
n % 4 == 0 && n % 4 == 1 && n % 4 == 2 ? true : false
How would you approach this?

Your snippet says "... is congruent to 0, 1 or 2 mod 4" but you're using &&. Changing that to || fixes your expression:
n % 4 == 0 || n % 4 == 1 || n % 4 == 2 ? true : false
The ternary statement is redundant because you already have a boolean result. It's equivalent to:
n % 4 == 0 || n % 4 == 1 || n % 4 == 2
Furthermore n % 4 returns either 0, 1, 2 or 3 for any n, so it can also be expressed as:
n % 4 <= 2 # because 0, 1, 2 are less than or equal to 2
n % 4 < 3 # because 0, 1, 2 are less than 3
n % 4 != 3 # because 0, 1, 2 are not 3
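If you prefer a named helper, any of those forms can be wrapped up like this (the method name here is just an example, not from the question):
def passes_mod4_test?(n)
  n % 4 != 3
end

passes_mod4_test?(16)  #=> true
passes_mod4_test?(7)   #=> false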

You can write this:
n % 4 <= 2

You can approach it the following way:
n % 4 <= 2

Try this:
(0..2).include?(n % 4)

Related

The sum of all numbers less than 1000, multiples of 3 or 5

If we list all natural numbers less than 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all numbers less than 1000, multiples of 3 or 5.
I just started learning Ruby; I used to work only with C-family languages. Please explain why this code doesn't work. Thank you!!!
Code:
sum = 0;
i = 3;
while (i < 1000) do
  if ((i % 3 == 0) || (i % 5 == 0))
    sum += i;
  end
end
puts "The sum of all the multiples of 3 or 5 below 1000: #{sum}"
And when I run the file, it loads indefinitely.
You are never incrementing i.
The while loop only terminates when i >= 1000, but i stays at 3 because there is no i += 1, so the loop never terminates.
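A minimal corrected version (the only change is adding the missing increment):
sum = 0
i = 3
while i < 1000 do
  if (i % 3 == 0) || (i % 5 == 0)
    sum += i
  end
  i += 1  # the missing increment
end
puts "The sum of all the multiples of 3 or 5 below 1000: #{sum}"  #=> 233168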
#Raavgo has explained the problem with your code. If you are looking for a fast solution I suggest the following.
def tot(n, limit)
  m, rem = limit.divmod(n)
  m * (n + limit - rem)/2
end
tot(3, 999) + tot(5, 999) - tot(15, 999)
#=> 233168
The term tot(15, 999) is to compensate for double-counting of terms that are divisible by both 3 and 5.
See Numeric#divmod.
Suppose
n = 5
limit = 999
Then
m, rem = limit.divmod(n)
#=> [199, 4]
So
m #=> 199
rem #=> 4
Then we want to compute
5 + 10 + ... + 999 - rem
#=> 5 + 10 + ... + 995
This is simply the sum of an arithmetic progression:
199 * (5 + 995)/2
which equals
m * (n + limit - rem)/2
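As a quick sanity check of that identity, using the tot method defined above:
tot(5, 999)           #=> 99500
(5..995).step(5).sum  #=> 99500 (5 + 10 + ... + 995)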
(0..1000).select(&->(i){ (i % 3).zero? || (i % 5).zero? }).sum
(0..1000).filter { |i| i % 3 == 0 || i % 5 == 0 }.sum
Your approach is fine if you increment i as said in the other answer, but the snippets above are more idiomatic Ruby.

2D bin packing on a grid

I have an n × m grid and a collection of polyominos. I would like to know if it is possible to pack them into the grid: no overlapping or rotation is allowed.
I expect that like most packing problems this version is NP-hard and difficult to approximate, so I'm not expecting anything crazy, but an algorithm that could find reasonable packings on a grid around 25 × 25 and be fairly comprehensive around 10 × 10 would be great. (My tiles are mostly tetrominos -- four blocks -- but they could have 5–9+ blocks.)
I'll take whatever anyone has to offer: an algorithm, a paper, an existing program which can be adapted.
Here is a prototype-like SAT-solver approach, which tackles:
a-priori fixed polyomino patterns (see Constants / Input in code)
if rotations should be allowed, rotated pieces have to be added to the set
every polyomino can be placed 0-inf times
there is no scoring-mechanic besides:
the number of non-covered tiles is minimized!
Considering classic off-the-shelf methods for combinatorial optimization (SAT, CP, MIP), this one will probably scale best (educated guess). It will also be very hard to beat even with customized heuristics!
If needed, these slides provide a practical introduction to SAT-solvers. Here we are using CDCL-based solvers, which are complete (they will always find a solution in finite time if there is one, and will always be able to prove there is no solution in finite time if there is none; memory of course also plays a role!).
More complex (linear) per-tile scoring-functions are hard to incorporate in general. This is where a (M)IP-approach can be better. But in terms of pure search SAT-solving is much faster in general.
The N=25 problem with my polyomino-set takes ~1 second (and one could easily parallelize this on multiple granularity-levels -> SAT-solver (threading param) vs. outer loop; the latter will be explained later).
Of course the following holds:
as this is an NP-hard problem, there will be easy and non-easy instances
I did not do scientific benchmarks with many different sets of polyominos
it's to be expected that some sets are easier to solve than others
this is one possible SAT-formulation (not the most trivial!) of infinitely many
each formulation has advantages and disadvantages
Idea
The general approach is creating a decision-problem and transforming it into CNF, which is then solved by highly efficient SAT-solvers (here: cryptominisat; CNF will be in DIMACS-CNF format), which are used as black-box solvers (no parameter-tuning!).
As the goal is to optimize the number of filled tiles and we are using a decision-problem, we need an outer loop that adds a minimum-tiles-used constraint and tries to solve it; if not successful, it decreases this number. So in general we are calling the SAT-solver multiple times (from scratch!).
There are many different formulations / transformations to CNF possible. Here we use (binary) decision-variables X which indicate a placement. A placement is a tuple like polyomino, x_index, y_index (this index marks the top-left field of some pattern). There is a one-to-one mapping between the number of variables and the number of possible placements of all polyominos.
The core idea is: search in the space of all possible placement-combinations for one solution, which is not invalidating some constraints.
Additionally, we have decision-variables Y, which indicate a tile being filled. There are M*N such variables.
When having access to all possible placements, it's easy to calculate a collision-set for each tile-index (M*N). Given some fixed tile, we can check which placements can fill this one and constrain the problem to only select <=1 of those. This is active on X. In the (M)IP world this probably would be called convex-hull for the collisions.
n<=k-constraints are ubiquitous in SAT-solving and many different formulations are possible. Naive encoding would need an exponential number of clauses in general, which easily becomes infeasible. Using new variables, there are many variable-clause trade-offs (see Tseitin-encoding) possible. I'm reusing one (old code; the only reason why my code is python2-only) which worked well for me in the past. It's based on describing hardware-based counter-logic in CNF and provides good empirical and theoretical performance (see paper). Of course there are many alternatives.
Additionally, we need to force the SAT-solver not to make all variables negative. We have to add constraints describing the following (that's one approach):
if some field is used: there has to be at least one placement active (poly + x + y), which results in covering this field!
this is a basic logical implication easily formulated as one potentially big logical or
Then only the core-loop is missing, trying to fill N fields, then N-1 until successful. This is again using the n<=k formulation mentioned earlier.
Code
This is python2-code, which needs the SAT-solver cryptominisat 5 in the directory the script is run from.
I'm also using tools from python's excellent scientific-stack.
# PYTHON 2!
import math
import copy
import subprocess
import numpy as np
import matplotlib.pyplot as plt # plotting-only
import seaborn as sns # plotting-only
np.set_printoptions(linewidth=120) # more nice console-output
""" Constants / Input
Example: 5 tetrominoes; no rotation """
M, N = 25, 25
polyominos = [np.array([[1,1,1,1]]),
np.array([[1,1],[1,1]]),
np.array([[1,0],[1,0], [1,1]]),
np.array([[1,0],[1,1],[0,1]]),
np.array([[1,1,1],[0,1,0]])]
""" Preprocessing
Calculate:
A: possible placements
B: covered positions
C: collisions between placements
"""
placements = []
covered = []
for p_ind, p in enumerate(polyominos):
mP, nP = p.shape
for x in range(M):
for y in range(N):
if x + mP <= M: # assumption: no zero rows / cols in each p
if y + nP <= N: # could be more efficient
placements.append((p_ind, x, y))
cover = np.zeros((M,N), dtype=bool)
cover[x:x+mP, y:y+nP] = p
covered.append(cover)
covered = np.array(covered)
collisions = []
for m in range(M):
for n in range(N):
collision_set = np.flatnonzero(covered[:, m, n])
collisions.append(collision_set)
""" Helper-function: Cardinality constraints """
# K-ARY CONSTRAINT GENERATION
# ###########################
# SINZ, Carsten. Towards an optimal CNF encoding of boolean cardinality constraints.
# CP, 2005, 3709. Jg., S. 827-831.
def next_var_index(start):
    next_var = start
    while(True):
        yield next_var
        next_var += 1

class s_index():
    def __init__(self, start_index):
        self.firstEnvVar = start_index
    def next(self, i, j, k):
        return self.firstEnvVar + i*k + j

def gen_seq_circuit(k, input_indices, next_var_index_gen):
    cnf_string = ''
    s_index_gen = s_index(next_var_index_gen.next())

    # write clauses of first partial sum (i.e. i=0)
    cnf_string += (str(-input_indices[0]) + ' ' + str(s_index_gen.next(0,0,k)) + ' 0\n')
    for i in range(1, k):
        cnf_string += (str(-s_index_gen.next(0, i, k)) + ' 0\n')

    # write clauses for general case (i.e. 0 < i < n-1)
    for i in range(1, len(input_indices)-1):
        cnf_string += (str(-input_indices[i]) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
        cnf_string += (str(-s_index_gen.next(i-1, 0, k)) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
        for u in range(1, k):
            cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, u-1, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
            cnf_string += (str(-s_index_gen.next(i-1, u, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
        cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, k-1, k)) + ' 0\n')

    # last clause for last variable
    cnf_string += (str(-input_indices[-1]) + ' ' + str(-s_index_gen.next(len(input_indices)-2, k-1, k)) + ' 0\n')

    return (cnf_string, (len(input_indices)-1)*k, 2*len(input_indices)*k + len(input_indices) - 3*k - 1)

def gen_at_most_n_constraints(vars, start_var, n):
    constraint_string = ''
    used_clauses = 0
    used_vars = 0
    index_gen = next_var_index(start_var)
    circuit = gen_seq_circuit(n, vars, index_gen)
    constraint_string += circuit[0]
    used_clauses += circuit[2]
    used_vars += circuit[1]
    start_var += circuit[1]
    return [constraint_string, used_clauses, used_vars, start_var]

def parse_solution(output):
    # assumes there is one
    vars = []
    for line in output.split("\n"):
        if line:
            if line[0] == 'v':
                line_vars = list(map(lambda x: int(x), line.split()[1:]))
                vars.extend(line_vars)
    return vars

def solve(CNF):
    p = subprocess.Popen(["cryptominisat5.exe"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    result = p.communicate(input=CNF)[0]
    sat_line = result.find('s SATISFIABLE')
    if sat_line != -1:
        # solution found!
        vars = parse_solution(result)
        return True, vars
    else:
        return False, None
""" SAT-CNF: BASE """
X = np.arange(1, len(placements)+1) # decision-vars
# 1-index for CNF
Y = np.arange(len(placements)+1, len(placements)+1 + M*N).reshape(M,N)
next_var = len(placements)+1 + M*N # aux-var gen
n_clauses = 0
cnf = '' # slow string appends
# int-based would be better
# <= 1 for each collision-set
for cset in collisions:
constraint_string, used_clauses, used_vars, next_var = \
gen_at_most_n_constraints(X[cset].tolist(), next_var, 1)
n_clauses += used_clauses
cnf += constraint_string
# if field marked: one of covering placements active
for x in range(M):
for y in range(N):
covering_placements = X[np.flatnonzero(covered[:, x, y])] # could reuse collisions
clause = str(-Y[x,y])
for i in covering_placements:
clause += ' ' + str(i)
clause += ' 0\n'
cnf += clause
n_clauses += 1
print('BASE CNF size')
print('clauses: ', n_clauses)
print('vars: ', next_var - 1)
""" SOLVE in loop -> decrease number of placed-fields until SAT """
print('CORE LOOP')
N_FIELD_HIT = M*N
while True:
print(' N_FIELDS >= ', N_FIELD_HIT)
# sum(y) >= N_FIELD_HIT
# == sum(not y) <= M*N - N_FIELD_HIT
cnf_final = copy.copy(cnf)
n_clauses_final = n_clauses
if N_FIELD_HIT == M*N: # awkward special case
constraint_string = ''.join([str(y) + ' 0\n' for y in Y.ravel()])
n_clauses_final += N_FIELD_HIT
else:
constraint_string, used_clauses, used_vars, next_var = \
gen_at_most_n_constraints((-Y).ravel().tolist(), next_var, M*N - N_FIELD_HIT)
n_clauses_final += used_clauses
n_vars_final = next_var - 1
cnf_final += constraint_string
cnf_final = 'p cnf ' + str(n_vars_final) + ' ' + str(n_clauses) + \
' \n' + cnf_final # header
status, sol = solve(cnf_final)
if status:
print(' SOL found: ', N_FIELD_HIT)
""" Print sol """
res = np.zeros((M, N), dtype=int)
counter = 1
for v in sol[:X.shape[0]]:
if v>0:
p, x, y = placements[v-1]
pM, pN = polyominos[p].shape
poly_nnz = np.where(polyominos[p] != 0)
x_inds, y_inds = x+poly_nnz[0], y+poly_nnz[1]
res[x_inds, y_inds] = p+1
counter += 1
print(res)
""" Plot """
# very very ugly code; too lazy
ax1 = plt.subplot2grid((5, 12), (0, 0), colspan=11, rowspan=5)
ax_p0 = plt.subplot2grid((5, 12), (0, 11))
ax_p1 = plt.subplot2grid((5, 12), (1, 11))
ax_p2 = plt.subplot2grid((5, 12), (2, 11))
ax_p3 = plt.subplot2grid((5, 12), (3, 11))
ax_p4 = plt.subplot2grid((5, 12), (4, 11))
ax_p0.imshow(polyominos[0] * 1, vmin=0, vmax=5)
ax_p1.imshow(polyominos[1] * 2, vmin=0, vmax=5)
ax_p2.imshow(polyominos[2] * 3, vmin=0, vmax=5)
ax_p3.imshow(polyominos[3] * 4, vmin=0, vmax=5)
ax_p4.imshow(polyominos[4] * 5, vmin=0, vmax=5)
ax_p0.xaxis.set_major_formatter(plt.NullFormatter())
ax_p1.xaxis.set_major_formatter(plt.NullFormatter())
ax_p2.xaxis.set_major_formatter(plt.NullFormatter())
ax_p3.xaxis.set_major_formatter(plt.NullFormatter())
ax_p4.xaxis.set_major_formatter(plt.NullFormatter())
ax_p0.yaxis.set_major_formatter(plt.NullFormatter())
ax_p1.yaxis.set_major_formatter(plt.NullFormatter())
ax_p2.yaxis.set_major_formatter(plt.NullFormatter())
ax_p3.yaxis.set_major_formatter(plt.NullFormatter())
ax_p4.yaxis.set_major_formatter(plt.NullFormatter())
mask = (res==0)
sns.heatmap(res, cmap='viridis', mask=mask, cbar=False, square=True, linewidths=.1, ax=ax1)
plt.tight_layout()
plt.show()
break
N_FIELD_HIT -= 1 # binary-search could be viable in some cases
# but beware the empirical asymmetry in SAT-solvers:
# finding solution vs. proving there is none!
Output console
BASE CNF size
('clauses: ', 31509)
('vars: ', 13910)
CORE LOOP
(' N_FIELDS >= ', 625)
(' N_FIELDS >= ', 624)
(' SOL found: ', 624)
[[3 2 2 2 2 1 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 2 2]
[3 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2]
[3 3 3 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2]
[2 2 3 1 1 1 1 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[2 2 3 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[1 1 1 1 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2]
[1 1 1 1 3 3 3 2 2 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1]
[2 2 1 1 1 1 3 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 2 2]
[2 2 2 2 2 2 3 3 3 2 2 2 2 1 1 1 1 2 2 2 2 2 2 2 2]
[2 2 2 2 2 2 2 2 3 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2]
[2 2 1 1 1 1 2 2 3 3 3 2 2 2 2 2 2 1 1 1 1 2 2 2 2]
[1 1 1 1 1 1 1 1 2 2 3 2 2 1 1 1 1 1 1 1 1 1 1 1 1]
[2 2 3 1 1 1 1 3 2 2 3 3 4 1 1 1 1 2 2 1 1 1 1 2 2]
[2 2 3 1 1 1 1 3 1 1 1 1 4 4 3 2 2 2 2 1 1 1 1 2 2]
[2 2 3 3 5 5 5 3 3 1 1 1 1 4 3 2 2 1 1 1 1 1 1 1 1]
[2 2 2 2 4 5 1 1 1 1 1 1 1 1 3 3 3 2 2 1 1 1 1 2 2]
[2 2 2 2 4 4 2 2 1 1 1 1 1 1 1 1 3 2 2 1 1 1 1 2 2]
[2 2 2 2 3 4 2 2 2 2 2 2 1 1 1 1 3 3 3 2 2 2 2 2 2]
[3 4 2 2 3 5 5 5 2 2 2 2 1 1 1 1 2 2 3 2 2 2 2 2 2]
[3 4 4 3 3 3 5 5 5 5 1 1 1 1 2 2 2 2 3 3 3 2 2 2 2]
[3 3 4 3 1 1 1 1 5 1 1 1 1 4 2 2 2 2 2 2 3 2 2 2 2]
[2 2 3 3 3 1 1 1 1 1 1 1 1 4 4 4 2 2 2 2 3 3 0 2 2]
[2 2 3 1 1 1 1 1 1 1 1 5 5 5 4 4 4 1 1 1 1 2 2 2 2]
[2 2 3 3 1 1 1 1 1 1 1 1 5 5 5 5 4 1 1 1 1 2 2 2 2]
[2 2 1 1 1 1 1 1 1 1 1 1 1 1 5 1 1 1 1 1 1 1 1 2 2]]
Output plot
One field cannot be covered in this parameterization!
Some other examples with a bigger set of patterns
Square M=N=61 (prime -> intuition: harder), where the base CNF has 450,723 clauses and 185,462 variables. There is an optimal packing!
Non-square M,N = 83,131 (both prime), where the base CNF has 1,346,511 clauses and 553,748 variables. There is an optimal packing!
One approach could be using integer programming. I'll implement this using the python pulp package, though packages are available for pretty much any programming language.
The basic idea is to define a decision variable for every possible placement location for every tile. If a decision variable takes value 1, then its associated tile is placed there. If it takes value 0, then it is not placed there. The objective is therefore to maximize the sum of the decision variables times the number of squares in the variable's tile --- this corresponds to placing the maximum number of squares possible on the board.
My code implements two constraints:
Each tile can only be placed once (below we will relax this constraint)
Each square can have at most one tile on it
Here's the code; the outputs below use a set of five fixed tetrominoes on a 4x5 grid (set rows and cols accordingly):
import itertools
import pulp
import string

def covered(tile, base):
    return {(base[0] + t[0], base[1] + t[1]): True for t in tile}

tiles = [[(0,0), (1,0), (0,1), (0,2)],
         [(0,0), (1,0), (2,0), (3,0)],
         [(1,0), (0,1), (1,1), (2,0)],
         [(0,0), (1,0), (0,1), (1,1)],
         [(1,0), (0,1), (1,1), (2,1)]]

rows = 25
cols = 25

squares = {x: True for x in itertools.product(range(rows), range(cols))}
vars = list(itertools.product(range(rows), range(cols), range(len(tiles))))
vars = [x for x in vars if all([y in squares for y in covered(tiles[x[2]], (x[0], x[1])).keys()])]
x = pulp.LpVariable.dicts('tiles', vars, lowBound=0, upBound=1, cat=pulp.LpInteger)

mod = pulp.LpProblem('polyominoes', pulp.LpMaximize)

# Objective value is number of squares in tile
mod += sum([len(tiles[p[2]]) * x[p] for p in vars])

# Don't use any shape more than once
for tnum in range(len(tiles)):
    mod += sum([x[p] for p in vars if p[2] == tnum]) <= 1

# Each square can be covered by at most one shape
for s in squares:
    mod += sum([x[p] for p in vars if s in covered(tiles[p[2]], (p[0], p[1]))]) <= 1

# Solve and output
mod.solve()
out = [['-'] * cols for rep in range(rows)]
chars = string.ascii_uppercase + string.ascii_lowercase
numset = 0
for p in vars:
    if x[p].value() == 1.0:
        for off in tiles[p[2]]:
            out[p[0] + off[0]][p[1] + off[1]] = chars[numset]
        numset += 1
for row in out:
    print(''.join(row))
It obtains the following optimal solution:
AAAB-
A-BBC
DDBCC
DD--C
If we allow repeats (comment out the constraint limiting to one copy of each shape), then we can completely tile the grid:
ABCDD
ABCDD
ABCEE
ABCEE
It worked near-instantaneously for a 10x10 grid:
ABCCDDEEFF
ABCCDDEEFF
ABGHHIJJKK
ABGHHIJJKK
LLGMMINOPP
LLGMMINOPP
QQRRSTNOUV
QQRRSTNOUV
WWXXSTYYUV
WWXXSTYYUV
The code obtains an optimal solution for the 25x25 grid in 100 seconds of runtime, though unfortunately there aren't enough letters and numbers for my output code to print the solution.
I don't know if it's of any use to you, but I coded up a small, sketchy frame in Python. It doesn't place polyominoes yet, but the functions are there; checking for dead empty spaces is primitive, though, and needs a better approach. Then again, maybe it is all rubbish...
import functools
import itertools

M = 4  # x
N = 5  # y
field = [[9999]*(N+1)]+[[9999]+[0]*N+[9999] for _ in range(M)]+[[9999]*(N+1)]

def field_rd(p2d):
    return field[p2d[0]+1][p2d[1]+1]

def field_add(p2d, val):
    field[p2d[0]+1][p2d[1]+1] += val

def add2d(p, k):
    return p[0]+k[0], p[1]+k[1]

def norm(polymino_2d):
    x0, y0 = min(x for x, y in polymino_2d), min(y for x, y in polymino_2d)
    return tuple(sorted(map(lambda p: add2d(p, (-x0, -y0)), polymino_2d)))

def create_cutoff(occupied):
    """Receive a polymino and create the outer area of squares which could be cut off by a placement of this polymino"""
    cutoff = set(itertools.chain.from_iterable(map(lambda p: add2d(p, (x, y)), occupied) for (x, y) in [(-1,0),(1,0),(0,-1),(0,1)]))  #(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1)]))
    return tuple(cutoff.difference(occupied))

def is_occupied(p2d):
    return field_rd(p2d) == 0

def is_cutoff(p2d):
    return not is_occupied(p2d) and all(map(is_occupied, map(lambda p: add2d(p, p2d), [(-1,0),(1,0),(0,-1),(0,1)])))

def polym_colliding(p2d, occupied):
    return any(map(is_occupied, map(lambda p: add2d(p, p2d), occupied)))

def polym_cutoff(p2d, cutoff):
    return any(map(is_cutoff, map(lambda p: add2d(p, p2d), cutoff)))

def put(p2d, occupied, polym_nr):
    for p in occupied:
        field_add(add2d(p2d, p), polym_nr)

def remove(p2d, occupied, polym_nr):
    for p in occupied:  # was "for p in polym" in the original post; remove should mirror put
        field_add(add2d(p2d, p), -polym_nr)

def place(p2d, polym_nr):
    """Try to place a polymino at point p2d. If it fits without cutting off unreachable single cells return True else False"""
    occupied = polym[polym_nr][0]
    if polym_colliding(p2d, occupied):
        return False
    put(p2d, occupied, polym_nr)
    cutoff = polym[polym_nr][1]
    if polym_cutoff(p2d, cutoff):
        remove(p2d, occupied, polym_nr)
        return False
    return True

def NxM_array(N, M):
    return [[0]*N for _ in range(M)]

def generate_all_polyminos(n):
    """Create all polyminos with size n"""
    def gen_recur(polymino, i, result):
        if i > 1:
            new_pts = set(itertools.starmap(add2d, itertools.product(polymino, [(-1,0),(1,0),(0,-1),(0,1)])))
            new_pts = new_pts.difference(polymino)
            for p in new_pts:
                gen_recur(polymino.union({p}), i-1, result)
        else:
            result.add(norm(polymino))
    #---------------------------------------
    all_polyminos = set()
    gen_recur({(0,0)}, n, all_polyminos)
    return all_polyminos

print("All possible Tetris blocks (all orientations): ", generate_all_polyminos(4))
print("All possible Tetris blocks (all orientations): ",generate_all_polyminos(4))

Ruby - Sum Results of Select()

I'm doing www.eulerproject.net, the first problem:
If we list all the natural numbers below 10, that are multiples of 3
or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find
the sum of all the multiples of 3 or 5 below 1000.
The following is the code I have so far.
(3..999).to_a.select do |x|
  x % 3.0 == 0 || x % 5.0 == 0
end
It would be easy to append the numbers to an array, but how can this be done by chaining a method onto the end of this? Something like
p start loop
do stuff
end.sum
To answer the question - yes, you can chain the method like you've shown.
(3..999).to_a.select do |x|
  x % 3 == 0 || x % 5 == 0 # you don't have to use floats here, integers would work
end.inject(:+)
#=> 233168
Style guides generally recommend NOT chaining methods onto multiline do...end blocks, but it is working code.
It's the same as writing
(3..999).to_a.select { |x| x % 3 == 0 || x % 5 == 0 }.inject(:+)
#=>233168
Array#sum was an ActiveSupport method, not Ruby's (core Ruby added Enumerable#sum in 2.4), but I think you should use Ruby's own methods in Project Euler tasks.
You are summing arithmetic series, so there is no need to iterate:
def sum(n,m)
  p = n/m
  m*p*(1+p)/2
end
n = 999
sum(n,3) + sum(n,5) - sum(n,15)
#=> 233168
Consider:
n = 100
m = 3
p = 100/3 #=> 33
sum(100,3) = 3 + 6 + 9 +...+ 99
= 3 * (1 + 2 +...+ p)
= 3 * p(1+p)/2
We need to subtract sum(100,15) because sum(100,3) + sum(100,5) double-counts:
sum(100,15) = 15 + 30 + 45 + 60 + 75 + 90
If you want to get the sum without building an intermediate array, you can do it like this:
(3..999).inject(0) { |sum, e| e % 3 == 0 || e % 5 == 0 ? sum += e : sum }
=> 233168
It needs only one pass over the range.
You can omit the to_a, since calling select on (3..999) will still return an array regardless.
Andrey's answer is the most compact one with:
(3..999).select{ |x| x % 3 == 0 || x % 5 == 0 }.inject(:+)

Bomb dropping algorithm

I have an n x m matrix consisting of non-negative integers. For example:
2 3 4 7 1
1 5 2 6 2
4 3 4 2 1
2 1 2 4 1
3 1 3 4 1
2 1 4 3 2
6 9 1 6 4
"Dropping a bomb" decreases by one the number of the target cell and all eight of its neighbours, to a minimum of zero.
x x x
x X x
x x x
What is an algorithm that would determine the minimum number of bombs required to reduce all the cells to zero?
B Option (Due to me not being a careful reader)
Actually the first version of the problem is not the one I'm seeking an answer for. I didn't carefully read the whole task; there are additional constraints, let us say:
What about the simpler problem, where the sequence in each row must be non-increasing:
8 7 6 6 5 is a possible input sequence
7 8 5 5 2 is not possible, since 7 -> 8 is growing within the sequence.
Maybe finding an answer for the "easier" case would help in finding a solution for the harder one.
PS: I believe that when several choices require the same minimum number of bombs to clear the upper line, we should choose the one that uses the most bombs on the "left side" of the row. Is there any proof that this might be correct?
There is a way to reduce this to a simple sub-problem.
There are 2 parts to the explanation, the algorithm, and the reason the algorithm
provides an optimal solution. The first won't make sense without the second, so I'll
start with the why.
If you think of bombing the rectangle (assume a big rectangle - no edge cases yet)
you can see that the only way to reduce the hollow rectangle of squares on the
perimeter to 0 is to bomb either the perimeter or to bomb the hollow rectangle of
squares just inside the perimeter. I'll call the perimeter layer 1, and the rectangle inside it layer 2.
An important insight is that there is no point bombing layer 1, because the
"blast radius" you get from doing so is always contained within the blast radius of
another square from layer 2. You should be able to easily convince yourself of this.
So, we can reduce the problem to finding an optimal way to bomb away the perimeter, then we can repeat that until all squares are 0.
But of course, that won't always find an optimal solution if it's possible to bomb
away the perimeter in a less-than-optimal fashion, but use X extra bombs to make
the problem of reducing the inner layer simpler by more than X bombs. So, calling
the perimeter layer 1: if we place an extra X bombs somewhere in layer 2 (just
inside layer 1), can we reduce the effort of later bombing away layer 2 by more than
X? In other words, we have to prove we can be greedy in reducing the outer
perimeter.
But, we do know we can be greedy. Because no bomb in layer 2 can ever be more
efficient in reducing layer 2 to 0 than a strategically placed bomb in layer 3. And
for the same reason as before - there is always a bomb we can place in layer 3 that
will affect every square of layer 2 that a bomb placed in layer 2 can. So, it can
never harm us to be greedy (in this sense of greedy).
So, all we have to do is find the optimal way to reduce the perimeter to 0 by bombing
the next inner layer.
We are never hurt by first bombing the corner to 0, because only the corner of the inner layer can reach it, so we really have no choice (and, any bomb on the perimeter that can reach the corner has a blast radius contained in the blast radius from the corner of the inner layer).
Once we have done so, the squares on the perimeter adjacent to the 0 corner can only be reached by 2 squares from the inner layer:
0 A B
C X Y
D Z
At this point the perimeter is effectively a closed 1 dimensional loop, because any bomb will reduce 3 adjacent squares. Except for some weirdness near the corners - X can "hit" A,B,C,and D.
Now we can't use any blast radius tricks - the situation of each square is symmetric, except for the weird corners, and even there no blast radius is a subset of another. Note that if this were a line (as Colonel Panic discusses) instead of a closed loop the solution is trivial. The end points must be reduced to 0, and it never harms you to bomb the points adjacent to the end points, again because the blast radius is a superset. Once you have made your endpoint 0, you still have a new endpoint, so repeat (until the line is all 0).
So, if we can optimally reduce a single square in the layer to 0 we have an algorithm (because we have cut the loop and now have a straight line with endpoints). I believe bombing adjacent to the square with the lowest value (giving you 2 options) such that the highest value within 2 squares of that lowest value is the minimum possible (you may have to split your bombing to manage this) will be optimal but I don't (yet?) have a proof.
Pólya says "If you can't solve a problem, then there is an easier problem you can solve: find it."
The obvious simpler problem is the 1-dimensional problem (when the grid is a single row). Let's start with the simplest algorithm - greedily bombing the biggest target. When does this go wrong?
Given 1 1 1, the greedy algorithm is indifferent to which cell it bombs first. Of course, the centre cell is better - it zeros all three cells at once. This suggests a new algorithm A, "bomb to minimise the sum remaining". When does this algorithm go wrong?
Given 1 1 2 1 1, algorithm A is indifferent between bombing the 2nd, 3rd or 4th cells. But bombing the 2nd cell to leave 0 0 1 1 1 is better than bombing the 3rd cell to leave 1 0 1 0 1. How to fix that? The problem with bombing the 3rd cell is that it leaves us work to the left and work to the right which must be done separately.
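Here is a small Ruby sketch of algorithm A (the helper names are mine); on 1 1 2 1 1 it shows exactly that indifference, since bombing cell 2, 3 or 4 removes the same amount from the sum:
# Algorithm A: always bomb the cell whose 3-cell blast removes the most from the remaining sum.
def blast_range(line, i)
  ([i - 1, 0].max..[i + 1, line.size - 1].min)
end

def algorithm_a(line)
  line = line.dup
  bombs = 0
  until line.all?(&:zero?)
    target = (0...line.size).max_by { |i| blast_range(line, i).count { |j| line[j] > 0 } }
    blast_range(line, target).each { |j| line[j] = [line[j] - 1, 0].max }
    bombs += 1
  end
  bombs
end

algorithm_a([1, 1, 2, 1, 1])  #=> 2 or 3 bombs depending on how the tie is broken, which is exactly the weakness discussed above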
How about "bomb to minimise the sum remaining, but maximise the minimum to the left (of where we bombed) plus the minimum to the right". Call this algorithm B. When does this algorithm go wrong?
Edit: After reading the comments, I agree a much more interesting problem would be the one dimensional problem changed so that the ends join up. Would love to see any progress on that.
I had to stop at only a partial solution since I was out of time, but hopefully even this partial solution provides some insights on one potential approach to solving this problem.
When faced with a hard problem, I like to come up with simpler problems to develop an intuition about the problem space. Here, the first step I took was to reduce this 2-D problem into a 1-D problem. Consider a line:
0 4 2 1 3 0 1
Somehow or another, you know you will need to bomb at or around the 4 spot 4 times to get it down to 0. Since left of the spot is a lower number, there is no benefit to bombing the 0 or the 4 over bombing the 2. In fact, I believe (but lack a rigorous proof) that bombing the 2 until the 4 spot goes down to 0 is at least as good as any other strategy to get that 4 down to 0. One can proceed down the line left to right in a strategy like this:
index = 1
while index < line_length
  while number_at_index(index - 1) > 0
    bomb(index)
  end
  index++
end

# take care of the end of the line
while number_at_index(index - 1) > 0
  bomb(index - 1)
end
A couple sample bombing orders:
0 4[2]1 3 0 1
0 3[1]0 3 0 1
0 2[0]0 3 0 1
0 1[0]0 3 0 1
0 0 0 0 3[0]1
0 0 0 0 2[0]0
0 0 0 0 1[0]0
0 0 0 0 0 0 0
4[2]1 3 2 1 5
3[1]0 3 2 1 5
2[0]0 3 2 1 5
1[0]0 3 2 1 5
0 0 0 3[2]1 5
0 0 0 2[1]0 5
0 0 0 1[0]0 5
0 0 0 0 0 0[5]
0 0 0 0 0 0[4]
0 0 0 0 0 0[3]
0 0 0 0 0 0[2]
0 0 0 0 0 0[1]
0 0 0 0 0 0 0
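Here is a runnable Ruby sketch of that left-to-right strategy (the helper is mine); on the two sample lines it uses the same number of bombs as the traces above:
def bombs_for_line(values)
  line = values.dup
  count = 0
  bomb = lambda do |i|
    count += 1
    ([i - 1, 0].max..[i + 1, line.size - 1].min).each { |j| line[j] = [line[j] - 1, 0].max }
  end
  (1...line.size).each { |i| bomb.call(i) while line[i - 1] > 0 }
  bomb.call(line.size - 1) while line[line.size - 1] > 0  # take care of the end of the line
  count
end

bombs_for_line([0, 4, 2, 1, 3, 0, 1])  #=> 7
bombs_for_line([4, 2, 1, 3, 2, 1, 5])  #=> 12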
The idea of starting with a number that needs to go down one way or another is an appealing one, because it suddenly becomes attainable to find a solution that has some claim to being at least as good as all other solutions.
The next step up in complexity where this search for "at least as good" is still feasible is on the edge of the board. It is clear to me that there is never any strict benefit to bombing the outer edge; you're better off bombing the spot one in and getting three other spaces for free. Given this, we can say that bombing the ring one inside of the edge is at least as good as bombing the edge. Moreover, we can combine this with the intuition that bombing the ring one inside of the edge is actually the only way to get edge spaces down to 0. Even more, it is trivially simple to figure out the optimal strategy (in that it is at least as good as any other strategy) to get corner numbers down to 0. We put this all together and can get much closer to a solution in the 2-D space.
Given the observation about corner pieces, we can say for sure that we know the optimal strategy to go from any starting board to a board with zeros on all corners. This is an example of such a board (I borrowed the numbers from the two linear boards above). I've labelled some spaces differently, and I'll explain why.
0 4 2 1 3 0 1 0
4 x x x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0
One will notice that the top row closely resembles the linear example we saw earlier. Recall our earlier observation that the optimal way to get the top row all down to 0 is to bomb the second row (the x row). There is no way to clear the top row by bombing any of the y rows, and no additional benefit to bombing the top row over bombing the corresponding space on the x row.
We could apply the linear strategy from above (bombing the corresponding spaces on the x row), concerning ourselves only with the top row and nothing else. It would go something like this:
0 4 2 1 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0
0 3 1 0 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0
0 2 0 0 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0
0 1 0 0 3 0 1 0
4 x[x]x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0
0 0 0 0 3 0 1 0
4 x x x x x x 4
2 y y y y y y 2
1 y y y y y y 1
3 y y y y y y 3
2 y y y y y y 2
1 y y y y y y 1
5 y y y y y y 5
0 4 2 1 3 0 1 0
The flaw in this approach becomes very obvious in the final two bombings. The only bomb sites that reduce the 4 in the first column of the second row are the first x and the y below it. The final two bombings are clearly inferior to just bombing the first x, which would have done the exact same thing (with regard to the first spot in the top row, which we have no other way of clearing). Since we have demonstrated that our current strategy is suboptimal, a modification in strategy is clearly needed.
At this point, I can take a step back down in complexity and focus on just one corner. Let's consider this one:
0 4 2 1
4 x y a
2 z . .
1 b . .
It is clear the only way to get the spaces with 4 down to zero is to bomb some combination of x, y, and z. With some acrobatics in my mind, I'm fairly sure the optimal solution is to bomb x three times and then a then b. Now it's a matter of figuring out how I reached that solution and whether it reveals any intuition we can use to solve even this local problem. I notice that there's no bombing of the y and z spaces. Attempting to find a corner where bombing those spaces makes sense yields a corner that looks like this:
0 4 2 5 0
4 x y a .
2 z . . .
5 b . . .
0 . . . .
For this one, it is clear to me that the optimal solution is to bomb y 5 times and z 5 times. Let's go one step further.
0 4 2 5 6 0 0
4 x y a . . .
2 z . . . . .
5 b . . . . .
6 . . . . . .
0 . . . . . .
0 . . . . . .
Here, it feels similarly intuitive that the optimal solution is to bomb a and b 6 times and then x 4 times.
Now it becomes a game of how to turn those intuitions into principles we can build on.
Hopefully to be continued!
For the updated question, a simple greedy algorithm gives an optimal result.
Drop A[0,0] bombs on cell A[1,1], then drop A[1,0] bombs on cell A[2,1], and continue this process downwards. To clean the bottom left corner, drop max(A[N-1,0], A[N-2,0], A[N-3,0]) bombs on the cell A[N-2,1]. This will completely clean up the first 3 columns.
With the same approach, clean columns 3,4,5, then columns 6,7,8, etc.
Unfortunately this does not help in finding a solution for the original problem.
"Larger" problem (without "nonicreasing" constraint) may be proven to be NP-hard. Here is sketch of a proof.
Suppose we have a planar graph of degree up to 3. Let's find minimum vertex cover for this graph. According to Wikipedia article this problem is NP-hard for planar graphs of degree up to 3. This could be proven by reduction from Planar 3SAT. And hardness of Planar 3SAT - by reduction from 3SAT. Both these proofs are presented in recent lectures in "Algorithmic Lower Bounds" by prof. Erik Demaine (lectures 7 and 9).
If we split some edges of the original graph (left graph on the diagram), each one with even number of additional nodes, the resulting graph (right graph on the diagram) should have exactly the same minimum vertex cover for original vertices. Such transformation allows to align graph vertices to arbitrary positions on the grid.
If we place graph vertices only to even rows and columns (in such a way that no two edges incident to one vertex form an acute angle), insert "ones" wherever there is an edge, and insert "zeros" to other grid positions, we could use any solution for the original problem to find minimum vertex cover.
You can represent this problem as an integer programming problem (this is just one of the possible ways to approach it).
Having points:
a b c d
e f g h
i j k l
m n o p
one can write 16 inequalities, where, for point f for example, the following holds (with xi denoting the number of bombs dropped on cell x):
f <= ai + bi + ci + ei + fi + gi + ii + ji + ki
The sum of all the xi is then minimized over non-negative integer solutions.
The answer is, of course, the sum of these xi.
This can be further simplified by setting all xi on the boundary to 0, so you end up with 4+1 equations in this example.
The problem is that there is no trivial algorithm for solving such problems. I am not an expert on this, but solving this as an integer program is NP-hard in general.
This is a partial answer; I'm trying to find a lower bound and an upper bound on the possible number of bombs.
In a 3x3 or smaller board, the solution is trivially the value of the largest cell (bomb a cell that covers the whole board that many times).
In boards 4x4 and larger, the first obvious lower bound is the sum of the corners:
*2* 3 7 *1*
1 5 6 2
2 1 3 2
*6* 9 6 *4*
However you arrange the bombs, it is impossible to clear this 4x4 board with fewer than 2+1+6+4=13 bombs.
It has been mentioned in other answers that placing the bomb on the cell diagonally adjacent to the corner is never worse than placing the bomb on the corner itself, so given the board:
*2* 3 4 7 *1*
1 5 2 6 2
4 3 4 2 1
2 1 2 4 1
3 1 3 4 1
2 1 4 3 2
*6* 9 1 6 *4*
We can zero the corners out by placing bombs on the cells diagonally adjacent to the corners, to give a new board:
0 1 1 6 0
0 3 0 5 1
2 1 1 1 0
2 1 2 4 1
0 0 0 0 0
0 0 0 0 0
0 3 0 2 0
So far so good. We need 13 bombs to clear the corners.
Now observe the numbers 6, 4, 3, and 2 marked below:
0 1 1 *6* 0
0 3 0 5 1
2 1 1 1 0
*2* 1 2 *4* 1
0 0 0 0 0
0 0 0 0 0
0 *3* 0 2 0
There is no way to hit any two of those cells with a single bomb, so the lower bound increases by 6+4+3+2 = 15; adding the 13 bombs used to clear the corners, we find that at least 28 bombs are required for this map. It is impossible to clear this map with fewer than 28 bombs; this is the lower bound for this map.
You can use a greedy algorithm to establish an upper bound. Other answers have shown that a greedy algorithm produces a solution that uses 28 bombs. Since we've proven that no solution can use fewer than 28 bombs, 28 bombs is indeed an optimal solution.
When the greedy upper bound and the lower bound described above do not coincide, though, I guess you do have to go back to checking all combinations.
The algorithm for finding the lower bound is the following:
Pick an element with the highest number, name it P.
Mark all cells two steps away from P and P itself as unpickable.
Add P to the minimums list.
Repeat to step 1 until all cells are unpickable.
Sum the minimums list to get the lower bound.
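A small Ruby sketch of that lower-bound procedure ("two steps away" is interpreted here as Chebyshev distance <= 2, i.e. no single bomb can hit two picked cells; the names are mine):
def lower_bound(grid)
  rows, cols = grid.size, grid.first.size
  pickable = Array.new(rows) { Array.new(cols, true) }
  bound = 0
  loop do
    # pick the highest pickable cell
    best = nil
    rows.times do |r|
      cols.times do |c|
        best = [r, c] if pickable[r][c] && (best.nil? || grid[r][c] > grid[best[0]][best[1]])
      end
    end
    break if best.nil?
    bound += grid[best[0]][best[1]]
    # mark P and everything within two steps of it as unpickable
    (best[0] - 2..best[0] + 2).each do |r|
      (best[1] - 2..best[1] + 2).each do |c|
        pickable[r][c] = false if r.between?(0, rows - 1) && c.between?(0, cols - 1)
      end
    end
  end
  bound
end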
This would be a greedy approach:
Calculate a "score" matrix of order n X m, where score[i][j] is the total deduction of points in the matrix if position (i,j) is bombed. (Max score of a point is 9 and min score is 0)
Moving row-wise, find and pick the first position with the highest score (say (i,j)).
Bomb (i,j). Increase the bomb count.
If not all elements of the matrix are zero yet, go to step 1.
I have my doubts that this is the optimal solution, though.
Edit:
The greedy approach I posted above, while it works, most probably doesn't give us the optimal solution. So I figured I should add some elements of DP to it.
I think we can agree that at any point of time, one of the positions with the highest "score" (score[i][j] = total deduction of points if (i,j) is bombed) must be targeted. Starting with this assumption, here's the new approach:
NumOfBombs(M): (returns the minimum number of bombings required)
Given a Matrix M of order n X m. If all elements of M are zero, then return 0.
Calculate the "score" matrix M.
Let the k distinct positions P1,P2,...Pk (1 <= k <= n*m), be the positions in M with the highest scores.
return (1 + min( NumOfBombs(M1), NumOfBombs(M2), ..., NumOfBombs(Mk) ) )
where M1,M2,...,Mk are the resulting matrices if we bomb positions P1, P2, ..., Pk respectively.
Also, if we want the order of positions to nuke in addition to this, we would have to keep track of the results of "min".
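A direct (exponential) Ruby sketch of NumOfBombs, memoised on the board state; the helper names are mine and it is only practical for tiny boards:
def score(m, r, c)
  # number of non-zero cells a bomb on (r, c) would actually reduce (0..9)
  (r - 1..r + 1).sum do |i|
    (c - 1..c + 1).count do |j|
      i.between?(0, m.size - 1) && j.between?(0, m[0].size - 1) && m[i][j] > 0
    end
  end
end

def bomb(m, r, c)
  m.each_with_index.map do |row, i|
    row.each_with_index.map { |v, j| (i - r).abs <= 1 && (j - c).abs <= 1 ? [v - 1, 0].max : v }
  end
end

def num_of_bombs(m, memo = {})
  return 0 if m.all? { |row| row.all?(&:zero?) }
  memo[m] ||= begin
    scored = m.each_index.flat_map { |r| m[0].each_index.map { |c| [score(m, r, c), r, c] } }
    top = scored.map(&:first).max
    1 + scored.select { |s, _, _| s == top }
              .map { |_, r, c| num_of_bombs(bomb(m, r, c), memo) }
              .min
  end
end

num_of_bombs([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 0]])  #=> 1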
Your new problem, with the non-increasing values across rows, is quite easy to solve.
Observe that the left column contains the highest numbers. Therefore, any optimal solution must first reduce this column to zero. Thus, we can perform a 1-D bombing run over this column, reducing every element in it to zero. We let the bombs fall on the second column so they do maximum damage. There are many posts here dealing with the 1D case, I think, so I feel safe in skipping that case. (If you want me to describe it, I can.) Because of the decreasing property, the three leftmost columns will all be reduced to zero. But we will provably use a minimum number of bombs here, because the left column must be zeroed.
Now, once the left column is zeroed, we just trim off the three leftmost columns that are now zeroed and repeat with the now-reduced matrix. This must give us an optimal solution since at each stage we use a provably minimum number of bombs.
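A rough Ruby sketch of this column-by-column strategy (it reuses the 1-D left-to-right run from the earlier answers, relies on the non-increasing row property, and the names are mine):
def bombs_nonincreasing(grid)
  g = grid.map(&:dup)
  rows, cols = g.size, g.first.size
  count = 0
  bomb = lambda do |r, c|
    count += 1
    ([r - 1, 0].max..[r + 1, rows - 1].min).each do |i|
      ([c - 1, 0].max..[c + 1, cols - 1].min).each { |j| g[i][j] = [g[i][j] - 1, 0].max }
    end
  end
  col = 0
  while col < cols
    target = [col + 1, cols - 1].min  # drop on the column just inside the current left column
    # 1-D run down the leftmost remaining column
    (1...rows).each { |r| bomb.call(r, target) while g[r - 1][col] > 0 }
    bomb.call(rows - 1, target) while g[rows - 1][col] > 0
    col += 3  # by the non-increasing property the three leftmost remaining columns are now zero
  end
  count
end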
Mathematica Integer Linear Programming using branch-and-bound
As it has already been mentioned, this problem can be solved using integer linear programming (which is NP-hard). Mathematica already has ILP built in. "To solve an integer linear programming problem Mathematica first solves the equational constraints, reducing the problem to one containing inequality constraints only. Then it uses lattice reduction techniques to put the inequality system in a simpler form. Finally, it solves the simplified optimization problem using a branch-and-bound method." [see the Constrained Optimization tutorial in Mathematica.]
I've written the following code that utilizes ILP libraries of Mathematica. It is surprisingly fast.
solveMatrixBombProblem[problem_, r_, c_] :=
  Module[{},
   bombEffect[x_, y_, m_, n_] :=
    Table[If[(i == x || i == x - 1 || i == x + 1) && (j == y ||
         j == y - 1 || j == y + 1), 1, 0], {i, 1, m}, {j, 1, n}];
   bombMatrix[m_, n_] :=
    Transpose[
     Table[Table[
       Part[bombEffect[(i - Mod[i, n])/n + 1, Mod[i, n] + 1, m,
         n], (j - Mod[j, n])/n + 1, Mod[j, n] + 1], {j, 0,
        m*n - 1}], {i, 0, m*n - 1}]];
   X := x /@ Range[c*r];
   sol = Minimize[{Total[X],
      And @@ Thread[bombMatrix[r, c].X >= problem] &&
       And @@ Thread[X >= 0] && Total[X] <= 10^100 &&
       Element[X, Integers]}, X];
   Print["Minimum required bombs = ", sol[[1]]];
   Print["A possible solution = ",
    MatrixForm[
     Table[x[c*i + j + 1] /. sol[[2]], {i, 0, r - 1}, {j, 0,
       c - 1}]]];]
For the example provided in the problem:
solveMatrixBombProblem[{2, 3, 4, 7, 1, 1, 5, 2, 6, 2, 4, 3, 4, 2, 1, 2, 1, 2, 4, 1, 3, 1, 3, 4, 1, 2, 1, 4, 3, 2, 6, 9, 1, 6, 4}, 7, 5]
It outputs the minimum number of bombs and one possible solution matrix (shown as an image in the original answer).
For anyone reading this with a greedy algorithm
Try your code on the following 10x10 problem:
5 20 7 1 9 8 19 16 11 3
17 8 15 17 12 4 5 16 8 18
4 19 12 11 9 7 4 15 14 6
17 20 4 9 19 8 17 2 10 8
3 9 10 13 8 9 12 12 6 18
16 16 2 10 7 12 17 11 4 15
11 1 15 1 5 11 3 12 8 3
7 11 16 19 17 11 20 2 5 19
5 18 2 17 7 14 19 11 1 6
13 20 8 4 15 10 19 5 11 12
Here it is comma-separated:
5, 20, 7, 1, 9, 8, 19, 16, 11, 3, 17, 8, 15, 17, 12, 4, 5, 16, 8, 18, 4, 19, 12, 11, 9, 7, 4, 15, 14, 6, 17, 20, 4, 9, 19, 8, 17, 2, 10, 8, 3, 9, 10, 13, 8, 9, 12, 12, 6, 18, 16, 16, 2, 10, 7, 12, 17, 11, 4, 15, 11, 1, 15, 1, 5, 11, 3, 12, 8, 3, 7, 11, 16, 19, 17, 11, 20, 2, 5, 19, 5, 18, 2, 17, 7, 14, 19, 11, 1, 6, 13, 20, 8, 4, 15, 10, 19, 5, 11, 12
For this problem, my solution contains 208 bombs; a possible solution (shown as an image in the original answer) was found in about 12 seconds.
As a way to test the results Mathematica is producing, see if your greedy algorithm can do any better.
There is no need to transform the problem to linear sub-problems.
Instead use a simple greedy heuristic, which is to bomb the corners, starting with the largest one.
In the given example there are four corners, { 2, 1, 6, 4 }. For each corner there is no better move than to bomb the cell diagonal to the corner, so we know for a fact our first 2+1+6+4 = 13 bombings must be in these diagonal cells. After doing the bombing we are left with a new matrix:
2 3 4 7 1        0 1 1 6 0
1 5 2 6 2        0 3 0 5 1
4 3 4 2 1        2 1 1 1 0
2 1 2 4 1   =>   2 1 2 4 1
3 1 3 4 1        0 0 0 0 0
2 1 4 3 2        0 0 0 0 0
6 9 1 6 4        0 3 0 2 0
After the first 13 bombings we use the heuristic to eliminate 3 0 2 via three bombings. Now, we have 2 new corners, { 2, 1 } in the 4th row. We bomb those, another 3 bombings. We have reduced the matrix to 4 x 4 now. There is one corner, the upper left. We bomb that. Now we have 2 corners left, { 5, 3 }. Since 5 is the largest corner we bomb that first, 5 bombings, then finally bomb the 3 in the other corner. The total is 13+3+3+1+5+3 = 28.
This does a breadth-first search for the shortest path (a series of bombings) through this "maze" of positions. No, I cannot prove that there is no faster algorithm, sorry.
#!/usr/bin/env python

M = ((1,2,3,4),
     (2,3,4,5),
     (5,2,7,4),
     (2,3,5,8))

def eachPossibleMove(m):
    for y in range(1, len(m)-1):
        for x in range(1, len(m[0])-1):
            if (0 == m[y-1][x-1] == m[y-1][x] == m[y-1][x+1] ==
                     m[y][x-1]   == m[y][x]   == m[y][x+1] ==
                     m[y+1][x-1] == m[y+1][x] == m[y+1][x+1]):
                continue
            yield x, y

def bomb(m, (mx, my)):
    return tuple(tuple(max(0, m[y][x]-1)
                       if mx-1 <= x <= mx+1 and my-1 <= y <= my+1
                       else m[y][x]
                       for x in range(len(m[y])))
                 for y in range(len(m)))

def findFirstSolution(m, path=[]):
    # print path
    # print m
    if sum(map(sum, m)) == 0:  # empty?
        return path
    for move in eachPossibleMove(m):
        return findFirstSolution(bomb(m, move), path + [ move ])

def findShortestSolution(m):
    black = {}
    nextWhite = { m: [] }
    while nextWhite:
        white = nextWhite
        nextWhite = {}
        for position, path in white.iteritems():
            for move in eachPossibleMove(position):
                nextPosition = bomb(position, move)
                nextPath = path + [ move ]
                if sum(map(sum, nextPosition)) == 0:  # empty?
                    return nextPath
                if nextPosition in black or nextPosition in white:
                    continue  # ignore, found that one before
                nextWhite[nextPosition] = nextPath

def main(argv):
    if argv[1] == 'first':
        print findFirstSolution(M)
    elif argv[1] == 'shortest':
        print findShortestSolution(M)
    else:
        raise NotImplementedError(argv[1])

if __name__ == '__main__':
    import sys
    sys.exit(main(sys.argv))
This greedy solution seems to be correct:
As pointed out in the comments, it will fail in 2D. But maybe you can improve on it.
For 1D:
If there are at least 2 cells, you never need to shoot at the leftmost one, because shooting at the second is not worse. So shoot at the second cell while the first isn't 0, because you have to clear it anyway. Then move to the next cell. Don't forget about the last cell.
C++ code:
#include <vector>
using namespace std;

int ans = 0;  // total number of bombs used

void bombs(vector<int>& v, int i, int n){
    ans += n;
    v[i] -= n;
    if(i > 0)
        v[i - 1] -= n;
    if(i + 1 < v.size())
        v[i + 1] -= n;
}

void solve(vector<int> v){
    int n = v.size();
    for(int i = 0; i < n; ++i){
        if(i != n - 1){
            bombs(v, i + 1, v[i]);
        }
        else
            bombs(v, i, v[i]);
    }
}
So for 2D:
Again: you don't need to shoot in the first row (if there is a second one), so shoot at the second row instead and solve the 1D task for the first row (because you need to make it all zero). Then go down a row and repeat. Don't forget the last row.
It seems that a linear programming approach can be very helpful here.
Let P be the m x n matrix with the values of the positions.
Now define a bomb matrix B(x, y), also m x n, with 1 <= x <= m, 1 <= y <= n, whose entries are 1 on (x, y) and its eight neighbours and 0 everywhere else; B(x, y) describes the effect of one bomb dropped on position (x, y).
We are looking for a matrix B = [bij] that can be written as a sum of bomb matrices,
B = sum over all i, j of qij * B(i, j)
(qij would then be the quantity of bombs we drop on position pij), such that
pij - bij <= 0 (to be more succinct, let us say it as P - B <= 0).
Also, B should minimize the total sum of the qij.
Since P - B <= 0 (which means P <= B), expanding B as the sum above gives a pretty linear inequality system: reading q and p as vectors of length mn, we get a system S q >= p, where S is the mn x mn matrix of coefficients. I did not expand it myself, but I believe it should be easy to do in code.
Now, we have a minimization problem which can be stated as: minimize the sum of the qij subject to S q >= p and q >= 0.
I believe it is something easy, almost trivial, to solve with something like the simplex algorithm (there is this rather cool doc about it). However, I know almost no linear programming (I will take a course about it on Coursera, but that is in the future...), I had some headaches trying to understand it, and I have a huge freelance job to finish, so I just give up here. It may be that I did something wrong at some point, or that it can't go any further, but I believe this path can eventually lead to the solution. Anyway, I am anxious for your feedback.
To minimize the number of bombs, we have to maximize the effect of every bomb. To achieve this, on every step we have to select the best target. For each point, the sum of it and its eight neighbours can be used as the efficiency measure of bombing that point. This will provide a close-to-optimal sequence of bombs.
UPD: We should also take the number of zeros into account, because bombing them is inefficient. In fact the problem is to minimize the number of hit zeros. But we cannot know how any single step gets us closer to this aim. I agree with the idea that the problem is NP-complete. I suggest a greedy approach, which will give an answer close to the real one.
I believe that to minimize the number of bombs you simply need to maximize the amount of damage.
For that to happen, you need to find the area with the strongest concentration: analyze the field with a 3x3 kernel, find where the sum is highest, bomb there, and repeat until the field is flat. For this field the answer is 28.
var oMatrix = [
    [2,3,4,7,1],
    [1,5,2,6,2],
    [4,3,4,2,1],
    [2,1,2,4,1],
    [3,1,3,4,1],
    [2,1,4,3,2],
    [6,9,1,6,4]
]

var nBombs = 0;
do
{
    var bSpacesLeftToBomb = false;
    var nHigh = 0;
    var nCellX = 0;
    var nCellY = 0;

    for(var y = 1 ; y<oMatrix.length-1;y++)
        for(var x = 1 ; x<oMatrix[y].length-1;x++)
        {
            var nValue = 0;
            for(var yy = y-1;yy<=y+1;yy++)
                for(var xx = x-1;xx<=x+1;xx++)
                    nValue += oMatrix[yy][xx];

            if(nValue>nHigh)
            {
                nHigh = nValue;
                nCellX = x;
                nCellY = y;
            }
        }

    if(nHigh>0)
    {
        nBombs++;
        for(var yy = nCellY-1;yy<=nCellY+1;yy++)
        {
            for(var xx = nCellX-1;xx<=nCellX+1;xx++)
            {
                if(oMatrix[yy][xx]<=0)
                    continue;
                oMatrix[yy][xx] = --oMatrix[yy][xx];
            }
        }
        bSpacesLeftToBomb = true;
    }
}
while(bSpacesLeftToBomb);

alert(nBombs+'bombs');
Here is a solution that generalizes the good properties of the corners.
Let's assume that we could find a perfect drop point for a given field, that is, a best way to decrease the value in it. Then to find the minimum number of bombs to be dropped, a first draft of an algorithm could be (the code is copy-pasted from a ruby implementation):
dropped_bomb_count = 0
while there_are_cells_with_non_zero_count_left
  coordinates = choose_a_perfect_drop_point
  drop_bomb(coordinates)
  dropped_bomb_count += 1
end
return dropped_bomb_count
The challenge is choose_a_perfect_drop_point. First, let's define what a perfect drop point is.
A drop point for (x, y) decreases the value in (x, y). It may also decrease values in other cells.
A drop point a for (x, y) is better than a drop point b for (x, y) if it decreases the values in a proper superset of the cells that b decreases.
A drop point is maximal if there is no other better drop point.
Two drop points for (x, y) are equivalent if they decrease the same set of cells.
A drop point for (x, y) is perfect if it is equivalent to all maximal drop points for (x, y).
If there is a perfect drop point for (x, y), you cannot decrease the value at (x, y) more effectively than to drop a bomb on one of the perfect drop points for (x, y).
A perfect drop point for a given field is a perfect drop point for any of its cells.
Here are a few examples:
1 0 1 0 0
0 0 0 0 0
1 0 0 0 0
0 0 0 0 0
0 0 0 0 0
The perfect drop point for the cell (0, 0) (zero-based index) is (1, 1). All other drop points for (0, 0), that is (0, 0), (0, 1), and (1, 0), decrease fewer cells.
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
A perfect drop point for the cell (2, 2) (zero-based index) is (2, 2), and also all the surrounding cells (1, 1), (1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2), and (3, 3).
0 0 0 0 1
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
A perfect drop point for the cell (2, 2) is (3, 1): It decreases the value in (2, 2), and the value in (4, 0). All other drop points for (2, 2) are not maximal, as they decrease one cell less. The perfect drop point for (2, 2) is also the perfect drop point for (4, 0), and it is the only perfect drop point for the field. It leads to the perfect solution for this field (one bomb drop).
1 0 0 0 0
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
1 0 0 0 0
There is no perfect drop point for (2, 2): Both (1, 1) and (1, 3) decrease (2, 2) and another cell (they are maximal drop points for (2, 2)), but they are not equivalent. However, (1, 1) is a perfect drop point for (0, 0), and (1, 3) is a perfect drop point for (0, 4).
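To make these definitions concrete, here is a small Ruby sketch (the names are mine) that computes the maximal drop points of a cell, treating coordinates as (x, y) = (column, row), which matches the examples; on the first example above it finds (1, 1) as the single maximal, hence perfect, drop point for (0, 0):
require 'set'

# Cells a bomb dropped on (x, y) would actually decrease.
def decreased_cells(grid, x, y)
  cells = Set.new
  (y - 1..y + 1).each do |r|
    (x - 1..x + 1).each do |c|
      cells << [c, r] if r.between?(0, grid.size - 1) && c.between?(0, grid[0].size - 1) && grid[r][c] > 0
    end
  end
  cells
end

# Drop points for (x, y): every in-grid cell whose blast covers (x, y).
def drop_points_for(grid, x, y)
  (y - 1..y + 1).flat_map { |r| (x - 1..x + 1).map { |c| [c, r] } }
                .select { |c, r| r.between?(0, grid.size - 1) && c.between?(0, grid[0].size - 1) }
end

# Keep only the drop points not strictly dominated by a better one.
def maximal_drop_points(grid, x, y)
  points = drop_points_for(grid, x, y)
  points.reject do |a|
    points.any? { |b| b != a && decreased_cells(grid, *a) < decreased_cells(grid, *b) }
  end
end

grid = [[1, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
maximal_drop_points(grid, 0, 0)  #=> [[1, 1]]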
With that definition of perfect drop points and a certain order of checks, I get the following result for the example in the question:
Drop bomb on 1, 1
Drop bomb on 1, 1
Drop bomb on 1, 5
Drop bomb on 1, 5
Drop bomb on 1, 5
Drop bomb on 1, 6
Drop bomb on 1, 2
Drop bomb on 1, 2
Drop bomb on 0, 6
Drop bomb on 0, 6
Drop bomb on 2, 1
Drop bomb on 2, 5
Drop bomb on 2, 5
Drop bomb on 2, 5
Drop bomb on 3, 1
Drop bomb on 3, 0
Drop bomb on 3, 0
Drop bomb on 3, 0
Drop bomb on 3, 0
Drop bomb on 3, 0
Drop bomb on 3, 4
Drop bomb on 3, 4
Drop bomb on 3, 3
Drop bomb on 3, 3
Drop bomb on 3, 6
Drop bomb on 3, 6
Drop bomb on 3, 6
Drop bomb on 4, 6
28
However, the algorithm only works if there is at least one perfect drop point after each step. It is possible to construct examples where there are no perfect drop points:
0 1 1 0
1 0 0 1
1 0 0 1
0 1 1 0
For these cases, we can modify the algorithm so that instead of a perfect drop point, we choose a coordinate with a minimal choice of maximal drop points, then calculate the minimum for each choice. In the case above, all cells with values have two maximal drop points. For example, (0, 1) has the maximal drop points (1, 1) and (1, 2). Choosing either one and then calculating the minimum leads to this result:
Drop bomb on 1, 1
Drop bomb on 2, 2
Drop bomb on 1, 2
Drop bomb on 2, 1
2
Here's another idea:
Let's start by assigning a weight to each space on the board for how many numbers would be reduced by dropping a bomb there. So if the space has a non-zero number, it gets a point, and it gets an additional point for each adjacent space with a non-zero number. So if there is a 1000-by-1000 grid, we have a weight assigned to each of the 1 million spaces.
Then sort the list of spaces by weight, and bomb the one with the highest weight. This is getting the most bang for our buck, so to speak.
After that, update the weight of every space whose weight is affected by the bomb. This will be the space you bombed, and any space immediately adjacent to it, and any space immediately adjacent to those. In other words, any space which could have had its value reduced to zero by the bombing, or the value of a neighboring space reduced to zero.
Then, re-sort the list of spaces by weight. Since only a small subset of spaces had their weight changed by the bombing, you won't need to re-sort the whole list, just move those ones around in the list.
Bomb the new highest weight space, and repeat the procedure.
This guarantees that every bombing reduces as many spaces as possible (basically, it hits as few spaces which are already zero as possible), so it would be optimal, except that there can be ties in weights. So you may need to do some backtracking when there is a tie for the top weight. Only a tie for the top weight matters, though, not other ties, so hopefully it's not too much backtracking.
Edit:
Mysticial's counterexample below demonstrates that in fact this isn't guaranteed to be optimal, regardless of ties in weights. In some cases reducing the weight as much as possible in a given step actually leaves the remaining bombs too spread out to achieve as high a cumulative reduction after the second step as you could have with a slightly less greedy choice in the first step. I was somewhat misled by the notion that the results are insensitive to the order of bombings. They are insensitive to the order in that you could take any series of bombings and replay them from the start in a different order and end up with the same resulting board. But it doesn't follow from that that you can consider each bombing independently. Or, at least, each bombing must be considered in a way that takes into account how well it sets up the board for subsequent bombings.
Well, suppose we number the board positions 1, 2, ..., n x m. Any sequence of bomb drops can be represented by a sequence of numbers in this set, where numbers can repeat. However, the effect on the board is the same regardless of what order you drop the bombs in, so really any choice of bomb drops can be represented as a list of n x m numbers, where the first number represents the number of bombs dropped on position 1, the second number represents the number of bombs dropped on position 2, etc. Let's call this list of n x m numbers the "key".
You could try first calculating all board states resulting from 1 bomb drop, then use these to calculate all board states resulting from 2 bomb drops, etc until you get all zeros. But at each step you would cache the states using the key I defined above, so you can use these results in calculating the next step (a "dynamic programming" approach).
But depending on the size of n, m, and the numbers in the grid, the memory requirements of this approach might be excessive. You can throw away all the results for N bomb drops once you've calculated all the results for N + 1, so there's some savings there. And of course you could not cache anything at the cost of having it take a lot longer -- the dynamic programming approach trades memory for speed.
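As a hedged illustration of that level-by-level idea in Python: instead of the drop-count key it deduplicates on the resulting board itself (an equivalent or stronger form of caching), and it keeps only the current level's set of states, discarding the previous one as described. It finds the optimal count but the state sets grow very quickly, so it is only practical for small boards:

def bomb(board, r, c):
    # new board (tuple of tuples) after one bomb at (r, c)
    return tuple(tuple(max(0, v - 1) if abs(rr - r) <= 1 and abs(cc - c) <= 1 else v
                       for cc, v in enumerate(row))
                 for rr, row in enumerate(board))

def min_bombs(board):
    # breadth-first search over distinct board states, one "level" per bomb dropped
    rows, cols = len(board), len(board[0])
    level = {tuple(tuple(row) for row in board)}
    bombs = 0
    while not any(all(v == 0 for row in state for v in row) for state in level):
        level = {bomb(state, r, c)
                 for state in level
                 for r in range(rows)
                 for c in range(cols)}
        bombs += 1
    return bombs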
If you want the absolute optimal solution to clear the board you will have to use classic backtracking, but if the matrix is very big it will take ages to find the best solution. If a "possibly optimal" solution is enough, you can use a greedy algorithm; if you need help writing the algorithm I can help you.
Come to think of it, that is the best way. Make another matrix where you store the points you would remove by dropping a bomb on each cell, then choose the cell with the maximum points, drop the bomb there, update the points matrix, and continue. Example:
2 3 5 -> (2+(1*3)) (3+(1*5)) (5+(1*3))
1 3 2 -> (1+(1*4)) (3+(1*7)) (2+(1*4))
1 0 2 -> (1+(1*2)) (0+(1*5)) (2+(1*2))
cell value +1 for every adjacent cell with a value higher than 0
Brute Force !
I know it is not efficient, but even if you find a faster algorithm, you can always test against this result to know how accurate it is.
Use some recursion, like this:
void fn(tableState ts, currentLevel cl)
{
// first check if ts is all zeros; if so, record cl as a candidate minimum and return
//
// otherwise, do a for loop to go through all cells of ts;
// for each cell, drop a bomb there (on a copy of ts, or undo it afterwards), and then
// call:
// fn(ts, cl + 1);
}
You can make this more efficient by caching: if different orders of moves lead to the same result, you shouldn't repeat the same steps.
To elaborate:
if bombing cells 1, 3, 5 leads to the same result as bombing cells 5, 3, 1, then you shouldn't redo all the following steps for both cases; doing it once is enough. You should store all table states somewhere and reuse their results.
A hash of the table states can be used for fast comparison.
1. Never bomb the border (unless a square has no non-border neighbour).
2. Zero out a corner.
3. To zero out a corner, drop the corner's value one square away diagonally (its only non-border neighbour); a small sketch of this step follows below.
4. This will create new corners. Go to 2.
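A tiny Python sketch of step 3, as I read it (the helper name and coordinate convention are mine, and it assumes the grid is at least 2x2): the corner's only non-border neighbour is its diagonal neighbour, so bombing that cell repeatedly drives the corner to zero.

def clear_corner(board, corner):
    # zero out `corner` by repeatedly bombing its only non-border (diagonal) neighbour
    rows, cols = len(board), len(board[0])
    r, c = corner                            # one of the four corners of the grid
    br = r + (1 if r == 0 else -1)           # the diagonal drop point, one step inward
    bc = c + (1 if c == 0 else -1)
    drops = 0
    while board[r][c] > 0:
        for rr in range(max(0, br - 1), min(rows, br + 2)):
            for cc in range(max(0, bc - 1), min(cols, bc + 2)):
                if board[rr][cc] > 0:
                    board[rr][cc] -= 1       # the blast hits the drop point and its neighbours
        drops += 1
    return drops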
Edit: I did not notice that Kostek suggested almost the same approach, so now I make a stronger claim:
If the corners to clear are always chosen on the outermost layer, then this is optimal.
In OP's example: dropping 2 (as 1+1 or 2) on anything other than the 5 does not lead to hitting any square that dropping on the 5 would hit. So we simply must drop 2 on the 5 (and 6 on the lower left 1 ...).
After this, there is only one way to clear the (top left) corner that was originally 1 (now 0), and that is by dropping 0 on B3 (Excel-like notation).
And so on.
Only after clearing the whole A and E columns and rows 1 and 7 do you start clearing one layer deeper.
Count as cleared only the cells cleared intentionally; clearing 0-value corners costs nothing and simplifies thinking about it.
Because all bombs dropped this way must be dropped, and this leads to cleared fields, it is an optimal solution.
After a good night's sleep I realized that this is not true.
Consider
ABCDE
1 01000
2 10000
3 00000
4 00000
My approach would drop bombs on B3 and C2, whereas dropping on B2 would be enough.
Here's my solution.. I won't write it out in code yet since I don't have time, but I believe this should produce an optimal number of moves each time - though I'm not sure how efficient it would be at finding the points to bomb.
Firstly, as #Luka Rahne stated in one of the comments, the order in which you bomb is not important- only the combination.
Secondly, as many others have stated, bombing 1-off the diagonal from the corners is optimal because it touches more points than the corners.
This generates the basis for my version of the algorithm:
We can bomb the '1-off from the corners' first or last, it doesn't matter (in theory)
We bomb those first because it makes later decisions easier (in practice)
We bomb the point which affects the most points, while simultaneously bombing those corners.
Let's define Points Of Resistance to be the points in the board with the most non-bombable points + largest number of 0's around them
non-bombable points can be defined as points which don't exist in our current scope of the board we're looking at.
I'll also define 4 bounds which will handle our scope:
Top=0, Left=0, Bottom=k,right=j.
(values to start)
Finally, I'll define optimal bombs as bombs which are dropped on points that are adjacent to points of resistance and are touching (1) the highest valued point of resistance and (2) the largest number of points possible.
Regarding the approach- it's obvious we're working from the outside in. We will be able to work with 4 'bombers' at the same time.
The first points of resistance are obviously our corners. The 'out of bound' points are not bombable (there are 5 points outside the scope for each corner). So we bomb the points diagonally one off the corners first.
Algorithm:
Find the 4 optimal bomb points.
If a bomb point is bombing a resistance point which is touching 2 bounds (i.e. a corner), bomb till that point is 0. Otherwise, bomb each until one of the points of resistance touching the optimal bomb point is 0.
for each bound:
if(sum(bound)==0) advance bound
repeat until TOP=BOTTOM and LEFT=RIGHT
I will try to write the actual code later
You could use state space planning.
For example, using A* (or one of its variants) coupled with a heuristic f = g + h like this:
g: number of bombs dropped so far
h: sum over all values of the grid, divided by 9 (9 is the most a single bomb can remove, so this never overestimates, giving an admissible heuristic)
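A minimal A* sketch along those lines (not part of the original answer, so treat it as an illustration): states are immutable boards, g is the number of bombs dropped so far, and h is the grid total divided by 9 and rounded up, which never overestimates because a single bomb removes at most 9 points. Helper names are mine.

import heapq
from math import ceil

def bomb(board, r, c):
    # new board (tuple of tuples) after a single bomb at (r, c)
    return tuple(tuple(max(0, v - 1) if abs(rr - r) <= 1 and abs(cc - c) <= 1 else v
                       for cc, v in enumerate(row))
                 for rr, row in enumerate(board))

def a_star(board):
    # returns the minimum number of bombs needed to zero the whole board
    start = tuple(tuple(row) for row in board)
    rows, cols = len(start), len(start[0])
    h = lambda b: ceil(sum(map(sum, b)) / 9)   # admissible: one bomb removes at most 9 points
    open_heap = [(h(start), 0, start)]         # entries are (f, g, state)
    best_g = {start: 0}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if g > best_g.get(state, g):
            continue                           # stale heap entry
        if h(state) == 0:
            return g                           # all cells are zero
        for r in range(rows):
            for c in range(cols):
                nxt = bomb(state, r, c)
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None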
I got 28 moves as well. I used two tests for the best next move: first the move producing the minimum sum for the board. Second, for equal sums, the move producing the maximum density, defined as:
number-of-zeros / number-of-groups-of-zeros
This is Haskell. "solve board" shows the engine's solution. You can play the game by typing "main", then enter a target point, "best" for a recommendation, or "quit" to quit.
OUTPUT:
*Main> solve board
[(4,4),(3,6),(3,3),(2,2),(2,2),(4,6),(4,6),(2,6),(3,2),(4,2),(2,6),(3,3),(4,3),(2,6),(4,2),(4,6),(4,6),(3,6),(2,6),(2,6),(2,4),(2,4),(2,6),(3,6),(4,2),(4,2),(4,2),(4,2)]
import Data.List
import Data.List.Split
import Data.Ord
import Data.Function(on)
board = [2,3,4,7,1,
1,5,2,6,2,
4,3,4,2,1,
2,1,2,4,1,
3,1,3,4,1,
2,1,4,3,2,
6,9,1,6,4]
n = 5
m = 7
updateBoard board pt =
let x = fst pt
y = snd pt
precedingLines = replicate ((y-2) * n) 0
bomb = concat $ replicate (if y == 1
then 2
else min 3 (m+2-y)) (replicate (x-2) 0
++ (if x == 1
then [1,1]
else replicate (min 3 (n+2-x)) 1)
++ replicate (n-(x+1)) 0)
in zipWith (\a b -> max 0 (a-b)) board (precedingLines ++ bomb ++ repeat 0)
showBoard board =
let top = " " ++ (concat $ map (\x -> show x ++ ".") [1..n]) ++ "\n"
chunks = chunksOf n board
in putStrLn (top ++ showBoard' chunks "" 1)
where showBoard' [] str count = str
showBoard' (x:xs) str count =
showBoard' xs (str ++ show count ++ "." ++ show x ++ "\n") (count+1)
instances _ [] = 0
instances x (y:ys)
| x == y = 1 + instances x ys
| otherwise = instances x ys
density a =
let numZeros = instances 0 a
groupsOfZeros = filter (\x -> head x == 0) (group a)
in if null groupsOfZeros then 0 else numZeros / fromIntegral (length groupsOfZeros)
boardDensity board = sum (map density (chunksOf n board))
moves = [(a,b) | a <- [2..n-1], b <- [2..m-1]]
bestMove board =
let lowestSumMoves = take 1 $ groupBy ((==) `on` snd)
$ sortBy (comparing snd) (map (\x -> (x, sum $ updateBoard board x)) (moves))
in if null lowestSumMoves
then (0,0)
else let lowestSumMoves' = map (\x -> fst x) (head lowestSumMoves)
in fst $ head $ reverse $ sortBy (comparing snd)
(map (\x -> (x, boardDensity $ updateBoard board x)) (lowestSumMoves'))
solve board = solve' board [] where
solve' board result
| sum board == 0 = result
| otherwise =
let best = bestMove board
in solve' (updateBoard board best) (result ++ [best])
main :: IO ()
main = mainLoop board where
mainLoop board = do
putStrLn ""
showBoard board
putStr "Pt: "
a <- getLine
case a of
"quit" -> do putStrLn ""
return ()
"best" -> do putStrLn (show $ bestMove board)
mainLoop board
otherwise -> let ws = splitOn "," a
pt = (read (head ws), read (last ws))
in do mainLoop (updateBoard board pt)
There seems to be a nonbipartite matching substructure here. Consider the following instance:
0010000
1000100
0000001
1000000
0000001
1000100
0010000
The optimal solution to this case has size 5 since that's the size of a minimum cover of the vertices of a 9-cycle by its edges.
This case, in particular, shows that the linear programming relaxation a few people have posted isn't exact, doesn't work, and all those other bad things. I'm pretty sure I can reduce "cover the vertices of my planar cubic graph by as few edges as possible" to your problem, which makes me doubt whether any of the greedy/hill-climbing solutions are going to work.
I don't see a way to solve this in polynomial time in the worst case. There might be a very clever binary-search-and-DP solution that I'm not seeing.
EDIT: I see that the contest (http://deadline24.pl) is language-agnostic; they send you a bunch of input files and you send them outputs. So you don't need something that runs in worst-case polynomial time. In particular, you get to look at the input!
There are a bunch of small cases in the input. Then there's a 10x1000 case, a 100x100 case, and a 1000x1000 case. The three large cases are all very well-behaved. Horizontally adjacent entries typically have the same value. On a relatively beefy machine, I'm able to solve all of the cases by brute-forcing using CPLEX in just a couple of minutes. I got lucky on the 1000x1000; the LP relaxation happens to have an integral optimal solution. My solutions agree with the .ans files provided in the test data bundle.
I'd bet you can use the structure of the input in a much more direct way than I did if you took a look at it; seems like you can just pare off the first row, or two, or three repeatedly until you've got nothing left. (Looks like, in the 1000x1000, all of the rows are nonincreasing? I guess that's where your "part B" comes from? )
I can't think of a way to calculate the actual number without just computing the bombing campaign using my best heuristic and hoping I get a reasonable result.
So my method is to compute a bombing efficiency metric for each cell, bomb the cell with the highest value, and iterate the process until I've flattened everything. Some have advocated using simple potential damage (i.e. a score from 0 to 9) as a metric, but that falls short by pounding high-value cells and not making use of damage overlap. I'd calculate cell value minus the sum of all neighbouring cells, reset any positive result to 0, and use the absolute value of anything negative. Intuitively this metric should make a selection that helps maximise damage overlap on cells with high counts instead of pounding those directly.
The code below reaches total destruction of the test field in 28 bombs (note that using potential damage as metric yields 31!).
using System;
using System.Collections.Generic;
using System.Linq;
namespace StackOverflow
{
internal class Program
{
// store the battle field as flat array + dimensions
private static int _width = 5;
private static int _length = 7;
private static int[] _field = new int[] {
2, 3, 4, 7, 1,
1, 5, 2, 6, 2,
4, 3, 4, 2, 1,
2, 1, 2, 4, 1,
3, 1, 3, 4, 1,
2, 1, 4, 3, 2,
6, 9, 1, 6, 4
};
// this will store the devastation metric
private static int[] _metric;
// do the work
private static void Main(string[] args)
{
int count = 0;
while (_field.Sum() > 0)
{
Console.Out.WriteLine("Round {0}:", ++count);
GetBlastPotential();
int cell_to_bomb = FindBestBombingSite();
PrintField(cell_to_bomb);
Bomb(cell_to_bomb);
}
Console.Out.WriteLine("Done in {0} rounds", count);
}
// convert 2D position to 1D index
private static int Get1DCoord(int x, int y)
{
if ((x < 0) || (y < 0) || (x >= _width) || (y >= _length)) return -1;
else
{
return (y * _width) + x;
}
}
// Convert 1D index to 2D position
private static void Get2DCoord(int n, out int x, out int y)
{
if ((n < 0) || (n >= _field.Length))
{
x = -1;
y = -1;
}
else
{
x = n % _width;
y = n / _width;
}
}
// Compute a list of 1D indices for a cell neighbours
private static List<int> GetNeighbours(int cell)
{
List<int> neighbours = new List<int>();
int x, y;
Get2DCoord(cell, out x, out y);
if ((x >= 0) && (y >= 0))
{
List<int> tmp = new List<int>();
tmp.Add(Get1DCoord(x - 1, y - 1));
tmp.Add(Get1DCoord(x - 1, y));
tmp.Add(Get1DCoord(x - 1, y + 1));
tmp.Add(Get1DCoord(x, y - 1));
tmp.Add(Get1DCoord(x, y + 1));
tmp.Add(Get1DCoord(x + 1, y - 1));
tmp.Add(Get1DCoord(x + 1, y));
tmp.Add(Get1DCoord(x + 1, y + 1));
// eliminate invalid coords - i.e. stuff past the edges
foreach (int c in tmp) if (c >= 0) neighbours.Add(c);
}
return neighbours;
}
// Compute the devastation metric for each cell
// Represent the Value of the cell minus the sum of all its neighbours
private static void GetBlastPotential()
{
_metric = new int[_field.Length];
for (int i = 0; i < _field.Length; i++)
{
_metric[i] = _field[i];
List<int> neighbours = GetNeighbours(i);
if (neighbours != null)
{
foreach (int j in neighbours) _metric[i] -= _field[j];
}
}
for (int i = 0; i < _metric.Length; i++)
{
_metric[i] = (_metric[i] < 0) ? Math.Abs(_metric[i]) : 0;
}
}
//// Compute the simple expected damage a bomb would score
//private static void GetBlastPotential()
//{
// _metric = new int[_field.Length];
// for (int i = 0; i < _field.Length; i++)
// {
// _metric[i] = (_field[i] > 0) ? 1 : 0;
// List<int> neighbours = GetNeighbours(i);
// if (neighbours != null)
// {
// foreach (int j in neighbours) _metric[i] += (_field[j] > 0) ? 1 : 0;
// }
// }
//}
// Update the battle field upon dropping a bomb
private static void Bomb(int cell)
{
List<int> neighbours = GetNeighbours(cell);
foreach (int i in neighbours)
{
if (_field[i] > 0) _field[i]--;
}
}
// Find the best bombing site - just return index of local maxima
private static int FindBestBombingSite()
{
int max_idx = 0;
int max_val = int.MinValue;
for (int i = 0; i < _metric.Length; i++)
{
if (_metric[i] > max_val)
{
max_val = _metric[i];
max_idx = i;
}
}
return max_idx;
}
// Display the battle field on the console
private static void PrintField(int cell)
{
for (int x = 0; x < _width; x++)
{
for (int y = 0; y < _length; y++)
{
int c = Get1DCoord(x, y);
if (c == cell)
Console.Out.Write(string.Format("[{0}]", _field[c]).PadLeft(4));
else
Console.Out.Write(string.Format(" {0} ", _field[c]).PadLeft(4));
}
Console.Out.Write(" || ");
for (int y = 0; y < _length; y++)
{
int c = Get1DCoord(x, y);
if (c == cell)
Console.Out.Write(string.Format("[{0}]", _metric[c]).PadLeft(4));
else
Console.Out.Write(string.Format(" {0} ", _metric[c]).PadLeft(4));
}
Console.Out.WriteLine();
}
Console.Out.WriteLine();
}
}
}
The resulting bombing pattern is output as follows (field values on the left, metric on the right)
Round 1:
2 1 4 2 3 2 6 || 7 16 8 10 4 18 6
3 5 3 1 1 1 9 || 11 18 18 21 17 28 5
4 [2] 4 2 3 4 1 || 19 [32] 21 20 17 24 22
7 6 2 4 4 3 6 || 8 17 20 14 16 22 8
1 2 1 1 1 2 4 || 14 15 14 11 13 16 7
Round 2:
2 1 4 2 3 2 6 || 5 13 6 9 4 18 6
2 4 2 1 1 [1] 9 || 10 15 17 19 17 [28] 5
3 2 3 2 3 4 1 || 16 24 18 17 17 24 22
6 5 1 4 4 3 6 || 7 14 19 12 16 22 8
1 2 1 1 1 2 4 || 12 12 12 10 13 16 7
Round 3:
2 1 4 2 2 1 5 || 5 13 6 7 3 15 5
2 4 2 1 0 1 8 || 10 15 17 16 14 20 2
3 [2] 3 2 2 3 0 || 16 [24] 18 15 16 21 21
6 5 1 4 4 3 6 || 7 14 19 11 14 19 6
1 2 1 1 1 2 4 || 12 12 12 10 13 16 7
Round 4:
2 1 4 2 2 1 5 || 3 10 4 6 3 15 5
1 3 1 1 0 1 8 || 9 12 16 14 14 20 2
2 2 2 2 2 [3] 0 || 13 16 15 12 16 [21] 21
5 4 0 4 4 3 6 || 6 11 18 9 14 19 6
1 2 1 1 1 2 4 || 10 9 10 9 13 16 7
Round 5:
2 1 4 2 2 1 5 || 3 10 4 6 2 13 3
1 3 1 1 0 [0] 7 || 9 12 16 13 12 [19] 2
2 2 2 2 1 3 0 || 13 16 15 10 14 15 17
5 4 0 4 3 2 5 || 6 11 18 7 13 17 6
1 2 1 1 1 2 4 || 10 9 10 8 11 13 5
Round 6:
2 1 4 2 1 0 4 || 3 10 4 5 2 11 2
1 3 1 1 0 0 6 || 9 12 16 11 8 13 0
2 2 2 2 0 2 0 || 13 16 15 9 14 14 15
5 4 [0] 4 3 2 5 || 6 11 [18] 6 11 15 5
1 2 1 1 1 2 4 || 10 9 10 8 11 13 5
Round 7:
2 1 4 2 1 0 4 || 3 10 4 5 2 11 2
1 3 1 1 0 0 6 || 8 10 13 9 7 13 0
2 [1] 1 1 0 2 0 || 11 [15] 12 8 12 14 15
5 3 0 3 3 2 5 || 3 8 10 3 8 15 5
1 1 0 0 1 2 4 || 8 8 7 7 9 13 5
Round 8:
2 1 4 2 1 0 4 || 1 7 2 4 2 11 2
0 2 0 1 0 0 6 || 7 7 12 7 7 13 0
1 1 0 1 0 2 0 || 8 8 10 6 12 14 15
4 2 0 3 3 [2] 5 || 2 6 8 2 8 [15] 5
1 1 0 0 1 2 4 || 6 6 6 7 9 13 5
Round 9:
2 1 4 2 1 0 4 || 1 7 2 4 2 11 2
0 2 0 1 0 0 6 || 7 7 12 7 6 12 0
1 1 0 1 0 [1] 0 || 8 8 10 5 10 [13] 13
4 2 0 3 2 2 4 || 2 6 8 0 6 9 3
1 1 0 0 0 1 3 || 6 6 6 5 8 10 4
Round 10:
2 1 4 2 1 0 4 || 1 7 2 4 2 10 1
0 2 [0] 1 0 0 5 || 7 7 [12] 7 6 11 0
1 1 0 1 0 1 0 || 8 8 10 4 8 9 10
4 2 0 3 1 1 3 || 2 6 8 0 6 8 3
1 1 0 0 0 1 3 || 6 6 6 4 6 7 2
Round 11:
2 0 3 1 1 0 4 || 0 6 0 3 0 10 1
0 1 0 0 0 [0] 5 || 4 5 5 5 3 [11] 0
1 0 0 0 0 1 0 || 6 8 6 4 6 9 10
4 2 0 3 1 1 3 || 1 5 6 0 5 8 3
1 1 0 0 0 1 3 || 6 6 6 4 6 7 2
Round 12:
2 0 3 1 0 0 3 || 0 6 0 2 1 7 1
0 1 0 0 0 0 4 || 4 5 5 4 1 7 0
1 0 0 0 0 [0] 0 || 6 8 6 4 5 [9] 8
4 2 0 3 1 1 3 || 1 5 6 0 4 7 2
1 1 0 0 0 1 3 || 6 6 6 4 6 7 2
Round 13:
2 0 3 1 0 0 3 || 0 6 0 2 1 6 0
0 1 0 0 0 0 3 || 4 5 5 4 1 6 0
1 [0] 0 0 0 0 0 || 6 [8] 6 3 3 5 5
4 2 0 3 0 0 2 || 1 5 6 0 4 6 2
1 1 0 0 0 1 3 || 6 6 6 3 4 4 0
Round 14:
2 0 3 1 0 [0] 3 || 0 5 0 2 1 [6] 0
0 0 0 0 0 0 3 || 2 5 4 4 1 6 0
0 0 0 0 0 0 0 || 4 4 4 3 3 5 5
3 1 0 3 0 0 2 || 0 4 5 0 4 6 2
1 1 0 0 0 1 3 || 4 4 5 3 4 4 0
Round 15:
2 0 3 1 0 0 2 || 0 5 0 2 1 4 0
0 0 0 0 0 0 2 || 2 5 4 4 1 4 0
0 0 0 0 0 0 0 || 4 4 4 3 3 4 4
3 1 0 3 0 [0] 2 || 0 4 5 0 4 [6] 2
1 1 0 0 0 1 3 || 4 4 5 3 4 4 0
Round 16:
2 [0] 3 1 0 0 2 || 0 [5] 0 2 1 4 0
0 0 0 0 0 0 2 || 2 5 4 4 1 4 0
0 0 0 0 0 0 0 || 4 4 4 3 3 3 3
3 1 0 3 0 0 1 || 0 4 5 0 3 3 1
1 1 0 0 0 0 2 || 4 4 5 3 3 3 0
Round 17:
1 0 2 1 0 0 2 || 0 3 0 1 1 4 0
0 0 0 0 0 0 2 || 1 3 3 3 1 4 0
0 0 0 0 0 0 0 || 4 4 4 3 3 3 3
3 1 [0] 3 0 0 1 || 0 4 [5] 0 3 3 1
1 1 0 0 0 0 2 || 4 4 5 3 3 3 0
Round 18:
1 0 2 1 0 0 2 || 0 3 0 1 1 4 0
0 0 0 0 0 0 2 || 1 3 3 3 1 4 0
0 0 0 0 0 0 0 || 3 3 2 2 2 3 3
3 [0] 0 2 0 0 1 || 0 [4] 2 0 2 3 1
1 0 0 0 0 0 2 || 2 4 2 2 2 3 0
Round 19:
1 0 2 1 0 [0] 2 || 0 3 0 1 1 [4] 0
0 0 0 0 0 0 2 || 1 3 3 3 1 4 0
0 0 0 0 0 0 0 || 2 2 2 2 2 3 3
2 0 0 2 0 0 1 || 0 2 2 0 2 3 1
0 0 0 0 0 0 2 || 2 2 2 2 2 3 0
Round 20:
1 [0] 2 1 0 0 1 || 0 [3] 0 1 1 2 0
0 0 0 0 0 0 1 || 1 3 3 3 1 2 0
0 0 0 0 0 0 0 || 2 2 2 2 2 2 2
2 0 0 2 0 0 1 || 0 2 2 0 2 3 1
0 0 0 0 0 0 2 || 2 2 2 2 2 3 0
Round 21:
0 0 1 1 0 0 1 || 0 1 0 0 1 2 0
0 0 0 0 0 0 1 || 0 1 2 2 1 2 0
0 0 0 0 0 0 0 || 2 2 2 2 2 2 2
2 0 0 2 0 [0] 1 || 0 2 2 0 2 [3] 1
0 0 0 0 0 0 2 || 2 2 2 2 2 3 0
Round 22:
0 0 1 1 0 0 1 || 0 1 0 0 1 2 0
0 0 0 0 0 0 1 || 0 1 2 2 1 2 0
[0] 0 0 0 0 0 0 || [2] 2 2 2 2 1 1
2 0 0 2 0 0 0 || 0 2 2 0 2 1 1
0 0 0 0 0 0 1 || 2 2 2 2 2 1 0
Round 23:
0 0 1 1 0 0 1 || 0 1 0 0 1 2 0
0 0 [0] 0 0 0 1 || 0 1 [2] 2 1 2 0
0 0 0 0 0 0 0 || 1 1 2 2 2 1 1
1 0 0 2 0 0 0 || 0 1 2 0 2 1 1
0 0 0 0 0 0 1 || 1 1 2 2 2 1 0
Round 24:
0 0 0 0 0 0 1 || 0 0 0 0 0 2 0
0 0 0 0 0 0 1 || 0 0 0 0 0 2 0
0 0 [0] 0 0 0 0 || 1 1 [2] 2 2 1 1
1 0 0 2 0 0 0 || 0 1 2 0 2 1 1
0 0 0 0 0 0 1 || 1 1 2 2 2 1 0
Round 25:
0 0 0 0 0 [0] 1 || 0 0 0 0 0 [2] 0
0 0 0 0 0 0 1 || 0 0 0 0 0 2 0
0 0 0 0 0 0 0 || 1 1 1 1 1 1 1
1 0 0 1 0 0 0 || 0 1 1 0 1 1 1
0 0 0 0 0 0 1 || 1 1 1 1 1 1 0
Round 26:
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
[0] 0 0 0 0 0 0 || [1] 1 1 1 1 0 0
1 0 0 1 0 0 0 || 0 1 1 0 1 1 1
0 0 0 0 0 0 1 || 1 1 1 1 1 1 0
Round 27:
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
0 0 [0] 0 0 0 0 || 0 0 [1] 1 1 0 0
0 0 0 1 0 0 0 || 0 0 1 0 1 1 1
0 0 0 0 0 0 1 || 0 0 1 1 1 1 0
Round 28:
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
0 0 0 0 0 0 0 || 0 0 0 0 0 0 0
0 0 0 0 0 [0] 0 || 0 0 0 0 0 [1] 1
0 0 0 0 0 0 1 || 0 0 0 0 0 1 0
Done in 28 rounds
This can be solved using a search tree of size O(3^n), where n is the sum of all of the squares.
First consider that it is trivial to solve the problem with a tree of size O(9^n): simply consider all of the possible bombing locations. For an example see Alfe's implementation.
Next realize that we can work to bomb from the bottom up and still get a minimum bombing pattern.
1. Start from the bottom left corner.
2. Bomb it to oblivion with the only play that makes sense (up and to the right).
3. Move one square to the right.
4. While the target has a value greater than zero, consider each of the 2 plays that make sense (straight up or up and to the right), reduce the value of the target by one, and make a new branch for each possibility.
5. Move another square to the right.
6. While the target has a value greater than zero, consider each of the 3 plays that make sense (up left, up, and up right), reduce the value of the target by one, and make a new branch for each possibility.
7. Repeat steps 5 and 6 until the row is eliminated.
8. Move up a row and repeat steps 1 to 7 until the puzzle is solved.
This algorithm is correct because
It is necessary to complete each row at some point.
Completing a row always requires a play either one above, one below, or within that row.
It is always as good or better to choose a play one above the lowest uncleared row than a play on the row or below the row.
In practice this algorithm will regularly do better than its theoretical maximum because it will regularly bomb out neighbors and reduce the size of the search. If we assume that each bombing decreases the value of 4 additional targets, then our algorithm will run in O(3^(n/4)) or approximately O(1.3^n).
Because this algorithm is still exponential, it would be wise to limit the depth of the search. We might limit the number of branches allowed to some number, X, and once we are this deep we force the algorithm to choose the best path it has identified so far (the one that has the minimum total board sum in one of its terminal leaves). Then our algorithm is guaranteed to run in O(3^X) time, but it is not guaranteed to get the correct answer. However, we can always increase X and test empirically if the trade off between increased computation and better answers is worthwhile.
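A hedged Python sketch of this bottom-up search (not the answerer's code): find the lowest row that still has a nonzero cell and its leftmost nonzero entry, branch only over the in-grid drop points in the row above it, and memoize on the whole board rather than the depth-limiting described above. The corner and edge special cases from steps 2 and 4 are handled by simply clamping the candidate drop positions:

from functools import lru_cache

def min_bombs_bottom_up(board):
    rows, cols = len(board), len(board[0])

    def bomb(state, r, c):
        # new board (tuple of tuples) after one bomb at (r, c)
        return tuple(tuple(max(0, v - 1) if abs(rr - r) <= 1 and abs(cc - c) <= 1 else v
                           for cc, v in enumerate(row))
                     for rr, row in enumerate(state))

    @lru_cache(maxsize=None)
    def solve(state):
        # find the lowest row with a nonzero cell and its leftmost nonzero entry
        target = None
        for r in range(rows - 1, -1, -1):
            for c in range(cols):
                if state[r][c] > 0:
                    target = (r, c)
                    break
            if target:
                break
        if target is None:
            return 0                          # board already cleared
        r, c = target
        drop_row = max(r - 1, 0)              # the only plays that make sense are one row up
        best = None
        for drop_col in range(max(c - 1, 0), min(c + 2, cols)):
            result = 1 + solve(bomb(state, drop_row, drop_col))
            if best is None or result < best:
                best = result
        return best

    return solve(tuple(tuple(row) for row in board))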
evaluation function, total sum:
int f (int ** matrix, int width, int height, int x, int y)
{
int m[3][3] = { 0 };
m[1][1] = matrix[x][y];
if (x > 0) m[0][1] = matrix[x-1][y];
if (x < width-1) m[2][1] = matrix[x+1][y];
if (y > 0)
{
m[1][0] = matrix[x][y-1];
if (x > 0) m[0][0] = matrix[x-1][y-1];
if (x < width-1) m[2][0] = matrix[x+1][y-1];
}
if (y < height-1)
{
m[1][2] = matrix[x][y+1];
if (x > 0) m[0][2] = matrix[x-1][y+1];
if (x < width-1) m[2][2] = matrix[x+1][y+1];
}
return m[0][0]+m[0][1]+m[0][2]+m[1][0]+m[1][1]+m[1][2]+m[2][0]+m[2][1]+m[2][2];
}
objective function:
Point bestState (int ** matrix, int width, int height)
{
Point p = new Point(0,0);
int bestScore = 0;
int b = 0;
for (int i=0; i<width; i++)
for (int j=0; j<height; j++)
{
b = f(matrix,width,height,i,j);
if (b > bestScore)
{
bestScore = b;
p = new Point(i,j);
}
}
return p;
}
destroy function:
void destroy (int ** matrix, int width, int height, Point p)
{
int x = p.x;
int y = p.y;
if(matrix[x][y] > 0) matrix[x][y]--;
if (x > 0) if(matrix[x-1][y] > 0) matrix[x-1][y]--;
if (x < width-1) if(matrix[x+1][y] > 0) matrix[x+1][y]--;
if (y > 0)
{
if(matrix[x][y-1] > 0) matrix[x][y-1]--;
if (x > 0) if(matrix[x-1][y-1] > 0) matrix[x-1][y-1]--;
if (x < width-1) if(matrix[x+1][y-1] > 0) matrix[x+1][y-1]--;
}
if (y < height-1)
{
if(matrix[x][y+1] > 0) matrix[x][y+1]--;
if (x > 0) if(matrix[x-1][y+1] > 0) matrix[x-1][y+1]--;
if (x < width-1) if(matrix[x+1][y+1] > 0) matrix[x+1][y+1]--;
}
}
goal function:
bool isGoal (int ** matrix, int width, int height)
{
for (int i=0; i<width; i++)
for (int j=0; j<height; j++)
if (matrix[i][j] > 0)
return false;
return true;
}
linear maximization function:
void solve (int ** matrix, int width, int height)
{
while (!isGoal(matrix,width,height))
{
destroy(matrix,width,height, bestState(matrix,width,height));
}
}
This is not optimal, but it can be improved by finding a better evaluation function.
Thinking about this problem some more, one of the main issues is ending up with stranded values surrounded by zeroes at some point, so I'd take another approach: drive the minimal values down to zero first, then try to stay away from zeroes as much as possible, which leads to a general minimization of the smallest remaining value(s).
All this problem boils down to is computing an edit distance. Simply calculate a variant of the Levenshtein distance between the given matrix and the zero matrix, where edits are replaced with bombings, using dynamic programming to store the distances between intermediate arrays. I suggest using a hash of the matrices as a key. In pseudo-Python:
memo = {}
def bomb(matrix,i,j):
# bomb matrix at i,j
def bombsRequired(matrix,i,j):
# bombs required to zero matrix[i,j]
def distance(m1, i, len1, m2, j, len2):
key = hash(m1)
if key in memo:
return memo[key]
if len1 == 0: return len2
if len2 == 0: return len1
cost = 0
if m1 != m2: cost = m1[i,j]
m = bomb(m1,i,j)
dist = distance(m, i+1, len1-1, m2, j+1, len2-1) + cost
memo[key] = dist
return dist
This was an answer to the first asked question. I hadn't noticed that he changed the parameters.
Create a list of all targets. Assign a value to the target based on the number of positive values impacted by a drop (itself, and all neighbors). Highest value would be a nine.
Sort the targets by the number of targets impacted (Descending), with a secondary descending sort on the sum of each impacted target.
Drop a bomb on the highest ranked target, then re-calculate targets and repeat until all target values are zero.
Agreed, this is not always the most optimal. For example,
100011
011100
011100
011100
000000
100011
This approach would take 5 bombs to clear. Optimally, though, you could do it in 4. Still, pretty darn close, and there is no backtracking. For most situations it will be optimal, or very close.
Using the original problem numbers, this approach solves in 28 bombs.
Adding code to demonstrate this approach (using a form with a button):
private void button1_Click(object sender, EventArgs e)
{
int[,] matrix = new int[10, 10] {{5, 20, 7, 1, 9, 8, 19, 16, 11, 3},
{17, 8, 15, 17, 12, 4, 5, 16, 8, 18},
{ 4, 19, 12, 11, 9, 7, 4, 15, 14, 6},
{ 17, 20, 4, 9, 19, 8, 17, 2, 10, 8},
{ 3, 9, 10, 13, 8, 9, 12, 12, 6, 18},
{16, 16, 2, 10, 7, 12, 17, 11, 4, 15},
{ 11, 1, 15, 1, 5, 11, 3, 12, 8, 3},
{ 7, 11, 16, 19, 17, 11, 20, 2, 5, 19},
{ 5, 18, 2, 17, 7, 14, 19, 11, 1, 6},
{ 13, 20, 8, 4, 15, 10, 19, 5, 11, 12}};
int value = 0;
List<Target> Targets = GetTargets(matrix);
while (Targets.Count > 0)
{
BombTarget(ref matrix, Targets[0]);
value += 1;
Targets = GetTargets(matrix);
}
Console.WriteLine( value);
MessageBox.Show("done: " + value);
}
private static void BombTarget(ref int[,] matrix, Target t)
{
for (int a = t.x - 1; a <= t.x + 1; a++)
{
for (int b = t.y - 1; b <= t.y + 1; b++)
{
if (a >= 0 && a <= matrix.GetUpperBound(0))
{
if (b >= 0 && b <= matrix.GetUpperBound(1))
{
if (matrix[a, b] > 0)
{
matrix[a, b] -= 1;
}
}
}
}
}
Console.WriteLine("Dropped bomb on " + t.x + "," + t.y);
}
private static List<Target> GetTargets(int[,] matrix)
{
List<Target> Targets = new List<Target>();
int width = matrix.GetUpperBound(0);
int height = matrix.GetUpperBound(1);
for (int x = 0; x <= width; x++)
{
for (int y = 0; y <= height; y++)
{
Target t = new Target();
t.x = x;
t.y = y;
SetTargetValue(matrix, ref t);
if (t.value > 0) Targets.Add(t);
}
}
Targets = Targets.OrderByDescending(x => x.value).ThenByDescending( x => x.sum).ToList();
return Targets;
}
private static void SetTargetValue(int[,] matrix, ref Target t)
{
for (int a = t.x - 1; a <= t.x + 1; a++)
{
for (int b = t.y - 1; b <= t.y + 1; b++)
{
if (a >= 0 && a <= matrix.GetUpperBound(0))
{
if (b >= 0 && b <= matrix.GetUpperBound(1))
{
if (matrix[ a, b] > 0)
{
t.value += 1;
t.sum += matrix[a,b];
}
}
}
}
}
}
A class you will need:
class Target
{
public int value;
public int sum;
public int x;
public int y;
}

Print all ways to sum n integers so that they total a given sum.

I'm trying to come up with an algorithm that will print out all possible ways to sum N integers so that they total a given value.
Example: print all ways to sum 4 integers so that they total 5.
Result should be something like:
5 0 0 0
4 1 0 0
3 2 0 0
3 1 1 0
2 3 0 0
2 2 1 0
2 1 2 0
2 1 1 1
1 4 0 0
1 3 1 0
1 2 2 0
1 2 1 1
1 1 3 0
1 1 2 1
1 1 1 2
This is based off Alinium's code.
I modified it so it prints out all the possible combinations, since his already does all the permutations.
Also, I don't think you need the for loop when n=1, because in that case, only one number should cause the sum to equal value.
Various other modifications to get boundary cases to work.
def sum(n, value):
arr = [0]*n # create an array of size n, filled with zeroes
sumRecursive(n, value, 0, n, arr);
def sumRecursive(n, value, sumSoFar, topLevel, arr):
if n == 1:
if sumSoFar <= value:
#Make sure it's in ascending order (or only level)
if topLevel == 1 or (value - sumSoFar >= arr[-2]):
arr[(-1)] = value - sumSoFar #put it in the n_th last index of arr
print arr
elif n > 0:
#Make sure it's in ascending order
start = 0
if (n != topLevel):
start = arr[(-1*n)-1] #the value before this element
for i in range(start, value+1): # i = start...value
arr[(-1*n)] = i # put i in the n_th last index of arr
sumRecursive(n-1, value, sumSoFar + i, topLevel, arr)
Running sum(4, 5) returns:
[0, 0, 0, 5]
[0, 0, 1, 4]
[0, 0, 2, 3]
[0, 1, 1, 3]
[1, 1, 1, 2]
In pure math, a way of summing integers to get a given total is called a partition. There is a lot of information around if you google for "integer partition". You are looking for integer partitions where there are a specific number of elements. I'm sure you could take one of the known generating mechanisms and adapt for this extra condition. Wikipedia has a good overview of the topic Partition_(number_theory). Mathematica even has a function to do what you want: IntegerPartitions[5, 4].
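For illustration (this is mine, not from the answer), a small recursive Python sketch that generates the partitions of 5 into 4 parts with zeros allowed as padding, analogous to IntegerPartitions[5, 4]; order is ignored, so it emits each combination once in non-increasing form:

def partitions(total, parts, max_part=None):
    # yield non-increasing tuples of `parts` non-negative integers summing to `total`
    if max_part is None:
        max_part = total
    if parts == 0:
        if total == 0:
            yield ()
        return
    for first in range(min(total, max_part), -1, -1):
        for rest in partitions(total - first, parts - 1, first):
            yield (first,) + rest

for p in partitions(5, 4):
    print(*p)   # 5 0 0 0 / 4 1 0 0 / 3 2 0 0 / 3 1 1 0 / 2 2 1 0 / 2 1 1 1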
The key to solving the problem is recursion. Here's a working implementation in python. It prints out all possible permutations that sum up to the total. You'll probably want to get rid of the duplicate combinations, possibly by using some Set or hashing mechanism to filter them out.
def sum(n, value):
arr = [0]*n # create an array of size n, filled with zeroes
sumRecursive(n, value, 0, n, arr);
def sumRecursive(n, value, sumSoFar, topLevel, arr):
if n == 1:
if sumSoFar > value:
return False
else:
for i in range(value+1): # i = 0...value
if (sumSoFar + i) == value:
arr[(-1*n)] = i # put i in the n_th last index of arr
print arr;
return True
else:
for i in range(value+1): # i = 0...value
arr[(-1*n)] = i # put i in the n_th last index of arr
if sumRecursive(n-1, value, sumSoFar + i, topLevel, arr):
if (n == topLevel):
print "\n"
With some extra effort, this can probably be simplified to get rid of some of the parameters I am passing to the recursive function. As suggested by redcayuga's pseudo code, using a stack, instead of manually managing an array, would be a better idea too.
I haven't tested this:
procedure allSum (int tot, int n, int desiredTotal) return int
if n > 0
for (int i = tot; i>=0; i--) {
push i onto stack;
allSum(tot-i, n-1, desiredTotal);
pop top of stack
}
else if n==0
if stack sums to desiredTotal then print the stack end if
end if
I'm sure there's a better way to do this.
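Here is a compact Python rendering of the same stack idea, hedged: the running remainder replaces the final stack-sum check, so any branch that overshoots the target is pruned automatically, and the tuple `prefix` plays the role of the stack:

def all_sums(n, remaining, prefix=()):
    # print every ordered n-tuple of non-negative integers that sums to the target
    if n == 0:
        if remaining == 0:
            print(*prefix)
        return
    for i in range(remaining, -1, -1):   # descending, like the pseudocode's loop
        all_sums(n - 1, remaining - i, prefix + (i,))

all_sums(4, 5)   # prints 5 0 0 0, 4 1 0 0, ... down to 0 0 0 5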
I've found a Ruby way with domain specification, based on Alinium's code:
class Domain_partition
attr_reader :results,
:domain,
:sum,
:size
def initialize(_dom, _size, _sum)
@domain = _dom.is_a?(Array) ? _dom.sort : _dom.to_a
@results, @sum, @size = [], _sum, _size
arr = [0]*size # create an array of size n, filled with zeroes
sumRecursive(size, 0, arr)
end
def sumRecursive(n, sumSoFar, arr)
if n == 1
#Make sure it's in ascending order (or only level)
if sum - sumSoFar >= arr[-2] and @domain.include?(sum - sumSoFar)
final_arr=Array.new(arr)
final_arr[(-1)] = sum - sumSoFar #put it in the n_th last index of arr
@results << final_arr
end
elsif n > 1
#********* dom_selector ********
start = n != size ? arr[(-1*n)-1] : domain[0]
dom_bounds=(start*(n-1)..domain.last*(n-1))
restricted_dom=domain.select do |x|
if x < start
false; next
end
if size-n > 0
if dom_bounds.cover? sum-(arr.first(size-n).inject(:+)+x) then true
else false end
else
dom_bounds.cover?(sum+x) ? true : false
end
end # ***************************
for i in restricted_dom
_arr=Array.new(arr)
_arr[(-1*n)] = i
sumRecursive(n-1, sumSoFar + i, _arr)
end
end
end
end
a=Domain_partition.new (-6..6),10,0
p a
b=Domain_partition.new [-4,-2,-1,1,2,3],10,0
p b
If you're interested in generating (lexically) ordered integer partitions, i.e. unique unordered sets of S positive integers (no 0's) that sum to N, then try the following. (unordered simply means that [1,2,1] and [1,1,2] are the same partition)
The problem doesn't need recursion and is quickly handled because the concept of finding the next lexical restricted partition is actually very simple...
In concept: Starting from the last addend (integer), find the first instance where the difference between two addends is greater than 1. Split the partition in two at that point. Remove 1 from the higher integer (which will be the last integer in one part) and add 1 to the lower integer (the first integer of the latter part). Then find the first lexically ordered partition for the latter part having the new largest integer as the maximum addend value. I use Sage to find the first lexical partition because it's lightning fast, but it's easily done without it. Finally, join the two portions and voila! You have the next lexical partition of N having S parts.
e.g. [6,5,3,2,2] -> [6,5],[3,2,2] -> [6,4],[4,2,2] -> [6,4],[4,3,1] -> [6,4,4,3,1]
So, in Python and calling Sage for the minor task of finding the first lexical partition given n and s parts...
from sage.all import *
def most_even_partition(n,s): # The main function will need to recognize the most even partition possible (i.e. last lexical partition) so it can loop back to the first lexical partition if need be
most_even = [int(floor(float(n)/float(s)))]*s
_remainder = int(n%s)
j = 0
while _remainder > 0:
most_even[j] += 1
_remainder -= 1
j += 1
return most_even
def portion(alist, indices):
return [alist[i:j] for i, j in zip([0]+indices, indices+[None])]
def next_restricted_part(p,n,s):
if p == most_even_partition(n,s):return Partitions(n,length=s).first()
for i in enumerate(reversed(p)):
if i[1] - p[-1] > 1:
if i[0] == (s-1):
return Partitions(n,length=s,max_part=(i[1]-1)).first()
else:
parts = portion(p,[s-i[0]-1]) # split p (soup?)
h1 = parts[0]
h2 = parts[1]
next = list(Partitions(sum(h2),length=len(h2),max_part=(h2[0]-1)).first())
return h1+next
If you want zeros (not actual integer partitions), then the functions only need small modifications.
Try this code. I hope it is easier to understand. I tested it; it generates the correct sequence.
#include <cstdio>

int list[1024]; // holds the parts of the current partition (size chosen arbitrarily here)

void partition(int n, int m = 0)
{
int i;
// if the partition is done
if(n == 0){
// Output the result
for(i = 0; i < m; ++i)
printf("%d ", list[i]);
printf("\n");
return;
}
// Do the split from large to small int
for(i = n; i > 0; --i){
// if no part has been chosen yet, or the part
// to be chosen is no larger than the
// previous partition number
if(m == 0 || i <= list[m - 1]){
// store the partition int
list[m] = i;
// partition the rest
partition(n - i, m + 1);
}
}
}
Ask for clarification, if required.
This is one example of the output (here for n = 6 and then n = 10):
6
5 1
4 2
4 1 1
3 3
3 2 1
3 1 1 1
2 2 2
2 2 1 1
2 1 1 1 1
1 1 1 1 1 1
10
9 1
8 2
8 1 1
7 3
7 2 1
7 1 1 1
6 4
6 3 1
6 2 2
6 2 1 1
6 1 1 1 1
5 5
5 4 1
5 3 2
5 3 1 1
5 2 2 1
5 2 1 1 1
5 1 1 1 1 1
4 4 2
4 4 1 1
4 3 3
4 3 2 1
4 3 1 1 1
4 2 2 2
4 2 2 1 1
4 2 1 1 1 1
4 1 1 1 1 1 1
3 3 3 1
3 3 2 2
3 3 2 1 1
3 3 1 1 1 1
3 2 2 2 1
3 2 2 1 1 1
3 2 1 1 1 1 1
3 1 1 1 1 1 1 1
2 2 2 2 2
2 2 2 2 1 1
2 2 2 1 1 1 1
2 2 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
