Checking for termination when converting real to rational - algorithm

Recently I found this in some code I wrote a few years ago. It was used to rationalize a real value (within a tolerance) by determining a suitable denominator and then checking if the difference between the original real and the rational was small enough.
Edit to clarify: I actually don't want to convert all real values. For instance, I could choose a max denominator of 14, and a real value that equals 7/15 would stay as-is. That's not as clear here because the max denominator is an outside variable in the algorithms I wrote.
The algorithm to get the denominator was this (pseudocode):
denominator(x)
    frac = fractional part of x
    recip = 1/frac
    if (frac < tol)
        return 1
    else
        return recip * denominator(recip)
    end
end
Seems to be based on continued fractions, although it became clear on looking at it again that it was wrong. (It worked for me because it would eventually just spit out infinity, which I handled outside, but it was often really slow.) The value of tol doesn't really do anything except in the case of termination or for numbers that end up close. I don't think it's related to the tolerance for the real-to-rational conversion.
I've replaced it with an iterative version that is not only faster but, I'm pretty sure, can't fail theoretically (d = 1 to start with, and the fractional part is positive, so recip is always >= 1):
denom_iter(x, d)
    return d if d > maxd
    frac = fractional part of x
    recip = 1/frac
    if (frac = 0)
        return d
    else
        return denom_iter(recip, d*recip)
    end
end
What I'm curious to know is whether there's a way to pick the maxd that will ensure it converts all values that are possible for a given tolerance. I'm assuming 1/tol, but I don't want to miss something. I'm also wondering if there's a way in this approach to actually limit the denominator size - as written, this allows some denominators larger than maxd.
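For reference, here is a direct Python transcription of the second pseudocode block (my own sketch; maxd is passed in explicitly, and the accumulated d comes back as a float that still needs rounding):

import math

def denom_iter(x, d=1.0, maxd=1000):
    # Direct transcription of the pseudocode above.
    if d > maxd:
        return d
    frac = x - math.floor(x)   # fractional part, in [0, 1)
    if frac == 0:
        return d
    recip = 1.0 / frac         # frac < 1, so recip > 1
    return denom_iter(recip, d * recip, maxd)

print(denom_iter(2.25))   # -> 4.0 (2.25 = 9/4)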

This can be considered a 2D minimization problem on the error:
ArgMin |r - q/p|, where r is real, q and p are integers
I suggest using the Gradient Descent algorithm. The gradient of this objective function is:
f'(q, p) = (-1/p, q/p^2)
The initial guess can be q = the closest integer to r, and p = 1.
The stopping condition can be thresholding of the error.
The pseudo-code of GD can be found in wiki: http://en.wikipedia.org/wiki/Gradient_descent
If the initial guess is close enough, the objective function should be convex.
As Jacob suggested, this problem can be better solved by minimizing the following error function:
ArgMin |p * r - q|, where r is real, q and p are integers
This is an integer linear program, which can be efficiently solved by any ILP (Integer Linear Programming) solver. GD works on non-linear cases, but lacks efficiency on linear problems.
Initial guesses and the stopping condition can be similar to those stated above. A better choice can be obtained for an individual choice of solver.
I suggest you still assume convexity near the local minimum, which can greatly reduce cost. You can also try the Simplex method, which is great on linear programming problems.
I give credit to Jacob on this.
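For a quick sanity check of any solver on that second objective, a brute-force scan over a bounded denominator also works (my own illustration, not an ILP formulation):

def best_rational(r, maxp):
    # Minimize |p*r - q| over integers 1 <= p <= maxp, with q = round(p*r).
    best = (float('inf'), 0, 1)
    for p in range(1, maxp + 1):
        q = int(round(p * r))
        err = abs(p * r - q)
        if err < best[0]:
            best = (err, q, p)
    return best[1], best[2]   # (q, p) such that q/p is closest to r

print(best_rational(0.4285714, 14))   # -> (3, 7), i.e. 3/7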

A problem similar to this is solved in the Approximations section beginning ca. page 28 of Bill Gosper's Continued Fraction Arithmetic document. (Ref: postscript file; also see text version, from line 1984.) The general idea is to compute continued-fraction approximations of the low-end and high-end range limiting numbers, until the two fractions differ, and then choose a value in the range of those two approximations. This is guaranteed to give a simplest fraction, using Gosper's terminology.
The python code below (program "simpleden") implements a similar process. (It probably is not as good as Gosper's suggested implementation, but is good enough that you can see what kind of results the method produces.) The amount of work done is similar to that for Euclid's algorithm, ie O(n) for numbers with n bits, so the program is reasonably fast. Some example test cases (ie the program's output) are shown after the code itself. Note, function simpleratio(vlo, vhi) as shown here returns -1 if vhi is smaller than vlo.
#!/usr/bin/env python

def simpleratio(vlo, vhi):
    rlo, rhi, eps = vlo, vhi, 0.0000001
    if vhi < vlo: return -1
    num = denp = 1
    nump = den = 0
    while 1:
        klo, khi = int(rlo), int(rhi)
        if klo != khi or rlo-klo < eps or rhi-khi < eps:
            tlo = denp + klo * den
            thi = denp + khi * den
            if tlo < thi:
                return tlo + (rlo-klo > eps)*den
            elif thi < tlo:
                return thi + (rhi-khi > eps)*den
            else:
                return tlo
        nump, num = num, nump + klo * num
        denp, den = den, denp + klo * den
        rlo, rhi = 1/(rlo-klo), 1/(rhi-khi)

def test(vlo, vhi):
    den = simpleratio(vlo, vhi);
    fden = float(den)
    ilo, ihi = int(vlo*den), int(vhi*den)
    rlo, rhi = ilo/fden, ihi/fden;
    izok = 'ok' if rlo <= vlo <= rhi <= vhi else 'wrong'
    print '{:4d}/{:4d} = {:0.8f} vlo:{:0.8f}  {:4d}/{:4d} = {:0.8f} vhi:{:0.8f}  {}'.format(ilo,den,rlo,vlo, ihi,den,rhi,vhi, izok)

test (0.685, 0.695)
test (0.685, 0.7)
test (0.685, 0.71)
test (0.685, 0.75)
test (0.685, 0.76)
test (0.75, 0.76)
test (2.173, 2.177)
test (2.373, 2.377)
test (3.484, 3.487)
test (4.0, 4.87)
test (4.0, 8.0)
test (5.5, 5.6)
test (5.5, 6.5)
test (7.5, 7.3)
test (7.5, 7.5)
test (8.534537, 8.534538)
test (9.343221, 9.343222)
Output from program:
> ./simpleden
8/ 13 = 0.61538462 vlo:0.68500000 9/ 13 = 0.69230769 vhi:0.69500000 ok
6/ 10 = 0.60000000 vlo:0.68500000 7/ 10 = 0.70000000 vhi:0.70000000 ok
6/ 10 = 0.60000000 vlo:0.68500000 7/ 10 = 0.70000000 vhi:0.71000000 ok
2/ 4 = 0.50000000 vlo:0.68500000 3/ 4 = 0.75000000 vhi:0.75000000 ok
2/ 4 = 0.50000000 vlo:0.68500000 3/ 4 = 0.75000000 vhi:0.76000000 ok
3/ 4 = 0.75000000 vlo:0.75000000 3/ 4 = 0.75000000 vhi:0.76000000 ok
36/ 17 = 2.11764706 vlo:2.17300000 37/ 17 = 2.17647059 vhi:2.17700000 ok
18/ 8 = 2.25000000 vlo:2.37300000 19/ 8 = 2.37500000 vhi:2.37700000 ok
114/ 33 = 3.45454545 vlo:3.48400000 115/ 33 = 3.48484848 vhi:3.48700000 ok
4/ 1 = 4.00000000 vlo:4.00000000 4/ 1 = 4.00000000 vhi:4.87000000 ok
4/ 1 = 4.00000000 vlo:4.00000000 8/ 1 = 8.00000000 vhi:8.00000000 ok
11/ 2 = 5.50000000 vlo:5.50000000 11/ 2 = 5.50000000 vhi:5.60000000 ok
5/ 1 = 5.00000000 vlo:5.50000000 6/ 1 = 6.00000000 vhi:6.50000000 ok
-7/ -1 = 7.00000000 vlo:7.50000000 -7/ -1 = 7.00000000 vhi:7.30000000 wrong
15/ 2 = 7.50000000 vlo:7.50000000 15/ 2 = 7.50000000 vhi:7.50000000 ok
8030/ 941 = 8.53347503 vlo:8.53453700 8031/ 941 = 8.53453773 vhi:8.53453800 ok
24880/2663 = 9.34284641 vlo:9.34322100 24881/2663 = 9.34322193 vhi:9.34322200 ok
If, rather than the simplest fraction in a range, you seek the best approximation given some upper limit on denominator size, consider code like the following, which replaces all the code from def test(vlo, vhi) forward.
def smallden(target, maxden):
    global pas
    pas = 0
    tol = 1/float(maxden)**2
    while 1:
        den = simpleratio(target-tol, target+tol);
        if den <= maxden: return den
        tol *= 2
        pas += 1

# Test driver for smallden(target, maxden) routine
import random
totalpass, trials, passes = 0, 20, [0 for i in range(20)]
print 'Maxden Num Den Num/Den Target Error Passes'
for i in range(trials):
    target = random.random()
    maxden = 10 + round(10000*random.random())
    den = smallden(target, maxden)
    num = int(round(target*den))
    got = float(num)/den
    print '{:4d} {:4d}/{:4d} = {:10.8f} = {:10.8f} + {:12.9f}  {:2}'.format(
        int(maxden), num, den, got, target, got - target, pas)
    totalpass += pas
    passes[pas-1] += 1
print 'Average pass count: {:0.3}\nPass histo: {}'.format(
    float(totalpass)/trials, passes)
In production code, drop out all the references to pas (etc.), ie, drop out pass-counting code.
The routine smallden is given a target value and a maximum value for allowed denominators. Given maxden possible choices of denominators, it's reasonable to suppose that a tolerance on the order of 1/maxden² can be achieved. The pass-counts shown in the following typical output (where target and maxden were set via random numbers) illustrate that such a tolerance was reached immediately more than half the time, but in other cases tolerances 2 or 4 or 8 times as large were used, requiring extra calls to simpleratio. Note, the last two lines of output from a 10000-number test run are shown following the complete output of a 20-number test run.
Maxden Num Den Num/Den Target Error Passes
1198 32/ 509 = 0.06286837 = 0.06286798 + 0.000000392 1
2136 115/ 427 = 0.26932084 = 0.26932103 + -0.000000185 1
4257 839/2670 = 0.31423221 = 0.31423223 + -0.000000025 1
2680 449/ 509 = 0.88212181 = 0.88212132 + 0.000000486 3
2935 440/1853 = 0.23745278 = 0.23745287 + -0.000000095 1
6128 347/1285 = 0.27003891 = 0.27003899 + -0.000000077 3
8041 1780/4243 = 0.41951449 = 0.41951447 + 0.000000020 2
7637 3926/7127 = 0.55086292 = 0.55086293 + -0.000000010 1
3422 27/ 469 = 0.05756930 = 0.05756918 + 0.000000113 2
1616 168/1507 = 0.11147976 = 0.11147982 + -0.000000061 1
260 62/ 123 = 0.50406504 = 0.50406378 + 0.000001264 1
3775 52/3327 = 0.01562970 = 0.01562750 + 0.000002195 6
233 6/ 13 = 0.46153846 = 0.46172772 + -0.000189254 5
3650 3151/3514 = 0.89669892 = 0.89669890 + 0.000000020 1
9307 2943/7528 = 0.39094049 = 0.39094048 + 0.000000013 2
962 206/ 225 = 0.91555556 = 0.91555496 + 0.000000594 1
2080 564/1975 = 0.28556962 = 0.28556943 + 0.000000190 1
6505 1971/2347 = 0.83979548 = 0.83979551 + -0.000000022 1
1944 472/ 833 = 0.56662665 = 0.56662696 + -0.000000305 2
3244 291/1447 = 0.20110574 = 0.20110579 + -0.000000051 1
Average pass count: 1.85
Pass histo: [12, 4, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
The last two lines of output from a 10000-number test run:
Average pass count: 1.77
Pass histo: [56659, 25227, 10020, 4146, 2072, 931, 497, 233, 125, 39, 33, 17, 1, 0, 0, 0, 0, 0, 0, 0]


Algorithm for equiprobable random square binary matrices with two non-adjacent non-zeros in each row and column

It would be great if someone could point me towards an algorithm that would allow me to:
create a random square matrix, with entries 0 and 1, such that
every row and every column contain exactly two non-zero entries,
two non-zero entries cannot be adjacent,
all possible matrices are equiprobable.
Right now I manage to achieve points 1 and 2 by doing the following: such a matrix can be transformed, using suitable permutations of rows and columns, into a diagonal block matrix with blocks of the form
1 1 0 0 ... 0
0 1 1 0 ... 0
0 0 1 1 ... 0
.............
1 0 0 0 ... 1
So I start from such a matrix using a partition of [0, ..., n-1] and scramble it by permuting rows and columns randomly. Unfortunately, I can't find a way to integrate the adjacency condition, and I am quite sure that my algorithm won't treat all the matrices equally.
Update
I have managed to achieve point 3. The answer was actually right under my nose: the block matrix I am creating contains all the information needed to take the adjacency condition into account. First, some properties and definitions:
a suitable matrix defines permutations of [1, ..., n] that can be built like so: select a 1 in row 1. The column containing this entry contains exactly one other entry equal to 1, on a row a different from 1. Again, row a contains another entry 1, in a column which contains a second entry 1 on a row b, and so on. This starts a permutation 1 -> a -> b ...
For instance, with the following matrix, starting with the marked entry
v
1 0 1 0 0 0 | 1
0 1 0 0 0 1 | 2
1 0 0 1 0 0 | 3
0 0 1 0 1 0 | 4
0 0 0 1 0 1 | 5
0 1 0 0 1 0 | 6
------------+--
1 2 3 4 5 6 |
we get permutation 1 -> 3 -> 5 -> 2 -> 6 -> 4 -> 1.
the cycles of such a permutation lead to the block matrix I mentioned earlier. I also mentioned scrambling the block matrix using arbitrary permutations on the rows and columns to rebuild a matrix compatible with the requirements.
But I was using any permutation, which led to some adjacent non-zero entries. To avoid that, I have to choose permutations that separate rows (and columns) that are adjacent in the block matrix. Actually, to be more precise, if two rows belong to a same block and are cyclically consecutive (the first and last rows of a block are considered consecutive too), then the permutation I want to apply has to move these rows into non-consecutive rows of the final matrix (I will call two rows incompatible in that case).
So the question becomes: how to build all such permutations?
The simplest idea is to build a permutation progressively by randomly adding rows that are compatible with the previous one. As an example, consider the case n = 6 using partition 6 = 3 + 3 and the corresponding block matrix
1 1 0 0 0 0 | 1
0 1 1 0 0 0 | 2
1 0 1 0 0 0 | 3
0 0 0 1 1 0 | 4
0 0 0 0 1 1 | 5
0 0 0 1 0 1 | 6
------------+--
1 2 3 4 5 6 |
Here rows 1, 2 and 3 are mutually incompatible, as are 4, 5 and 6. Choose a random row, say 3.
We will write a permutation as an array: [2, 5, 6, 4, 3, 1] meaning 1 -> 2, 2 -> 5, 3 -> 6, ... This means that row 2 of the block matrix will become the first row of the final matrix, row 5 will become the second row, and so on.
Now let's build a suitable permutation by choosing randomly a row, say 3:
p = [3, ...]
The next row will then be chosen randomly among the remaining rows that are compatible with 3: 4, 5 and 6. Say we choose 4:
p = [3, 4, ...]
Next choice has to be made among 1 and 2, for instance 1:
p = [3, 4, 1, ...]
And so on: p = [3, 4, 1, 5, 2, 6].
Applying this permutation to the block matrix, we get:
1 0 1 0 0 0 | 3
0 0 0 1 1 0 | 4
1 1 0 0 0 0 | 1
0 0 0 0 1 1 | 5
0 1 1 0 0 0 | 2
0 0 0 1 0 1 | 6
------------+--
1 2 3 4 5 6 |
Doing so, we manage to vertically isolate all non-zero entries. Same has to be done with the columns, for instance by using permutation p' = [6, 3, 5, 1, 4, 2] to finally get
0 1 0 1 0 0 | 3
0 0 1 0 1 0 | 4
0 0 0 1 0 1 | 1
1 0 1 0 0 0 | 5
0 1 0 0 0 1 | 2
1 0 0 0 1 0 | 6
------------+--
6 3 5 1 4 2 |
So this seems to work quite efficiently, but building these permutations needs to be done with caution, because one can easily get stuck: for instance, with n=6 and partition 6 = 2 + 2 + 2, following the construction rules set up earlier can lead to p = [1, 3, 2, 4, ...]. Unfortunately, 5 and 6 are incompatible, so choosing one or the other makes the last choice impossible. I think I've found all situations that lead to a dead end. I will denote by r the set of remaining choices:
p = [..., x, ?], r = {y} with x and y incompatible
p = [..., x, ?, ?], r = {y, z} with y and z being both incompatible with x (no choice can be made)
p = [..., ?, ?], r = {x, y} with x and y incompatible (any choice would lead to situation 1)
p = [..., ?, ?, ?], r = {x, y, z} with x, y and z being cyclically consecutive (choosing x or z would lead to situation 2, choosing y to situation 3)
p = [..., w, ?, ?, ?], r = {x, y, z} with xwy being a 3-cycle (neither x nor y can be chosen, choosing z would lead to situation 3)
p = [..., ?, ?, ?, ?], r = {w, x, y, z} with wxyz being a 4-cycle (any choice would lead to situation 4)
p = [..., ?, ?, ?, ?], r = {w, x, y, z} with xyz being a 3-cycle (choosing w would lead to situation 4, choosing any other would lead to situation 5)
Now it seems that the following algorithm gives all suitable permutations:
As long as there are strictly more than 5 numbers to choose, choose randomly among the compatible ones.
If there are 5 numbers left to choose: if the remaining numbers contain a 3-cycle or a 4-cycle, break that cycle (i.e. choose a number belonging to that cycle).
If there are 4 numbers left to choose: if the remaining numbers contain three cyclically consecutive numbers, choose one of them.
If there are 3 numbers left to choose: if the remaining numbers contain two cyclically consecutive numbers, choose one of them.
I am quite sure that this allows me to generate all suitable permutations and, hence, all suitable matrices.
Unfortunately, every matrix will be obtained several times, depending on the partition that was chosen.
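Here is a rough Python sketch of that build-up (my own transcription; for simplicity it restarts on a dead end instead of applying the look-ahead rules above, which is slower but yields the same set of permutations):

import random

def build_permutation(blocks, n):
    # blocks: list of row-index lists, one per cyclic block, covering 1..n.
    # Two rows are incompatible if they are cyclically consecutive in a block.
    incompatible = set()
    for block in blocks:
        m = len(block)
        for i in range(m):
            a, b = block[i], block[(i + 1) % m]
            incompatible.add((a, b))
            incompatible.add((b, a))
    while True:  # restart on dead end
        remaining = list(range(1, n + 1))
        perm = [remaining.pop(random.randrange(len(remaining)))]
        while remaining:
            choices = [r for r in remaining if (perm[-1], r) not in incompatible]
            if not choices:
                break  # dead end, restart from scratch
            nxt = random.choice(choices)
            remaining.remove(nxt)
            perm.append(nxt)
        if not remaining:
            return perm

# n = 6 with partition 6 = 3 + 3, as in the block matrix above:
print(build_permutation([[1, 2, 3], [4, 5, 6]], 6))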
Intro
Here is a prototype approach, trying to solve the more general task of
uniform combinatorial sampling, which for our approach here means: we can use this approach for everything which we can formulate as a SAT-problem.
It's not exploiting your problem directly and takes a heavy detour. This detour to the SAT-problem can help in regards to theory (more powerful general theoretical results) and efficiency (SAT-solvers).
That being said, it's not an approach to use if you want to sample within seconds or less (in my experiments), at least while being concerned about uniformity.
Theory
The approach, based on results from complexity-theory, follows this work:
GOMES, Carla P.; SABHARWAL, Ashish; SELMAN, Bart. Near-uniform sampling of combinatorial spaces using XOR constraints. In: Advances In Neural Information Processing Systems. 2007. S. 481-488.
The basic idea:
formulate the problem as SAT-problem
add randomly generated xors to the problem (acting on the decision-variables only! that's important in practice)
this will reduce the number of solutions (some solutions will get impossible)
do that in a loop (with tuned parameters) until only one solution is left!
search for some solution is being done by SAT-solvers or #SAT-solvers (=model-counting)
if there is more than one solution: no xors will be added but a complete restart will be done: add random-xors to the start-problem!
The guarantees:
when tuning the parameters right, this approach achieves near-uniform sampling
this tuning can be costly, as it's based on approximating the number of possible solutions
empirically this can also be costly!
Ante's answer, mentioning the number sequence A001499 actually gives a nice upper bound on the solution-space (as it's just ignoring adjacency-constraints!)
The drawbacks:
inefficient for large problems (in general; not necessarily compared to the alternatives like MCMC and co.)
need to change / reduce parameters to produce samples
those reduced parameters lose the theoretical guarantees
but empirically: good results are still possible!
Parameters:
In practice, the parameters are:
N: number of xors added
L: minimum number of variables part of one xor-constraint
U: maximum number of variables part of one xor-constraint
N is important to reduce the number of possible solutions. Given N constant, the other variables of course also have some effect on that.
Theory says (if I interpret it correctly) that we should use L = U = 0.5 * #dec-vars.
This is impossible in practice here, as xor-constraints hurt SAT-solvers a lot!
Here are some more scientific slides about the impact of L and U.
They call xors of size 8-20 short-XORs, while we will need to use even shorter ones later!
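To make the xor-part concrete, here is a minimal sketch of how one such random xor-constraint can be drawn (my own illustration; it only builds the constraint as data, the DIMACS encoding is left to the XorSample scripts):

import random

def random_xor(dec_vars, min_l, max_l):
    # One constraint: XOR over a random subset of the decision-variables = random parity.
    size = random.randint(min_l, max_l)
    chosen = random.sample(dec_vars, size)
    parity = random.randint(0, 1)
    return chosen, parity

# e.g. N=70 constraints over 15*15 decision-variables with L,U = 4,6:
constraints = [random_xor(list(range(1, 15*15 + 1)), 4, 6) for _ in range(70)]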
Implementation
Final version
Here is a pretty hacky implementation in python, using the XorSample scripts from here.
The underlying SAT-solver in use is Cryptominisat.
The code basically boils down to:
Transform the problem to conjunctive normal-form
as DIMACS-CNF
Implement the sampling-approach:
Calls XorSample (pipe-based + file-based)
Call SAT-solver (file-based)
Add samples to some file for later analysis
Code: (I hope I did warn you already about the code-quality)
from itertools import count
from time import time
import subprocess
import numpy as np
import os
import shelve
import uuid
import pickle

from random import SystemRandom
cryptogen = SystemRandom()

""" Helper functions """

# K-ARY CONSTRAINT GENERATION
# ###########################
# SINZ, Carsten. Towards an optimal CNF encoding of boolean cardinality constraints.
# CP, 2005, 3709. Jg., S. 827-831.

def next_var_index(start):
    next_var = start
    while(True):
        yield next_var
        next_var += 1

class s_index():
    def __init__(self, start_index):
        self.firstEnvVar = start_index

    def next(self, i, j, k):
        return self.firstEnvVar + i*k + j

def gen_seq_circuit(k, input_indices, next_var_index_gen):
    cnf_string = ''
    s_index_gen = s_index(next_var_index_gen.next())

    # write clauses of first partial sum (i.e. i=0)
    cnf_string += (str(-input_indices[0]) + ' ' + str(s_index_gen.next(0, 0, k)) + ' 0\n')
    for i in range(1, k):
        cnf_string += (str(-s_index_gen.next(0, i, k)) + ' 0\n')

    # write clauses for general case (i.e. 0 < i < n-1)
    for i in range(1, len(input_indices)-1):
        cnf_string += (str(-input_indices[i]) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
        cnf_string += (str(-s_index_gen.next(i-1, 0, k)) + ' ' + str(s_index_gen.next(i, 0, k)) + ' 0\n')
        for u in range(1, k):
            cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, u-1, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
            cnf_string += (str(-s_index_gen.next(i-1, u, k)) + ' ' + str(s_index_gen.next(i, u, k)) + ' 0\n')
        cnf_string += (str(-input_indices[i]) + ' ' + str(-s_index_gen.next(i-1, k-1, k)) + ' 0\n')

    # last clause for last variable
    cnf_string += (str(-input_indices[-1]) + ' ' + str(-s_index_gen.next(len(input_indices)-2, k-1, k)) + ' 0\n')

    return (cnf_string, (len(input_indices)-1)*k, 2*len(input_indices)*k + len(input_indices) - 3*k - 1)

# K=2 clause GENERATION
# #####################

def gen_at_most_2_constraints(vars, start_var):
    constraint_string = ''
    used_clauses = 0
    used_vars = 0
    index_gen = next_var_index(start_var)
    circuit = gen_seq_circuit(2, vars, index_gen)
    constraint_string += circuit[0]
    used_clauses += circuit[2]
    used_vars += circuit[1]
    start_var += circuit[1]
    return [constraint_string, used_clauses, used_vars, start_var]

def gen_at_least_2_constraints(vars, start_var):
    k = len(vars) - 2
    vars = [-var for var in vars]
    constraint_string = ''
    used_clauses = 0
    used_vars = 0
    index_gen = next_var_index(start_var)
    circuit = gen_seq_circuit(k, vars, index_gen)
    constraint_string += circuit[0]
    used_clauses += circuit[2]
    used_vars += circuit[1]
    start_var += circuit[1]
    return [constraint_string, used_clauses, used_vars, start_var]

# Adjacency conflicts
# ###################

def get_all_adjacency_conflicts_4_neighborhood(N, X):
    conflicts = set()
    for x in range(N):
        for y in range(N):
            if x < (N-1):
                conflicts.add(((x, y), (x+1, y)))
            if y < (N-1):
                conflicts.add(((x, y), (x, y+1)))

    cnf = ''  # slow string appends
    for (var_a, var_b) in conflicts:
        var_a_ = X[var_a]
        var_b_ = X[var_b]
        cnf += '-' + var_a_ + ' ' + '-' + var_b_ + ' 0 \n'

    return cnf, len(conflicts)

# Build SAT-CNF
# #############

def build_cnf(N, verbose=False):
    var_counter = count(1)
    N_CLAUSES = 0
    X = np.zeros((N, N), dtype=object)
    for a in range(N):
        for b in range(N):
            X[a, b] = str(next(var_counter))

    # Adjacency constraints
    CNF, N_CLAUSES = get_all_adjacency_conflicts_4_neighborhood(N, X)

    # k=2 constraints
    NEXT_VAR = N*N+1
    for row in range(N):
        constraint_string, used_clauses, used_vars, NEXT_VAR = gen_at_most_2_constraints(X[row, :].astype(int).tolist(), NEXT_VAR)
        N_CLAUSES += used_clauses
        CNF += constraint_string

        constraint_string, used_clauses, used_vars, NEXT_VAR = gen_at_least_2_constraints(X[row, :].astype(int).tolist(), NEXT_VAR)
        N_CLAUSES += used_clauses
        CNF += constraint_string

    for col in range(N):
        constraint_string, used_clauses, used_vars, NEXT_VAR = gen_at_most_2_constraints(X[:, col].astype(int).tolist(), NEXT_VAR)
        N_CLAUSES += used_clauses
        CNF += constraint_string

        constraint_string, used_clauses, used_vars, NEXT_VAR = gen_at_least_2_constraints(X[:, col].astype(int).tolist(), NEXT_VAR)
        N_CLAUSES += used_clauses
        CNF += constraint_string

    # build final cnf
    CNF = 'p cnf ' + str(NEXT_VAR-1) + ' ' + str(N_CLAUSES) + '\n' + CNF

    return X, CNF, NEXT_VAR-1

# External tools
# ##############

def get_random_xor_problem(CNF_IN_fp, N_DEC_VARS, N_ALL_VARS, s, min_l, max_l):
    # .cnf not part of arg!
    p = subprocess.Popen(['./gen-wff', CNF_IN_fp,
                          str(N_DEC_VARS), str(N_ALL_VARS),
                          str(s), str(min_l), str(max_l), 'xored'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    result = p.communicate()

    os.remove(CNF_IN_fp + '-str-xored.xor')  # file not needed
    return CNF_IN_fp + '-str-xored.cnf'

def solve(CNF_IN_fp, N_DEC_VARS):
    seed = cryptogen.randint(0, 2147483647)  # actually no reason to do it; but can't hurt either
    p = subprocess.Popen(["./cryptominisat5", '-t', '4', '-r', str(seed), CNF_IN_fp], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    result = p.communicate()[0]

    sat_line = result.find('s SATISFIABLE')
    if sat_line != -1:
        # solution found!
        vars = parse_solution(result)[:N_DEC_VARS]

        # forbid solution (DeMorgan)
        negated_vars = list(map(lambda x: x*(-1), vars))
        with open(CNF_IN_fp, 'a') as f:
            f.write((str(negated_vars)[1:-1] + ' 0\n').replace(',', ''))

        # assume solve is treating last constraint despite not changing header!
        # solve again
        seed = cryptogen.randint(0, 2147483647)
        p = subprocess.Popen(["./cryptominisat5", '-t', '4', '-r', str(seed), CNF_IN_fp], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        result = p.communicate()[0]
        sat_line = result.find('s SATISFIABLE')

        if sat_line != -1:
            os.remove(CNF_IN_fp)  # not needed anymore
            return True, False, None
        else:
            return True, True, vars
    else:
        return False, False, None

def parse_solution(output):
    # assumes there is one
    vars = []
    for line in output.split("\n"):
        if line:
            if line[0] == 'v':
                line_vars = list(map(lambda x: int(x), line.split()[1:]))
                vars.extend(line_vars)
    return vars

# Core-algorithm
# ##############

def xorsample(X, CNF_IN_fp, N_DEC_VARS, N_VARS, s, min_l, max_l):
    start_time = time()
    while True:
        # add s random XOR constraints to F
        xored_cnf_fp = get_random_xor_problem(CNF_IN_fp, N_DEC_VARS, N_VARS, s, min_l, max_l)
        state_lvl1, state_lvl2, var_sol = solve(xored_cnf_fp, N_DEC_VARS)

        print('------------')
        if state_lvl1 and state_lvl2:
            print('FOUND')

            d = shelve.open('N_15_70_4_6_TO_PLOT')
            d[str(uuid.uuid4())] = (pickle.dumps(var_sol), time() - start_time)
            d.close()
            return True
        else:
            if state_lvl1:
                print('sol not unique')
            else:
                print('no sol found')
        print('------------')

""" Run """
N = 15
N_DEC_VARS = N*N
X, CNF, N_VARS = build_cnf(N)

with open('my_problem.cnf', 'w') as f:
    f.write(CNF)

counter = 0
while True:
    print('sample: ', counter)
    xorsample(X, 'my_problem', N_DEC_VARS, N_VARS, 70, 4, 6)
    counter += 1
Output will look like (removed some warnings):
------------
no sol found
------------
------------
no sol found
------------
------------
no sol found
------------
------------
sol not unique
------------
------------
FOUND
Core: CNF-formulation
We introduce one variable for every cell of the matrix. N=20 means 400 binary-variables.
Adjacency:
Precalculate all symmetry-reduced conflicts and add conflict-clauses.
Basic theory:
a -> !b
<->
!a v !b (propositional logic)
Row/Col-wise Cardinality:
This is tough to express in CNF and naive approaches need an exponential number
of constraints.
We use some adder-circuit based encoding (SINZ, Carsten. Towards an optimal CNF encoding of boolean cardinality constraints) which introduces new auxiliary-variables.
Remark:
sum(var_set) <= k
<->
sum(negated(var_set)) >= len(var_set) - k
These SAT-encodings can be put into exact model-counters (for small N; e.g. < 9). The number of solutions equals Ante's results, which is a strong indication for a correct transformation!
There are also interesting approximate model-counters (also heavily based on xor-constraints) like approxMC, which shows one more thing we can do with the SAT-formulation. But in practice I have not been able to use these (approxMC = autoconf; no comment).
Other experiments
I also built a version using pblib, to use more powerful cardinality-formulations for the SAT-CNF formulation. I did not try to use the C++-based API, but only the reduced pbencoder, which automatically selects some best encoding; this was way worse than my encoding used above (which encoding is best is still a research-problem; often even redundant constraints can help).
Empirical analysis
For the sake of obtaining some sample-size (given my patience), I only computed samples for N=15. In this case we used:
N=70 xors
L,U = 4,6
I also computed some samples for N=20 with (100,3,6), but this takes a few mins and we reduced the lower bound!
Visualization
Here is some animation (strengthening my love-hate relationship with matplotlib):
Edit: And a (reduced) comparison to brute-force uniform-sampling with N=5 (NXOR,L,U = 4, 10, 30):
(I have not yet decided on the addition of the plotting-code. It's as ugly as the above one and people might look too much into my statistical shambles; normalizations and co.)
Theory
Statistical analysis is probably hard to do, as the underlying problem is of such a combinatoric nature. It's not even entirely obvious how that final cell-PDF should look. In the case of odd N, it's probably non-uniform and looks like a chess-board (I did brute-force check N=5 to observe this).
One thing we can be sure about (imho): symmetry!
Given a cell-PDF matrix, we should expect, that the matrix is symmetric (A = A.T).
This is checked in the visualization and the euclidean-norm of differences over time is plotted.
We can do the same on some other observation: observed pairings.
For N=3, we can observe the following pairs:
0,1
0,2
1,2
Now we can do this per-row and per-column and should expect symmetry too!
Sadly, it's probably not easy to say something about the variance and therefore the needed samples to speak about confidence!
Observation
According to my simplified perception, the current samples and the cell-PDF look good, although convergence is not achieved yet (or we are far away from uniformity).
The more important aspects are probably the two norms, nicely decreasing towards 0.
(Yes; one could tune some algorithm for that by transposing with prob=0.5; but this is not done here as it would defeat its purpose.)
Potential next steps
Tune parameters
Check out the approach using #SAT-solvers / Model-counters instead of SAT-solvers
Try different CNF-formulations, especially in regards to cardinality-encodings and xor-encodings
XorSample by default uses tseitin-like encoding to get around exponential growth
for smaller xors (as used) it might be a good idea to use naive encoding (which propagates faster)
XorSample supports that in theory; but the scripts work differently in practice
Cryptominisat is known for dedicated XOR-handling (as it was built for analyzing cryptography involving many xors) and might gain something from naive encoding (as inferring xors from blown-up CNFs is much harder)
More statistical-analysis
Get rid of XorSample scripts (shell + perl...)
Summary
The approach is very general
This code produces feasible samples
It should not be hard to prove that every feasible solution can be sampled
Others have proven theoretical guarantees for uniformity for some params
does not hold for our params
Others have empirically / theoretically analyzed smaller parameters (in use here)
(Updated test results, example run-through and code snippets below.)
You can use dynamic programming to calculate the number of solutions resulting from every state (in a much more efficient way than a brute-force algorithm), and use those (pre-calculated) values to create equiprobable random solutions.
Consider the example of a 7x7 matrix; at the start, the state is:
0,0,0,0,0,0,0
meaning that there are seven adjacent unused columns. After adding two ones to the first row, the state could be e.g.:
0,1,0,0,1,0,0
with two columns that now have a one in them. After adding ones to the second row, the state could be e.g.:
0,1,1,0,1,0,1
After three rows are filled, there is a possibility that a column will have its maximum of two ones; this effectively splits the matrix into two independent zones:
1,1,1,0,2,0,1 -> 1,1,1,0 + 0,1
These zones are independent in the sense that the no-adjacent-ones rule has no effect when adding ones to different zones, and the order of the zones has no effect on the number of solutions.
In order to use these states as signatures for types of solutions, we have to transform them into a canonical notation. First, we have to take into account the fact that columns with only 1 one in them may be unusable in the next row, because they contain a one in the current row. So instead of a binary notation, we have to use a ternary notation, e.g.:
2,1,1,0 + 0,1
where the 2 means that this column was used in the current row (and not that there are 2 ones in the column). At the next step, we should then convert the twos back into ones.
Additionally, we can also mirror the separate groups to put them into their lexicographically smallest notation:
2,1,1,0 + 0,1 -> 0,1,1,2 + 0,1
Lastly, we sort the separate groups from small to large, and then lexicographically, so that a state in a larger matrix may be e.g.:
0,0 + 0,1 + 0,0,2 + 0,1,0 + 0,1,0,1
Then, when calculating the number of solutions resulting from each state, we can use memoization using the canonical notation of each state as a key.
Creating a dictionary of the states and the number of solutions for each of them only needs to be done once, and a table for larger matrices can probably be used for smaller matrices too.
Practically, you'd generate a random number between 0 and the total number of solutions, and then for every row, you'd look at the different states you could create from the current state, look at the number of unique solutions each one would generate, and see which option leads to the solution that corresponds with your randomly generated number.
Note that every state and the corresponding key can only occur in a particular row, so you can store the keys in separate dictionaries per row.
TEST RESULTS
A first test using unoptimized JavaScript gave very promising results. With dynamic programming, calculating the number of solutions for a 10x10 matrix now takes a second, where a brute-force algorithm took several hours (and this is the part of the algorithm that only needs to be done once). The size of the dictionary with the signatures and numbers of solutions grows with a diminishing factor approaching 2.5 for each step in size; the time to generate it grows with a factor of around 3.
These are the number of solutions, states, signatures (total size of the dictionaries), and maximum number of signatures per row (largest dictionary per row) that are created:
size   unique solutions                                 states       signatures   max/row
 4x4   2                                                9            6            2
 5x5   16                                               73           26           8
 6x6   722                                              514          107          40
 7x7   33,988                                           2,870        411          152
 8x8   2,215,764                                        13,485       1,411        596
 9x9   179,431,924                                      56,375       4,510        1,983
10x10  17,849,077,140                                   218,038      13,453       5,672
11x11  2,138,979,146,276                                801,266      38,314       14,491
12x12  304,243,884,374,412                              2,847,885    104,764      35,803
13x13  50,702,643,217,809,908                           9,901,431    278,561      96,414
14x14  9,789,567,606,147,948,364                        33,911,578   723,306      238,359
15x15  2,168,538,331,223,656,364,084                    114,897,838  1,845,861    548,409
16x16  546,386,962,452,256,865,969,596                  ...          4,952,501    1,444,487
17x17  155,420,047,516,794,379,573,558,433              ...          12,837,870   3,754,040
18x18  48,614,566,676,379,251,956,711,945,475           ...          31,452,747   8,992,972
19x19  17,139,174,923,928,277,182,879,888,254,495       ...          74,818,773   20,929,008
20x20  6,688,262,914,418,168,812,086,412,204,858,650    ...          175,678,000  50,094,203
(Additional results were obtained with C++, using a simple 128-bit integer implementation. To count the states, the code had to be run using each state as a separate signature, which I was unable to do for the largest sizes.)
EXAMPLE
The dictionary for a 5x5 matrix looks like this:
row 0: 00000 -> 16

row 1: 20002 -> 2
       00202 -> 4
       02002 -> 2
       02020 -> 2

row 2: 10212 -> 1
       12012 -> 1
       12021 -> 2
       12102 -> 1
       21012 -> 0
       02121 -> 3
       01212 -> 1

row 3: 101 -> 0
       1112 -> 1
       1121 -> 1
       1+01 -> 0
       11+12 -> 2
       1+121 -> 1
       0+1+1 -> 0
       1+112 -> 1

row 4: 0 -> 0
       11 -> 0
       12 -> 0
       1+1 -> 1
       1+2 -> 0
The total number of solutions is 16; if we randomly pick a number from 0 to 15, e.g. 13, we can find the corresponding (i.e. the 14th) solution like this:
state: 00000
options: 10100 10010 10001 01010 01001 00101
signature: 00202 02002 20002 02020 02002 00202
solutions: 4 2 2 2 2 4
This tells us that the 14th solution is the 2nd solution of option 00101. The next step is:
state: 00101
options: 10010 01010
signature: 12102 02121
solutions: 1 3
This tells us that the 2nd solution is the 1st solution of option 01010. The next step is:
state: 01111
options: 10100 10001 00101
signature: 11+12 1112 1+01
solutions: 2 1 0
This tells us that the 1st solution is the 1st solution of option 10100. The next step is:
state: 11211
options: 01010 01001
signature: 1+1 1+1
solutions: 1 1
This tells us that the 1st solution is the 1st solution of option 01010. The last step is:
state: 12221
options: 10001
And the 5x5 matrix corresponding to randomly chosen number 13 is:
0 0 1 0 1
0 1 0 1 0
1 0 1 0 0
0 1 0 1 0
1 0 0 0 1
And here's a quick'n'dirty code example; run the snippet to generate the signature and solution count dictionary, and generate a random 10x10 matrix (it takes a second to generate the dictionary; once that is done, it generates random solutions in half a millisecond):
function signature(state, prev) {
    var zones = [], zone = [];
    for (var i = 0; i < state.length; i++) {
        if (state[i] == 2) {
            if (zone.length) zones.push(mirror(zone));
            zone = [];
        }
        else if (prev[i]) zone.push(3);
        else zone.push(state[i]);
    }
    if (zone.length) zones.push(mirror(zone));
    zones.sort(function(a,b) {return a.length - b.length || a - b;});
    return zones.length ? zones.join("2") : "2";

    function mirror(zone) {
        var ltr = zone.join('');
        zone.reverse();
        var rtl = zone.join('');
        return (ltr < rtl) ? ltr : rtl;
    }
}

function memoize(n) {
    var memo = [], empty = [];
    for (var i = 0; i <= n; i++) memo[i] = [];
    for (var i = 0; i < n; i++) empty[i] = 0;
    memo[0][signature(empty, empty)] = next_row(empty, empty, 1);
    return memo;

    function next_row(state, prev, row) {
        if (row > n) return 1;
        var solutions = 0;
        for (var i = 0; i < n - 2; i++) {
            if (state[i] == 2 || prev[i] == 1) continue;
            for (var j = i + 2; j < n; j++) {
                if (state[j] == 2 || prev[j] == 1) continue;
                var s = state.slice(), p = empty.slice();
                ++s[i]; ++s[j]; ++p[i]; ++p[j];
                var sig = signature(s, p);
                var sol = memo[row][sig];
                if (sol == undefined)
                    memo[row][sig] = sol = next_row(s, p, row + 1);
                solutions += sol;
            }
        }
        return solutions;
    }
}

function random_matrix(n, memo) {
    var matrix = [], empty = [], state = [], prev = [];
    for (var i = 0; i < n; i++) empty[i] = state[i] = prev[i] = 0;
    var total = memo[0][signature(empty, empty)];
    var pick = Math.floor(Math.random() * total);
    document.write("solution " + pick.toLocaleString('en-US') +
                   " from a total of " + total.toLocaleString('en-US') + "<br>");
    for (var row = 1; row <= n; row++) {
        var options = find_options(state, prev);
        for (var i in options) {
            var state_copy = state.slice();
            for (var j in state_copy) state_copy[j] += options[i][j];
            var sig = signature(state_copy, options[i]);
            var solutions = memo[row][sig];
            if (pick < solutions) {
                matrix.push(options[i].slice());
                prev = options[i].slice();
                state = state_copy.slice();
                break;
            }
            else pick -= solutions;
        }
    }
    return matrix;

    function find_options(state, prev) {
        var options = [];
        for (var i = 0; i < n - 2; i++) {
            if (state[i] == 2 || prev[i] == 1) continue;
            for (var j = i + 2; j < n; j++) {
                if (state[j] == 2 || prev[j] == 1) continue;
                var option = empty.slice();
                ++option[i]; ++option[j];
                options.push(option);
            }
        }
        return options;
    }
}

var size = 10;
var memo = memoize(size);
var matrix = random_matrix(size, memo);
for (var row in matrix) document.write(matrix[row] + "<br>");
The code snippet below shows the dictionary of signatures and solution counts for a matrix of size 10x10. I've used a slightly different signature format from the explanation above: the zones are delimited by a '2' instead of a plus sign, and a column which has a one in the previous row is marked with a '3' instead of a '2'. This shows how the keys could be stored in a file as integers with 2×N bits (padded with 2's).
function signature(state, prev) {
    var zones = [], zone = [];
    for (var i = 0; i < state.length; i++) {
        if (state[i] == 2) {
            if (zone.length) zones.push(mirror(zone));
            zone = [];
        }
        else if (prev[i]) zone.push(3);
        else zone.push(state[i]);
    }
    if (zone.length) zones.push(mirror(zone));
    zones.sort(function(a,b) {return a.length - b.length || a - b;});
    return zones.length ? zones.join("2") : "2";

    function mirror(zone) {
        var ltr = zone.join('');
        zone.reverse();
        var rtl = zone.join('');
        return (ltr < rtl) ? ltr : rtl;
    }
}

function memoize(n) {
    var memo = [], empty = [];
    for (var i = 0; i <= n; i++) memo[i] = [];
    for (var i = 0; i < n; i++) empty[i] = 0;
    memo[0][signature(empty, empty)] = next_row(empty, empty, 1);
    return memo;

    function next_row(state, prev, row) {
        if (row > n) return 1;
        var solutions = 0;
        for (var i = 0; i < n - 2; i++) {
            if (state[i] == 2 || prev[i] == 1) continue;
            for (var j = i + 2; j < n; j++) {
                if (state[j] == 2 || prev[j] == 1) continue;
                var s = state.slice(), p = empty.slice();
                ++s[i]; ++s[j]; ++p[i]; ++p[j];
                var sig = signature(s, p);
                var sol = memo[row][sig];
                if (sol == undefined)
                    memo[row][sig] = sol = next_row(s, p, row + 1);
                solutions += sol;
            }
        }
        return solutions;
    }
}

var memo = memoize(10);
for (var i in memo) {
    document.write("row " + i + ":<br>");
    for (var j in memo[i]) {
        document.write('"' + j + '": ' + memo[i][j] + "<br>");
    }
}
Just a few thoughts. The number of matrices satisfying the conditions for n <= 10:
3 0
4 2
5 16
6 722
7 33988
8 2215764
9 179431924
10 17849077140
Unfortunately there is no sequence with these numbers in OEIS.
There is a similar one (A001499), without the condition on neighbouring ones. The number of nxn matrices in this case is 'of the same order' as A001499's number of (n-1)x(n-1) matrices. That is to be expected, since the number of ways to fill one row in this case, placing 2 ones in n places with at least one zero between them, is ((n-1) choose 2), the same as placing 2 ones in (n-1) places without the restriction.
I don't think there is an easy connection between these matrices of order n and A001499 matrices of order n-1, meaning that if we have an A001499 matrix then we can construct some of these matrices.
With this, for n=20, the number of matrices is >10^30. Quite a lot :-/
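For anyone who wants to double-check the small counts, a brute-force counter is short (a sketch of mine; fast up to n = 6, n = 7 takes a while):

from itertools import combinations, product

def count_matrices(n):
    # valid rows: two ones with at least one zero between them
    rows = [(i, j) for i, j in combinations(range(n), 2) if j - i > 1]
    total = 0
    for choice in product(rows, repeat=n):
        # vertical adjacency: consecutive rows must not share a column
        if any(set(choice[r]) & set(choice[r + 1]) for r in range(n - 1)):
            continue
        colcount = [0] * n
        for (i, j) in choice:
            colcount[i] += 1
            colcount[j] += 1
        if all(c == 2 for c in colcount):
            total += 1
    return total

print([count_matrices(n) for n in range(3, 7)])   # [0, 2, 16, 722]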
This solution uses recursion to set the cells of the matrix one by one. If the random walk finishes with an impossible solution, we roll back one step in the tree and continue the random walk.
The algorithm is efficient, and I think the generated data are highly equiprobable.
package rndsqmatrix;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.IntStream;

public class RndSqMatrix {

    /**
     * Generate a random matrix
     * @param size the size of the matrix
     * @return the matrix encoded in 1d array i=(x+y*size)
     */
    public static int[] generate(final int size) {
        return generate(size, new int[size * size], new int[size],
                new int[size]);
    }

    /**
     * Build a matrix recursively with a random walk
     * @param size the size of the matrix
     * @param matrix the matrix encoded in 1d array i=(x+y*size)
     * @param rowSum
     * @param colSum
     * @return
     */
    private static int[] generate(final int size, final int[] matrix,
            final int[] rowSum, final int[] colSum) {

        // generate list of valid positions
        final List<Integer> positions = new ArrayList<>();
        for (int y = 0; y < size; y++) {
            if (rowSum[y] < 2) {
                for (int x = 0; x < size; x++) {
                    if (colSum[x] < 2) {
                        final int p = x + y * size;
                        if (matrix[p] == 0
                                && (x == 0 || matrix[p - 1] == 0)
                                && (x == size - 1 || matrix[p + 1] == 0)
                                && (y == 0 || matrix[p - size] == 0)
                                && (y == size - 1 || matrix[p + size] == 0)) {
                            positions.add(p);
                        }
                    }
                }
            }
        }

        // no valid positions ?
        if (positions.isEmpty()) {
            // if the matrix is incomplete => return null
            for (int i = 0; i < size; i++) {
                if (rowSum[i] != 2 || colSum[i] != 2) {
                    return null;
                }
            }
            // the matrix is complete => return it
            return matrix;
        }

        // random walk
        Collections.shuffle(positions);
        for (int p : positions) {
            // set '1' and continue the exploration recursively
            matrix[p] = 1;
            rowSum[p / size]++;
            colSum[p % size]++;
            final int[] solMatrix = generate(size, matrix, rowSum, colSum);
            if (solMatrix != null) {
                return solMatrix;
            }
            // rollback
            matrix[p] = 0;
            rowSum[p / size]--;
            colSum[p % size]--;
        }

        // we can't find a valid matrix from here => return null
        return null;
    }

    public static void printMatrix(final int size, final int[] matrix) {
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                System.out.print(matrix[x + y * size]);
                System.out.print(" ");
            }
            System.out.println();
        }
    }

    public static void printStatistics(final int size, final int count) {
        final int sumMatrix[] = new int[size * size];
        for (int i = 0; i < count; i++) {
            final int[] matrix = generate(size);
            for (int j = 0; j < sumMatrix.length; j++) {
                sumMatrix[j] += matrix[j];
            }
        }
        printMatrix(size, sumMatrix);
    }

    public static void checkAlgorithm() {
        final int size = 8;
        final int count = 2215764;
        final int divisor = 122;
        final int sumMatrix[] = new int[size * size];
        for (int i = 0; i < count / divisor; i++) {
            final int[] matrix = generate(size);
            for (int j = 0; j < sumMatrix.length; j++) {
                sumMatrix[j] += matrix[j];
            }
        }
        int total = 0;
        for (int i = 0; i < sumMatrix.length; i++) {
            total += sumMatrix[i];
        }
        final double factor = (double) total / (count / divisor);
        System.out.println("Factor=" + factor + " (theory=16.0)");
    }

    public static void benchmark(final int size, final int count,
            final boolean parallel) {
        final long begin = System.currentTimeMillis();
        if (!parallel) {
            for (int i = 0; i < count; i++) {
                generate(size);
            }
        } else {
            IntStream.range(0, count).parallel().forEach(i -> generate(size));
        }
        final long end = System.currentTimeMillis();
        System.out.println("rate="
                + (double) (end - begin) / count + "ms/matrix");
    }

    public static void main(String[] args) {
        checkAlgorithm();
        benchmark(8, 10000, true);
        //printStatistics(8, 2215764/36);
        printStatistics(8, 2215764);
    }
}
The output is:
Factor=16.0 (theory=16.0)
rate=0.2835ms/matrix
552969 554643 552895 554632 555680 552753 554567 553389
554071 554847 553441 553315 553425 553883 554485 554061
554272 552633 555130 553699 553604 554298 553864 554028
554118 554299 553565 552986 553786 554473 553530 554771
554474 553604 554473 554231 553617 553556 553581 553992
554960 554572 552861 552732 553782 554039 553921 554661
553578 553253 555721 554235 554107 553676 553776 553182
553086 553677 553442 555698 553527 554850 553804 553444
Here is a very fast approach to generating the matrix row by row, written in Java:
// requires: import java.util.Arrays; import java.util.Random;
public static void main(String[] args) throws Exception {
    int n = 100;
    Random rnd = new Random();
    byte[] mat = new byte[n*n];
    byte[] colCount = new byte[n];

    // generate row by row
    for (int x = 0; x < n; x++) {
        // generate a random first bit
        int b1 = rnd.nextInt(n);
        while ( (x > 0 && mat[(x-1)*n + b1] == 1) || // not adjacent to the one above
                (colCount[b1] == 2)                  // not in a column which has 2
              ) b1 = rnd.nextInt(n);
        // generate a second bit, not equal to the first one
        int b2 = rnd.nextInt(n);
        while ( (b2 == b1) ||                        // not the same as bit 1
                (x > 0 && mat[(x-1)*n + b2] == 1) || // not adjacent to the one above
                (colCount[b2] == 2) ||               // not in a column which has 2
                (b2 == b1 - 1) ||                    // not adjacent to b1
                (b2 == b1 + 1)
              ) b2 = rnd.nextInt(n);
        // fill the matrix values and increment column counts
        mat[x*n + b1] = 1;
        mat[x*n + b2] = 1;
        colCount[b1]++;
        colCount[b2]++;
    }

    String arr = Arrays.toString(mat).substring(1, n*n*3 - 1);
    System.out.println(arr.replaceAll("(.{" + n*3 + "})", "$1\n"));
}
It essentially generates one random row at a time. If the row would violate any of the conditions, it is generated again (again randomly). I believe this will satisfy condition 4 as well.
A quick note: it will spin forever for values of N where there are no solutions (like N=3).

Count number of 1 digits in 11 to the power of N

I came across an interesting problem:
How would you count the number of 1 digits in the representation of 11 to the power of N, 0<N<=1000.
Let d be the number of 1 digits
N=2 11^2 = 121 d=2
N=3 11^3 = 1331 d=2
Expected worst-case time complexity: O(N^2)
The simple approach, where you compute the number and count the 1 digits by getting the last digit and dividing by 10, does not work very well. 11^1000 is not even representable in any standard data type.
Powers of eleven can be stored as a string and calculated quite quickly that way, without a generalised arbitrary precision math package. All you need is multiply by ten and add.
For example, 11^1 is 11. To get the next power of 11 (11^2), you multiply by (10 + 1), which is effectively the number with a zero tacked on the end, added to the number: 110 + 11 = 121.
Similarly, 11^3 can then be calculated as: 1210 + 121 = 1331.
And so on:
11^2 11^3 11^4 11^5 11^6
110 1210 13310 146410 1610510
+11 +121 +1331 +14641 +161051
--- ---- ----- ------ -------
121 1331 14641 161051 1771561
So that's how I'd approach, at least initially.
By way of example, here's a Python function to raise 11 to the n'th power, using the method described (I am aware that Python has support for arbitrary precision; keep in mind I'm just using it as a demonstration of how to do this as an algorithm, which is how the question was tagged):
def elevenToPowerOf(n):
    # Anything to the zero is 1.
    if n == 0: return "1"

    # Otherwise, n <- n * 10 + n, once for each level of power.
    num = "11"
    while n > 1:
        n = n - 1

        # Make multiply by eleven easy.
        ten = num + "0"
        num = "0" + num

        # Standard primary school algorithm for adding.
        newnum = ""
        carry = 0
        for dgt in range(len(ten)-1, -1, -1):
            res = int(ten[dgt]) + int(num[dgt]) + carry
            carry = res // 10
            res = res % 10
            newnum = str(res) + newnum
        if carry == 1:
            newnum = "1" + newnum

        # Prepare for next multiplication.
        num = newnum

    # There you go, 11^n as a string.
    return num
And, for testing, a little program which works out those values for each power that you provide on the command line:
import sys

for idx in range(1, len(sys.argv)):
    try:
        power = int(sys.argv[idx])
    except ValueError:
        print("Invalid number [%s]" % (sys.argv[idx]))
        sys.exit(1)
    if power < 0:
        print("Negative powers not allowed [%d]" % (power))
        sys.exit(1)
    number = elevenToPowerOf(power)
    count = 0
    for ch in number:
        if ch == '1':
            count += 1
    print("11^%d is %s, has %d ones" % (power, number, count))
When you run that with:
time python3 prog.py 0 1 2 3 4 5 6 7 8 9 10 11 12 1000
you can see that it's both accurate (checked with bc) and fast (finished in about half a second):
11^0 is 1, has 1 ones
11^1 is 11, has 2 ones
11^2 is 121, has 2 ones
11^3 is 1331, has 2 ones
11^4 is 14641, has 2 ones
11^5 is 161051, has 3 ones
11^6 is 1771561, has 3 ones
11^7 is 19487171, has 3 ones
11^8 is 214358881, has 2 ones
11^9 is 2357947691, has 1 ones
11^10 is 25937424601, has 1 ones
11^11 is 285311670611, has 4 ones
11^12 is 3138428376721, has 2 ones
11^1000 is 2469932918005826334124088385085221477709733385238396234869182951830739390375433175367866116456946191973803561189036523363533798726571008961243792655536655282201820357872673322901148243453211756020067624545609411212063417307681204817377763465511222635167942816318177424600927358163388910854695041070577642045540560963004207926938348086979035423732739933235077042750354729095729602516751896320598857608367865475244863114521391548985943858154775884418927768284663678512441565517194156946312753546771163991252528017732162399536497445066348868438762510366191040118080751580689254476068034620047646422315123643119627205531371694188794408120267120500325775293645416335230014278578281272863450085145349124727476223298887655183167465713337723258182649072572861625150703747030550736347589416285606367521524529665763903537989935510874657420361426804068643262800901916285076966174176854351055183740078763891951775452021781225066361670593917001215032839838911476044840388663443684517735022039957481918726697789827894303408292584258328090724141496484460001, has 105 ones
real 0m0.609s
user 0m0.592s
sys 0m0.012s
That may not necessarily be O(n^2) but it should be fast enough for your domain constraints.
Of course, given those constraints, you can make it O(1) by using a method I call pre-generation. Simply write a program to generate an array you can plug into your program which contains a suitable function. The following Python program does exactly that, for the powers of eleven from 1 to 100 inclusive:
def mulBy11(num):
    # Same length to ease addition.
    ten = num + '0'
    num = '0' + num

    # Standard primary school algorithm for adding.
    result = ''
    carry = 0
    for idx in range(len(ten)-1, -1, -1):
        digit = int(ten[idx]) + int(num[idx]) + carry
        carry = digit // 10
        digit = digit % 10
        result = str(digit) + result
    if carry == 1:
        result = '1' + result

    return result

num = '1'
print('int oneCountInPowerOf11(int n) {')
print('    static int numOnes[] = {-1', end='')
for power in range(1, 101):
    num = mulBy11(num)
    count = sum(1 for ch in num if ch == '1')
    print(',%d' % count, end='')
print('};')
print('    if ((n < 0) || (n > sizeof(numOnes) / sizeof(*numOnes)))')
print('        return -1;')
print('    return numOnes[n];')
print('}')
The code output by this script is:
int oneCountInPowerOf11(int n) {
    static int numOnes[] = {-1,2,2,2,2,3,3,3,2,1,1,4,2,3,1,4,2,1,4,4,1,5,5,1,5,3,6,6,3,6,3,7,5,7,4,4,2,3,4,4,3,8,4,8,5,5,7,7,7,6,6,9,9,7,12,10,8,6,11,7,6,5,5,7,10,2,8,4,6,8,5,9,13,14,8,10,8,7,11,10,9,8,7,13,8,9,6,8,5,8,7,15,12,9,10,10,12,13,7,11,12};
    if ((n < 0) || (n > sizeof(numOnes) / sizeof(*numOnes)))
        return -1;
    return numOnes[n];
}
which should be blindingly fast when plugged into a C program. On my system, the Python code itself (when you up the range to 1..1000) runs in about 0.6 seconds and the C code, when compiled, finds the number of ones in 11^1000 in 0.07 seconds.
Here's my concise solution.
def count1s(N):
    # When 11^(N-1) = result, 11^N = (10+1) * result = 10*result + result
    result = 1
    for i in range(N):
        result += 10*result

    # Now count 1's
    count = 0
    for ch in str(result):
        if ch == '1':
            count += 1
    return count
In C#:
// requires: using System; using System.Linq;
private static void Main(string[] args)
{
    var res = Elevento(1000);
    var countOf1 = res.Select(x => int.Parse(x.ToString())).Count(s => s == 1);
    Console.WriteLine(countOf1);
}

private static string Elevento(int n)
{
    if (n == 0) return "1";

    // Otherwise, n <- n * 10 + n, once for each level of power.
    var num = "11";
    while (n > 1)
    {
        n--;

        // Make multiply by eleven easy.
        var ten = num + "0";
        num = "0" + num;

        // Standard primary school algorithm for adding.
        var newnum = "";
        var carry = 0;
        foreach (var dgt in Enumerable.Range(0, ten.Length).Reverse())
        {
            var res = int.Parse(ten[dgt].ToString()) + int.Parse(num[dgt].ToString()) + carry;
            carry = res / 10;
            res = res % 10;
            newnum = res + newnum;
        }
        if (carry == 1)
            newnum = "1" + newnum;

        // Prepare for next multiplication.
        num = newnum;
    }

    // There you go, 11^n as a string.
    return num;
}

How to write a function to generate random number 0/1 using another random function?

If I have a function named rand1(), which generates 0 (30% probability) or 1 (70% probability), how do I write a function rand2() which generates 0 or 1 with equal probability, using rand1()?
Update:
Finally, I found this is a problem from the book Introduction to Algorithms (2nd edition) (I have bought the Chinese edition of this book), Exercise 5.1-3. The original problem is:
5.1-3
Suppose that you want to output 0 with probability 1/2 and 1 with probability 1/2.
At your disposal is a procedure BIASED-RANDOM, that outputs either 0 or 1. It
outputs 1 with some probability p and 0 with probability 1− p, where 0 < p < 1,
but you do not know what p is. Give an algorithm that uses BIASED-RANDOM
as a subroutine, and returns an unbiased answer, returning 0 with probability 1/2
and 1 with probability 1/2. What is the expected running time of your algorithm
as a function of p?
the solution is :
(see: http://www.cnblogs.com/meteorgan/archive/2012/05/04/2482317.html)
To get an unbiased random bit, given only calls to BIASED-RANDOM, call
BIASED-RANDOM twice. Repeatedly do so until the two calls return different
values, and when this occurs, return the first of the two bits:
UNBIASED-RANDOM
    while TRUE
        do
            x ← BIASED-RANDOM
            y ← BIASED-RANDOM
            if x != y
                then return x
To see that UNBIASED-RANDOM returns 0 and 1 each with probability 1/2, observe
that the probability that a given iteration returns 0 is
Pr {x = 0 and y = 1} = (1 − p)p ,
and the probability that a given iteration returns 1 is
Pr {x = 1 and y = 0} = p(1 − p) .
(We rely on the bits returned by BIASED-RANDOM being independent.) Thus, the
probability that a given iteration returns 0 equals the probability that it returns 1.
Since there is no other way for UNBIASED-RANDOM to return a value, it returns 0
and 1 each with probability 1/2.
Generate two numbers, a and b.
If a is 0 and b is 1 (21% chance), generate a 0.
If a is 1 and b is 0 (21% chance), generate a 1.
For all other cases (58% chance), just generate a new a and b and try again.
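In code, that rejection scheme is only a few lines (a Python sketch of mine; rand1 is simulated here just to make the snippet self-contained):

import random

def rand1():   # stand-in for the given source: 0 with 30%, 1 with 70%
    return 1 if random.random() < 0.7 else 0

def rand2():
    while True:
        a, b = rand1(), rand1()
        if a != b:   # 01 and 10 are equally likely, p*(1-p) each
            return a

print(sum(rand2() for _ in range(100000)) / 100000.0)   # ~0.5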
If you call rand1 twice, there is an equal chance of getting [1 0] and [0 1], so if you return the first of each non-matching pair (and discard matching pairs) you will get, on average, 0.5*(1 - p^2 - (1-p)^2) output bits per input bit (where p is the probability of rand1 returning 1; 0.7 in your example) and, independently of p, each output bit will be 1 with probability 0.5.
However, we can do better.
Rather than throw away the matching pairs, we can remember them in the hope that they are followed by opposite matching pairs - the sequences [0 0 1 1] and [1 1 0 0] are also equally likely, and again we can return the first bit whenever we see such a sequence (still with output probability 0.5). We can keep combining them indefinitely, looking for sequences like [0 0 0 0 1 1 1 1] etc.
And we can go even further - consider that the input sequences [0 0 0 1] and [0 1 0 0] produce the same output ([0]) as it stands, but these two sequences were also equally likely, so we can extract an extra bit of output from this, returning [0 0] for the first case and [0 1] for the second. This is where it gets more complicated, though, as you would need to start buffering output bits.
Both techniques can be applied recursively, and taken to the limit it becomes lossless (i.e. if rand1 has a probability of 0.5, you get an average of one output bit per input bit.)
Full description (with math) here: http://www.eecs.harvard.edu/~michaelm/coinflipext.pdf
You will need to decide how close you want to get to a 50%/50% split. Add the results of repeated calls to rand1: if the running sum is even, the value is 0; if it is odd, the value is 1 (in code you can use modulo 2).
int val = rand1(); // prob 30% 0, and 70% 1
val=(val+rand1())%2; // prob 58% 0, and 42% 1 (#1, see math below)
val=(val+rand1())%2; // prob 46.8% 0, and 53.2% 1 (#2, see math below)
val=(val+rand1())%2; // prob 51.28% 0, and 48.72% 1
val=(val+rand1())%2; // prob 49.488% 0, and 50.512% 1
val=(val+rand1())%2; // prob 50.2048% 0, and 49.7952% 1
You get the idea: it is up to you to decide how close you want the probabilities to be. Every subsequent call gets you closer to 50%/50%, but it will never be exactly equal.
If you want the math for the probabilities:
#1
prob(((val+rand1())%2) = 0) = prob(val = 0)*prob(rand1() = 0) + prob(val = 1)*prob(rand1() = 1)
= (0.3*0.3) + (0.7*0.7)
= 0.09 + 0.49
= 0.58
= 58%
prob(((val+rand1())%2) = 1) = prob(val = 1)*prob(rand1() = 0) + prob(val = 0)*prob(rand1() = 1)
= (0.7*0.3) + (0.3*0.7)
= 0.21 + 0.21
= 0.42
= 42%
#2
prob(((val+rand1())%2) = 0) = prob(val = 0)*prob(rand1() = 0) + prob(val = 1)*prob(rand1() = 1)
= (0.58*0.3) + (0.42*0.7)
= 0.174 + 0.294
= 0.468
= 46.8%
prob(((val+rand1())%2) = 1) = prob(val = 1)*prob(rand1() = 0) + prob(val = 0)*prob(rand1() = 1)
= (0.42*0.3) + (0.58*0.7)
= 0.126 + 0.406
= 0.532
= 53.2%
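In closed form, the bias shrinks geometrically: the parity of k biased bits is 0 with probability (1 + (1 - 2p)^k)/2. A quick Python check of this formula against the numbers above:
def parity_zero_prob(p, k):
    # Probability that the sum mod 2 of k independent bits,
    # each 1 with probability p, equals 0.
    return (1 + (1 - 2 * p) ** k) / 2

for k in range(1, 7):
    print(k, parity_zero_prob(0.7, k))
# Approximately 0.3, 0.58, 0.468, 0.5128, 0.49488, 0.502048 for k = 1..6,
# matching the percentages listed above.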
The rand2 function below will provide 50% probability for the occurrence of zero or one.
#define LIMIT_TO_CALCULATE_PROBABILITY 10 // set any even number
int rand2()
{
    static int one_occurred = 0;
    static int zero_occurred = 0;
    int rand_value = 0;
    int limit = (LIMIT_TO_CALCULATE_PROBABILITY / 2);
    if (LIMIT_TO_CALCULATE_PROBABILITY == (one_occurred + zero_occurred))
    {
        one_occurred = 0;
        zero_occurred = 0;
    }
    rand_value = rand1();
    if ((1 == rand_value) && (one_occurred < limit))
    {
        one_occurred++;
        return rand_value;
    }
    else if ((0 == rand_value) && (zero_occurred < limit))
    {
        zero_occurred++;
        return rand_value;
    }
    else if (1 == rand_value)
    {
        zero_occurred++;
        return 0;
    }
    else // 0 == rand_value
    {
        one_occurred++;
        return 1;
    }
}

An interview question: About Probability

An interview question:
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1.
Write a function g(x) using f(x) that 1/2 times returns 0, 1/2 times returns 1.
My implementation is:
function g(x) = {
    if (f(x) == 0) { // 1/4
        var s = f(x)
        if (s == 1) { // 1/4 * 3/4
            return s // returns 1 with probability 3/16
        } else {
            return g(x)
        }
    } else { // 3/4
        var k = f(x)
        if (k == 0) { // 3/4 * 1/4
            return k // returns 0 with probability 3/16
        } else {
            return g(x)
        }
    }
}
Am I right? What's your solution?(you can use any language)
If you call f(x) twice in a row, the following outcomes are possible (assuming that
successive calls to f(x) are independent, identically distributed trials):
00 (probability 1/4 * 1/4)
01 (probability 1/4 * 3/4)
10 (probability 3/4 * 1/4)
11 (probability 3/4 * 3/4)
01 and 10 occur with equal probability. So iterate until you get one of those
cases, then return 0 or 1 appropriately:
do
{
    a = f(x); b = f(x);
} while (a == b);
return a;
It might be tempting to call f(x) only once per iteration and keep track of the two
most recent values, but that won't work. Suppose the very first roll is 1,
with probability 3/4. You'd loop until the first 0, then return 1 (with probability 3/4).
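As a sanity check, here is a small Python harness (random.random() is only a stand-in for the black-box f(x)) confirming the 50/50 output; a pair differs with probability 2 * (1/4) * (3/4) = 3/8, so on average 16/3 ≈ 5.33 calls to f are needed per output bit:
import random

def f():
    # Stand-in for the black box: 0 with probability 1/4, 1 with probability 3/4.
    return 0 if random.random() < 0.25 else 1

def g():
    while True:
        a, b = f(), f()
        if a != b:
            return a

results = [g() for _ in range(100000)]
print(sum(results) / len(results))  # prints a value close to 0.5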
The problem with your algorithm is that it repeats itself with high probability. My code:
function g(x) = {
    var s = f(x) + f(x) + f(x);
    // s = 0, probability: 1/64
    // s = 1, probability: 9/64
    // s = 2, probability: 27/64
    // s = 3, probability: 27/64
    if (s == 2) return 0;
    if (s == 3) return 1;
    return g(x); // probability of recursing = 10/64
}
I've measured the average number of times f(x) was calculated for your algorithm and for mine. For yours, f(x) was calculated around 5.3 times per g(x) call. With my algorithm this number is reduced to around 3.5. The same holds for the other answers so far, since they are effectively the same algorithm, as you said.
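For comparison, an iterative Python rendering of this three-call scheme (reusing the f() stand-in from the sketch above); each round of three calls succeeds with probability 54/64, so the expected number of calls to f() is 3 * 64/54 ≈ 3.56, matching the measurement:
def g():
    while True:
        s = f() + f() + f()
        # P(s == 2) = P(s == 3) = 27/64; otherwise retry (probability 10/64).
        if s == 2:
            return 0
        if s == 3:
            return 1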
P.S.: your definition doesn't mention 'random' at the moment, but probably it is assumed. See my other answer.
Your solution is correct, if somewhat inefficient and with more duplicated logic than necessary. Here is a Python implementation of the same algorithm in a cleaner form.
def g():
    while True:
        a = f()
        if a != f():
            return a
If f() is expensive you'd want to get more sophisticated with using the match/mismatch information to try to return with fewer calls to it. Here is the most efficient possible solution.
def g():
    lower = 0.0
    upper = 1.0
    while True:
        if 0.5 < lower:
            return 1
        elif upper < 0.5:
            return 0
        else:
            middle = 0.25 * lower + 0.75 * upper
            if 0 == f():
                lower = middle
            else:
                upper = middle
This takes about 2.6 calls to f() on average.
The way that it works is this. We're trying to pick a random number from 0 to 1, but we stop as soon as we know whether the answer should be 0 or 1 (that is, whether that number falls below or above 0.5). We start knowing that the number is in the interval (0, 1). 3/4 of the numbers are in the bottom 3/4 of the interval, and 1/4 are in the top 1/4 of the interval. We decide which part it is in based on a call to f(x). This means that we are now in a smaller interval.
If we wash, rinse, and repeat enough times, we can determine the number as precisely as we like, and we have an absolutely equal probability of winding up in any region of the original interval. In particular, we have an even probability of winding up greater than or less than 0.5.
If you wanted you could repeat the idea to generate an endless stream of bits one by one. This is, in fact, provably the most efficient way of generating such a stream, and is the source of the idea of entropy in information theory.
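A quick way to check that cost figure empirically (a rough sketch; random.random() again stands in for f()):
import random

def measure(trials=100000):
    # Estimate the average number of f() calls per g() result
    # for the interval-narrowing algorithm above.
    total_calls = 0
    for _ in range(trials):
        lower, upper = 0.0, 1.0
        while not (lower > 0.5 or upper < 0.5):
            middle = 0.25 * lower + 0.75 * upper
            total_calls += 1
            if random.random() < 0.25:  # f() == 0
                lower = middle
            else:                       # f() == 1
                upper = middle
    return total_calls / trials

print(measure())  # reportedly about 2.6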
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1
Taking this statement literally, f(x), if called four times, will always return zero once and one three times. This is different from saying f(x) is a probabilistic function and the 0 to 1 ratio will approach 1 to 3 (1/4 vs 3/4) over many iterations. If the first interpretation is valid, then the only valid function for f(x) that will meet the criteria, regardless of where in the sequence you start from, is the sequence 0111 repeating (or 1011 or 1101 or 1110, which are the same sequence from a different starting point). Given that constraint,
g()= (f() == f())
should suffice.
As already mentioned, your definition is not that precise regarding probability: usually it means that not only the frequencies are right but the distribution is random as well. Otherwise you could simply write a g(x) which returns 1,0,1,0,1,0,1,0 - it returns them 50/50, but the numbers won't be random.
Another cheating approach might be:
var invert = false;
function g(x) {
    invert = !invert;
    if (invert) return 1 - f(x);
    return f(x);
}
This solution will be better than all others since it calls f(x) only one time. But the results will not be very random.
A refinement of the same approach used in btilly's answer, achieving an average ~1.85 calls to f() per g() result (further refinement documented below achieves ~1.75; btilly's takes ~2.6, Jim Lewis's accepted answer ~5.33). Code appears lower in the answer.
Basically, I generate random integers in the range 0 to 3 with even probability: the caller can then test bit 0 for the first 50/50 value, and bit 1 for a second. Reason: the f() probabilities of 1/4 and 3/4 map onto quarters much more cleanly than halves.
Description of algorithm
btilly explained the algorithm, but I'll do so in my own way too...
The algorithm basically generates a random real number x between 0 and 1, then returns a result depending on which "result bucket" that number falls in:
result bucket        result
x < 0.25             0
0.25 <= x < 0.5      1
0.5 <= x < 0.75      2
0.75 <= x            3
But, generating a random real number given only f() is difficult. We have to start with the knowledge that our x value should be in the range 0..1 - which we'll call our initial "possible x" space. We then hone in on an actual value for x:
each time we call f():
if f() returns 0 (probability 1 in 4), we consider x to be in the lower quarter of the "possible x" space, and eliminate the upper three quarters from that space
if f() returns 1 (probability 3 in 4), we consider x to be in the upper three-quarters of the "possible x" space, and eliminate the lower quarter from that space
when the "possible x" space is completely contained by a single result bucket, that means we've narrowed x down to the point where we know which result value it should map to and have no need to get a more specific value for x.
It may or may not help to consider this diagram :-):
"result bucket" cut-offs 0,.25,.5,.75,1
0=========0.25=========0.5==========0.75=========1 "possible x" 0..1
| | . . | f() chooses x < vs >= 0.25
| result 0 |------0.4375-------------+----------| "possible x" .25..1
| | result 1| . . | f() chooses x < vs >= 0.4375
| | | . ~0.58 . | "possible x" .4375..1
| | | . | . | f() chooses < vs >= ~.58
| | ||. | | . | 4 distinct "possible x" ranges
Code
int g() // return 0, 1, 2, or 3
{
    if (f() == 0) return 0;
    if (f() == 0) return 1;
    double low = 0.25 + 0.25 * (1.0 - 0.25);
    double high = 1.0;
    while (true)
    {
        double cutoff = low + 0.25 * (high - low);
        if (f() == 0)
            high = cutoff;
        else
            low = cutoff;
        if (high < 0.50) return 1;
        if (low >= 0.75) return 3;
        if (low >= 0.50 && high < 0.75) return 2;
    }
}
If helpful, an intermediary to feed out 50/50 results one at a time:
int h()
{
    static int i;
    if (!i)
    {
        int x = g();
        i = x | 4;
        return x & 1;
    }
    else
    {
        int x = i & 2;
        i = 0;
        return x ? 1 : 0;
    }
}
NOTE: This can be further tweaked by having the algorithm switch from treating an f()==0 result as honing in on the lower quarter to having it hone in on the upper quarter instead, based on which on average resolves to a result bucket more quickly. Superficially, this seemed useful on the third call to f(), when an upper-quarter result would indicate an immediate result of 3, while a lower-quarter result still spans probability point 0.5 and hence results 1 and 2. When I tried it, the results were actually worse. A more complex tuning was needed to see actual benefits, and I ended up writing a brute-force comparison of lower vs upper cutoffs for the second through eleventh calls to f(). The best result I found was an average of ~1.75, resulting from the 1st, 2nd, 5th and 8th calls to f() seeking low (i.e. setting low = cutoff).
Here is a solution based on the central limit theorem, originally due to a friend of mine:
/*
Given a function f(x) that 1/4 times returns 0, 3/4 times returns 1. Write a function g(x) using f(x) that 1/2 times returns 0, 1/2 times returns 1.
*/
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cstdio>
using namespace std;
int f() {
    if (rand() % 4 == 0) return 0;
    return 1;
}
int main() {
    srand(time(0));
    int cc = 0;
    for (int k = 0; k < 1000; k++) { // number of different runs
        int c = 0;
        int limit = 10000; // the bigger the limit, the closer we approach 50%
        for (int i = 0; i < limit; ++i) c += f();
        cc += c < limit * 0.75 ? 0 : 1; // this indicator is 0 or 1, each with probability ~50%
    }
    printf("%d\n", cc); // cc is going to be around 500
    return 0;
}
Since each return of f() represents a 3/4 chance of TRUE, with some algebra we can just properly balance the odds. What we want is another function x() which returns a balancing probability of TRUE, so that
function g() {
    return f() && x();
}
returns true 50% of the time.
So let's find the probability of x (p(x)), given p(f) and our desired total probability (1/2):
p(f) * p(x) = 1/2
3/4 * p(x) = 1/2
p(x) = (1/2) / (3/4)
p(x) = 2/3
So x() should return TRUE with a probability of 2/3, since 2/3 * 3/4 = 6/12 = 1/2;
Thus the following should work for g():
function g() {
    // Math.random() < 2/3 stands in for x(): TRUE with probability 2/3.
    // (Note: a C-style rand() < 2/3 would always be false, since 2/3 is 0 under integer division.)
    return f() && (Math.random() < 2/3) ? 1 : 0;
}
Assuming
P(f[x] == 0) = 1/4
P(f[x] == 1) = 3/4
and requiring a function g[x] with the following assumptions
P(g[x] == 0) = 1/2
P(g[x] == 1) = 1/2
I believe the following definition of g[x] is sufficient (Mathematica)
g[x_] := If[f[x] + f[x + 1] == 1, 1, 0]
or, alternatively in C
int g(int x)
{
    return f(x) + f(x+1) == 1
        ? 1
        : 0;
}
This is based on the idea that invocations of {f[x], f[x+1]} would produce the following outcomes
{
{0, 0},
{0, 1},
{1, 0},
{1, 1}
}
Summing each of the outcomes we have
{
0,
1,
1,
2
}
where a sum of 1 represents 1/2 of the possible sum outcomes, with any other sum making up the other 1/2.
Edit.
As bdk says - {0,0} is less likely than {1,1} because
1/4 * 1/4 < 3/4 * 3/4
However, I am confused myself because given the following definition for f[x] (Mathematica)
f[x_] := Mod[x, 4] > 0 /. {False -> 0, True -> 1}
or alternatively in C
int f(int x)
{
    return (x % 4) > 0
        ? 1
        : 0;
}
then the results obtained from executing f[x] and g[x] seem to have the expected distribution.
Table[f[x], {x, 0, 20}]
{0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0}
Table[g[x], {x, 0, 20}]
{1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1}
This is much like the Monty Hall paradox.
In general.
Public Class Form1
'the general case
'
'twiceThis = 2 is 1 in four chance of 0
'twiceThis = 3 is 1 in six chance of 0
'
'twiceThis = x is 1 in 2x chance of 0
Const twiceThis As Integer = 7
Const numOf As Integer = twiceThis * 2
Private Sub Button1_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles Button1.Click
Const tries As Integer = 1000
y = New List(Of Integer)
Dim ct0 As Integer = 0
Dim ct1 As Integer = 0
Debug.WriteLine("")
''show all possible values of fx
'For x As Integer = 1 To numOf
' Debug.WriteLine(fx)
'Next
'test that gx returns 50% 0's and 50% 1's
Dim stpw As New Stopwatch
stpw.Start()
For x As Integer = 1 To tries
Dim g_x As Integer = gx()
'Debug.WriteLine(g_x.ToString) 'used to verify that gx returns 0 or 1 randomly
If g_x = 0 Then ct0 += 1 Else ct1 += 1
Next
stpw.Stop()
'the results
Debug.WriteLine((ct0 / tries).ToString("p1"))
Debug.WriteLine((ct1 / tries).ToString("p1"))
Debug.WriteLine((stpw.ElapsedTicks / tries).ToString("n0"))
End Sub
Dim prng As New Random
Dim y As New List(Of Integer)
Private Function fx() As Integer
'1 in numOf chance of zero being returned
If y.Count = 0 Then
'reload y
y.Add(0) 'fx has only one zero value
Do
y.Add(1) 'the rest are ones
Loop While y.Count < numOf
End If
'return a random value
Dim idx As Integer = prng.Next(y.Count)
Dim rv As Integer = y(idx)
y.RemoveAt(idx) 'remove the value selected
Return rv
End Function
Private Function gx() As Integer
'a function g(x) using f(x) that 50% of the time returns 0
' and 50% of the time returns 1
'the bag behind fx holds exactly one zero among numOf values;
'after discarding half the bag, the zero is in the remaining half
'with probability 1/2, i.e. rv = twiceThis iff only ones remain
Dim rv As Integer = 0
For x As Integer = 1 To twiceThis
fx()
Next
For x As Integer = 1 To twiceThis
rv += fx()
Next
If rv = twiceThis Then Return 1 Else Return 0
End Function
End Class

Tickmark algorithm for a graph axis

I'm looking for an algorithm that places tick marks on an axis, given a range to display, a width to display it in, and a function to measure a string width for a tick mark.
For example, given that I need to display between 1e-6 and 5e-6 and a width to display in pixels, the algorithm would determine that I should put tickmarks (for example) at 1e-6, 2e-6, 3e-6, 4e-6, and 5e-6. Given a smaller width, it might decide that the optimal placement is only at the even positions, i.e. 2e-6 and 4e-6 (since putting more tickmarks would cause them to overlap).
A smart algorithm would give preference to tickmarks at multiples of 10, 5, and 2. Also, a smart algorithm would be symmetric around zero.
As I didn't like any of the solutions I've found so far, I implemented my own. It's in C# but it can be easily translated into any other language.
It basically chooses, from a list of possible steps, the smallest one that displays all values without leaving any value exactly on the edge; it lets you easily select which possible steps you want to use (without having to edit ugly if-else if blocks), and it supports any range of values. I used a C# Tuple to return three values just for a quick and simple demonstration.
private static Tuple<decimal, decimal, decimal> GetScaleDetails(decimal min, decimal max)
{
    // Minimal increment to avoid rounding extreme values onto the edge of the chart
    decimal epsilon = (max - min) / 1e6m;
    max += epsilon;
    min -= epsilon;
    decimal range = max - min;
    // Target number of values to be displayed on the Y axis (it may be less)
    int stepCount = 20;
    // First approximation
    decimal roughStep = range / (stepCount - 1);
    // Set best step for the range
    decimal[] goodNormalizedSteps = { 1, 1.5m, 2, 2.5m, 5, 7.5m, 10 }; // keep the 10 at the end
    // Or use these if you prefer: { 1, 2, 5, 10 };
    // Normalize rough step to find the normalized one that fits best
    decimal stepPower = (decimal)Math.Pow(10, -Math.Floor(Math.Log10((double)Math.Abs(roughStep))));
    var normalizedStep = roughStep * stepPower;
    var goodNormalizedStep = goodNormalizedSteps.First(n => n >= normalizedStep);
    decimal step = goodNormalizedStep / stepPower;
    // Determine the scale limits based on the chosen step.
    decimal scaleMax = Math.Ceiling(max / step) * step;
    decimal scaleMin = Math.Floor(min / step) * step;
    return new Tuple<decimal, decimal, decimal>(scaleMin, scaleMax, step);
}
static void Main()
{
    // Dummy code to show a usage example.
    var minimumValue = data.Min();
    var maximumValue = data.Max();
    var results = GetScaleDetails(minimumValue, maximumValue);
    chart.YAxis.MinValue = results.Item1;
    chart.YAxis.MaxValue = results.Item2;
    chart.YAxis.Step = results.Item3;
}
Take the longest of the segments about zero (or the whole graph, if zero is not in the range) - for example, if you have something on the range [-5, 1], take [-5,0].
Figure out approximately how long this segment will be, in ticks. This is just dividing the length by the width of a tick. So suppose the method says that we can put 11 ticks in from -5 to 0. This is our upper bound. For the shorter side, we'll just mirror the result on the longer side.
Now try to put in as many ticks as possible (up to 11), such that the marker for each tick is of the form i*10*10^n, i*5*10^n, or i*2*10^n, where n is an integer and i is the index of the tick. Now it's an optimization problem - we want to maximize the number of ticks we can put in while at the same time minimizing the distance between the last tick and the end of the range. So assign a score for getting as many ticks as we can (below our upper bound), and assign a score to getting the last tick close to the end - you'll have to experiment here.
In the above example, try n = 1. We get 1 tick (at i=0). n = 2 gives us 1 tick, and we're further from the lower bound, so we know that we have to go the other way. n = 0 gives us 6 ticks, at each integer point. n = -1 gives us 11 ticks (0, -0.5, ..., -5.0). n = -2 gives us 21 ticks, and so on. The scoring algorithm will give them each a score - higher means a better method.
Do this again for the i * 5 * 10^n, and i*2*10^n, and take the one with the best score.
(as an example scoring algorithm, say that the score is the distance to the last tick times the maximum number of ticks minus the number needed. This will likely be bad, but it'll serve as a decent starting point).
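As a rough illustration of this kind of search (not the poster's exact scoring), here is a Python sketch that enumerates candidate steps of the form {1, 2, 5} * 10^n and keeps the one yielding the most ticks under the cap; pick_step and its parameters are hypothetical names for demonstration:
import math

def pick_step(length, max_ticks):
    # Choose a step of the form {1, 2, 5} * 10^n that yields
    # the most ticks over [0, length] without exceeding max_ticks.
    best = None
    for mult in (1, 2, 5):
        # Start from a power of ten near length/max_ticks and search upward.
        n = math.floor(math.log10(length / max_ticks)) if length > 0 else 0
        for e in range(n, n + 6):
            step = mult * 10 ** e
            ticks = int(length / step) + 1
            if ticks <= max_ticks and (best is None or ticks > best[1]):
                best = (step, ticks)
    return best  # (step, tick count), or None

print(pick_step(5.0, 11))  # (0.5, 11) for the [-5, 0] example above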
Funnily enough, just over a week ago I came here looking for an answer to the same question, but went away again and decided to come up with my own algorithm. I am here to share, in case it is of any use.
I wrote the code in Python to try and bust out a solution as quickly as possible, but it can easily be ported to any other language.
The function below calculates the appropriate interval (which I have allowed to be either 10**n, 2*10**n, 4*10**n or 5*10**n) for a given range of data, and then calculates the locations at which to place the ticks (based on which numbers within the range are divisible by the interval). I have not used the modulo % operator, since it does not work properly with floating-point numbers due to floating-point rounding errors.
Code:
import math
def get_tick_positions(data: list):
    if len(data) == 0:
        return []
    retpoints = []
    data_range = max(data) - min(data)
    lower_bound = min(data) - data_range/10
    upper_bound = max(data) + data_range/10
    view_range = upper_bound - lower_bound
    num = lower_bound
    n = math.floor(math.log10(view_range) - 1)
    interval = 10**n
    num_ticks = 1
    while num <= upper_bound:
        num += interval
        num_ticks += 1
        if num_ticks > 10:
            if interval == 10 ** n:
                interval = 2 * 10 ** n
            elif interval == 2 * 10 ** n:
                interval = 4 * 10 ** n
            elif interval == 4 * 10 ** n:
                interval = 5 * 10 ** n
            else:
                n += 1
                interval = 10 ** n
            num = lower_bound
            num_ticks = 1
    if view_range >= 10:
        copy_interval = interval
    else:
        if interval == 10 ** n:
            copy_interval = 1
        elif interval == 2 * 10 ** n:
            copy_interval = 2
        elif interval == 4 * 10 ** n:
            copy_interval = 4
        else:
            copy_interval = 5
    first_val = 0
    prev_val = 0
    times = 0
    temp_log = math.log10(interval)
    if math.isclose(lower_bound, 0):
        first_val = 0
    elif lower_bound < 0:
        if upper_bound < -2*interval:
            if n < 0:
                copy_ub = round(upper_bound*10**(abs(temp_log) + 1))
                times = copy_ub // round(interval*10**(abs(temp_log) + 1)) + 2
            else:
                times = upper_bound // round(interval) + 2
        while first_val >= lower_bound:
            prev_val = first_val
            first_val = times * copy_interval
            if n < 0:
                first_val *= (10**n)
            times -= 1
        first_val = prev_val
        times += 3
    else:
        if lower_bound > 2*interval:
            if n < 0:
                copy_ub = round(lower_bound*10**(abs(temp_log) + 1))
                times = copy_ub // round(interval*10**(abs(temp_log) + 1)) - 2
            else:
                times = lower_bound // round(interval) - 2
        while first_val < lower_bound:
            first_val = times*copy_interval
            if n < 0:
                first_val *= (10**n)
            times += 1
    if n < 0:
        retpoints.append(first_val)
    else:
        retpoints.append(round(first_val))
    val = first_val
    times = 1
    while val <= upper_bound:
        val = first_val + times * interval
        if n < 0:
            retpoints.append(val)
        else:
            retpoints.append(round(val))
        times += 1
    retpoints.pop()
    return retpoints
When passing in the following three data-points to the function
points = [-0.00493, -0.0003892, -0.00003292]
... the output I get (as a list) is as follows:
[-0.005, -0.004, -0.003, -0.002, -0.001, 0.0]
When passing this:
points = [1.399, 38.23823, 8309.33, 112990.12]
... I get:
[0, 20000, 40000, 60000, 80000, 100000, 120000]
When passing this:
points = [-54, -32, -19, -17, -13, -11, -8, -4, 12, 15, 68]
... I get:
[-60, -40, -20, 0, 20, 40, 60, 80]
... which all seem to be a decent choice of positions for placing ticks.
The function is written to allow 5-10 ticks, but that could easily be changed if you so please.
It does not matter whether the supplied list of data is ordered or unordered, since only the minimum and maximum data points in the list are used.
This simple algorithm yields an interval that is a multiple of 1, 2, or 5 times a power of 10, and the axis range gets divided into at least 5 intervals. The code sample is in Java:
protected double calculateInterval(double range) {
    double x = Math.pow(10.0, Math.floor(Math.log10(range)));
    if (range / x >= 5)
        return x;
    else if (range / (x / 2.0) >= 5)
        return x / 2.0;
    else
        return x / 5.0;
}
This is an alternative, for minimum 10 intervals:
protected double calculateInterval(double range) {
    double x = Math.pow(10.0, Math.floor(Math.log10(range)));
    if (range / (x / 2.0) >= 10)
        return x / 2.0;
    else if (range / (x / 5.0) >= 10)
        return x / 5.0;
    else
        return x / 10.0;
}
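To turn such an interval into actual tick positions, one common follow-up (not part of the answer above, just a sketch; ticks_from_interval is a hypothetical helper) is to start from the first multiple of the interval at or above the axis minimum:
import math

def ticks_from_interval(lo, hi, interval):
    # First tick: smallest multiple of `interval` that is >= lo.
    first = math.ceil(lo / interval) * interval
    ticks = []
    t = first
    while t <= hi:
        ticks.append(round(t, 10))  # round to suppress float noise
        t += interval
    return ticks

print(ticks_from_interval(-54, 68, 20))  # [-40, -20, 0, 20, 40, 60]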
I've been using the jQuery flot graph library. It's open source and does axis/tick generation quite well. I'd suggest looking at its code and pinching some ideas from there.
