Function including random number that can be inverted without the random number - algorithm

Given a number x and a random number n, I am looking for two functions F and G so that:
y = F(x, n) where y is different for different values of n
x = G(y)
all numbers are (large, e.g. 256 bit) integers
For instance, given a list of numbers k1, k2, k3, k4 generated by applying F repeatedly, it is possible to calculate k3 from k4 but not k4 from k3 (the random number prevents the inversion).
The problem is easy if we allow n (or a value derived from it) to be used in G (it is basically asymmetric encryption), but that is not the target.
Any idea?
Update
I found a function that works with infinite precision: F = x * pow(p, n), where p is coprime to x.
x = 29
p = 5
n = 20

def f(x, n):
    return x * pow(p, n)

f(x, n)   # => 2765655517578125
and G becomes
def g(y):
    x = y
    while x % p == 0:
        x = x // p
    return x

g(f(x, n))   # => 29
Unfortunately this fails with overflow as soon as the numbers become big (with limited-precision integers).
Second update: the problem has no solution
In fact, let's start from a situation where the problem has a solution, which is when the domain of F and G is R.
In that case, choosing a random output from any function F' that has multiple outputs will work.
For instance, take F(x, n) = acos(x) + 2nπ, where n is a random integer;
then G(y) = cos(y). From y it is always possible to go back to x, but not the opposite without knowing n.
A similar example can be built with modular arithmetic, which works on integer domains without the need for real numbers; a sketch follows.
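For instance (my own sketch of that modular variant; the 256-bit modulus M is an illustrative choice), pick a modulus M larger than any valid x, let F stretch the value by n multiples of M, and let G fold it back without knowing n:
M = 1 << 256   # modulus; valid inputs satisfy 0 <= x < M (illustrative choice)

def F(x, n):
    # each n gives a different y, but y needs more bits than x,
    # which is exactly the point of the impossibility argument below
    return x + n * M

def G(y):
    return y % M

print(G(F(29, 123456789)))   # 29, recovered without knowing n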
Anyway, this fails when F and G share the same finite domain (as they do on physical memory). It can be proved by contradiction.
Assume that for finite domains D1 = D2 of size N, a function F: D1 -> D2 exists that produces M outputs for each x, with M > 1.
Since the function produces at least one output for each x in D1, either
1. |D2| > |D1|, or
2. F produces the same output for different values of x (some overlap must exist).
Now 1 contradicts the requirement that D1 = D2, while 2 contradicts the requirement that G(y) has a single output value.
If we relax 1 and allow |D2| > |D1|, then we can solve the problem. This can be done by appending n (or a derivation of it), as suggested in some comments. For my specific scenario it probably makes more sense to use an EC public/private key pair, but that is another story.
Many thanks.

Based on your requirements, the following should work. If there is some other requirement that I did not understand from your question, please clarify, because this seems to suffice based on your definition. In that case, I will change or delete this answer.
f(x, n) = x | n;
g(y | n) = y;
where | means concatenation of bits. We can assign a fixed (maximum) number of bits for n and pad with zeros.
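A minimal Python sketch of that concatenation (my own illustration; N_BITS is the assumed fixed width reserved for n):
N_BITS = 256   # fixed (maximum) number of bits reserved for n

def f(x, n):
    # y = x | n: append n, zero-padded to N_BITS bits, below the bits of x
    assert 0 <= n < (1 << N_BITS)
    return (x << N_BITS) | n

def g(y):
    # g(y | n) = y: drop the low N_BITS bits; n is simply discarded
    return y >> N_BITS

print(g(f(29, 123456789)))   # 29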

There can be no solution for this problem because:
for a constant x1 and variable r you would have an output set containing all integers;
for a constant x2 and variable r you would again have an output set containing all integers.
So at best you can have a function g which takes a number from the output set of f and returns all possible answers, which are infinitely many.
This is similar to writing a reverse hashing function, which defies logic.

Related

Generate random non repeating pairs of numbers within 2 ranges

I want to create random pairs of numbers within 2 ranges.
So for example if I want 3 random pairs of numbers where 10 < n1 < 20 and 30 < n2 < 50 then an acceptable output would be this: [[11,35],[15,30],[15,42]] but not [[11,35],[11,35],[12,39]]
I would like an efficient (both computationally and memory wise) algorithm to do this. The language doesn't really matter because I can adapt it later (although Python would be preferred).
So far the best idea I have had is to create a dictionary keyed by all the possible numbers in n1, with each value being the list of n2 numbers already used with that n1. Then I can just pick a random n1 and find a number which hasn't been used with it yet.
This isn't very space efficient though, and I'm hoping for something better. It also seems computationally inefficient to repeatedly search for a number not yet in the used set.
I could also do the opposite and have the dictionary populated with all the numbers not yet used, and just pop a random number off the list. But this would use much more space.
Is there any efficient way to do this? Is this a common problem?
Edit: It would be good if this could easily be expanded to more dimensions (so sets of N numbers). But this isn't really needed yet.
An integer pair (x, y) in [min_x, min_x + s) X [min_y, min_y + t) can be mapped to an integer m within the 1D space [min_x * t, (min_x + s) * t) by calculating m = x * t + y - min_y. The inverse mapping from m to (x, y) can be achieved by (m // t, min_y + m % t) in Python.
Therefore the problem is transformed to choosing multiple values from [min_x * t, (min_x + s) * t) without replacement (i.e. no duplicates in the returned sequence). This can be done by simply calling the random.sample function in Python. According to the doc, the underlying implementation is space efficient for sequence inputs. So the entire problem can be done in Python as shown in the following:
from random import sample

# ranges from the question: 10 < n1 < 20 and 30 < n2 < 50
# max_x and max_y are exclusive while min_x and min_y are inclusive
min_x, max_x = 11, 20
min_y, max_y = 31, 50

t = max_y - min_y
sampled_pairs = [(m // t, min_y + m % t) for m in sample(range(min_x * t, max_x * t), k=3)]

Conditional sampling of binary vectors (?)

I'm trying to find a name for my problem, so I don't have to re-invent the wheel when coding an algorithm which solves it...
I have say 2,000 binary (row) vectors and I need to pick 500 from them. In the picked sample I do column sums and I want my sample to be as close as possible to a pre-defined distribution of the column sums. I'll be working with 20 to 60 columns.
A tiny example:
Out of the vectors:
110
010
011
110
100
I need to pick 2 to get column sums 2, 1, 0. The solution (exact in this case) would be
110
100
My ideas so far
one could maybe call this a binary multidimensional knapsack, but I did not find any algorithms for that
Linear Programming could help, but I'd need some step-by-step explanation as I have no experience with it
as an exact solution is not always feasible, something like simulated annealing or brute force could work well
a hacky way using constraint solvers comes to mind - first set the constraints tight and gradually loosen them until some solution is found - given that a CSP should be much faster than ILP...?
My concrete, practical (if the approximation guarantee works out for you) suggestion would be to apply the maximum entropy method (in Chapter 7 of Boyd and Vandenberghe's book Convex Optimization; you can probably find several implementations with your favorite search engine) to find the maximum entropy probability distribution on row indexes such that (1) no row index is more likely than 1/500 (2) the expected value of the row vector chosen is 1/500th of the predefined distribution. Given this distribution, choose each row independently with probability 500 times its distribution likelihood, which will give you 500 rows on average. If you need exactly 500, repeat until you get exactly 500 (shouldn't take too many tries due to concentration bounds).
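A sketch of that recipe using cvxpy (my choice of library, not part of the original answer; whether the constraints are feasible depends on your target vector):
import cvxpy as cp
import numpy as np

def maxent_distribution(rows, target, m):
    # rows: (N, C) 0/1 matrix; target: desired column sums of the m-row sample
    N = rows.shape[0]
    p = cp.Variable(N)
    constraints = [p >= 0,
                   cp.sum(p) == 1,
                   p <= 1.0 / m,                          # (1) no row more likely than 1/m
                   rows.T @ p == np.asarray(target) / m]  # (2) expected row = target / m
    cp.Problem(cp.Maximize(cp.sum(cp.entr(p))), constraints).solve()
    return p.value   # None if the constraints are infeasible for this target

def sample_rows(rows, target, m, rng=None):
    rng = rng or np.random.default_rng()
    q = np.clip(m * maxent_distribution(rows, target, m), 0.0, 1.0)
    while True:
        chosen = np.flatnonzero(rng.random(len(q)) < q)   # include row i with prob. m * p_i
        if len(chosen) == m:                              # retry until exactly m rows
            return rows[chosen]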
Firstly I will make some assumptions regarding this problem:
Regardless of whether the column sum of the selected solution is over or under the target, it weighs the same.
The sums of the first, second, and third columns are equally weighted in the solution (i.e. if there's a solution where the first column sum is off by 1, and another where the third column sum is off by 1, the solutions are equally good).
The closest problem I can think of is the subset sum problem, which itself can be thought of as a special case of the knapsack problem.
However, both of these problems are NP-complete. This means there is no known polynomial-time algorithm that can solve them, even though it is easy to verify a solution.
If I were you, the two arguably most practical approaches to this problem are linear programming and machine learning.
Depending on how many columns you are optimising over, with linear programming you can control how finely tuned you want the solution to be, in exchange for time. You should read up on this, because it is fairly simple and efficient.
With machine learning, you need a lot of data sets (the set of vectors and the set of solutions). You don't even need to specify what you want; many machine learning algorithms can generally deduce what you want them to optimise based on your data set.
Both approaches have pros and cons; you should decide which one to use yourself based on the circumstances and the problem set.
This definitely can be modeled as an (integer!) linear program (many problems can). Once you have it, you can use a program such as lpsolve to solve it.
We model "vector i is selected" as a binary variable x_i, which can be 0 or 1.
Then for each column c, we have a constraint:
sum over all i of (x_i * value of vector i in column c) = target for column c
Taking your example, in lp_solve this could look like:
min: ;
+x1 +x4 +x5 >= 2;
+x1 +x4 +x5 <= 2;
+x1 +x2 +x3 +x4 <= 1;
+x1 +x2 +x3 +x4 >= 1;
+x3 <= 0;
+x3 >= 0;
bin x1, x2, x3, x4, x5;
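For comparison, the same tiny model sketched with PuLP in Python (my translation of the lp_solve file above, not part of the original answer; since any feasible selection will do, the objective is just a dummy):
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

vectors = [(1, 1, 0), (0, 1, 0), (0, 1, 1), (1, 1, 0), (1, 0, 0)]
target = (2, 1, 0)

prob = LpProblem("pick_vectors", LpMinimize)
x = [LpVariable(f"x{i+1}", cat="Binary") for i in range(len(vectors))]
prob += lpSum(x)   # dummy objective; we only need a feasible selection
for c, t in enumerate(target):
    prob += lpSum(vectors[i][c] * x[i] for i in range(len(vectors))) == t
prob.solve()
print([i + 1 for i in range(len(vectors)) if value(x[i]) == 1])   # e.g. [1, 5] or [4, 5]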
If you are fine with a heuristic based search approach, here is one.
Go over the list and find the minimum squared sum of the digit wise difference between each bit string and the goal. For example, if we are looking for 2, 1, 0, and we are scoring 0, 1, 0, we would do it in the following way:
Take the digit wise difference:
2, 0, 1
Square the digit wise difference:
4, 0, 1
Sum:
5
As a side note, squaring the difference when scoring is a common method in heuristic search. In your case, it makes sense because bit strings that have a 1 as the first digit are a lot more interesting to us. Here this simple algorithm would pick first 110, then 100, which is the best solution.
In any case, there are some optimizations that could be made to this, I will post them here if this kind of approach is what you are looking for, but this is the core of the algorithm.
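One way to read that scoring rule as code (my interpretation: after each pick, re-score the remaining candidates against what is still missing from the goal):
def score(vec, goal):
    # squared digit-wise difference between a candidate and the remaining goal
    return sum((g - v) ** 2 for g, v in zip(goal, vec))

def greedy_pick(vectors, goal, m):
    goal, chosen, pool = list(goal), [], list(vectors)
    for _ in range(m):
        best = min(pool, key=lambda v: score(v, goal))
        chosen.append(best)
        pool.remove(best)
        goal = [g - b for g, b in zip(goal, best)]   # what is still missing
    return chosen

vectors = [(1, 1, 0), (0, 1, 0), (0, 1, 1), (1, 1, 0), (1, 0, 0)]
print(greedy_pick(vectors, (2, 1, 0), 2))   # [(1, 1, 0), (1, 0, 0)]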
You have a given target vector. You want to select M vectors out of N whose sum is closest to the target. Let's say you use the Euclidean distance to measure whether one selection is better than another.
If you want an exact sum, have a look at the k-sum problem, which is a generalization of the 3SUM problem. The problem is harder than the subset sum problem, because you want an exact number of elements to add up to a target value. There is a solution in O(N^(M/2) * lg N), but that means more than 2000^250 * 7.6 > 10^826 operations in your case (in the favorable case where vector operations have a cost of 1).
First conclusion: do not try to get an exact result unless your vectors have some characteristics that may reduce the complexity.
Here's a hill climbing approach:
sort the vectors by number of 1's: 111... first, 000... last;
use the polynomial time approximate algorithm for the subset sum;
you have an approximate solution with K elements. Because of the order of elements (the big ones come first), K should be as small as possible:
if K >= M, you take the M first vectors of the solution and that's probably near the best you can do.
if K < M, you can remove the first vector and try to replace it with 2 or more vectors from the rest of the N vectors, using the same technique, until you have M vectors. To summarize: split the big vectors into smaller ones until you reach the correct number of vectors.
Here's a proof of concept with numbers, in Python:
import random

def distance(x, y):
    return abs(x - y)

def show(ls):
    if len(ls) < 10:
        return str(ls)
    else:
        return ", ".join(map(str, ls[:5] + ("...",) + ls[-5:]))

def find(is_xs, target):
    # see https://en.wikipedia.org/wiki/Subset_sum_problem#Pseudo-polynomial_time_dynamic_programming_solution
    S = [(0, ())]  # we store indices along with values to get the path
    for i, x in is_xs:
        T = [(x + t, js + (i,)) for t, js in S]
        U = sorted(S + T)
        y, ks = U[0]
        S = [(y, ks)]
        for z, ls in U:
            if z == target:  # use the euclidean distance here if you want an approximation
                return ls
            if z != y and z < target:
                y, ks = z, ls
                S.append((z, ls))
    ls = S[-1][1]  # take the closest element to target
    return ls

N = 2000
M = 500
target = 1000
xs = [random.randint(0, 10) for _ in range(N)]
print("Take {} numbers out of {} to make a sum of {}".format(M, xs, target))
xs = sorted(xs, reverse=True)
is_xs = list(enumerate(xs))
print("Sorted numbers: {}".format(show(tuple(is_xs))))
ls = find(is_xs, target)
print("FIRST TRY: {} elements ({}) -> {}".format(len(ls), show(ls), sum(x for i, x in is_xs if i in ls)))
splits = 0
while len(ls) < M:
    first_x = xs[ls[0]]
    js_ys = [(i, x) for i, x in is_xs if i not in ls and x != first_x]
    replace = find(js_ys, first_x)
    splits += 1
    if len(replace) < 2 or len(replace) + len(ls) - 1 > M or sum(xs[i] for i in replace) != first_x:
        print("Give up: can't replace {}.\nAdd the lowest elements.")
        ls += tuple([i for i, x in is_xs if i not in ls][len(ls) - M:])
        break
    print("Replace {} (={}) by {} (={})".format(ls[:1], first_x, replace, sum(xs[i] for i in replace)))
    ls = tuple(sorted(ls[1:] + replace))  # use a heap?
    print("{} elements ({}) -> {}".format(len(ls), show(ls), sum(x for i, x in is_xs if i in ls)))

print("AFTER {} splits, {} -> {}".format(splits, ls, sum(x for i, x in is_xs if i in ls)))
The result is obviously not guaranteed to be optimal.
Remarks:
Complexity: find has a polynomial time complexity (see the Wikipedia page) and is called at most M^2 times, hence the overall complexity remains polynomial. In practice, the process is reasonably fast (the split calls have a small target).
Vectors: to ensure that you reach the target with the minimum number of elements, you can improve the ordering of the elements. Your target is (t_1, ..., t_c): if you sort the t_j from max to min, you get the most important columns first. You can then sort the vectors by number of 1s, and then by the presence of a 1 in the most important columns. E.g. target = 4 8 6 => 1 1 1 > 0 1 1 > 1 1 0 > 1 0 1 > 0 1 0 > 0 0 1 > 1 0 0 > 0 0 0.
find (vectors): if the current sum exceeds the target in all the columns, then you're not converging towards the target (any vector you add to the current sum will take you farther from it): don't add that sum to S (this is the z >= target case for numbers).
I propose a simple ad hoc algorithm, which, broadly speaking, is a kind of gradient descent algorithm. It seems to work relatively well for input vectors which have a distribution of 1s “similar” to the target sum vector, and probably also for all “nice” input vectors, as defined in a comment of yours. The solution is not exact, but the approximation seems good.
The distance between the sum vector of the output vectors and the target vector is taken to be Euclidean. Minimizing it means minimizing the sum of the squared differences between sum vector and target vector (the square root is not needed because it is monotonic). The algorithm does not guarantee to yield the sample that minimizes the distance from the target, but it makes a serious attempt at doing so, by always moving in some locally optimal direction.
The algorithm can be split into 3 parts.
First of all the first M candidate output vectors out of the N input vectors (e.g., N=2000, M=500) are put in a list, and the remaining vectors are put in another.
Then "approximately optimal" swaps between vectors in the two lists are done, until either the distance would not decrease any more, or a predefined maximum number of iterations is reached. An approximately optimal swap is one where removing the first vector from the list of output vectors causes a maximal decrease or minimal increase of the distance, and then, after the removal of the first vector, adding the second vector to the same list causes a maximal decrease of the distance. The whole swap is avoided if the net result is not a decrease of the distance.
Then, as a last phase, "optimal" swaps are done, again stopping on no decrease in distance or maximum number of iterations reached. Optimal swaps cause a maximal decrease of the distance, without requiring the removal of the first vector to be optimal in itself. To find an optimal swap all vector pairs have to be checked. This phase is much more expensive, being O(M(N-M)), while the previous "approximate" phase is O(M+(N-M))=O(N). Luckily, when entering this phase, most of the work has already been done by the previous phase.
from typing import List, Tuple


def get_sample(vects: List[Tuple[int]], target: Tuple[int], n_out: int,
               max_approx_swaps: int = None, max_optimal_swaps: int = None,
               verbose: bool = False) -> List[Tuple[int]]:
    """
    Get a sample of the input vectors having a sum close to the target vector.
    Closeness is measured in Euclidean metrics. The output is not guaranteed to be
    optimal (minimum square distance from target), but a serious attempt is made.
    The max_* parameters can be used to avoid too long execution times,
    tune them to your needs by setting verbose to True, or leave them None (∞).
    :param vects: the list of vectors (tuples) with the same number of "columns"
    :param target: the target vector, with the same number of "columns"
    :param n_out: the requested sample size
    :param max_approx_swaps: the max number of approximately optimal vector swaps,
        None means unlimited (default: None)
    :param max_optimal_swaps: the max number of optimal vector swaps,
        None means unlimited (default: None)
    :param verbose: print some info if True (default: False)
    :return: the sample of n_out vectors having a sum close to the target vector
    """
    def square_distance(v1, v2):
        return sum((e1 - e2) ** 2 for e1, e2 in zip(v1, v2))

    n_vec = len(vects)
    assert n_vec > 0
    assert n_out > 0
    n_rem = n_vec - n_out
    assert n_rem > 0
    output = vects[:n_out]
    remain = vects[n_out:]
    n_col = len(vects[0])
    assert n_col == len(target) > 0
    sumvect = (0,) * n_col
    for outvect in output:
        sumvect = tuple(map(int.__add__, sumvect, outvect))
    sqdist = square_distance(sumvect, target)
    if verbose:
        print(f"sqdist = {sqdist:4} after"
              f" picking the first {n_out} vectors out of {n_vec}")
    if max_approx_swaps is None:
        max_approx_swaps = sqdist
    n_approx_swaps = 0
    while sqdist and n_approx_swaps < max_approx_swaps:
        # find the best vect to subtract (the square distance MAY increase)
        sqdist_0 = None
        index_0 = None
        sumvect_0 = None
        for index in range(n_out):
            tmp_sumvect = tuple(map(int.__sub__, sumvect, output[index]))
            tmp_sqdist = square_distance(tmp_sumvect, target)
            if sqdist_0 is None or sqdist_0 > tmp_sqdist:
                sqdist_0 = tmp_sqdist
                index_0 = index
                sumvect_0 = tmp_sumvect
        # find the best vect to add,
        # but only if there is a net decrease of the square distance
        sqdist_1 = sqdist
        index_1 = None
        sumvect_1 = None
        for index in range(n_rem):
            tmp_sumvect = tuple(map(int.__add__, sumvect_0, remain[index]))
            tmp_sqdist = square_distance(tmp_sumvect, target)
            if sqdist_1 > tmp_sqdist:
                sqdist_1 = tmp_sqdist
                index_1 = index
                sumvect_1 = tmp_sumvect
        if sumvect_1:
            tmp = output[index_0]
            output[index_0] = remain[index_1]
            remain[index_1] = tmp
            sqdist = sqdist_1
            sumvect = sumvect_1
            n_approx_swaps += 1
        else:
            break
    if verbose:
        print(f"sqdist = {sqdist:4} after {n_approx_swaps}"
              f" approximately optimal swap{'s'[n_approx_swaps == 1:]}")
    diffvect = tuple(map(int.__sub__, sumvect, target))
    if max_optimal_swaps is None:
        max_optimal_swaps = sqdist
    n_optimal_swaps = 0
    while sqdist and n_optimal_swaps < max_optimal_swaps:
        # find the best pair to swap,
        # but only if the square distance decreases
        best_sqdist = sqdist
        best_diffvect = diffvect
        best_pair = None
        for i0 in range(n_out):
            tmp_diffvect = tuple(map(int.__sub__, diffvect, output[i0]))
            for i1 in range(n_rem):
                new_diffvect = tuple(map(int.__add__, tmp_diffvect, remain[i1]))
                new_sqdist = sum(d * d for d in new_diffvect)
                if best_sqdist > new_sqdist:
                    best_sqdist = new_sqdist
                    best_diffvect = new_diffvect
                    best_pair = (i0, i1)
        if best_pair:
            tmp = output[best_pair[0]]
            output[best_pair[0]] = remain[best_pair[1]]
            remain[best_pair[1]] = tmp
            sqdist = best_sqdist
            diffvect = best_diffvect
            n_optimal_swaps += 1
        else:
            break
    if verbose:
        print(f"sqdist = {sqdist:4} after {n_optimal_swaps}"
              f" optimal swap{'s'[n_optimal_swaps == 1:]}")
    return output


from random import randrange

C = 30     # number of columns
N = 2000   # total number of vectors
M = 500    # number of output vectors
F = 0.9    # fill factor of the target sum vector
T = int(M * F)   # maximum value + 1 that can appear in the target sum vector
A = 10000  # maximum number of approximately optimal swaps, may be None (∞)
B = 10     # maximum number of optimal swaps, may be None (unlimited)

target = tuple(randrange(T) for _ in range(C))
vects = [tuple(int(randrange(M) < t) for t in target) for _ in range(N)]

sample = get_sample(vects, target, M, A, B, True)
Typical output:
sqdist = 2639 after picking the first 500 vectors out of 2000
sqdist = 9 after 27 approximately optimal swaps
sqdist = 1 after 4 optimal swaps
P.S.: As it stands, this algorithm is not limited to binary input vectors; integer vectors would work too. Intuitively I suspect that the quality of the optimization could suffer, though, and that this algorithm is more appropriate for binary vectors.
P.P.S.: Execution times with your kind of data are probably acceptable with standard CPython, and get better (down to a couple of seconds, almost a factor of 10) with PyPy. To handle bigger sets of data, the algorithm would have to be translated to C or some other language, which should not be difficult at all.

Doubts regarding this pseudocode for the perceptron algorithm

I am trying to implement the perceptron algorithm above. But I have two questions:
Why do we update the w (weight) variable just once? Shouldn't there be a separate w for each x_i? Also, I'm not sure what w = 0d means mathematically in the initialization.
What is the mathematical meaning of
y_i(<x_i, w> + b)
I kind of know what the expression inside the brackets means, but I'm not sure about the y_i(...) part.
(2) You can think of y_i as a function that depends on w, x_i and b.
Let's say, for a simple example, y is a line that separates two different classes. In that case, y can be represented as y = wx + b. Now, if you use
w = 0, x = 1 and b = 0, then y = 0.
For your given algorithm, you need to update your weight w when the output of y is less than or equal to 0.
So, if you look carefully, you are not updating w only once, as the update is inside an if statement which is inside a for loop.
For your algorithm, you will get n outputs y based on n inputs x for each iteration of t. Here i is used for indexing both the input as x_i and the output as y_i.
So, long story short: out of n inputs x, you only need to update w when the output y for the corresponding input x is less than or equal to zero (for each iteration of t).
(1) I have already mentioned that w is not updated just once.
Let's say you know that any output value greater than 0 (>) is the correct answer. So if you get an output which is less than or equal to zero, then there is a mistake in your algorithm and you need to fix it. This is what your algorithm is doing by updating w when the output does not match the desired one.
Here w is represented as a vector, and w = 0d initializes it to the d-dimensional zero vector.
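For concreteness, here is a minimal sketch of the loop that pseudocode describes (my reconstruction, not the exact pseudocode from the question): a single weight vector w starts at zero and is updated every time y_i(<x_i, w> + b) <= 0.
def perceptron(xs, ys, T):
    # xs: list of d-dimensional inputs, ys: labels in {-1, +1}, T: number of passes
    d = len(xs[0])
    w = [0.0] * d   # "w = 0d": the d-dimensional zero vector
    b = 0.0
    for t in range(T):
        for xi, yi in zip(xs, ys):
            # yi * (<xi, w> + b) <= 0 means xi is currently misclassified
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]   # one w, updated many times
                b += yi
    return w, b

# toy usage on a linearly separable set
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [-1, -1, -1, +1]
print(perceptron(xs, ys, 100))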

algorithm to find products of a set of primes, in order, greater than x

Consider the finite set {2,3,5,...,n}. I am interested in primes but the question could apply to any set of numbers. I want to find all possible products of these numbers in ascending order, and in particular greater than or equal to some number x. Does anyone know a nice algorithm for this?
EDIT to clarify:
Each factor in the input set may be used any number of times. If the input were {2,3,5,7} the output would be {2,3,4,5,6,7,8,9,10,12,14,15,16,18,...}. The algorithm can stop as soon as it produces a result greater than or equal to some number x.
A Haskell code, as seen in this answer,
hamm :: [Integer] -> [Integer]
hamm [] = []
hamm (p:ps) = xs    -- e.g. hamm [2,3,5]
  where xs = merge (hamm ps)               -- H({p} ∪ ps) = S,
                   (p : map (p*) xs)       -- S ⊇ {p} ∪ H(ps) ∪ { p*x | x ∊ S }
        merge a@(x:xs) b@(y:ys) | x < y     = x : merge xs b
                                | otherwise = y : merge a ys
        merge [] b = b
        merge a [] = a
merge here doesn't try to eliminate multiples, because there won't be any -- but only in case you're using only the primes in the input:
~> take 20 $ hamm [2,3,5,7]
[2,3,4,5,6,7,8,9,10,12,14,15,16,18,20,21,24,25,27,28]
If not, you need to use union instead,
union a@(x:xs) b@(y:ys) | x < y     = x : union xs b
                        | x > y     = y : union a ys
                        | otherwise = x : union xs ys
union [] b = b
union a [] = a
Starting from (above) a given value efficiently might be an interesting challenge. A directly slice-generating code at the bottom of this answer could be taken as a starting point.
In general it is easy to skip along the ordered sequence until a value is passed over. In Haskell, it is done with a built-in dropWhile (< n),
~> take 10 $ dropWhile (< 100) $ hamm [2,3,5,7]
[100,105,108,112,120,125,126,128,135,140]
(Edit: made it produce all products in ascending order; let users filter them as they wish. This is a generalised Hamming numbers problem)
genHamming :: Integral a => [a] -> [a]
genHamming zs = hmng where
    hmng = 1 : foldr (||) [] [map (z*) hmng | z <- zs]
    [] || ys = ys
    xs || [] = xs
    (x:xs) || (y:ys) | x==y = x : (xs || ys)
                     | x<y  = x : (xs || (y:ys))
                     | y<x  = y : (ys || (x:xs))
Example usage
Prelude Hamming> take 10 $ dropWhile (< 1000) $ genHamming [2,3,5]
[1000,1024,1080,1125,1152,1200,1215,1250,1280,1296]
Prelude Hamming>
You probably also want to include 2^0 * 3^0 * 5^0 * 7^0 = 1 in your output.
The way to do this is with a priority queue. If k is in the sequence, so are 2k, 3k, 5k and 7k. Start your output with 1, then add 2, 3, 5, and 7 to the priority queue. Pop 2 from the top of the queue and add 2*2=4, 2*3=6, 2*5=10 and 2*7=14 to the queue; the queue at that point will contain 3, 4, 5, 6, 7, 10 and 14. Pop 3 from the top of the queue and add 3*2=6, 3*3=9, 3*5=15 and 3*7=21 to the queue. And so on.
You will discover that many elements are duplicated; for instance, we added 6 to the priority queue twice in the example above. You can either add duplicates, and check each time you pop the queue if the element is the same as the prior member of the sequence, or you can keep a separate list of items in the queue and refrain from adding duplicates in the first place.
I discuss a priority queue that contains only distinct elements at my blog.
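A sketch of that priority-queue approach using Python's heapq, keeping a "seen" set so duplicates are never pushed (my own illustration):
import heapq
from itertools import dropwhile, islice

def ascending_products(primes):
    # yields 1, then every product of the given primes in ascending order
    heap, seen = [1], {1}
    while heap:
        k = heapq.heappop(heap)
        yield k
        for p in primes:
            if k * p not in seen:
                seen.add(k * p)
                heapq.heappush(heap, k * p)

# first 10 products of {2,3,5,7} that are >= 100
print(list(islice(dropwhile(lambda v: v < 100, ascending_products([2, 3, 5, 7])), 10)))
# [100, 105, 108, 112, 120, 125, 126, 128, 135, 140]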
Every integer greater than 1 is the product of a 'set of primes' because it is the product of its prime factors. It might be easier to start at your desired minimum number and strike out all numbers that have a prime factor not in your initial set. Continue the process until your result set is large enough. In effect you are doing a modified Sieve of Eratosthenes, removing all multiples of the primes not in your initial set.
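A trial-division variant of that idea (my own sketch, not a literal sieve): starting at the desired minimum, keep only the numbers whose prime factors all lie in the initial set.
def smooth_numbers(primes, start, count):
    out, m = [], max(start, 2)
    while len(out) < count:
        v = m
        for p in primes:
            while v % p == 0:
                v //= p
        if v == 1:               # no prime factor outside the set survived
            out.append(m)
        m += 1
    return out

print(smooth_numbers([2, 3, 5, 7], 100, 10))
# [100, 105, 108, 112, 120, 125, 126, 128, 135, 140]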
Because our application is written in python I came up with the following implementation which I wanted to share:
def powers(x):
    y = x
    while True:
        yield y
        y *= x

def products(factors):
    y0 = factors[0]
    if len(factors) == 1:
        yield from powers(y0)
    else:
        yield y0
        g1 = products(factors)
        y1 = y0 * next(g1)
        g2 = products(factors[1:])
        y2 = next(g2)
        while True:
            if y1 < y2:
                yield y1
                y1 = y0 * next(g1)
            else:
                yield y2
                y2 = next(g2)

if __name__ == "__main__":
    import itertools
    for n in itertools.islice(products([2, 3, 5, 7]), 10**6):
        print(n)
No doubt the use of recursion with generators could be improved upon, but in practice the performance is good enough for our application. Beyond that, I'm still interested in how to start at a given minimum value efficiently, as mentioned in Will Ness' answer. Thanks to all those who contributed.
There are two algorithms that come to mind for this. First, you could calculate all possible products between the numbers, and then sort these. Although this seems to be the naive approach, you could speed it up by 'remembering' the last product, dividing out one number, and multiplying a different number in its place. This would greatly reduce the number of operations necessary, and with the correct ordering of your permutations (look into Gray codes) you could minimize the total number of multiplications.
On the other hand, what you could do is compute the set of all the original numbers, the set of products of pairs of original numbers, the set of products of 3, and so on. Then you can sort each individual set (which shouldn't be hard, as they are nearly sorted anyway), and merge-sort the sets together into one sorted list of products. This would take more operations, but would result in a nearly sorted list at the end, which may take less time to construct overall.
The other algorithm would be to take the product of all the primes of interest, and call this P. Construct another list of all the original primes squared. Now, loop over all numbers up to P, and test if those are divisible by any of the values in the primes-squared array. If they are, toss them. If not, then add them to the output array. You can optimize this by only testing divisibility up to sqrt(i), where i is the iteration in your for loop. This is still likely slower than the above method though.
As every prime factor is allowed to appear many times, the sequence is infinite. So we can't generate all products and then sort them. We have to generate the sequence iteratively.
If a is a member of the sequence, then {2*a, 3*a, 5*a ... n*a} will also be members of the sequence, coming later.
So, the algorithm that comes to my mind is to have a (sorted, duplicate-free) buffer of the next sequence members. We extract and present the smallest value and insert all of its multiples into the buffer.
As it's difficult to predict the buffer content for your starting number x, this algorithm should start from the beginning and ignore the results until they reach the x threshold.

convert real number to radicals

Suppose I have a real number. I want to approximate it with something of the form a+sqrt(b) for integers a and b. But I don't know the values of a and b. Of course I would prefer to get a good approximation with small values of a and b. Let's leave it undefined for now what is meant by "good" and "small". Any sensible definitions of those terms will do.
Is there a sane way to find them? Something like the continued fraction algorithm for finding fractional approximations of decimals. For more on the fractions problem, see here.
EDIT: To clarify, it is an arbitrary real number. All I have are a bunch of its digits. So depending on how good of an approximation we want, a and b might or might not exist. Brute force is naturally not a particularly good algorithm. The best I can think of would be to start adding integers to my real, squaring the result, and seeing if I come close to an integer. Pretty much brute force, and not a particularly good algorithm. But if nothing better exists, that would itself be interesting to know.
EDIT: Obviously b has to be zero or positive. But a could be any integer.
No need for continued fractions; just calculate the square-root of all "small" values of b (up to whatever value you feel is still "small" enough), remove everything before the decimal point, and sort/store them all (along with the b that generated it).
Then when you need to approximate a real number, find the radical whose decimal portion is closest to the real number's decimal portion. This gives you b - choosing the correct a is then a simple matter of subtraction.
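A sketch of that table lookup (my own illustration; the cut-off of 99 for "small" b is an arbitrary choice):
import math

def build_table(b_max):
    # fractional part of sqrt(b) for every small b, skipping perfect squares
    table = []
    for b in range(2, b_max + 1):
        s = math.sqrt(b)
        if int(s) ** 2 != b:
            table.append((s - math.floor(s), b))
    table.sort()
    return table

def approximate(r, table):
    frac = r - math.floor(r)
    # the closest stored fractional part gives b; a is then a simple subtraction
    _, b = min(table, key=lambda e: abs(e[0] - frac))
    return round(r - math.sqrt(b)), b

print(approximate(math.pi, build_table(99)))   # (-4, 51): sqrt(51) - 4 ≈ 3.14143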
This is actually more of a math problem than a computer problem, but to answer the question I think you are right that you can use continued fractions. What you do is first represent the target number as a continued fraction. For example, if you want to approximate pi (3.14159265) then the CF is:
3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, ...
The next step is to create a table of CFs for square roots, then you compare the values in the table to the fractional part of the target value (here: 7, 15, 1, 292, 1, 1, 1, 2, ...). For example, let's say your table had square roots for 1-99 only. Then you would find the closest match is sqrt(51), which has a CF of 7; 7, 14 repeating. The 7, 14 is the closest to pi's 7, 15. Thus your answer would be:
sqrt(51)-4
This is the closest approximation for b < 100, and it is off by about 0.00016. If you allow larger b's then you can get a better approximation.
The advantage of using CFs is that it is faster than working in, say, doubles or using floating point. For example, in the above case you only have to compare two integers (7 and 15), and you can also use indexing to make finding the closest entry in the table very fast.
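For reference, a quick way to eyeball those CF terms in Python (my own sketch; floating point limits how many terms are trustworthy):
import math

def cf_terms(x, k):
    # first k continued-fraction terms of x (only the leading terms are reliable in floating point)
    terms = []
    for _ in range(k):
        a = math.floor(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:
            break
        x = 1 / frac
    return terms

print(cf_terms(math.pi, 6))        # [3, 7, 15, 1, 292, 1]
print(cf_terms(math.sqrt(51), 6))  # [7, 7, 14, 7, 14, 7]  (7, 14 repeating)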
This can be done using mixed integer quadratic programming very efficiently (though there are no run-time guarantees as MIQP is NP-complete.)
Define:
d := the real number you wish to approximate
b, a := two integers such that a + sqrt(b) is as "close" to d as possible
r := (d - a)^2 - b, is the residual of the approximation
The goal is to minimize r. Setup your quadratic program as:
x := [ s b t ]
D := | 1 0 0 |
| 0 0 0 |
| 0 0 0 |
c := [0 -1 0]^T
with the constraint that s - t = f (where f is the fractional part of d)
and b,t are integers (s is not)
This is a convex (therefore optimally solvable) mixed integer quadratic program since D is positive semi-definite.
Once s, b, t are computed, simply derive the answer: b is already the b you want, and a = d - s (t can be ignored).
Your problem may be NP-complete; it would be interesting to prove whether it is.
Some of the previous answers use methods that are of time or space complexity O(n), where n is the largest “small number” that will be accepted. By contrast, the following method is O(sqrt(n)) in time, and O(1) in space.
Suppose that positive real number r = x + y, where x=floor(r) and 0 ≤ y < 1. We want to approximate r by a number of the form a + √b. If x+y ≈ a+√b then x+y-a ≈ √b, so √b ≈ h+y for some integer offset h, and b ≈ (h+y)^2. To make b an integer, we want to minimize the fractional part of (h+y)^2 over all eligible h. There are at most √n eligible values of h. See following python code and sample output.
import math, random

def findb(y, rhi):
    bestb = loerror = 1
    for r in range(2, rhi):
        v = (r + y)**2
        u = round(v)
        err = abs(v - u)
        if round(math.sqrt(u))**2 == u: continue
        if err < loerror:
            bestb, loerror = u, err
    return bestb

#random.seed(123456) # set a seed if testing repetitively
f = [math.pi - 3] + sorted([random.random() for i in range(24)])
print ('     frac   sqrt(b)       error     b')
for frac in f:
    b = findb(frac, 12)
    r = math.sqrt(b)
    t = math.modf(r)[0]   # Get fractional part of sqrt(b)
    print ('{:9.5f} {:9.5f} {:11.7f} {:5.0f}'.format(frac, r, t - frac, b))
(Note 1: This code is in demo form; the parameters to findb() are y, the fractional part of r, and rhi, the square root of the largest small number. You may wish to change usage of parameters. Note 2: The
if round(math.sqrt(u))**2 == u: continue
line of code prevents findb() from returning perfect-square values of b, except for the value b=1, because no perfect square can improve upon the accuracy offered by b=1.)
Sample output follows. About a dozen lines have been elided in the middle. The first output line shows that this procedure yields b=51 to represent the fractional part of pi, which is the same value reported in some other answers.
frac sqrt(b) error b
0.14159 7.14143 -0.0001642 51
0.11975 4.12311 0.0033593 17
0.12230 4.12311 0.0008085 17
0.22150 9.21954 -0.0019586 85
0.22681 11.22497 -0.0018377 126
0.25946 2.23607 -0.0233893 5
0.30024 5.29150 -0.0087362 28
0.36772 8.36660 -0.0011170 70
0.42452 8.42615 0.0016309 71
...
0.93086 6.92820 -0.0026609 48
0.94677 8.94427 -0.0024960 80
0.96549 11.95826 -0.0072333 143
0.97693 11.95826 -0.0186723 143
With the following code added at the end of the program, the output shown below also appears. This shows closer approximations for the fractional part of pi.
frac, rhi = math.pi - 3, 16
print ('       frac     sqrt(b)         error       b    bMax')
while rhi < 1000:
    b = findb(frac, rhi)
    r = math.sqrt(b)
    t = math.modf(r)[0]   # Get fractional part of sqrt(b)
    print ('{:11.7f} {:11.7f} {:13.9f} {:7.0f} {:7.0f}'.format(frac, r, t - frac, b, rhi**2))
    rhi = 3 * rhi // 2    # keep rhi an integer so range(2, rhi) stays valid
frac sqrt(b) error b bMax
0.1415927 7.1414284 -0.000164225 51 256
0.1415927 7.1414284 -0.000164225 51 576
0.1415927 7.1414284 -0.000164225 51 1296
0.1415927 7.1414284 -0.000164225 51 2916
0.1415927 7.1414284 -0.000164225 51 6561
0.1415927 120.1415831 -0.000009511 14434 14641
0.1415927 120.1415831 -0.000009511 14434 32761
0.1415927 233.1415879 -0.000004772 54355 73441
0.1415927 346.1415895 -0.000003127 119814 164836
0.1415927 572.1415909 -0.000001786 327346 370881
0.1415927 911.1415916 -0.000001023 830179 833569
I do not know if there is any kind of standard algorithm for this kind of problem, but it does intrigue me, so here is my attempt at developing an algorithm that finds the needed approximation.
Call the real number in question r. First, I assume that a can be negative; in that case we can reduce the problem so that we only have to find a b such that the decimal part of sqrt(b) is a good approximation of the decimal part of r. Let us now write r as r = x.y, with x being the integer part and y the decimal part.
Now:
b = r^2
  = (x.y)^2
  = (x + .y)^2
  = x^2 + 2 * x * .y + .y^2
  = 2 * x * .y + .y^2   (mod 1)
We now only have to find an x such that 0 = .y^2 + 2 * x * .y (mod 1) (approximately).
Filling that x into the formulas above we get b, and we can then calculate a as a = r - sqrt(b). (All of these calculations have to be carefully rounded, of course.)
Now, for the time being I am not sure if there is a way to find this x without brute forcing it. But even then, one can simple use a simple loop to find an x good enough.
I am thinking of something like this (semi-pseudocode, here written as runnable Python; r is the real number to approximate):
max_diff_low = 0.01   # arbitrary accuracy
max_diff_high = 1 - max_diff_low
y = r % 1
v = y ** 2
addend = 2 * y
x = 0
while max_diff_low < v < max_diff_high:
    x += 1
    v = (v + addend) % 1
c = (x + y) ** 2
b = round(c)
a = round(r - b ** 0.5)   # r ≈ a + sqrt(b)
Now, I think this algorithm is fairly efficient, while even allowing you to specify the desired accuracy of the approximation. One thing that could be done to turn it into an O(1) algorithm is calculating all the x values and putting them into a lookup table. If one only cares about the first three decimal digits of r (for example), the lookup table would only have 1000 values, which is only 4 KB of memory (assuming 32-bit integers are used).
Hope this is helpful at all. If anyone finds anything wrong with the algorithm, please let me know in a comment and I will fix it.
EDIT:
Upon reflection I retract my claim of efficiency. There is in fact as far as I can tell no guarantee that the algorithm as outlined above will ever terminate, and even if it does, it might take a long time to find a very large x that solves the equation adequately.
One could maybe keep track of the best x found so far and relax the accuracy bounds over time to make sure the algorithm terminates quickly, at the possible cost of accuracy.
These problems are of course non-existent, if one simply pre-calculates a lookup table.
