[Related to https://codegolf.stackexchange.com/questions/12664/implement-superoptimizer-for-addition from Sep 27, 2013]
I am interested in how to write superoptimizers, in particular to find small logical formulae for sums of bits. This was previously set as a challenge on codegolf, but it seems a lot harder than one might imagine.
I would like to write code that finds the smallest possible propositional logical formula to check if the sum of y binary 0/1 variables equals some value x. Let us call the variables x1, x2, x3, x4 etc. In the simplest approach the logical formula should be equivalent to the sum. That is, the logical formula should be true if and only if the sum equals x.
Here is a naive way to do that. Say y = 15 and x = 5. Pick each of the 3003 different ways of choosing 5 of the 15 variables, and for each make a new clause that is the AND of those 5 variables ANDed with the negations of the remaining 10. You end up with 3003 clauses, each of length exactly 15, for a total cost of 45045.
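A minimal sketch of that construction in Python (formulas built as plain strings; the variable naming follows the question):

def naive_sum_formula(n, k):
    """DNF that is true iff exactly k of x1..xn are true: C(n, k) conjunctions."""
    from itertools import combinations
    clauses = []
    for on in map(set, combinations(range(1, n + 1), k)):
        lits = [f"x{i}" if i in on else f"(not x{i})" for i in range(1, n + 1)]
        clauses.append("(" + " and ".join(lits) + ")")
    return " or ".join(clauses)

# naive_sum_formula(15, 5) has 3003 clauses of 15 literals each.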
However, if you are allowed to introduce new variables into your solution then you can potentially reduce this a lot by eliminating common subformulae. So in this case your logical formula consists of the y binary variables, x and some new variables. The whole formula would be satisfiable if and only if the sum of the y variables equals x. The only allowed operators are and, or and not.
It turns out there is a clever method for solving this problem when x = 1, at least in theory. However, I am looking for a computationally intensive method to search for small solutions.
How can you make a superoptimizer for this problem?
Examples: take two variables where we want a logical formula that is true exactly when they sum to 1. One possible answer is:
(((not y0) and (y1)) or ((y0) and (not y1)))
To introduce a new variable such as z0 to represent (y0 and (not y1)), we can add a new clause ((y0 and (not y1)) or (not z0)) and replace (y0 and (not y1)) by z0 throughout the rest of the formula. Of course this is pointless in this example, as it makes the expression longer.
Write your desired sum in binary. First look at the least significant bit, y0. Clearly,
x1 xor x2 xor ... xor xn = y0 (where a xor b abbreviates ((a and (not b)) or ((not a) and b))); that's your first formula. The final formula will be a conjunction of formulae, one for each bit of the desired sum.
Now, do you know how an adder is implemented? http://en.wikipedia.org/wiki/Adder_(electronics). Take inspiration from it: group your input into pairs/triples of bits, calculate the carry bits, and use them to build formulae for y1...yk, as in the sketch below. If you need further hints, let me know.
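Here is a minimal sketch of that idea in Python, with formulas as strings; xor stands for its and/or/not expansion as above, and in a real encoding you would introduce a fresh variable for each intermediate formula (as the question describes) to share subformulae:

def half_adder(a, b):
    # Sum bit and carry bit of adding two single bits.
    s = f"(({a}) xor ({b}))"
    c = f"(({a}) and ({b}))"
    return s, c

def sum_bits(xs):
    """Add the 0/1 variables one at a time into a running binary counter.
    Returns formula strings for the sum, least significant bit first."""
    total = []
    for x in xs:
        carry, out = x, []
        for bit in total:
            s, carry = half_adder(bit, carry)  # ripple the new bit upward
            out.append(s)
        out.append(carry)                      # counter may grow by one bit
        total = out
    return total

bits = sum_bits([f"x{i}" for i in range(1, 5)])  # formulas for y0, y1, y2, ...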
If I understand what you're asking, you'll want to look into the general topics of logic minimization and/or Boolean function simplification. The references are mostly about general methods for eliminating redundancy in Boolean formulas that are disjunctions ("or"s) of terms that are conjunctions ("and"s).
By hand, the standard method is called a Karnaugh map. The equivalent algorithm, expressed in a way that's more amenable to computer implementation, is Quine-McCluskey (also called the method of prime implicants). The minimization problem is NP-hard, and QM solves it exactly.
Therefore I think QM is what you want for the "super-optimizer" you're trying to build.
But the combination of NP-hardness and exact solution means that QM is impractical for all but small problems.
The QM algorithm lays out the conjunctive terms (called minterms in this context) in a table and searches for pairs of terms that differ in a single bit. Such pairs can be combined, with the differing bit marked "don't care" in further combinations. This is repeated with 2-bit, 4-bit, etc. subsets of bits. The exponential behavior arises because combining larger bit sets involves choices: choosing one combination rules out another. It is therefore essentially a search problem.
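As a minimal sketch of one merge pass, assuming minterms are represented as strings over '0', '1', and '-' (don't care); repeating until nothing merges yields the prime implicants, after which a separate set-cover step selects a minimum subset:

from itertools import combinations

def combine_pass(terms):
    """One Quine-McCluskey merge pass. Two terms merge when they differ
    in exactly one position, and neither has '-' at that position."""
    merged, used = set(), set()
    for a, b in combinations(terms, 2):
        diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
        if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
            merged.add(a[:diff[0]] + '-' + a[diff[0] + 1:])
            used.update((a, b))
    prime = set(terms) - used  # terms that merged with nothing are prime implicants
    return merged, prime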
There is an enormous literature on heuristics that trim the search space while still finding "good" solutions that aren't necessarily optimal. A famous one is Espresso. However, since algorithm improvements translate directly to dollars in semiconductor manufacturing, it's entirely possible that the best are proprietary and closely held.
Superposition calculus is used for reasoning with equations; it reduces the size of the search space by applying an order to equations, based on an ordering of terms.
A suitable ordering of terms, such as Knuth-Bendix, must sometimes answer 'unordered'. For example, consider f(x) vs f(y), where x and y are variables. A suitable order must be stable under substitution of terms for variables, so whichever answer you gave for f(x) vs f(y) (less, equal, greater), some substitution of terms for the two variables would be inconsistent with that answer. In this domain, comparison therefore needs a fourth possible answer, 'unordered'.
Superposition calculus orders equations relative to each other, based on the constituent terms and the polarity. There are ways of constructing this based on the multiset extension of term order, but perhaps the simplest correct algorithm is:
Compare the larger terms; if they are unequal, that's the answer.
Compare the polarities; if they are unequal, negative is greater than positive.
Compare the smaller terms.
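A direct sketch of those three steps in Python (the names Ord and cmp_term are illustrative, not taken from any particular prover); note that it orients each equation by its larger side first, which is exactly the step that can fail, as discussed next:

from enum import Enum

class Ord(Enum):
    LESS, EQUAL, GREATER, UNORDERED = range(4)

def compare_eqs(eq1, eq2, cmp_term):
    """Compare two equations, each a (term, term, is_positive) triple,
    ASSUMING each equation has a strictly larger side under cmp_term."""
    (s1, t1, pos1), (s2, t2, pos2) = eq1, eq2
    # Orient larger side first (silently wrong if the sides are unordered).
    if cmp_term(s1, t1) is Ord.LESS:
        s1, t1 = t1, s1
    if cmp_term(s2, t2) is Ord.LESS:
        s2, t2 = t2, s2
    c = cmp_term(s1, s2)               # 1. compare the larger terms
    if c is not Ord.EQUAL:
        return c
    if pos1 != pos2:                   # 2. negative is greater than positive
        return Ord.GREATER if not pos1 else Ord.LESS
    return cmp_term(t1, t2)            # 3. compare the smaller terms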
It is tempting to implement this by first sorting each equation, larger term first, then implementing the above algorithm directly. The problem is that an equation may not have a larger term; it is quite possible that the component terms of one or both equations are unordered relative to each other, so a correct algorithm for comparing equations must take that into account.
This could be derived from first principles by going through all the possibilities, but it also looks like there would be many opportunities to make a subtle error that would take a while to track down.
Is there a known/canonical algorithm already worked out, for comparing equations in this context?
Many papers use SAT, but few mention how to convert an addition to CNF.
Since CNF only allows the AND, OR, and NOT operations, it is difficult to describe an addition. For example:
x1 + x2 + x3 + ... + x1600 < 30, where each xi is binary.
Map these equations into a Boolean circuit.
Apply Tseitin's transformation to the circuit and convert it into DIMACS format.
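For a sense of what Tseitin's transformation produces, here is an illustrative sketch for single gates, with variables as DIMACS-style signed integers; each gate gets a fresh variable and clauses asserting that the variable equals the gate's output:

def tseitin_and(a, b, c):
    """Clauses asserting c <-> (a AND b)."""
    return [[-a, -b, c], [a, -c], [b, -c]]

def tseitin_or(a, b, c):
    """Clauses asserting c <-> (a OR b)."""
    return [[a, b, -c], [-a, c], [-b, c]]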
But is there any way to read back the results? I think the results can be read back if we define all the variables ourselves, so figuring out how to convert a linear constraint into a SAT problem is necessary.
If there are only 3 or 4 variables, e.g. x1 + x2 + x3 < 3, we can use a truth table to do the conversion. A direct way would be to choose 29 (or any number smaller than 30) of the 1600 variables to be 1 and the others to be 0, but there are far too many possibilities for that to be practical.
I have used STP, but it can only give one answer, and as the number of variables and clauses grows, it takes a long time to run.
So I tried using a SAT solver on the CNF produced by STP; it gives answers within minutes, but I cannot map the results back to the original variables.
In the end, I found some papers that may be helpful:
1. Encoding Linear Constraints with Implication Chains to CNF
2. SAT-Based Techniques for Integer Linear Constraints
What you're describing is known as a cardinality constraint. There are many ways of encoding these in CNF. As a starting point, some of these encodings are explained in
http://www.carstensinz.de/papers/CP-2005.pdf and
https://arxiv.org/pdf/1012.3853.pdf.
Many are implemented in the PySAT Python toolkit:
https://pysathq.github.io/docs/html/api/card.html
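As a minimal sketch, assuming PySAT is installed (pip install python-sat), the constraint from the question, x1 + ... + x1600 < 30, becomes an at-most-29 constraint:

from pysat.card import CardEnc, EncType
from pysat.solvers import Solver

n, bound = 1600, 29
lits = list(range(1, n + 1))                # x1..x1600 as DIMACS integers
cnf = CardEnc.atmost(lits=lits, bound=bound,
                     encoding=EncType.seqcounter)  # sequential-counter encoding

with Solver(bootstrap_with=cnf.clauses) as s:
    if s.solve():
        model = s.get_model()
        ones = [v for v in model if 0 < v <= n]  # original variables set to 1
        print(len(ones), "variables set to 1")   # guaranteed <= 29

Reading the results back is straightforward here because the first n variables of the model are the original variables; the encoding's auxiliary variables all have higher indices.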
I am doing a project on different matching algorithms, and this one I can't quite understand: can one really get pairs of corresponding features between the train and test images, or does it just give a degree of similarity between two images without an exact matching? There are pictures in the article claiming some "partial matching", but is it a real matching or not?
Here is a summary based mostly on remembering a paper in CACM, with a few quick looks at http://userweb.cs.utexas.edu/%7Egrauman/papers/grauman_cacm_extended.pdf
Given sets of points X_i and Y_i representing features, you can produce a distance as SUM_i d(X_i, Y_p(i)), where p is a one-to-one matching of indices chosen to minimize that sum. You can find the optimal p with the Hungarian algorithm, but this is expensive.
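A sketch of the exact version, assuming SciPy is available; linear_sum_assignment implements the optimal one-to-one assignment:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

X = np.random.rand(50, 8)                  # 50 features of dimension 8
Y = np.random.rand(50, 8)
cost = cdist(X, Y)                         # pairwise distances d(X_i, Y_j)
rows, cols = linear_sum_assignment(cost)   # the optimal matching p
exact_dist = cost[rows, cols].sum()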
The paper shows that you can approximate this distance much more cheaply. The approximation does not provide a p for the original problem, but you could (I think) view it as solving the matching problem for a simplified distance function f(X_i, Y_q(i)), where f(X, Y) only cares about whether X and Y fall into the same histogram bin at some granularity and, if so, which granularity that is. The algorithm does not produce an explicit q, but I suspect you could recover one fairly easily if you wanted to, by pairing up points that fall into the same bin. If you did so, I suspect it wouldn't do too badly under the original distance function d(X, Y), but I don't know what "not too badly" means here.
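As an illustrative sketch of the multi-resolution idea (simplified here to scalar features; the real method uses multi-dimensional bins and the paper's exact weighting):

import numpy as np

def pyramid_match(x, y, d_max, levels):
    """Approximate partial-match similarity for scalar feature sets in [0, d_max)."""
    k, prev = 0.0, 0.0
    for i in range(levels):
        width = 2 ** i                    # bin width doubles at each level
        bins = max(1, int(np.ceil(d_max / width)))
        hx, _ = np.histogram(x, bins=bins, range=(0, d_max))
        hy, _ = np.histogram(y, bins=bins, range=(0, d_max))
        inter = np.minimum(hx, hy).sum()  # histogram intersection at this level
        k += (inter - prev) / width       # new matches, weighted by 1 / bin width
        prev = inter
    return k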
The function also has other nice properties, so that it plays well with Support Vector Machines, and fast approximate search algorithms.
Say the input will always be the same number N of numbers (e.g., 5), and assume the integers actually have a mathematical relation (no trick sequences like the letter counts of 'one', 'two', ..., or the days in the nth month). The output would be either the next integer and the rule discovered, or a message that no rule could be detected.
I was thinking of having, in order: a module that tries to find arithmetic sequence rules by taking sums and/or differences between numbers that are adjacent, one apart, two apart, etc., looking for patterns; then a module focused on geometric sequences, multiplying and/or dividing in the same way; and then, if there is a general approach, a module for detecting recursive sequences.
Thanks!
The On-Line Encyclopedia of Integer Sequences solves precisely this problem :-)
Given any sequence of numbers, we can come up with a formula which 'fits'!
Given a1, a2, ..., an
All you need to do is find a polynomial of degree n-1 (using polynomial interpolation) so that
P(i) = ai
and that's it, you have a formula. Polynomial interpolation can be as easy as solving a matrix equation Ax = b (with A being a Vandermonde matrix).
Check out: http://en.wikipedia.org/wiki/Polynomial_interpolation
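A minimal sketch of this in Python with NumPy, solving the Vandermonde system directly:

import numpy as np

def next_term(seq):
    """Fit the unique degree n-1 polynomial with P(i) = seq[i-1], then evaluate P(n+1)."""
    n = len(seq)
    i = np.arange(1, n + 1)
    A = np.vander(i, n)                      # Vandermonde matrix, highest power first
    c = np.linalg.solve(A, np.array(seq, dtype=float))
    return np.polyval(c, n + 1)

print(next_term([1, 4, 9, 16, 25]))          # 36.0: the squares are recovered exactly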
That is one of the reasons I find these 'guess the next number' problems a bit silly (read: pathetic IQ tests). Not everyone thinks the same way.
Does anyone have experience with algorithms for evaluating hypergeometric functions? I would be interested in general references, but I'll describe my particular problem in case someone has dealt with it.
My specific problem is evaluating a function of the form 3F2(a, b, 1; c, d; 1) where a, b, c, and d are all positive reals and c+d > a+b+1. There are many special cases that have a closed-form formula, but as far as I know there are no such formulas in general. The power series centered at zero converges at 1, but very slowly; the ratio of consecutive coefficients goes to 1 in the limit. Maybe something like Aitken acceleration would help?
I tested Aitken acceleration and it does not seem to help for this problem (nor does Richardson extrapolation). This probably means Pade approximation doesn't work either. I might have done something wrong though, so by all means try it for yourself.
I can think of two approaches.
One is to evaluate the series at some point such as z = 0.5 where convergence is rapid to get an initial value and then step forward to z = 1 by plugging the hypergeometric differential equation into an ODE solver. I don't know how well this works in practice; it might not, due to z = 1 being a singularity (if I recall correctly).
The second is to use the definition of 3F2 in terms of the Meijer G-function. The contour integral defining the Meijer G-function can be evaluated numerically by applying Gaussian or doubly-exponential quadrature to segments of the contour. This is not terribly efficient, but it should work, and it should scale to relatively high precision.
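As a quick illustration of both a built-in evaluation and the slow direct series, here is a sketch assuming mpmath (the parameter values are arbitrary, chosen to satisfy c + d > a + b + 1):

from mpmath import mp, hyp3f2, rf, factorial

mp.dps = 30
a, b, c, d = 1.5, 2.0, 4.5, 1.25      # c + d > a + b + 1, so the series converges at 1

print(hyp3f2(a, b, 1, c, d, 1))       # mpmath's built-in evaluation at z = 1

# Naive partial sums of the defining series at z = 1 converge only slowly:
s = mp.mpf(0)
for n in range(2000):
    s += rf(a, n) * rf(b, n) * rf(1, n) / (rf(c, n) * rf(d, n) * factorial(n))
print(s)                              # still noticeably short of the value above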
Is it correct that you want to sum a series where the ratio of successive terms is known and is a rational function?
I think Gosper's algorithm and the rest of the tools for proving (and finding) hypergeometric identities do exactly this, right? (See Wilf and Zeilberger's A=B book online.)