Z3 taking too long for a non-linear real formula - z3py

Why does Z3 take an unreasonably long time to solve the following non-linear real formula? It solves the formula instantly if the comparison in the first constraint is changed from < 0 to > 0.
The formula:
s.add(0.9993612667 + 0.0014*x**2 + 0.0014*y**2 + 0.0014*z**2
      - 0.0023*x**2*y**2 - 0.0023*x**2*z**2 - 0.0023*y**2*z**2
      + 0.0010*x**4*y**2 + 0.0011*x**2*y**4 + 0.0011*x**3*y**2*z
      + 0.0011*x**4*z**2 + 0.0034*x**2*y**2*z**2 + 0.0011*x*y**3*z**2
      + 0.0010*y**4*z**2 + 0.0011*x**2*y*z**3 + 0.0010*x**2*z**4
      + 0.0011*y**2*z**4 < 0,
      x >= -1, x <= -0.1, y >= -1, y <= -0.1, z >= -1, z <= -0.1)

OK, I have sorted out the problem by running an iterative loop over the ranges of x, y, and z. That is, instead of asking Z3 to solve the problem over the whole ranges of x, y, and z at once, I partitioned those ranges and iteratively checked the sat/unsat of the inequality on each sub-range.
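For concreteness, here is a minimal z3py sketch of that partitioning idea. The step size, the loop structure, and the abbreviated polynomial are illustrative only and not taken from the original post:

from z3 import Real, Solver, sat

x, y, z = Real('x'), Real('y'), Real('z')

def poly(x, y, z):
    # stand-in for the full degree-6 polynomial from the question
    return (0.9993612667 + 0.0014*x**2 + 0.0014*y**2 + 0.0014*z**2
            - 0.0023*x**2*y**2 - 0.0023*x**2*z**2 - 0.0023*y**2*z**2)  # ...remaining terms omitted

step = 0.1          # width of each sub-range
any_sat = False
a = -1.0
while a < -0.1:
    b = -1.0
    while b < -0.1:
        c = -1.0
        while c < -0.1:
            s = Solver()
            s.add(poly(x, y, z) < 0,
                  x >= a, x <= a + step,
                  y >= b, y <= b + step,
                  z >= c, z <= c + step)
            if s.check() == sat:   # one satisfiable box is enough
                any_sat = True
            c += step
        b += step
    a += step

print(any_sat)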

Related

Combinatoric Vector Addition Minimization Problem

I'm working on a problem, and it feels like it might be analogous to an existing problem in mathematical programming, but I'm having trouble finding any such problem.
The problem goes like this: We have n sets of d-dimensional vectors, such that each set contains exactly d+1 vectors. Within each set, all vectors have the same length (furthermore, the angle between any two vectors in a set is the same for every set, but I'm not sure whether this is relevant). We then need to choose exactly one vector out of every set and compute the sum of these vectors. Our objective is to make our choices so that the length (norm) of that sum is minimized.
It feels like the problem is sort of related to the Shortest Vector Problem, or a variant of job scheduling, where scheduling a job on a machine affects all machines, or a partition problem.
Does this problem ring a bell? Specifically, I'm looking for research into solving this problem, as currently my best bet is using an ILP, but I feel there must be something more clever that can be done.
I think this is an MIQP (Mixed Integer Quadratic Programming) or MISOCP (mixed integer second-order cone) problem:
Let
v(i,j): vector i in group j (data)
x(i,j) in {0,1}: binary decision variables
w: sum of the selected vectors (decision variable)
Then the problem can be stated as:
min  ||w||
subject to
  sum(i, x(i,j)) = 1   for all j
  w = sum((i,j), x(i,j)*v(i,j))
If you want, you can substitute out w. Note that I don't use your angle restriction (it is a property of the data, not of the decision variables of the model). The x variables are constrained so that exactly one vector is selected from each group.
Minimizing the 2-norm can be replaced by minimizing the sum of the squares of the elements (i.e. minimizing the square of the norm).
Assuming the 2-norm, this is a MISOCP problem or a convex MIQP problem, for which quite a few solvers are available. For the 1-norm and the infinity-norm we can formulate a linear MIP problem; MIP solvers are widely available.
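As a rough illustration (not part of the answer above), here is how the 2-norm version could be written with cvxpy on random data. The data shapes and the choice of SCIP as the mixed-integer backend are assumptions; any installed MIP-capable solver would do:

import numpy as np
import cvxpy as cp

d, n = 3, 5                                 # dimension, number of groups
rng = np.random.default_rng(0)
v = rng.normal(size=(n, d + 1, d))          # v[j, i] = vector i in group j

x = cp.Variable((n, d + 1), boolean=True)   # x[j, i] = 1 if vector i is chosen from group j
w = cp.Variable(d)                          # sum of the selected vectors

constraints = [cp.sum(x, axis=1) == 1]      # pick exactly one vector per group
constraints += [w == sum(v[j].T @ x[j] for j in range(n))]

prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)
prob.solve(solver=cp.SCIP)                  # any mixed-integer-capable solver works here
print(prob.value)
print(np.argmax(x.value, axis=1))           # index of the chosen vector in each group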

Is there an algorithm for determining the smallest set of solvable linear equations

I have a set of N (N is very large) linear equations with W variables.
For efficiency's sake, I need to find the smallest set of linear equations that is solvable (has a unique solution). It can be assumed that a set of X equations containing Y variables has a unique solution when X == Y.
For example, if I have the following as input:
2a = b - c
a = 0.5b
b = 2 + a
I want to return the equation set:
a = 0.5b
b = 2 + a
Currently, I have an implementation that uses some heuristics. I create a matrix whose columns are variables and whose rows are equations. I search the matrix for a set of fully connected equations, and then try removing equations one by one to see whether the remaining set is still solvable; if it is, I continue, and if not, I return that set of equations.
Is there a known algorithm for this, or am I trying to reinvent the wheel?
Does anyone have input on how to better approach this?
Thanks.
Short answer is "yes", there are known algorithms. For example, you could add a single equation and then compute the rank of the matrix. Then add the next equation and compute the rank. If it hasn't gone up that new equation isn't helping any and you can get rid of it. Once the rank == the number of variables you have a unique solution and you're done. There are libraries (e.g. Colt, JAMA, la4j, etc.) that will do this for you.
Longer answer is that this is surprisingly difficult to do correctly, especially if your matrix gets big. You end up with lots of numerical stability issues and so on. I'm not a numerical linear algebra expert but I know enough to know there are dragons here if you're not careful. Having said that, if your matrices are small and "well conditioned" (the rows/columns aren't almost parallel) then you should be in good shape. It depends on your application.
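A minimal numpy sketch of that greedy rank check (the function name and the small example are mine, purely illustrative; matrix_rank uses an SVD under the hood, so the conditioning caveats above still apply):

import numpy as np

def minimal_system(A, b):
    """A: (N, W) coefficient matrix, b: (N,) right-hand side."""
    kept_rows, kept_rhs = [], []
    rank = 0
    for row, rhs in zip(A, b):
        candidate = np.vstack(kept_rows + [row])
        new_rank = np.linalg.matrix_rank(candidate)
        if new_rank > rank:                 # equation adds new information, keep it
            kept_rows.append(row)
            kept_rhs.append(rhs)
            rank = new_rank
        if rank == A.shape[1]:              # rank == number of variables: unique solution
            break
    return np.array(kept_rows), np.array(kept_rhs)

A = np.array([[1.0, 1.0],
              [2.0, 2.0],                   # redundant row: gets dropped
              [1.0, -1.0]])
b = np.array([3.0, 6.0, 1.0])
M, rhs = minimal_system(A, b)
print(M, rhs, np.linalg.solve(M, rhs))      # solves the reduced 2x2 system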

incremental least squares differing with only one row

I have to solve multiple least-squares problems sequentially, that is, one by one. Each least-squares problem differs from the previous one by only one row; the right-hand side is the same for all of them. For example, Problem 1: ||Ax - b|| and Problem 2: ||Cy - b||, where C and A differ by only one row. That is, it is equivalent to deleting a row from A and inserting a new row into A. When solving Problem 2, I also have x. Is there a fast way to solve for y in Problem 2?
You can use the Sherman-Morrison formula.
The key piece of the linear regression solution is computing the inverse of A'A.
If b is the row removed from A and a is the new row in C, then
C'C = A'A - bb' + aa',
i.e. a pair of rank-one updates. Applying the Sherman-Morrison formula twice (or the Woodbury identity once, treating it as a single rank-two update) gives (C'C)^{-1} from (A'A)^{-1}.
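A small numpy sketch of those two rank-one updates (variable names are mine; Ainv stands for (A'A)^{-1}, a is the new row, b is the replaced row, and the intermediate matrices are assumed to stay non-singular):

import numpy as np

def sherman_morrison(Minv, u, v):
    """Return (M + u v')^{-1} given Minv = M^{-1}."""
    Mu = Minv @ u
    vM = v @ Minv
    return Minv - np.outer(Mu, vM) / (1.0 + v @ Mu)

def updated_inverse(Ainv, a, b):
    """(C'C)^{-1} where C'C = A'A + a a' - b b'."""
    tmp = sherman_morrison(Ainv, a, a)      # add the new row a
    return sherman_morrison(tmp, -b, b)     # remove the old row b

# quick consistency check against a direct inverse
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 4))
a, b = rng.normal(size=4), A[3].copy()
C = A.copy(); C[3] = a
Ainv = np.linalg.inv(A.T @ A)
print(np.allclose(updated_inverse(Ainv, a, b), np.linalg.inv(C.T @ C)))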
Unfortunately the answer may be NO...
Changing one row of a matrix can lead to a completely different spectrum: all the eigenvalues and eigenvectors may change in both magnitude and orientation. As a result, the gradient from Problem 1 does not carry over to Problem 2. You can use your x from Problem 1 as an initial guess for y in Problem 2, but that is not guaranteed to reduce the search time of the optimization.
Still, solving a linear system is not that hard with the powerful packages available. You can use an LU decomposition or a QR decomposition to improve the computational efficiency considerably.

Lookup table and dynamic programming

In the old games era, we used to have look-up tables of pre-computed values of sin, cos, etc., because computing those values on the old CPUs was slow.
Is that considered a dynamic programming technique? Or must dynamic programming solve a recursively defined function, or something along those lines?
Update:
In dynamic programming the key is to have a memoization table, which is exactly what the sin/cos look-up table is, so what is really the difference in technique?
I'd say that, from what I see in your question, no, it's not dynamic programming. Dynamic programming is about solving a problem by solving smaller subproblems and having a way to build the solution of the whole problem from the solutions of those subproblems.
Your situation looks more like memoization.
For me it could be considered DP if your problem were to compute cos N and you had a formula to calculate cos i from the array of cos 0, cos 1, ..., cos(i-1), so that you calculate cos 1, sin 1 and then run your calculation for i from 0 to N (a quick sketch of what I mean is below).
Maybe somebody will correct me :)
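A hedged illustration of that point, using the standard recurrence cos(n*t) = 2*cos(t)*cos((n-1)*t) - cos((n-2)*t) (the recurrence is my example, not from the answer): filling such a table bottom-up from earlier entries looks much more like dynamic programming than a table of plain pre-computed constants does.

import math

def cos_table(t, N):
    table = [1.0, math.cos(t)]          # cos(0*t), cos(1*t)
    for n in range(2, N + 1):
        # each entry is built from the two previously computed entries
        table.append(2 * math.cos(t) * table[n - 1] - table[n - 2])
    return table

print(cos_table(0.1, 5)[5], math.cos(0.5))   # both approximately 0.8776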
There's also an interesting quote about how dynamic programming differs from the divide-and-conquer paradigm:
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems. If a problem can be solved by combining optimal solutions to non-overlapping subproblems, the strategy is called "divide and conquer" instead. This is why mergesort and quicksort are not classified as dynamic programming problems.
Dynamic programming is a programming technique where you solve a difficult problem by splitting it into smaller problems which are not independent (this is important!).
Even if you could compute cos i from cos(i-1), this would still not be dynamic programming, just recursion.
Dynamic programming classic example is the knapsack problem: http://en.wikipedia.org/wiki/Knapsack_problem
You want to fill a knapsack of size W with N objects, each one with its own size and value.
Since you don't know which combination of objects will be the best, you "try" every one.
The recurrence equation will be something like:
OPT(m, w) = max( OPT(m-1, w),                 // if I don't take object m
                 v(m) + OPT(m-1, w - w(m)) )  // if I take it (only when w(m) <= w)
Adding the base case, this is how you solve the problem. Of course, you should build the solution starting from m = 0, w = 0 and iterate up to m = N and w = W, so that you can reuse previously calculated values.
Using this technique, you can find the optimal combination of objects to put into the knapsack in just N*W steps (pseudo-polynomial, so not polynomial in the input size, of course; otherwise P = NP and no one expects that!), instead of an exponential number of computation steps.
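A small Python sketch of that recurrence, filled bottom-up (the function and the example numbers are mine, just for illustration):

def knapsack(values, weights, W):
    N = len(values)
    # OPT[m][w] = best value using the first m objects with capacity w
    OPT = [[0] * (W + 1) for _ in range(N + 1)]
    for m in range(1, N + 1):
        for w in range(W + 1):
            OPT[m][w] = OPT[m - 1][w]                    # don't take object m
            if weights[m - 1] <= w:                      # take object m if it fits
                OPT[m][w] = max(OPT[m][w],
                                values[m - 1] + OPT[m - 1][w - weights[m - 1]])
    return OPT[N][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))        # -> 220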
No, I don't think this is dynamic programming. Due to limited computing power, the values of sine and cosine were supplied as pre-computed values, just like other numeric constants.
For a problem to be solved with the dynamic programming technique, several conditions are essential. One important condition is that we should be able to break the problem into recursively solvable sub-problems, whose results can then be stored in a look-up table to replace the higher levels of the recursion. So it is both recursion and memory.
For more info you can refer to the Wikipedia article:
http://en.wikipedia.org/wiki/Dynamic_programming
Also, lecture 19 of this course will give you an overview of dynamic programming:
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-19-dynamic-programming-i-fibonacci-shortest-paths/

what is big-O in hamming distance?

I've implemented code in MATLAB that is similar to the Hamming distance. As input I have one matrix. I want to apply my formula, which uses the Hamming distance. The formula works like this: it considers two rows (x, y), where |x - y| is the Hamming distance between the two rows, and then it takes the element-wise maximum of these rows. For example:
x = (1, 0.3, 0)
y = (0, 0.1, 1)
For every two rows of the matrix, obtain S.
The MATLAB code is:
for j = 1:4
    x = fin(j,:);                 % fin is the input matrix (5 rows here)
    for i = j+1:5
        y = fin(i,:);
        % hamming1 stands for the distance value computed from rows x and y
        % (the formula itself is not reproduced in the question)
        s1 = 1 - hamming1;
    end
end
My question is: what is the complexity (big-O) of my code and formula?
What is the complexity of the Hamming distance?
The algorithm is linear in the product of the lengths of x and y, i.e. O(len(x)*len(y)), as indicated by the double sum.
Note, however, that it is very hard to be absolutely sure because of so many typos in your question, as well as hard-coded constants in your code (which, technically, make the algorithm complexity constant).
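Purely as an illustration of that point (the exact formula is not shown in the question, so the distance below is just the sum of absolute differences described in the text):

def row_distance(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))    # O(C) for rows of length C

fin = [[1, 0.3, 0],
       [0, 0.1, 1],
       [1, 1.0, 0]]
R = len(fin)
for j in range(R):                                  # all pairs of rows: O(R^2) pairs
    for i in range(j + 1, R):
        s1 = 1 - row_distance(fin[j], fin[i])       # overall O(R^2 * C)
        print(j, i, s1)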
