Why is an empty list returned when solving for this linear system in Maxima?

I'm trying to solve a simple linear system in Maxima using solve like so:
/*Standard form*/
eq1 : x1 + 3*x2 + s1 = 6;
eq2 : 3*x1 + 2*x2 + s2 = 6;
base1 : solve([eq1,eq2],[s1,s2]);
This, however, returns an empty list, and I don't know why. Any ideas? I'm pretty sure the system has a solution, so that shouldn't be the issue.
EDIT:
I tried inserting the equations explicitly into solve in place of eq1 and eq2, and now it works. So the question is: why do I need to write the equations explicitly in the first argument of solve? An in-depth answer about how Maxima works in this case would be welcome.

This happened to me when one of the variables in the equation was previously defined. Maxima's solve evaluates its arguments, so if a variable already holds a value, that value is substituted and the list of unknowns no longer contains a pure symbol; solve then returns the empty list.
E.g., z was previously defined in my session, and solve returned []. Simply changing z to, e.g., p (or clearing the old binding with kill(z)) returns the solutions.
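A minimal Maxima session reproducing this with the question's own system (the assignment s1 : 3 is hypothetical):
s1 : 3;                        /* s1 already holds a value */
eq1 : x1 + 3*x2 + s1 = 6;      /* evaluates to x1 + 3*x2 + 3 = 6 */
eq2 : 3*x1 + 2*x2 + s2 = 6;
solve([eq1, eq2], [s1, s2]);   /* => [], since s1 is now the number 3, not a symbol */
kill(s1);                      /* clear the binding */
eq1 : x1 + 3*x2 + s1 = 6;      /* redefine so eq1 contains the symbol s1 */
solve([eq1, eq2], [s1, s2]);   /* => [[s1 = -3*x2 - x1 + 6, s2 = -2*x2 - 3*x1 + 6]] */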

Related

How to find least square fit for two combined functions

I have a curve-fitting problem. I have two functions:
y = ax + b
y = ax^2 + bx - 2.3
I have one set of data for each of the above functions. I need to find a and b using the least-squares method, combining both functions. I was using fminsearch to minimize the sum of squared errors of these two functions, but I am unable to make this work with lsqcurvefit.
Kindly help me.
Regards,
Ram
I think you'll need to worry less about which library routine to use and more about the math. Assuming you mean vertical offset least squares, then you'll want
D = sum_{i=1..m} (y_Li - a*x_Li - b)^2 + sum_{j=1..n} (y_Pj - a*x_Pj^2 - b*x_Pj + 2.3)^2
where there are m points (x_Li, y_Li) on the line and n points (x_Pj, y_Pj) on the parabola. Now find partial derivatives of D with respect to a and b. Setting them to zero provides two linear equations in 2 unknowns, a and b. Solve this linear system.
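Since D is linear in the unknowns a and b, the two data sets can also be stacked into a single linear least-squares problem. A sketch in Python/NumPy (the data arrays are hypothetical):
import numpy as np

# hypothetical data: (xL, yL) sampled near the line, (xP, yP) near the parabola
xL = np.array([0.0, 1.0, 2.0, 3.0])
yL = np.array([0.1, 1.9, 4.2, 5.8])
xP = np.array([0.0, 1.0, 2.0])
yP = np.array([-2.2, 0.8, 7.6])

# rows [x, 1] model y = a*x + b; rows [x^2, x] model y = a*x^2 + b*x - 2.3,
# with the known constant -2.3 moved to the right-hand side
design = np.vstack([np.column_stack([xL, np.ones_like(xL)]),
                    np.column_stack([xP**2, xP])])
rhs = np.concatenate([yL, yP + 2.3])
(a, b), *_ = np.linalg.lstsq(design, rhs, rcond=None)
print(a, b)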
y = ax+b
y = ax^2+bx-2.3
In order not to confuse the y of the first equation with the y of the second equation, we use distinct notations:
u = ax + b
v = ax^2 + bx + c   (with c = -2.3 in your case)
The method of linear regression combined for the two functions was shown on an attached page (image not reproduced here).
HINT: If you want to derive the matrix equation from that page yourself, follow Gene's answer.

Solving a Nonlinear equation with Julia

I am trying to solve a nonlinear equation with Julia. I have the following nonlinear equation:
Nfoc(k,k1,z,n)=(1-α)*exp(z)*(k/n)^α/(exp(z)*(k^α)*(n^(1-α))+k*(1-δ)-k1) - A/(1-n)
and I have a grid of values for k, k1, and z. I am trying to find the values of n that are roots of this equation for each k, k1, and z, by using this loop:
MatrixN = zeros(nkk, M, nkk)
for i = 1:nkk, j = 1:M
    for i2 = 1:nkk
        MatrixN[i,j,i2] = roots(Nfoc[K[i], K[i2], z(j), n])
    end
end
However, it's obvious that the roots command is not functioning.
I would deeply appreciate any help, in the least technical way possible!
I don't have enough knowledge to work on your use case, but in general, one way to find roots of a parametric function could be:
using FastAnonymous # creating efficient "anonymous functions" in Julia
using Roots
f(x,k,k1,z,n) = exp(x) - x^4 + k + k1 + z + n
function f_gen(k,k1,z,n)
    @anon x -> f(x,k,k1,z,n)
end
fzero(f_gen(0,0,0,0), 1) # => finds x so f(x,0,0,0,0) = 0 using a derivative-free method
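Applied to the question, a sketch along these lines might look as follows (the parameter values and grids are hypothetical; note that Nfoc must be called with parentheses rather than square brackets, and the grid must be indexed as Z[j]):
using Roots

# hypothetical parameter values
α, δ, A = 0.33, 0.025, 1.0
Nfoc(k, k1, z, n) = (1-α)*exp(z)*(k/n)^α / (exp(z)*k^α*n^(1-α) + k*(1-δ) - k1) - A/(1-n)

nkk, M = 5, 3
K = range(1.0, 1.2, length=nkk)   # hypothetical grid for k and k1
Z = range(-0.1, 0.1, length=M)    # hypothetical grid for z

MatrixN = zeros(nkk, M, nkk)
for i = 1:nkk, j = 1:M, i2 = 1:nkk
    # solve Nfoc(K[i], K[i2], Z[j], n) = 0 for n, starting from n = 0.5;
    # a bracketing call like fzero(f, 1e-6, 1 - 1e-6) is safer where a sign change exists
    MatrixN[i, j, i2] = fzero(n -> Nfoc(K[i], K[i2], Z[j], n), 0.5)
end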

How is `(d*a)mod(b)=1` written in Ruby?

How should I write this:
(d*a)mod(b)=1
in order to make it work properly in Ruby? I tried it on Wolfram, but their solution:
(da(b, d))/(dd) = -a/d
doesn't help me. I know a and b. I need to solve (d*a)mod(b)=1 for d in the form d=....
It's not clear what you're asking, and, depending on what you mean, a solution may be impossible.
First off, (da(b, d))/(dd) = -a/d is not a solution to that equation; rather, it's a misinterpretation of the notation used for partial derivatives. What Wolfram Alpha actually gave you was the partial derivative ∂a(b, d)/∂d = -a/d, which is entirely unrelated.
Secondly, if you're trying to solve (d*a)mod(b)=1 for d, you may be out of luck. For any values of a and b that have a common prime factor, there is no value of d that satisfies the equation. If a and b are coprime, you can use the formula given in LutzL's answer.
Additionally, if you're looking to perform symbolic manipulation of equations, Ruby is likely not the proper tool. Consider using a CAS, like Python's SymPy or Wolfram Mathematica.
Finally, if you're just trying to compute (d*a)mod(b), the modulo operator in Ruby is %, so you'd write (d*a)%(b).
You are looking for the modular inverse of a modulo b.
For any two numbers a,b the extended Euclidean algorithm
g,u,v = xgcd(a, b)
gives coefficients u,v such that
u*a+v*b = g
and g is the greatest common divisor. You need a,b co-prime, preferably by ensuring that b is a prime number, to get g=1 and then you can set d=u.
def xgcd(a, b)
  # returns [g, u, v] with u*a + v*b == g == gcd(a, b)
  return [a, 1, 0] if b == 0
  q, r = a.divmod(b)     # a = q*b + r
  g, u, v = xgcd(b, r)   # g = u*b + v*r = u*b + v*(a - q*b) = v*a + (u - q*v)*b
  [g, v, u - q*v]
end
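Putting it together in Ruby, a sketch of the modular inverse using the xgcd above:
# find d with (d*a) % b == 1; requires gcd(a, b) == 1
def mod_inverse(a, b)
  g, u, _v = xgcd(a, b)
  raise ArgumentError, "a and b are not coprime" unless g == 1
  u % b                    # normalize into the range 0...b
end

d = mod_inverse(7, 40)     # => 23, since (23 * 7) % 40 == 1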

SCIP infeasibility detection with a MINLP

I'm using SCIPAMPL to solve mixed integer nonlinear programming problems (MINLPs). For the most part it's been working well, but I found an instance where the solver detects infeasibility erroneously.
set K default {};
var x integer >= 0;
var y integer >= 0;
var z;
var v1{K} binary;
param yk{K} integer default 0;
param M := 300;
param eps := 0.5;
minimize upperobjf:
16*x^2 + 9*y^2;
subject to
ll1: 4*x + y <= 50;
ul1: -4*x + y <= 0;
vf1{k in K}: z + eps <= (x + yk[k] - 20)^4 + M*(1 - v1[k]);
vf2: z >= (x + y - 20)^4;
aux1{k in K}: -(4*x + yk[k] - 50) <= M*v1[k] - eps;
# fix1: x = 4;
# fix2: y = 12;
let K := {1,2,3,4,5,6,7,8,9,10,11};
for {k in K} let yk[k] := k - 1;
solve;
display x,y,z,v1;
The solver is detecting infeasibility at the presolve phase. However, if you uncomment the two constraints that fix x and y to 4 and 12, the solver works and outputs the correct v and z values.
I'm curious about why this might be happening and whether I can formulate the problem in a different way to avoid it. One suggestion I got was that infeasibility detection is usually not very good with non-convex problems.
Edit: I should mention that this isn't just a SCIP issue; SCIP just hits it with this particular set K. If, for instance, I use Bonmin, another MINLP solver, I can solve the problem for this particular K, but if I expand K to go up to 15, then Bonmin detects infeasibility even though the problem remains feasible. For that K, I have yet to find a solver that actually works. I've also tried MINLP solvers based on FILTER. I have yet to try BARON, since it only takes GAMS input.
There are very good remarks about modeling issues regarding, e.g., big-M constraints in the comments to your original question. Numerical issues can indeed cause trouble, especially when nonlinear constraints are present.
Depending on how deep you would like to dive into that matter, I see 3 options for you:
You can relax the numerical precision by tuning the parameters numerics/epsilon, numerics/sumepsilon, numerics/feastol, and numerics/lpfeastol. Put the following lines into a file "scip.set" in the working directory from which you call scipampl:
# absolute values smaller than this are considered zero
# [type: real, range: [1e-20,0.001], default: 1e-09]
numerics/epsilon = 1e-07
# absolute values of sums smaller than this are considered zero
# [type: real, range: [1e-17,0.001], default: 1e-06]
numerics/sumepsilon = 1e-05
# feasibility tolerance for constraints
# [type: real, range: [1e-17,0.001], default: 1e-06]
numerics/feastol = 1e-05
# primal feasibility tolerance of LP solver
# [type: real, range: [1e-17,0.001], default: 1e-06]
numerics/lpfeastol = 1e-05
You can now test different numerical precisions within scipampl by modifying the file scip.set.
Save the solution you obtain by fixing your x and y variables. If you pass this solution to the model without the fixings, you get a message about what caused the infeasibility. Usually, you will get a message that some variable bound or constraint is violated slightly outside a tolerance.
If you want to know precisely through which presolver a solution becomes infeasible, or if the former approach does not show any violation, SCIP offers the functionality to read in a debug solution. Specify the solution file "debug.sol" by uncommenting the line in src/scip/debug.h
/* #define SCIP_DEBUG_SOLUTION "debug.sol" */
and recompile SCIP and SCIPAmpl by using
make DBG=true
SCIP checks the debug-solution against every presolving reduction and outputs the presolver which causes the trouble.
I hope this is useful for you.
Looking deeper into this instance, SCIP seems to do something wrong in presolve.
In cons_nonlinear.c:7816 (function consPresolNonlinear), remove the line
if( nrounds == 0 )
so that SCIPexprgraphPropagateVarBounds is executed in any case.
That seems to fix the issue.

Algorithm for multidimensional optimization / root-finding / something

I have five values, A, B, C, D and E.
Given the constraint A + B + C + D + E = 1, and five functions F(A), F(B), F(C), F(D), F(E), I need to solve for A through E such that F(A) = F(B) = F(C) = F(D) = F(E).
What's the best algorithm/approach to use for this? I don't care if I have to write it myself, I would just like to know where to look.
EDIT: These are nonlinear functions. Beyond that, they can't be characterized. Some of them may eventually be interpolated from a table of data.
There is no general answer to this question; a solver that can find the solution to an arbitrary equation does not exist. As Lance Roberts already says, you have to know more about the functions. Just a few examples:
If the functions are twice differentiable, and you can compute the first derivative, you might try a variant of Newton-Raphson.
Have a look at the Lagrange Multiplier Method for implementing the constraint.
If the function F is continuous (which it probably is, if it is an interpolant), you could also try the Bisection Method, which is a lot like binary search.
Before you can solve the problem, you really need to know more about the function you're studying.
As others have already posted, we do need some more information on the functions. However, given that, we can still try to solve the following relaxation with a standard non-linear programming toolbox.
min k
s.t.
A + B + C + D + E = 1
F1(A) - k = 0
F2(B) - k = 0
F3(C) - k = 0
F4(D) - k = 0
F5(E) - k = 0
Now we can solve this in any manner we wish, such as a penalty method:
min k + mu * sum_i (Fi(x_i) - k)^2
s.t.
A + B + C + D + E = 1
or a straightforward SQP or interior-point method.
Give more details and I can advise on a good method.
m
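For concreteness, a sketch of the relaxation above using Python/SciPy's SLSQP solver (the Fi here are hypothetical monotone stand-ins):
import numpy as np
from scipy.optimize import minimize

# hypothetical increasing functions standing in for F1..F5
F = [lambda x, c=c: c * x + x**3 for c in (1.0, 2.0, 3.0, 4.0, 5.0)]

# decision vector v = [A, B, C, D, E, k]: minimize k subject to the
# sum constraint and Fi(x_i) - k = 0 for each i
cons = [{'type': 'eq', 'fun': lambda v: v[:5].sum() - 1.0}]
cons += [{'type': 'eq', 'fun': lambda v, i=i: F[i](v[i]) - v[5]}
         for i in range(5)]

v0 = np.append(np.full(5, 0.2), 1.0)   # start at A = ... = E = 1/5
res = minimize(lambda v: v[5], v0, constraints=cons, method='SLSQP')
print(res.x[:5], res.x[5])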
The functions are all monotonically increasing with their argument. Beyond that, they can't be characterized. The approach that worked turned out to be:
1) Start with A = B = C = D = E = 1/5
2) Compute F1(A) through F5(E), then recalculate A through E so that each function equals the sum of the five values divided by 5 (i.e., their average).
3) Rescale the new A through E so that they all sum to 1, and recompute F1 through F5.
4) Repeat until satisfied.
It converges surprisingly fast - just a few iterations. Of course, each iteration requires 5 root finds for step 2.
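A sketch of this iteration in Python (the Fi here are hypothetical increasing functions; brentq does the per-function root find of step 2):
import numpy as np
from scipy.optimize import brentq

# hypothetical monotonically increasing stand-ins for F1..F5
F = [lambda x, c=c: c * x + x**3 for c in (1.0, 2.0, 3.0, 4.0, 5.0)]

x = np.full(5, 0.2)                       # step 1: A = B = C = D = E = 1/5
for _ in range(100):
    target = np.mean([f(xi) for f, xi in zip(F, x)])   # step 2: average of the Fi
    # step 2, continued: one root find per function, Fi(xi) = target
    x = np.array([brentq(lambda t, f=f: f(t) - target, 0.0, 1.0) for f in F])
    x /= x.sum()                          # step 3: rescale so the values sum to 1
    vals = [f(xi) for f, xi in zip(F, x)]
    if np.ptp(vals) < 1e-9:               # step 4: stop once the Fi agree
        break
print(x, vals)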
One solution of the equations
A + B + C + D + E = 1
F(A) = F(B) = F(C) = F(D) = F(E)
is to take A, B, C, D and E all equal to 1/5. Not sure though whether that is what you want ...
Added after John's comment (thanks!)
Assuming the second equation should read F1(A) = F2(B) = F3(C) = F4(D) = F5(E), I'd use the Newton-Raphson method (see Martijn's answer). You can eliminate one variable by setting E = 1 - A - B - C - D. At every step of the iteration you need to solve a 4x4 system. The biggest problem is probably where to start the iteration. One possibility is to start at a random point, do some iterations, and if you're not getting anywhere, pick another random point and start again.
Keep in mind that if you really don't know anything about the function then there need not be a solution.
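A sketch of that elimination in Python (fsolve's Powell hybrid method plays the role of the Newton iteration; the Fi are again hypothetical stand-ins):
import numpy as np
from scipy.optimize import fsolve

# hypothetical increasing functions standing in for F1..F5
F = [lambda x, c=c: c * x + x**3 for c in (1.0, 2.0, 3.0, 4.0, 5.0)]

def residual(v):
    # v = [A, B, C, D]; eliminate E = 1 - A - B - C - D
    E = 1.0 - v.sum()
    t = F[4](E)                                 # the common value F5(E)
    return [F[i](v[i]) - t for i in range(4)]   # 4 equations in 4 unknowns

sol = fsolve(residual, np.full(4, 0.2))         # start from A = B = C = D = 1/5
A, B, C, D = sol
E = 1.0 - sol.sum()
print(A, B, C, D, E)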
ALGENCAN (part of TANGO) is really nice. There are Python bindings, too.
http://www.ime.usp.br/~egbirgin/tango/codes.php - " general nonlinear programming that does not use matrix manipulations at all and, so, is able to solve extremely large problems with moderate computer time. The general algorithm is of Augmented Lagrangian type ... "
http://pypi.python.org/pypi/TANGO%20Project%20-%20ALGENCAN/1.0
Google OPTIF9 or ALLUNC. We use these for general optimization.
You could use a standard search technique, as the others mentioned. There are a few optimizations you can make use of while doing the search.
First of all, you only need to solve for A, B, C, and D, because 1 - E = A + B + C + D.
Second, you have F(A) = F(B) = F(C) = F(D), so you can search for A first. Once you have F(A), you can solve for B, C, and D, if the functions can be inverted. If that is not possible, you need to keep searching over each variable, but now within a limited range, because A + B + C + D <= 1.
If your search is discrete and finite, the above optimizations should work reasonably well.
I would try Particle Swarm Optimization first. It is very easy to implement and tweak. See the Wiki page for it.
