I used IMODE = 7 to sequentially solve a simulation problem with the BPOPT solver. The solver solves the first few problems but then reports the following error:
*** WARNING MESSAGE FROM SUBROUTINE MA27BD *** INFO(1) = 3
MATRIX IS SINGULAR. RANK= 581
Problem with linear solver, INFO: 3
With SOLVER=0 I can see that GEKKO has 6 different solvers. How can I specify a particular solver (such as MINOS)?
You can change solvers with m.options.SOLVER=1 for APOPT or m.options.SOLVER=3 for IPOPT. The other solvers aren't available for public use because they require a license.
The error message that you are receiving means that the solver could not find a search direction. I recommend including variable bounds, such as a lower bound of zero for some variables, to prevent singular solutions.
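For reference, here is a minimal sketch of selecting the solver and adding bounds in GEKKO; the dynamics and values below are placeholders, not the original model:

import numpy as np
from gekko import GEKKO

m = GEKKO(remote=True)              # or remote=False for a local solve
m.time = np.linspace(0, 10, 51)     # time grid for the sequential simulation
x = m.Var(value=1.0, lb=0)          # a lower bound of zero helps avoid singular solutions
m.Equation(x.dt() == -0.5 * x)      # placeholder dynamics, not the original model
m.options.IMODE = 7                 # sequential dynamic simulation
m.options.SOLVER = 3                # 1 = APOPT, 2 = BPOPT, 3 = IPOPT
m.solve(disp=False)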
If you'd like more specific help, please post minimal, verifiable code.
Related
I am trying to solve the Maxwell-Stefan equation over a membrane to get the transient mole fraction distribution across the membrane thickness z. Somehow I am not able to code it using ODE45; more precisely, I am not able to write the system in a form that ODE45 can solve. It would be great if someone could help me with the basic syntax and function setup. The equation I am trying to solve is
\frac{dy_i}{dz} = \frac{1}{c\,D_{i,j}} \left[ y_i (N_i + N_j) - N_i \right]
where c is the concentration and D_{i,j} is the binary diffusion coefficient.
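Not the MATLAB setup the asker wants, but here is a minimal sketch of integrating this equation for component i across the film in Python/SciPy (an ode45 call in MATLAB has the same structure: a right-hand-side function of (z, y) plus an interval and an initial value); every number below is a placeholder:

import numpy as np
from scipy.integrate import solve_ivp

c = 40.0                  # total concentration (placeholder value)
D_ij = 1e-8               # binary diffusion coefficient (placeholder value)
N_i, N_j = 1e-3, 5e-4     # molar fluxes, assumed constant across the film (placeholders)
L = 1e-4                  # membrane thickness (placeholder value)

def rhs(z, y):
    # dy_i/dz = (1 / (c * D_ij)) * (y_i * (N_i + N_j) - N_i)
    return (y * (N_i + N_j) - N_i) / (c * D_ij)

sol = solve_ivp(rhs, (0.0, L), [0.3], dense_output=True)   # y_i(z = 0) = 0.3 assumed
z = np.linspace(0.0, L, 50)
y_i = sol.sol(z)[0]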
I have convergence issues with a mixed effects linear model. I would like to dig into what is happening during the optimization. Is there a way to get the iteration log? The best I can do right now is get a summary of the optimization by setting disp=True:
mdf = md.fit(full_output=True, reml=True, method='cg', disp=True)
which gives me
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 2.989371
Iterations: 5
Function evaluations: 73
Gradient evaluations: 62
Thanks
UPDATE: this does not answer my question, but with a different solver I managed to get some convergence. However, this raises another question. I would expect the score, i.e. the gradient of the log-likelihood function, to be small, and for my example this is not the case. Hence, another question:
what can be trusted? Intuitively, my answer would be the score.
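Regarding the original question about an iteration log: statsmodels hands this optimization to scipy.optimize under the hood. Whether MixedLM.fit forwards a callback to the optimizer depends on the statsmodels version, so purely as a sketch of the scipy side, an iteration log can be collected with a callback like this (the objective here is a stand-in, not the mixed-model likelihood):

import numpy as np
from scipy.optimize import minimize

def objective(x):
    # stand-in objective; in the mixed-model case this would be the negative log-likelihood
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

iteration_log = []

def record(xk):
    # scipy calls this once per iteration with the current parameter vector
    iteration_log.append((len(iteration_log), objective(xk), xk.copy()))

res = minimize(objective, np.array([-1.2, 1.0]), method='CG',
               callback=record, options={'disp': True})

for it, fval, xk in iteration_log:
    print(it, fval, xk)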
When running CPLEX on the same ILP problem (exactly the same input file):
With MIPEmphasis = 3 I get an objective value of 6.81613e-06
With MIPEmphasis = 4 I get an objective value of 1.03858
In both cases, CPLEX returns an OPTIMAL status.
From the CPLEX user manual:
To make clear a point that has been alluded to so far: every choice of MIPEmphasis results in the search algorithm proceeding in a manner that eventually will find and prove an optimal solution, or will prove that no integer feasible solution exists. The choice of emphasis only guides CPLEX to produce feasible solutions in a way that is in keeping with the user's particular purposes, but the accuracy and completeness of the algorithm is not sacrificed in the process.
Am I missing something here? I am facing this problem not only with the MIPEmphasis parameter but with other parameters as well (ScaInd, for example), where by varying the parameter I get different OPTIMAL solutions that vary greatly in quality.
Here's some more info which I can't seem to decipher.
For MIPEmphasis = 3:
Maximum condition number = 5.03484e+12,
Attention level = 0.290111,
Suspicious bases = 0.0111111,
Unstable bases = 0.966667,
Ill-posed bases = 0,
CPLEX Status = `OptimalTol`
For MIPEmphasis = 4:
Maximum condition number = 4.73342e+08,
Attention level = 0.00925,
Suspicious bases = 0.925,
Unstable bases = 0,
Ill-posed bases = 0,
CPLEX Status = `Optimal`
This looks like numerical trouble, which is common and depends greatly on your modelling (e.g. the use of big-M constants).
I have never used CPLEX, but this official page talks about ill-conditioned MIP models.
Small excerpt relevant here:
You should reconsider your model if CPLEX reports any ill-posed bases or more than 5% unstable bases.
In your case A, you got more than 95% unstable bases:
For MIPEmphasis = 3: .... Unstable bases = 0.966667 ...
So it's quite possible that the result of A can't be trusted, and I would try to reformulate the model.
If we look at B, you got 92.5% suspicious bases, so maybe even in this case the model is asking for trouble.
As I'm not familiar with all the tunings and defaults, I can't give any insight into the source of these very different computational results with respect to MIPEmphasis and the like (maybe generating more cutting planes due to MIPEmphasis results in a more stable problem; just guessing).
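To make the big-M remark concrete, here is a hypothetical toy model written with the docplex Python API (nothing here is from the original formulation): a loose M is exactly the kind of thing that produces condition numbers around 1e12, while a tight M or an indicator constraint keeps the basis matrices better scaled.

from docplex.mp.model import Model

mdl = Model(name='bigM_illustration')
x = mdl.continuous_var(lb=0, ub=50, name='x')   # x is known to be at most 50
y = mdl.binary_var(name='y')                    # y = 0 is meant to force x = 0

mdl.add_constraint(x <= 1e9 * y)                # loose big-M: numerically nasty
mdl.add_constraint(x <= 50 * y)                 # tight big-M: uses the real bound on x
mdl.add_indicator(y, x <= 0, active_value=0)    # indicator constraint: no M at all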
I have a large-scale multi-objective optimization problem to solve with MATLAB's fmincon solver. I tried different algorithms to get better and faster output. Here is the challenge:
I am getting exit flags 1, 0, 4 and 5 for different Pareto points (since it is a multi-objective optimization problem) with the active-set algorithm. Then I tried other algorithms, such as interior-point and sqp, for generating the Pareto points. I observed that sqp returns a few exit flags of 1, some of 2, and a few of 0, but no 4 or 5 flags. I should also note that its 0- and 2-flagged solutions are correct answers. However, whenever it returns any exit flag other than 1, it takes a long time to solve the Pareto point.
Since the interior-point algorithm is designed for large-scale problems, it is much faster than sqp in generating the Pareto solutions. However, it only returns solutions with exit flag 0, and unfortunately its 0-flagged solutions are wrong, unlike sqp, whose 0- and 2-flagged solutions are correct.
0) Is there any way to configure fmincon to solve my problem with interior-point and also get the correct solutions? In the literature I have seen problems similar to mine solved with the interior-point algorithm.
1) Are there any settings (TolX, TolCon, ...) that I can use to get exit flag 1 more often?
2) Is there any setting that speeds up the optimization at the cost of lower accuracy? (See the sketch after this question for the general idea.)
3) For 2 Pareto points I am getting exit flag -2, which means the problem is infeasible for them. That is expected from the nature of the problem, but it takes ages for fmincon to determine exit flag -2. Is there any option I can set that satisfies questions 1 and 2 and also leaves these infeasible points faster?
I couldn't do this, because I can only set the options once and all Pareto points must use the same options.
To describe the problem, I should say:
I have several linear and nonlinear (.^2, sin, ...) equality and inequality constraints (about 300) and about 400 optimization variables. All objective functions of this multi-objective optimization problem are linear.
These are the options that I currently use. Please help me modify them:
options = optimset('Algorithm', 'sqp', 'Display', 'off');
options = optimset('Algorithm', 'sqp', 'Display', 'off', 'TolX', 1e-6, ...
    'TolFun', 1e-6, 'MaxIter', 1e2, 'MaxFunEvals', 1e4);
The first set of options takes about 500 seconds to generate 15 Pareto points, meaning each fmincon optimization takes about 33 seconds.
The second set takes 200 seconds, i.e. about 13 seconds per fmincon optimization.
Your help will be highly appreciated.
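Not MATLAB, but as a rough analogue of question 2) with a made-up toy problem in Python/SciPy: loosening the tolerances and capping the iteration count is the usual way to trade accuracy for speed, and the corresponding fmincon knobs are TolFun/TolX/TolCon and MaxIter/MaxFunEvals.

import numpy as np
from scipy.optimize import minimize

# made-up toy problem: linear objective, one nonlinear inequality constraint
c = np.arange(1.0, 6.0)
obj = lambda x: c @ x
cons = [{'type': 'ineq', 'fun': lambda x: 1.0 - np.sum(x**2)}]   # sum(x^2) <= 1
x0 = np.zeros(5)

# tight settings: slower, more accurate
tight = minimize(obj, x0, method='SLSQP', constraints=cons,
                 options={'ftol': 1e-9, 'maxiter': 500})

# loose settings: faster, less accurate (like raising TolFun/TolX and
# lowering MaxIter/MaxFunEvals in optimset)
loose = minimize(obj, x0, method='SLSQP', constraints=cons,
                 options={'ftol': 1e-4, 'maxiter': 50})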
In an MDP (Markov Decision Process), why is the termination condition of the value-iteration algorithm
(example: http://aima-java.googlecode.com/svn/trunk/aima-core/src/main/java/aima/core/probability/mdp/search/ValueIteration.java)
||U_{i+1} - U_i|| < error * (1 - gamma) / gamma ?
Here
U_i is the vector of utilities,
U_{i+1} is the updated vector of utilities,
error is the error bound used in the algorithm, and
gamma is the discount factor used in the algorithm.
Where does "error * (1 - gamma) / gamma" come from?
Is the division by gamma because every step is discounted by gamma?
But why the factor (1 - gamma)?
And how big should the error be?
That's called a Bellman Error or a Bellman Residual.
See Williams and Baird, 1993 for use in MDPs.
See Littman, 1994 for use in POMDPs.
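As for where error * (1 - gamma) / gamma comes from: the Bellman update B is a contraction with factor gamma in the max norm, so with U^* the true utilities the standard argument (this is the bound the AIMA value-iteration code relies on) is

\|U_{i+1} - U^*\| = \|B U_i - B U^*\| \le \gamma \|U_i - U^*\| \le \gamma \big( \|U_i - U_{i+1}\| + \|U_{i+1} - U^*\| \big),

which rearranges to

\|U_{i+1} - U^*\| \le \frac{\gamma}{1-\gamma} \|U_{i+1} - U_i\|.

Hence if \|U_{i+1} - U_i\| < \text{error} \cdot (1-\gamma)/\gamma, the returned utilities satisfy \|U_{i+1} - U^*\| < \text{error}; the error parameter is simply the accuracy you want in the final utility vector.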