Does GEKKO support any gradient-free methods for discrete integer space problems? - gekko

Dear GEKKO experts,
Since the problem I'm going to solve has no closed mathematical form and the solution falls in a discrete integer space, a gradient-free method should work. As far as you know, does GEKKO currently support any gradient-free method, and is there a related example?
Thank you

Gekko itself could support such a solution method, but there is currently no linked solver that takes that approach. All of the currently linked solvers are Nonlinear Programming (NLP) or Mixed Integer Nonlinear Programming (MINLP) solvers that use derivatives from Gekko's automatic differentiation.
There is a list of derivative-free optimizers. Perhaps there is another Python package that could better handle your problem. There is also additional information in the Design Optimization Course and the online Design Optimization Book (Chapters 5 and 6) on gradient-free methods such as Simulated Annealing and Genetic Algorithms.
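As an illustration of that gradient-free route outside of Gekko, here is a minimal sketch using scipy.optimize.differential_evolution, rounding trial points to the nearest integers; the objective below is a hypothetical stand-in for a black-box function with no closed form:

```python
import numpy as np
from scipy.optimize import differential_evolution

def black_box(x_int):
    # hypothetical stand-in for an objective with no closed form;
    # replace with the real (possibly simulation-based) evaluation
    return (x_int[0] - 3) ** 2 + (x_int[1] + 1) ** 2 + x_int[0] * x_int[1]

def objective(x):
    # round the continuous trial point so the population-based search
    # effectively explores the integer lattice
    return black_box(np.round(x).astype(int))

bounds = [(-10, 10), (-10, 10)]   # search box for each integer variable
result = differential_evolution(objective, bounds, seed=1, polish=False)
print(np.round(result.x).astype(int), result.fun)
```

Recent SciPy versions also provide an integrality option for differential_evolution that handles the rounding internally.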

Related

Are there Linear Programming libraries with the Simplex algorithm for Clojure?

The Stigler Diet problem is a Linear Programming problem. It takes a list of foods and their nutritional values and solves for an optimized selection and quantities that meet objectives and constraints. Are there Clojure libraries for Linear Programming with the Simplex algorithm, other than levand/prolin, that could handle this?
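(Not Clojure, but to make the LP structure of a diet-style problem concrete, here is a minimal sketch with Python's scipy.optimize.linprog; the foods, costs, and nutrient values are made up for illustration.)

```python
from scipy.optimize import linprog

# hypothetical data: minimize total cost while meeting nutrient minimums
costs = [0.50, 0.30, 0.80]               # cost per unit of each food
nutrients = [[300, 200, 500],            # calories per unit of each food
             [10,   5,  20]]             # protein per unit of each food
minimums = [2000, 55]                    # daily requirements

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate the
# rows to express "nutrients @ x >= minimums"
A_ub = [[-v for v in row] for row in nutrients]
b_ub = [-m for m in minimums]

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)                    # quantities of each food and total cost
```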
Actually, there is a Clojure library: prolin uses the Simplex implementation provided by Apache Commons Math. It's probably the most idiomatic API in Clojure for linear programming. The current version on GitHub uses org.apache.commons.math3 v3.2; however, according to this JIRA entry, the Simplex implementation was significantly improved in v3.3, so it may be worth upgrading (see prolin issue #1).
Also of interest is the Java Constraint Programming API (JSR 331). There's a Clojure project using that API. Although its name hints towards constraint programming (CP), this blog post talks about using it to access linear programming (LP) solvers such as GLPK, lp_solve, Gurobi, etc.
The Java JaCoP constraint programming library implements, among others, the Simplex algorithm. For Clojure, there's the CloCoP wrapper over JaCoP.
Clojure's core.logic also has options for constraint programming.

Searching for Genetic Programming framework/library

I am looking for a framework or library that enables working with genetic programming (Koza style), not only with mathematical functions, but also with loops, variable or constant assignment, object creation, or function calls. I am not sure whether such a branch of genetic algorithms exists or what it is called.
I did my best searching for information, but the internet has little to offer on this specific topic.
HeuristicLab has a powerful implementation of Genetic Programming. It includes problems such as Symbolic Regression, Symbolic Classification, Time Series, Santa Fe Ant Trail, and there is a tutorial to implement custom problems such as the Lawn Mower (which is similar to the Santa Fe Ant Trail). HeuristicLab is implemented in C# and runs on Windows. It's released under GPL and can be freely downloaded.
The implementation of GP is very flexible and extensible, but also performance optimized using online calculations to avoid array allocation and memory overheads. We do include several benchmark problem instances for symbolic regression and classification. There are also more algorithms available such as Random Forests, Neural Networks, k-NN, SVM (if you're doing regression or classification).
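If you want to experiment in Python rather than C#, a minimal Koza-style symbolic regression sketch with the DEAP library (not HeuristicLab; the primitives, target function, and parameters below are arbitrary choices) might look like this:

```python
import operator
from deap import algorithms, base, creator, gp, tools

# primitive set: the function and terminal nodes the evolved trees may use
pset = gp.PrimitiveSet("MAIN", 1)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.renameArguments(ARG0="x")

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

def eval_symbreg(individual):
    # mean squared error against an arbitrary target, f(x) = x**2 + x
    func = toolbox.compile(expr=individual)
    errors = [(func(x) - (x**2 + x)) ** 2 for x in range(-5, 6)]
    return (sum(errors) / len(errors),)

toolbox.register("evaluate", eval_symbreg)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.register("mate", gp.cxOnePoint)
toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)

pop = toolbox.population(n=100)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=20, verbose=False)
print(tools.selBest(pop, 1)[0])   # best evolved expression tree
```

Richer constructs such as conditionals can be added as extra primitives (e.g., an if-then-else function), though full loops and assignment generally need a more elaborate, strongly typed setup.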

Equality vs inequality constraints for large-scale LP

Inequality vs equality constraints.
Is there a big computational advantage in explicitly converting your problem to standard form via slack variables before passing it to the solver, instead of letting the solver do it for you?
I wouldn't do any transformation myself. Any reasonable implementation should do the necessary transformations for you automatically, and do it in a way that is the best from the implementation's point of view.
In short, pose your problem the way it is natural for you and leave the rest up to the solver. Even the performance of the solver is likely to be the best this way.
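For illustration, here is a minimal sketch (hypothetical numbers) using scipy.optimize.linprog, which accepts inequalities directly; the naturally posed inequality form and a manually slack-converted equality form reach the same optimum:

```python
from scipy.optimize import linprog

c = [1.0, 2.0]                        # hypothetical objective: minimize x1 + 2*x2
A = [[-1.0, -1.0]]                    # natural form: x1 + x2 >= 3  ->  -x1 - x2 <= -3
b = [-3.0]

# 1) pose the inequality directly and let the solver transform it internally
natural = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# 2) manual standard form: add a slack s >= 0 so that -x1 - x2 + s = -3
c_std = c + [0.0]                     # the slack variable has zero cost
A_eq = [[-1.0, -1.0, 1.0]]
standard = linprog(c_std, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * 3, method="highs")

print(natural.fun, standard.fun)      # same optimal objective either way
```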

Algorithm to do Minimization in Integer Programming

I understand that doing minimization in integer programming is a very complex problem. But what makes this problem so difficult?
If I were to (attempt to) write an algorithm to solve it, what would I need to take into account? I'm only familiar with the branch-and-bound technique for solving it, and I'm wondering what sort of roadblocks I will face when attempting to apply this technique programmatically.
I'm wondering what sort of roadblocks I will face when attempting to apply this technique programmatically.
None in particular (assuming a fairly straightforward implementation without a lot of tricks). The algorithms aren’t complicated – they are complex, that’s a fundamental difference.
Techniques such as branch and bound or branch and cut try to prune the search tree and thus speed up the running time. But the whole problem tree is nevertheless exponentially large, hence the problem.
As others have said, these problems are very hard, and there is no simple solution or simple algorithm that applies to all classes of problems.
The "classic" way of solving these problems is to do branch-and-bound and apply the simplex algorithm at each node, as you say in your question. However, I would not recommend implementing this yourself unless you are an expert.
As with a lot of numerical methods, it is very hard to get right (good parameter values, good optimizations), and a lot of work has already been done (see CPLEX, COIN-OR, etc.).
It's not that you can't do it: the branch-and-bound part is pretty straightforward (a toy sketch appears at the end of this answer), but without all the tricks your program will be really slow.
Also, you will need a simplex implementation, and that is not something you want to write yourself: you will have to use a third-party library anyway.
Most likely, one of two cases applies:
if your data set is not that big (try it!) and you are not interested in solving it really fast: use something like COIN-OR or lp_solve with the default method and it will work;
if your data set is really big (and/or you need to find a solution quickly each time), you need to work with an expert in this field.
My main point is that only experienced people will know which algorithm will perform better on your problem, which form of the model will be the easiest to solve, which method to apply, and what kind of optimizations you can try.
If you are interested in these problems, I would recommend this book for an introduction to the math behind all this (with a lot of examples). It is incredibly expensive, so you may want to go to a library instead of buying it: Nemhauser and Wolsey.
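To make the branch-and-bound idea concrete, here is a didactic sketch (not a production solver) that minimizes a linear objective over integer variables by solving LP relaxations with scipy.optimize.linprog and branching on fractional variables; the small problem at the bottom is a made-up example:

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Minimize c @ x subject to A_ub @ x <= b_ub with x integer within bounds."""
    best_obj, best_x = math.inf, None
    stack = [bounds]                              # each node is a list of (lo, hi) bounds
    while stack:
        node_bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
        if not res.success or res.fun >= best_obj:
            continue                              # infeasible or pruned by the bound
        x = res.x
        frac = [i for i, v in enumerate(x) if abs(v - round(v)) > 1e-6]
        if not frac:                              # integral solution: new incumbent
            best_obj, best_x = res.fun, np.round(x)
            continue
        i = frac[0]                               # branch on the first fractional variable
        lo, hi = node_bounds[i]
        left, right = list(node_bounds), list(node_bounds)
        left[i] = (lo, math.floor(x[i]))
        right[i] = (math.ceil(x[i]), hi)
        stack += [left, right]
    return best_obj, best_x

# made-up example: maximize 5x + 4y (minimize the negative)
# subject to 6x + 4y <= 24 and x + 2y <= 6, with x, y >= 0 and integer
print(branch_and_bound([-5, -4], [[6, 4], [1, 2]], [24, 6], [(0, None), (0, None)]))
```

A real MILP solver adds cutting planes, presolve, and heuristics on top of this basic scheme, which is where most of the speed comes from.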
Integer programming is NP-hard. That's why it is so difficult.
There is a tutorial that you might be interested in.
The first thing to do before solving any mathematical optimization problem is to categorize it. Except for special cases, integer programming problems are usually NP-hard. So instead of using an exact "algorithm", you will use a "heuristic". The final solution you find will not be a guaranteed optimum, but it will be a pretty good solution for real-life problems.
Your main roadblock will be your programming skills. Heuristic programming requires a good level of programming understanding, so instead of writing your own heuristic you are better off using a well-known package (e.g., COIN-OR, which is free). That way you can focus on your problem instead of the heuristic.

Algebraic logic

Both Wolfram Alpha and Bing now provide the ability to solve complex algebraic logic problems (i.e., "solve for x, given this equation"), and not just evaluate simple arithmetic expressions (e.g., "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudo-code, not to mention it's actually functional, so that'd be ideal. Also, I'm fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort crunching out "pseudocode" when they could produce something 100% functional with just slightly more effort. Anyway, thanks so much in advance.
There are two main methods to solve such equations (both are sketched below):
Numerical methods. Basically, the solver tries to change the value of x until the equation is satisfied. More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols, by a number of formal rules. It's not that different from algebra we learn in school, the solver just knows a lot of different rules. More info on computer algebra.
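For a concrete (made-up) equation such as x**2 - 5*x + 6 = 0, the two approaches might look like this in Python, using scipy for the numeric route and sympy for the symbolic one:

```python
from scipy.optimize import brentq
import sympy

# numeric: search for an x where the residual crosses zero inside a bracket
f = lambda x: x**2 - 5*x + 6
numeric_root = brentq(f, 0.0, 2.5)        # converges to the root near x = 2

# symbolic: manipulate the equation by formal rewrite rules, like algebra by hand
x = sympy.symbols("x")
symbolic_roots = sympy.solve(sympy.Eq(x**2 - 5*x + 6, 0), x)

print(numeric_root, symbolic_roots)       # 2.0 [2, 3]
```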
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural language parser (which is also built primarily with Mathematica). They have a whole heap of curated data and associated formulas that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) API to answer questions via W|A.
