Solving an LP feasibility problem using CGAL - algorithm

Can we solve a linear programming feasibility problem of the form below using CGAL (if not, please suggest alternatives):
v.x_a > c and
v.x_b = c
where v, x_a and x_b are vectors and c is a scalar. I want to find a tuple (v, c) for a given set of vectors x (x_a and x_b are elements of x) which satisfies these relations.
I have seen the documentation, but the allowable form is of the type Ax (relational operator) b, where the relational operator can be >=, <= or =, and both A and b are known while x is unknown. My requirement is the opposite: I have x, but I want to determine whether there exists a tuple (A, b) satisfying the relations.
Context:
I am trying to implement a 3D mesh generator, for which I need to test whether an edge (joining two 3D vertices) is Delaunay. A Delaunay edge is defined as follows: an edge is Delaunay iff there exists a circumsphere of its endpoints that contains no other vertex inside it.
My question is based on the approach described here

According to the construction that David Eppstein describes in the linked question, i and j are fixed and we have the additional restriction that v.xi = v.xj = c. So the problem becomes:
Find a vector v != 0 such that v.xk >= v.xi for all k and v.xi = v.xj.
This can be transformed to
Find a vector v != 0 such that (xk - xi).v >= 0 for all k and (xi - xj).v >= 0 and -(xi - xj).v >= 0
By defining A as the matrix with rows xk - xi for all k, xi - xj and xj - xi, we get
Find a vector v != 0 such that Av >= 0
which has the form you need. You can enforce v != 0 by brute-forcing the non-zero component: for each component i, try adding the condition vi >= 1 or vi <= -1 and check the resulting system for solvability. Since the normal vector of the plane can be scaled arbitrarily, there is a solution iff any of the resulting programs is solvable (there are 2d of them, where d is the dimensionality of v).
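Once written as Av >= 0 plus the extra sign condition, A and the right-hand side are known and v is the unknown, which is exactly the Ax (relation) b form the CGAL documentation describes, so each of the 2d programs can be handed to CGAL's LP solver, or, for quick prototyping, to any LP solver. Here is a minimal sketch of the 2d-case check with scipy.optimize.linprog (the helper name is illustrative; A is assumed to already hold the rows xk - xi, xi - xj and xj - xi):
import numpy as np
from scipy.optimize import linprog

def nonzero_solution_exists(A):
    """True iff some v != 0 satisfies A @ v >= 0."""
    d = A.shape[1]
    for i in range(d):
        for sign in (1.0, -1.0):
            extra = np.zeros(d)
            extra[i] = -sign                          # -sign*v_i <= -1  <=>  sign*v_i >= 1
            A_ub = np.vstack([-A, extra])             # A @ v >= 0  <=>  -A @ v <= 0
            b_ub = np.append(np.zeros(len(A)), -1.0)
            res = linprog(np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * d, method="highs")
            if res.success:                           # this sign case is feasible
                return True
    return False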

Related

Maximizing the result from multiple functions which share an input amount

I have multiple functions as shown in the image. For a fixed value x, I need to distribute it among the functions f, g, and h to get the maximum output (y). In other words, for a fixed x, find a, b, and c such that these conditions are satisfied:
a + b + c = x
a >= 0 and b >= 0 and c >= 0
f(a) + g(b) + h(c) has max value.
The functions are continuous and monotonic. How should I write code to find a, b, and c? Thanks in advance!
Under appropriate assumptions, if the maximum has a > 0 and b > 0 and c > 0, then a necessary condition is f'(a) = g'(b) = h'(c). Intuitively, if one of these derivatives was greater than the others, then we could effect an improvement by increasing the corresponding variable a little bit and decreasing another variable by the same amount. Otherwise, the maximum has a = 0 or b = 0 or c = 0, and we have a lower dimensional problem of the same type.
The algorithm is to loop over all seven possibilities for whether a, b, c are zero (assuming x > 0 to avoid the trivial case), then solve the equations a + b + c = x and f'(a) = g'(b) = h'(c) (omitting the variables that are zero) to find the candidate solutions, then return the maximum.
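Here is a rough sketch of that enumeration, assuming f, g and h are given as closed forms together with their derivatives. The helper name and the example functions at the bottom are purely illustrative, not from the question:
from itertools import combinations
import numpy as np
from scipy.optimize import fsolve

def maximize_split(funcs, derivs, x):
    """Maximize sum_i funcs[i](t_i) subject to sum_i t_i = x, t_i >= 0."""
    n = len(funcs)
    best_val, best_pt = -np.inf, None
    for k in range(1, n + 1):                      # number of non-zero variables
        for free in combinations(range(n), k):     # which variables are non-zero (7 cases for n = 3)
            if k == 1:
                vals = np.array([x])
            else:
                def eqs(v):
                    out = [np.sum(v) - x]          # budget: the free variables sum to x
                    out += [derivs[free[0]](v[0]) - derivs[j](vj)
                            for j, vj in zip(free[1:], v[1:])]
                    return out                     # derivative matching among the free variables
                vals, _, ier, _ = fsolve(eqs, np.full(k, x / k), full_output=True)
                if ier != 1 or np.any(vals < 0):   # solver failed or infeasible candidate
                    continue
            candidate = np.zeros(n)
            candidate[list(free)] = vals
            total = sum(fn(c) for fn, c in zip(funcs, candidate))
            if total > best_val:
                best_val, best_pt = total, candidate
    return best_pt, best_val

# Purely illustrative example: f(a) = sqrt(a), g(b) = log(1 + b), h(c) = 2*sqrt(c)
funcs  = [np.sqrt, np.log1p, lambda c: 2 * np.sqrt(c)]
derivs = [lambda a: 0.5 / np.sqrt(a), lambda b: 1.0 / (1.0 + b), lambda c: 1.0 / np.sqrt(c)]
print(maximize_split(funcs, derivs, 10.0))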
Even if you only had 2 functions f and g, you would be looking for the a that maximises f(a) + g(x-a) on [0, x], which is a sum of an increasing and a decreasing function, so you have no general guarantee about where that maximum lies.
Still, if these functions are given to you as closed-form expressions, you can compute u(a) = f(a) + g(x-a) and try to find its maximum (under sufficient assumptions, you will have u'(a) = 0 and u''(a) <= 0, for instance).
Going back to the 3-function case, if it is feasible you can compute, for every a, v(a) = max_{b in [0, x-a]} ( g(b) + h(x-a-b) ), and then compute the max of (f+v)(a) (or do it with b or c first if that works better), but in the general case there is no efficient algorithm.
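If the functions are cheap to evaluate, a crude but dependable sketch of this nested maximization is a two-level grid search (the step count is an arbitrary illustration; g and h are assumed to accept NumPy arrays):
import numpy as np

def nested_max(f, g, h, x, steps=500):
    best_val, best_a = -np.inf, None
    for a in np.linspace(0.0, x, steps):
        bs = np.linspace(0.0, x - a, steps)
        v = np.max(g(bs) + h(x - a - bs))     # v(a) = max over b in [0, x-a] of g(b) + h(x-a-b)
        if f(a) + v > best_val:
            best_val, best_a = f(a) + v, a
    return best_a, best_val                   # approximate maximizer a and maximum value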

Genetic Algorithm - Best crossover operator for a weights assignment

In your experience, what is the best crossover operator for a weights-assignment problem?
In particular, I am facing a constraint that forces the sum of all weights to be 1. Currently, I am using the uniform crossover operator and then dividing all the parameters by their sum to get 1. The crossover works, but I am not sure that this way I can preserve the good parts of my solutions and converge to a better solution.
Do you have any suggestions? It is no problem if I need to build a custom operator.
If your initial population is made up of feasible individuals you could try a differential evolution-like approach.
The recombination operator needs three (random) vectors and adds the weighted difference between two population vectors to a third vector:
offspring = A + f (B - C)
You could try a fixed weighting factor f in the [0.6 ; 2.0] range, or experiment with selecting f randomly for each generation or for each difference vector (a technique called dither, which should improve convergence behaviour significantly, especially for noisy objective functions).
This should work quite well since the offspring will automatically be feasible.
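A minimal sketch of this recombination step, assuming the population is a 2-D NumPy array whose rows are feasible individuals (weights summing to 1); the function name is illustrative and the [0.6, 2.0] range is the one mentioned above:
import numpy as np

def de_recombine(population, rng, f_low=0.6, f_high=2.0):
    a, b, c = population[rng.choice(len(population), size=3, replace=False)]
    f = rng.uniform(f_low, f_high)        # fresh factor on each call ("dither")
    offspring = a + f * (b - c)
    # sum(offspring) = sum(a) + f * (sum(b) - sum(c)) = 1 + f * 0 = 1,
    # so the sum-to-one constraint is preserved automatically
    return offspring

# usage sketch: rng = np.random.default_rng(); child = de_recombine(population, rng)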
Special care should be taken to avoid premature convergence (e.g. by using some niching algorithm).
EDIT
With uniform crossover you are exploring the entire n-dimensional space, while the above recombination limits individuals to a subspace H (the hyperplane Σi wi = 1, where wi are the weights) of the original search space.
Reading the question I assumed that the sum-of-the-weights was the only constraint. Since there are other constraints, it's not true that the offspring is automatically feasible.
Anyway any feasible solution must be on H:
If A = (a1, a2, ... an), B = (b1, ... bn), C = (c1, ... cn) are feasible:
Σi ai = 1
Σi bi = 1
Σi ci = 1
so
Σi (ai + f (bi - ci)) =
Σi ai + f (Σi bi - Σi ci) =
1 + f (1 - 1) = 1
The offspring is on the H hyperplane.
Now depending on the number / type of additional constraints you could modify the proposed recombination operator or try something based on a penalty function.
EDIT2
You could determine analytically the "valid" range of f, but probably something like this is enough:
f = random(0.6, 2.0);                              // base weighting factor
double trial[] = {f, f/2, f/4, -f, -f/2, -f/4, 0};
i = 0;
do
{
    // trial[6] == 0 reproduces A, which is feasible by assumption,
    // so the loop always terminates
    offspring = A + trial[i] * (B - C);
    i = i + 1;
} while (unfeasible(offspring));
return offspring;
This is just an idea; I'm not sure how well it works.

Fitting a curve to some segment of another curve

Maybe I should ask this in Mathoverflow but here it goes:
I have 2 data sets (sets of x and y coordinates) with different numbers of elements. I have to find the best match between them by stretching one of the data sets (multiplying all x by a factor m and all y by a factor n) and moving it around (adding p and q to all x and y respectively).
Basically these 2 sets represent different curves, and I have to fit curve B (which has fewer elements) to some segment of curve A (which has many more elements).
How can I find the values m, n, p, and q for the closest match?
Answers can be pseudo code, C, Java or Python. Thanks.
Following is a solution for finding the values m, n, p and q when, after transforming the first curve, it matches exactly with a part of the second curve:
Basically, we have to solve the following matrix equation:
[mx ny]' + [p q]' = [X Y]' ...... (1)
where [x y]' and [X Y]' are the coordinates of first and second curves respectively. Let's assume first curve has total l number of coordinates and second curve has total h number of coordinates.
(1) implies,
[mx+p ny+q]' = [X Y]'
i.e we have to solve:
mx_1+p = X_k, mx_2+p = X_{k+1}, ..., mx_l+p = X_{k+l-1}
ny_1+q = Y_k, ny_2+q = Y_{k+1}, ..., ny_l+q = Y_{k+l-1}
where k + l - 1 <= h, i.e. k <= h - l + 1
We can solve it in the following naive way:
for (i = 1 to h - l + 1){
    (m,p) = SOLVE(x_1, X_i, x_2, X_{i+1})  // 2 unknowns, 2 equations: m*x_1 + p = X_i, m*x_2 + p = X_{i+1}
    (n,q) = SOLVE(y_1, Y_i, y_2, Y_{i+1})  // 2 unknowns, 2 equations: n*y_1 + q = Y_i, n*y_2 + q = Y_{i+1}
    matched = true
    for (j = 3 to l){
        if(m*x_j + p != X_{i+j-1}){ matched = false; break; }  // (m,p) from the first 2 points fails for the rest
        if(n*y_j + q != Y_{i+j-1}){ matched = false; break; }  // (n,q) from the first 2 points fails for the rest
    }
    if(matched){  // match found
        return i;  // the smallest index of the 2nd curve's coordinates where a match starts
    }
}
return -1;  // no match found
I am not sure if there can be an optimized version of this.
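A NumPy version of the same sliding-window search, using a tolerance instead of exact floating-point equality (the function and variable names are illustrative, not from the answer above; x, y, X, Y are assumed to be NumPy arrays):
import numpy as np

def find_transform(x, y, X, Y, tol=1e-9):
    l, h = len(x), len(X)
    for k in range(h - l + 1):
        m = (X[k + 1] - X[k]) / (x[1] - x[0])   # from m*x[0]+p = X[k], m*x[1]+p = X[k+1]
        p = X[k] - m * x[0]
        n = (Y[k + 1] - Y[k]) / (y[1] - y[0])   # from n*y[0]+q = Y[k], n*y[1]+q = Y[k+1]
        q = Y[k] - n * y[0]
        if (np.allclose(m * x + p, X[k:k + l], atol=tol)
                and np.allclose(n * y + q, Y[k:k + l], atol=tol)):
            return m, n, p, q, k                # k = start index of the match in curve A
    return None                                 # no match found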

Weights Optimization in matlab

I have to do optimization in supervised learning to get my weights.
I have to learn the values (w1, w2, w3, w4) such that whenever my vector A = [a1 a2 a3 a4] has label 1, the sum w1*a1 + w2*a2 + w3*a3 + w4*a4 is greater than 0.5, and when its label is -1, the sum is less than 0.5.
Can somebody tell me how I can approach this problem in Matlab? One way that I know is to use evolutionary algorithms: take a random value vector and then change it, keeping the best n values.
Is there any other way that this can be approached ?
You can do it using linprog.
Let A be a matrix of size n by 4 consisting of all n training 4-vectors you have. You should also have a vector y with n elements (each either plus or minus 1), representing the label of each training 4-vector.
Using A and y we can write a linear program (look at the doc for the names of the parameters I'm using). Now, you do not have an objective function, so you can simply set f to be f = zeros(4,1);.
The only thing you have is an inequality constraint (< a_i , w > - .5) * y_i >= 0 (where <.,.> is a dot-product between 4-vector a_i and weight vector w).
If my calculations are correct, this constraint can be written as
cmat = bsxfun( @times, A, y );
Overall you get
w = linprog( zeros(4,1), -cmat, -.5*y );
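For reference, the equivalent feasibility LP in Python with scipy.optimize.linprog would look roughly like this (a sketch; A is an n-by-4 NumPy array of training vectors and y an n-vector of +/-1 labels, and the function name is illustrative):
import numpy as np
from scipy.optimize import linprog

def fit_weights(A, y):
    cmat = A * y[:, None]                        # row i of A scaled by its label y_i
    res = linprog(c=np.zeros(A.shape[1]),        # no objective: pure feasibility
                  A_ub=-cmat, b_ub=-0.5 * y,     # cmat @ w >= 0.5*y  <=>  -cmat @ w <= -0.5*y
                  bounds=[(None, None)] * A.shape[1],  # weights may be negative
                  method="highs")
    return res.x if res.success else None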

Combinatorial Optimization - Variation on Knapsack

Here is a real-world combinatorial optimization problem.
We are given a large set of value propositions for a certain product. The value propositions are of different types, but each type is independent and adds equal benefit to the overall product. In building the product, we can include any non-negative integer number of "units" of each type. However, after adding the first unit of a certain type, the marginal benefit of additional units of that type continually decreases. In fact, the marginal benefit of a new unit is the inverse of the number of units of that type after adding the new unit. Our product must have at least one unit of some type, and there is a small correction that we must make to the overall value because of this requirement.
Let T[] be an array representing the number of each type in a certain production run of the product. Then the overall value V is given by (pseudo code):
V = 1
For Each t in T
V = V * (t + 1)
Next t
V = V - 1 // correction
On the cost side, units of the same type have the same cost, but units of different types each have unique, irrational costs. The number of types is large, but we are given an array of type costs C[] that is sorted from smallest to largest. Let's further assume that the type quantity array T[] is also sorted by cost from smallest to largest. Then the overall cost U is simply the sum of each unit's cost:
U = 0
For i = 0, i < NumOfValueTypes
U = U + T[i] * C[i]
Next i
So far so good. Here is the problem: given a product P with value V and cost U, find a product Q with cost U' and value V' such that U' is minimal subject to U' > U and V'/U' > V/U.
The problem you've described is a nonlinear integer programming problem because it contains a product of the integer variables T[i]. Its feasible set is not closed because of the strict inequalities, which can be worked around by using non-strict inequalities and adding a small positive number (epsilon) to the right-hand sides. Then the problem can be formulated in AMPL as follows:
set Types;
param Costs{Types}; # C
param GivenProductValue; # V
param GivenProductCost; # U
param Epsilon;
var units{Types} integer >= 0; # T
var productCost = sum {t in Types} units[t] * Costs[t];
minimize cost: productCost;
s.t. greaterCost: productCost >= GivenProductCost + Epsilon;
s.t. greaterValuePerCost:
prod {t in Types} (units[t] + 1) - 1 >=
productCost * GivenProductValue / GivenProductCost + Epsilon;
This problem can be solved using a nonlinear integer programming solver such as Couenne.
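To make the objective and constraints concrete before reaching for a solver, here is a tiny brute-force sketch in Python that only works for small instances; the cap on units per type and the function name are arbitrary illustrations, and real instances need the AMPL model above with a nonlinear integer solver:
from itertools import product as cartesian
import math

def cheapest_better_product(C, U, V, max_units=5):
    best = None
    for T in cartesian(range(max_units + 1), repeat=len(C)):
        cost = sum(t * c for t, c in zip(T, C))
        value = math.prod(t + 1 for t in T) - 1
        if cost > U and value * U > V * cost:          # V'/U' > V/U, written without division
            if best is None or cost < best[0]:
                best = (cost, value, T)
    return best                                        # (U', V', T) with minimal U' found, or None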
Honestly, I don't think there is an easy way to solve this. The best thing would be to write out the system and solve it with a solver (the Excel solver will do the trick, but you can also use AMPL to solve this nonlinear program).
The program:
Define: U; V; C = [c1, ..., cn]
Variables: T = [t1, t2, ..., tn]
Objective function: minimize SUM(ti * ci)
Constraints:
For all i: ti integer
SUM(ti * ci) > U
(PROD(ti + 1) - 1) * U > V * SUM(ti * ci)
It works well with Excel (you just replace > U by >= U + d, where d is the smallest significant increment of the costs, i.e. if C = [1.1, 1.8, 3.0, 9.3] then d = 0.1), since Excel doesn't allow strict inequalities in the solver.
I guess with a real solver like AMPL it will work perfectly.
Hope it helps,
