AMPL: define constraints on specific elements of a set

I have this structure:
set U;
param d {i in U};
How can I add constraints on the first, second and third elements of d?
I'm abstracting the size of U because I guess it's better, but in fact, for my problem, U has only 3 elements and so does d.
I'd rather not create 3 params for U and 3 vars for d.

As you've implemented this, U is an unordered set. This means that the "first element of U" isn't clearly defined. Since U is the index set for d, it follows that "the first element of d" also isn't clearly defined.
For example, consider the following code:
set U := {"fish","chips","salt"};
var d{U} := 0;
solve;
display d;
The answer is displayed as:
d [*] :=
chips 0
fish 0
salt 0
;
Note that AMPL has sorted the elements of U in a different order from the one in which I declared them. (Specifically, it has alphabetised them.)
So the answer to this question depends on what exactly you mean by "add constraints in the first, second and third element of d".
If you just want to apply the same constraint to every member of d, you can use a single constraint indexed over U:
set U := {"fish","chips","salt"};
var d{U};
s.t. constrain_all{u in U}: d[u] >= 0;
If you want to apply a specific constraint to each member of d by name, you can use a similar format:
set U := {"fish","chips","salt"};
var d{U};
s.t. constrain_fish: d["fish"] >= 0;
s.t. constrain_chips: d["chips"] >= 5;
s.t. constrain_salt: d["salt"] >= 10;
If you do have a specific ordering on U, then you need to declare U as an ordered set. Each element of U then has a specific position within that set, and you can use the "member" and "ord" functions to reference elements of U by position. For example:
set U := {"fish","chips","salt"} ordered;
var d{U};
s.t. constrain_fish: d["fish"] >= 0;
s.t. constrain_chips: d["chips"] >= 5;
s.t. constrain_salt: d["salt"] >= 10;
s.t. constrain_second_member_of_U: d[member(2,U)] >= 25;
minimize of: sum{u in U} d[u];
solve;
display d;
As written, this requires d["chips"] to be greater than or equal to 25. However, if I changed the declaration of U from {"fish","chips","salt"} to {"chips","fish","salt"}, that constraint would now apply to d["fish"] instead of d["chips"].
If I wanted to set a constraint on (say) the 5th-10th members of d, I could write something like:
s.t. constrain_5th_to_10th{u in U: ord(u,U) >= 5 and ord(u,U) <= 10}: d[u] >= 100;
See chapter 5 of the AMPL Book for more info on ordered sets.

Related

Maximise the result from multiple functions which share an input amount

I have multiple functions as shown in the image. For a fixed x value, I need to distribute it into the f, g, and h functions to get the maximum output (y). In other words, given a fixed x value, find a, b, and c such that these conditions are satisfied:
a + b + c = x
a >= 0 and b >= 0 and c >= 0
f(a) + g(b) + h(c) has max value.
The functions are continuous and monotonic. How should I write code to find a, b, and c? Thanks in advance!
Under appropriate assumptions, if the maximum has a > 0 and b > 0 and c > 0, then a necessary condition is f'(a) = g'(b) = h'(c). Intuitively, if one of these derivatives were greater than the others, we could effect an improvement by increasing the corresponding variable a little and decreasing another variable by the same amount. Otherwise, the maximum has a = 0 or b = 0 or c = 0, and we have a lower-dimensional problem of the same type.
The algorithm is to loop over all seven possibilities for whether a, b, c are zero (assuming x > 0 to avoid the trivial case), then solve the equations a + b + c = x and f'(a) = g'(b) = h'(c) (omitting the variables that are zero) to find the candidate solutions, then return the maximum.
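A sketch of that case analysis in code, for concrete concave example functions of my own (they are assumptions, not from the question): inverting each derivative, clipping at 0, and bisecting on the common derivative value implements the scheme above, with the clipping absorbing the a = 0 / b = 0 / c = 0 cases.

```python
import math

# Hypothetical concave, increasing example functions (my assumptions):
f = lambda a: 2 * math.sqrt(a)      # f'(a) = 1 / sqrt(a)
g = lambda b: math.log(1 + b)       # g'(b) = 1 / (1 + b)
h = lambda c: 4 * math.sqrt(c)      # h'(c) = 2 / sqrt(c)

# Inverse derivatives, clipped at 0. The clipping absorbs the boundary
# cases a = 0 / b = 0 / c = 0 from the case analysis above.
inv_f = lambda lam: 1 / lam ** 2
inv_g = lambda lam: max(0.0, 1 / lam - 1)
inv_h = lambda lam: 4 / lam ** 2

def allocate(x):
    """Find a, b, c >= 0 with a + b + c = x and equal derivatives on the
    positive variables, by bisecting on the common derivative value lam."""
    lo, hi = 1e-9, 1e9                  # bracket for lam
    for _ in range(200):
        lam = math.sqrt(lo * hi)        # geometric mean: lam spans many orders
        if inv_f(lam) + inv_g(lam) + inv_h(lam) > x:
            lo = lam                    # total allocation too big: raise lam
        else:
            hi = lam
    lam = math.sqrt(lo * hi)
    return inv_f(lam), inv_g(lam), inv_h(lam)

a, b, c = allocate(5.0)
best = f(a) + g(b) + h(c)
```

For these example functions the optimum is a = 1, b = 0, c = 4 (common derivative 1), so `best` converges to 10.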
Even if you only had 2 functions f and g, you would be looking for the a that maximises a :-> f(a) + g(x-a) on [0,x], which is a sum of an increasing and a decreasing function, so you have no general guarantee about it.
Still, if these functions are given to you as closed-form expressions, you can compute u(a) = f(a) + g(x-a) and try to find its maximum (under suitable assumptions you will have u'(a) = 0 and u''(a) <= 0, for instance).
Going back to the 3-function case: if possible, you can compute, for every a, v(a) = max_{b in [0, x-a]} ( g(b) + h(x-a-b) ), and then compute the max of (f+v)(a) (or start with b or c if that works better), but in the general case there is no efficient algorithm.
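A derivative-free sketch of this nested maximisation, using hypothetical example functions of my own (a grid search; resolution n is an assumption, not part of the method):

```python
import math

# Hypothetical concave example functions (my assumptions, not from the question)
f = lambda a: 2 * math.sqrt(a)
g = lambda b: math.log(1 + b)
h = lambda c: 4 * math.sqrt(c)

def nested_max(f, g, h, x, n=300):
    """Maximise f(a)+g(b)+h(c) s.t. a+b+c = x, all >= 0, by nested grid search:
    for each a, compute v(a) = max_b g(b) + h(x-a-b), then maximise f(a)+v(a)."""
    best = -float("inf")
    for i in range(n + 1):
        a = x * i / n
        rest = x - a
        # inner one-dimensional maximisation over b in [0, rest];
        # max(0.0, ...) guards against tiny negative values from rounding
        v = max(g(rest * j / n) + h(max(0.0, rest - rest * j / n))
                for j in range(n + 1))
        best = max(best, f(a) + v)
    return best
```

The grid search only approximates the continuous maximum, but it needs no derivatives and works for any functions you can evaluate.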

Alpha-beta pruning with transposition table, iterative deepening

I'm trying to implement alpha-beta minimax pruning enhanced with transposition tables. I use this pseudocode as a reference:
http://people.csail.mit.edu/plaat/mtdf.html#abmem
function AlphaBetaWithMemory(n : node_type; alpha, beta, d : integer) : integer;
    if retrieve(n) == OK then /* transposition table lookup */
        if n.lowerbound >= beta then return n.lowerbound;
        if n.upperbound <= alpha then return n.upperbound;
        alpha := max(alpha, n.lowerbound);
        beta := min(beta, n.upperbound);
    if d == 0 then
        g := evaluate(n); /* leaf node */
    else if n == MAXNODE then
        g := -INFINITY; a := alpha; /* save original alpha value */
        c := firstchild(n);
        while (g < beta) and (c != NOCHILD) do
            g := max(g, AlphaBetaWithMemory(c, a, beta, d - 1));
            a := max(a, g);
            c := nextbrother(c);
    else /* n is a MINNODE */
        g := +INFINITY; b := beta; /* save original beta value */
        c := firstchild(n);
        while (g > alpha) and (c != NOCHILD) do
            g := min(g, AlphaBetaWithMemory(c, alpha, b, d - 1));
            b := min(b, g);
            c := nextbrother(c);
    /* store bounds in the transposition table */
    if g <= alpha then /* fail low: g is an upper bound */
        n.upperbound := g;
        store n.upperbound;
    if g > alpha and g < beta then /* exact value */
        n.lowerbound := g;
        n.upperbound := g;
        store n.lowerbound, n.upperbound;
    if g >= beta then /* fail high: g is a lower bound */
        n.lowerbound := g;
        store n.lowerbound;
    return g;
Three questions to this algorithm:
I believe that I should store the depth (= distance to the leaf level) with each saved transposition table entry and use an entry only when entry.depth >= currentDepth (i.e. the entry is at least as distant from the leaf level). That is not shown in the above pseudocode and is not discussed there; I wanted to make sure I understand that correctly.
I would like to store the best move for each position, to use it for move ordering AND for extracting the best move after the search stops. In pure min-max it's obvious which move is the best, but which move is the best when iterating with alpha-beta cut-offs? Can I assume that the best move for a given position is the best move found when the loop ends (with or without a cut-off)?
When executing this algorithm in an iterative deepening scheme - should I clear the transposition table before each depth increase? I think not; I'd like to use stored positions from the previous iteration, but I'm not sure the information is adequate for deeper searches (it should be when checking the table entry's depth)?
You're right. entry.depth stores the number of plies the information in the transposition table entry is based on, so you can use that information only when entry.depth >= remaining_depth.
The logic is that we don't want to use a result weaker than a "normal" search would produce.
Sometimes, for debugging purpose, the condition is changed to:
entry.depth == remaining_depth
this avoids some search instabilities. Anyway, it doesn't guarantee the same result as a search without a transposition table.
There isn't always a best move to store.
When the search fails low, there isn't a "best move". The only thing we know is that no move is good enough to produce a score bigger than alpha. There is no way to guess which move is best.
So you should store a move in the hash table only for lower bounds (beta-cutoff i.e. a refutation move) and exact scores (PV node).
No, you shouldn't. With iterative deepening the same position is reached again and again and the transposition table can speed up the search.
You should clear the transposition table between moves (or, better, use an additional entry.age field).
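To make the depth rule, the bound flags and the age field concrete, here is a minimal Python sketch of a transposition table probe (the entry layout and names are my own, not from the referenced pseudocode):

```python
from dataclasses import dataclass
from typing import Optional

EXACT, LOWER, UPPER = 0, 1, 2  # entry flags: exact score, lower bound, upper bound

@dataclass
class TTEntry:
    depth: int                # remaining depth the entry was searched with
    flag: int                 # EXACT, LOWER (fail-high) or UPPER (fail-low)
    score: int
    best_move: Optional[str]  # None for fail-low entries: no reliable best move
    age: int                  # move counter, used instead of clearing the table

table = {}

def probe(key, depth, alpha, beta):
    """Return a usable score, or None if the entry is missing or too shallow."""
    e = table.get(key)
    if e is None or e.depth < depth:  # only trust entries at least as deep
        return None
    if e.flag == EXACT:
        return e.score
    if e.flag == LOWER and e.score >= beta:   # refutation still causes a cutoff
        return e.score
    if e.flag == UPPER and e.score <= alpha:  # still fails low
        return e.score
    return None
```

With iterative deepening you leave the table in place between iterations (the `depth` check filters out entries that are too shallow) and use `age` to retire stale entries between moves.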

Solving an LP feasibility problem using CGAL

Can we solve a linear programming feasibility problem of the form mentioned below using CGAL (if not, please suggest alternatives):
v.x_a > c and,
v.x_b = c
where v, x_a, x_b are vectors and c is a scalar. I want to find a tuple (v,c) for a given set of x (x_a and x_b are elements of x) which satisfies these conditions.
I have seen the documentation, but the allowable form is of type Ax (relational operator) b, where the relational operator can be >=, <= or =, and both A and b are known while x is unknown. My requirement is the opposite: I have x, and I want to determine whether there exists a tuple (A,b) which satisfies the inequality.
Context:
I am trying to implement a 3D mesh generator, for which I need to test whether an edge (joining two 3D vertices) is Delaunay. A Delaunay edge is defined as follows: an edge is Delaunay iff there exists a circumsphere of its endpoints not containing any other vertex inside it.
My question is based on the approach described here
According to the construction that David Eppstein describes in the linked question, i and j are fixed and we have the additional restriction that v.xi = v.xj = c. So the problem becomes:
Find a vector v != 0 such that v.xk >= v.xi for all k and v.xi = v.xj.
This can be transformed to
Find a vector v != 0 such that (xk - xi).v >= 0 for all k and (xi - xj).v >= 0 and -(xi - xj).v >= 0
By defining A as the matrix with rows xk - xi for all k, xi - xj and xj - xi, we get
Find a vector v != 0 such that Av >= 0
which has the form you need. You can enforce v != 0 by brute-forcing the non-zero component: for each component i, try adding the condition v_i >= 1 or v_i <= -1 and check the resulting system for solvability. Since the normal vector of the plane can be scaled arbitrarily, there is a solution iff any of the resulting programs is solvable (there are 2d of them if d is the dimensionality of v).
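A sketch of this brute-force scheme (not CGAL; I'm using scipy.optimize.linprog here as an illustrative LP backend, which is an assumption about your toolchain - CGAL's LP solver could play the same role):

```python
import numpy as np
from scipy.optimize import linprog

def nonzero_solution_exists(A):
    """Check whether some v != 0 satisfies A v >= 0 by brute-forcing the
    non-zero component: 2d feasibility LPs for dimensionality d."""
    d = A.shape[1]
    for i in range(d):
        for sign in (+1, -1):
            bounds = [(None, None)] * d       # all components free...
            bounds[i] = (1, None) if sign > 0 else (None, -1)  # ...except v_i
            # A v >= 0  <=>  (-A) v <= 0; the zero objective means we only
            # test feasibility.
            res = linprog(np.zeros(d), A_ub=-A, b_ub=np.zeros(A.shape[0]),
                          bounds=bounds, method="highs")
            if res.success:
                return True
    return False
```

For example, the identity matrix admits v = (1, 1), while stacking each row with its negation forces v = 0 and every one of the 2d programs is infeasible.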

Random Forest Query

I am working on a project based on random forests. I saw one ppt (Rec08_Oct21.ppt) (www.cs.cmu.edu/~ggordon/10601/.../rec08/Rec08_Oct21.ppt) regarding random forest creation, and I wanted to ask a question.
After scanning through the randomly selected features and their information gain (IG) values, we select the feature j with the maximum IG. How do we split using this information, and how do we proceed after that?
LearnTree(X, Y)
Let X be an R x M matrix (R data points, M attributes) and Y a vector with R elements containing the output class of each data point.
j* = argmax_j IG_j // (this is the splitting attribute we'll use)
The maximum value of IG can come from either a categorical (text-based) or real (number-based) attribute.
---> If it comes from a categorical (text-based) attribute j: for each value v of the jth attribute, define a new pair (Xv, Yv) and derive a child tree from it:
Xv = subset of all the rows of X in which Xij = v
Yv = corresponding subset of Y values
Child_v = LearnTree(Xv, Yv)
PS: The number of child trees equals the number of unique values v in the jth attribute.
---> If it comes from a real-valued attribute j: we need to find the best split threshold.
PS: The threshold value t is the value that provides the maximum IG for that attribute:
define IG(Y|X:t) as H(Y) - H(Y|X:t)
define H(Y|X:t) = H(Y|X<t) P(X<t) + H(Y|X>=t) P(X>=t)
define IG*(Y|X) = max_t IG(Y|X:t)
We split on this t value and define two child trees from two new pairs (X_lo, Y_lo) and (X_hi, Y_hi):
X_lo = subset of all the rows whose Xij < t
Y_lo = corresponding subset Y values
Child_lo = LearnTree(X_lo, Y_lo)
X_hi = subset of all the rows whose Xij >= t
Y_hi = corresponding subset Y values
Child_hi = LearnTree(X_hi, Y_hi)
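The threshold search described above can be sketched in Python (the helper names are mine; candidate thresholds are taken as midpoints between consecutive distinct sorted values, which is one common convention):

```python
import math
from collections import Counter

def entropy(labels):
    """H(Y) for a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Return the (threshold, information gain) pair maximising IG(Y|X:t)."""
    pairs = sorted(zip(values, labels))
    h_y = entropy(labels)
    best_t, best_ig = None, -1.0
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # no valid threshold between equal values
        t = (pairs[i][0] + pairs[i - 1][0]) / 2
        lo = [y for x, y in pairs if x < t]
        hi = [y for x, y in pairs if x >= t]
        # H(Y|X:t) = H(Y|X<t) P(X<t) + H(Y|X>=t) P(X>=t)
        h_cond = (len(lo) * entropy(lo) + len(hi) * entropy(hi)) / len(pairs)
        ig = h_y - h_cond
        if ig > best_ig:
            best_t, best_ig = t, ig
    return best_t, best_ig
```

For values [1, 2, 8, 9] with labels ["a", "a", "b", "b"], the split at t = 5.0 separates the classes perfectly and attains IG = 1 bit.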
After splitting is done, the data is then classified.
For more information, go here!
I hope I answered your question.

Combinatorial Optimization - Variation on Knapsack

Here is a real-world combinatorial optimization problem.
We are given a large set of value propositions for a certain product. The value propositions are of different types, but each type is independent and adds equal benefit to the overall product. In building the product, we can include any non-negative integer number of "units" of each type. However, after adding the first unit of a certain type, the marginal benefit of additional units of that type continually decreases. In fact, the marginal benefit of a new unit is the inverse of the number of units of that type after adding the new unit. Our product must have at least one unit of some type, and there is a small correction that we must make to the overall value because of this requirement.
Let T[] be an array representing the number of each type in a certain production run of the product. Then the overall value V is given by (pseudo code):
V = 1
For Each t in T
    V = V * (t + 1)
Next t
V = V - 1 // correction
On the cost side, units of the same type have the same cost, but units of different types each have unique, irrational costs. The number of types is large, but we are given an array of type costs C[] sorted from smallest to largest. Let's further assume that the type quantity array T[] is also sorted by cost from smallest to largest. Then the overall cost U is simply the sum of the unit costs:
U = 0
For i = 0 To NumOfValueTypes - 1
    U = U + T[i] * C[i]
Next i
So far so good. Here is the problem: given product P with value V and cost U, find the product Q with cost U' and value V' having the minimal U' such that U' > U and V'/U' > V/U.
The problem you've described is a nonlinear integer programming problem because it contains a product of the integer variables t. Its feasible set is not closed because of the strict inequalities, which can be worked around by using non-strict inequalities and adding a small positive number (epsilon) to the right-hand sides. The problem can then be formulated in AMPL as follows:
set Types;
param Costs{Types}; # C
param GivenProductValue; # V
param GivenProductCost; # U
param Epsilon;
var units{Types} integer >= 0; # T
var productCost = sum {t in Types} units[t] * Costs[t];
minimize cost: productCost;
s.t. greaterCost: productCost >= GivenProductCost + Epsilon;
s.t. greaterValuePerCost:
prod {t in Types} (units[t] + 1) - 1 >=
productCost * GivenProductValue / GivenProductCost + Epsilon;
This problem can be solved using a nonlinear integer programming solver such as Couenne.
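For small instances you can sanity-check any solver output with a brute-force search in Python (a sketch under the assumption that unit counts are capped at max_units; the example data below is mine, not from the problem):

```python
from itertools import product as cartesian

def value(T):
    """V = prod(t + 1) - 1, as in the problem statement."""
    v = 1
    for t in T:
        v *= t + 1
    return v - 1

def cost(T, C):
    return sum(t * c for t, c in zip(T, C))

def find_q(C, U, V, max_units=10):
    """Smallest-cost T with cost > U and value/cost ratio > V/U.
    The cross-multiplied comparison avoids dividing by the cost."""
    best = None
    for T in cartesian(range(max_units + 1), repeat=len(C)):
        u2, v2 = cost(T, C), value(T)
        if u2 > U and v2 * U > V * u2 and (best is None or u2 < best[0]):
            best = (u2, v2, T)
    return best
```

For example, with C = [1.0, 2.0] and a given product T = (1, 1) (so U = 3, V = 3), the search returns T = (2, 1) with cost 4 and value 5, the cheapest product beating the ratio 1.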
Honestly, I don't think there is an easy way to solve this. The best thing would be to write the system and solve it with a solver (the Excel solver will do the trick, but you can also use AMPL to solve this nonlinear program).
The Program:
Define: U;
V;
C=[c1,...cn];
Variables: T=[t1,t2,...tn];
Objective Function: SUM(ti.ci)
Constraints:
For all i: ti integer
SUM(ti.ci) > U
(PROD(ti+1)-1).U > V.SUM(ti.ci)
It works well with Excel (you just replace > U by >= U + d, where d is the smallest significant digit of the costs - e.g. if C = [1.1, 1.8, 3.0, 9.3], then d = 0.1), since Excel doesn't allow strict inequalities in the solver.
I guess that with a real solver like AMPL it will work perfectly.
Hope it helps,
