I am trying to solve a system of equations (5 unknown variables, 5 equations), but the Solve[] function just hangs and I have to abort the evaluation. I can understand why, as some of the equations are quite messy -- in my opinion, at least (I'm not a mathematician).
I checked the equations used in Solve[] by substituting in "known/true" simulation values and they all work out.
So, my question is this: Is it possible to "help" Solve[] by saying, for example...
Solve[{eq1, eq2, eq3, eq4, eq5},{var1, var2, var3, var4, var5}, (*code here along the lines of { 0 < var1 < 10, var2 < 25, ...}*)]
I can provide more information if it would be of assistance.
Thanks!
Mathematica actually provides a very simple solution inside the function Solve[] itself. You can add any desired conditions as inequalities ConditionOnVar1, ConditionOnVar2:
Solve[{Eq1, Eq2, ConditionOnVar1, ConditionOnVar2},{Var1, Var2}]
Trivial 1D Example
Solve[Cos[theta]==1 && theta >= 0 && theta < 2\[Pi], theta]
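For something closer to the multi-equation case in the question, here is a small sketch (the system and the bound are invented for illustration); the inequality selects one of the two symmetric roots:

(* a made-up system; the inequality restricts which root Solve returns *)
Solve[{x^2 + y^2 == 25, x + y == 7, 0 < x < 4}, {x, y}]
(* {{x -> 3, y -> 4}} *)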
I'm trying to solve a simple linear system in Maxima using solve like so:
/*Standard form*/
eq1 : x1 + 3*x2 + s1 = 6;
eq2 : 3*x1 + 2*x2 + s2 = 6;
base1 : solve([eq1,eq2],[s1,s2]);
This, however, returns an empty list, and I don't know why. Any ideas? I'm pretty sure the system has a solution, so that shouldn't be the issue.
EDIT:
I attempted to insert the equations explicitly into solve in place of eq1 and eq2, and now it works. Now the question is: why do I need to insert the equations explicitly into the first argument of solve? An in-depth answer about how Maxima works in this case would be welcome.
This happened to me when one of the variables in the equation was previously defined. E.g., if z was previously assigned a value, solve returns the empty list; simply renaming z to an unused symbol such as p (or unbinding z) brings the solutions back.
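A minimal reconstruction of the failure mode (the original snippets were not preserved, so this session is assumed):

z : 5$                    /* leftover assignment from earlier work */
eq : x + 2*z = 6$         /* z is substituted immediately: eq is x + 10 = 6 */
solve([eq], [z]);         /* solve never sees the symbol z, hence the empty result */
kill(z)$                  /* unbind z -- or simply switch to a fresh name like p */
eq : x + 2*z = 6$
solve([eq], [z]);         /* => [z = -(x - 6)/2] */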
I'm using SCIPAMPL to solve mixed integer nonlinear programming problems (MINLPs). For the most part it's been working well, but I found an instance where the solver detects infeasibility erroneously.
set K default {};
var x integer >= 0;
var y integer >= 0;
var z;
var v1{K} binary;
param yk{K} integer default 0;
param M := 300;
param eps := 0.5;
minimize upperobjf:
16*x^2 + 9*y^2;
subject to
ll1: 4*x + y <= 50;
ul1: -4*x + y <= 0;
vf1{k in K}: z + eps <= (x + yk[k] - 20)^4 + M*(1 - v1[k]);
vf2: z >= (x + y - 20)^4;
aux1{k in K}: -(4*x + yk[k] - 50) <= M*v1[k] - eps;
# fix1: x = 4;
# fix2: y = 12;
let K := {1,2,3,4,5,6,7,8,9,10,11};
for {k in K} let yk[k] := k - 1;
solve;
display x,y,z,v1;
The solver is detecting infeasibility at the presolve phase. However, if you uncomment the two constraints that fix x and y to 4 and 12, the solver works and outputs the correct v and z values.
I'm curious about why this might be happening and whether I can formulate the problem in a different way to avoid it. One suggestion I got was that infeasibility detection is usually not very good with non-convex problems.
Edit: I should mention that this isn't just a SCIP issue; SCIP just hits it with this particular set K. If, for instance, I use Bonmin, another global MINLP solver, I can solve the problem for this particular K, but if you expand K to go up to 15, then Bonmin detects infeasibility even though the problem remains feasible. For that K, I've yet to find a solver that actually works. I've also tried MINLP solvers based on FILTER. I've yet to try BARON, since it only takes GAMS input.
There are very good remarks about modeling issues regarding, e.g., big-M constraints in the comments to your original question. Numerical issues can indeed cause trouble, especially when nonlinear constraints are present.
Depending on how deep you would like to dive into the matter, I see three options for you:
1. You can decrease the numerical precision by tuning the parameters numerics/feastol, numerics/epsilon, and numerics/lpfeastol. Save the following lines in a file "scip.set" in the working directory from which you call scipampl:
# absolute values smaller than this are considered zero
# [type: real, range: [1e-20,0.001], default: 1e-09]
numerics/epsilon = 1e-07
# absolute values of sums smaller than this are considered zero
# [type: real, range: [1e-17,0.001], default: 1e-06]
numerics/sumepsilon = 1e-05
# feasibility tolerance for constraints
# [type: real, range: [1e-17,0.001], default: 1e-06]
numerics/feastol = 1e-05
# primal feasibility tolerance of LP solver
# [type: real, range: [1e-17,0.001], default: 1e-06]
numerics/lpfeastol = 1e-05
You can now test different numerical precisions within scipampl by modifying the file scip.set.
2. Save the solution you obtain by fixing your x and y variables. If you pass this solution to the model without the fixings, you get a message about what caused the infeasibility. Usually, it reports that some variable bound or constraint is violated slightly outside a tolerance.
3. If you want to know precisely through which presolver a solution becomes infeasible, or if the former approach does not show any violation, SCIP offers the functionality to read in a debug solution. Specify the solution file "debug.sol" by uncommenting the line in src/scip/debug.h
/* #define SCIP_DEBUG_SOLUTION "debug.sol" */
and recompile SCIP and SCIPAmpl by using
make DBG=true
SCIP checks the debug-solution against every presolving reduction and outputs the presolver which causes the trouble.
I hope this is useful for you.
Looking deeper into this instance, SCIP seems to do something wrong in presolve.
In cons_nonlinear.c:7816 (function consPresolNonlinear), remove the line
if( nrounds == 0 )
so that SCIPexprgraphPropagateVarBounds is executed in any case.
That seems to fix the issue.
To make sure that this is not a duplicate, I have already checked this and this out.
I want to generate random numbers in a specific range including step size (not continuous distribution).
For example, I want to generate random numbers between -2 and 3 in which the step between two consecutive numbers is 0.02 (e.g., [-2 -1.98 -1.96 ... 2.96 2.98 3], so a generated number should be 2.96, not 2.95).
I have tried this:
a = -2*100;                       % scale the range [-2, 3] up by 100
b = 3*100;
r = (b-a).*rand(5,1) + a;         % uniform draws in [-200, 300]
for i = 1:length(r)
    if r(i) >= 0
        if mod(fix(r(i)), 2)      % truncated part is odd: round up to the even neighbor
            r(i) = ceil(r(i))/100;
        else                      % even: round down
            r(i) = floor(r(i))/100;
        end
    else
        if mod(fix(r(i)), 2)      % negative with odd truncation: round down to the even neighbor
            r(i) = floor(r(i))/100;
        else
            r(i) = ceil(r(i))/100;
        end
    end
end
and it works.
There is an alternative way to do this in MATLAB, which is:
y = datasample(-2:0.02:3,5,'Replace',false)
I want to know:

1. How can I make my own implementation faster (improve the performance)?
2. If the second method is faster (it looks faster to me), how can I use a similar implementation in C++?
Those previous answers do cover your case if you read carefully. For example, this one produces random numbers between limits with a step size of one. But let's generalize this to an arbitrary step size in case you can't figure out how to get there. There are several different ways. Here's one using randi, where we use the default step size of one and the range from one to the number of possible values as indices:
lo = 2;
hi = 3;
step = 0.02;
v = lo:step:hi;
r = v(randi(length(v),[5 1]))
If you look inside datasample (type edit datasample in your command window to view the code), you'll see that it's doing something very similar to this answer. For the case of the 'Replace' option being true, see around line 135 (in R2013a at least).
If the 'Replace' option is false, as in your use of datasample above, then randperm actually needs to be used instead (see around line 159):
lo = 2;
hi = 3;
step = 0.02;
v = lo:step:hi;
r = v(randperm(length(v),51))
Because there is no replacement in this case, 51 is the maximum number of values that can be requested in a call and all values of r will be unique.
In C++ you should not use rand() if you're doing scientific computing and generating large numbers of random variates. Instead you should use a generator with a large period, such as the Mersenne Twister (the default in MATLAB). C++11 includes a version of this generator as part of the <random> standard header. If you want something fast, you should try the double-precision SIMD-oriented Fast Mersenne Twister (dSFMT). You'll have to ask another question if you want help implementing your full code in C++.
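That said, a minimal C++11 sketch of the same index trick (my own illustration, not from the MATLAB code above) looks like this:

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double lo = -2.0, hi = 3.0, step = 0.02;
    // number of grid intervals; lround guards against 5/0.02 landing at 249.999...
    const int n = static_cast<int>(std::lround((hi - lo) / step));

    std::mt19937 gen(std::random_device{}());          // Mersenne Twister engine
    std::uniform_int_distribution<int> idx(0, n);      // inclusive bounds: 0..n

    for (int k = 0; k < 5; ++k)
        std::printf("%g\n", lo + idx(gen) * step);     // values on the -2:0.02:3 grid
    return 0;
}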
The distribution you want is a simple transform of integers, so how about:
step = 0.02
r = randi([-2 3] / step, [5, 1]) * step;
In C++, rand() generates integers too, so it should be pretty obvious how to take a similar approach there.
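A bare-bones sketch of that approach (my own illustration; note that rand() % has a slight modulo bias, which is one reason <random> is preferred above):

#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));  // seed once
    const int npoints = 251;   // the grid -2, -1.98, ..., 3 has 251 points
    for (int k = 0; k < 5; ++k) {
        int i = std::rand() % npoints;          // index into the grid (slight modulo bias)
        std::printf("%g\n", -2.0 + i * 0.02);   // map index to grid value
    }
    return 0;
}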
I'm very new to Mathematica, and I'm getting pretty frustrated with the errors I'm generating when it comes to creating a function. Below is a function I'm writing to 'center' a matrix, where rows correspond to examples and columns to features. The aim is to subtract from each element the mean of the column to which it belongs.
centerdata[datamat_] := (
numdatapoints =
Dimensions[datamat][[1]](*Get number of datapoints*)
numberfeatures =
Dimensions[datamat[[1]]][[1]](*Get number of datapoints*)
columnmean = ((Total[datamat])/numdatapoints)
For[i = 1, i < numdatapoints + 1, i++, (* For each row*)
For[j = 1, j < numfeatures + 1, j++, (* For each element*)
datum = datamat[[i]][[j]];
newval = (datum - (colmean[[j]]));
ReplacePart[datamat, {i, j} -> newval];
];
];
Return[datamat];
)
Running this function for a matrix, I get the following error:
"Set::write: Tag Times in 4 {5.84333,3.054,3.75867,1.19867} is Protected. >>
Set::write: "Tag Times in 4\ 150 is Protected."
Where {5.84333,3.054,3.75867,1.19867} is the first example in the data matrix and 150 is the number of examples in the matrix (I'm using the famous iris dataset, for anyone interested). These errors correspond to this code:
numdatapoints = Dimensions[datamat][[1]](*Get number of datapoints*)
numberfeatures = Dimensions[datamat[[1]]][[1]](*Get number of datapoints*)
Googling and toying with this error hasn't helped much, as the replies in general relate to multiplication, which clearly isn't being done here.
Given a table (tab) of data, the function Mean[tab] will return a list of the means of each column. Next, you want to subtract this (element-wise) from each row in the table; try this:
Map[Plus[-Mean[tab],#]&,tab]
I have a feeling that there is probably either an intrinsic statistical function to do this in one statement or that I am blind to a much simpler solution.
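For concreteness, here is that one-liner on a tiny matrix (my own toy data):

tab = {{1., 2.}, {3., 4.}, {5., 6.}};
Map[Plus[-Mean[tab], #] &, tab]
(* {{-2., -2.}, {0., 0.}, {2., 2.}} *)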
Since you are a beginner I suggest that you immediately read the documentation for:
Map, which is one of the fundamental operators in functional programming languages such as Mathematica pretends to be; and
pure functions whose use involves the cryptic symbols # and &.
If you are writing loops in Mathematica programs you are almost certainly mis-using the system.
I am trying to evaluate the following integral (the formula image did not survive, so it is reconstructed here from the code below):

$$\int_{-\infty}^{\infty} e^{a x^4 + b x^3 + c x^2 + d x + f}\,dx \;=\; \frac{e^f}{2} \sum_{\substack{n,m,p \ge 0 \\ n+p\ \text{even}}} \frac{b^n c^m d^p}{n!\,m!\,p!} \cdot \frac{\Gamma\!\left((3n+2m+p+1)/4\right)}{(-a)^{(3n+2m+p+1)/4}} \qquad (a < 0)$$

I can find the area for the following polynomial as follows:
pn = [-0.0250 0.0667 0.2500 -0.6000 0];
First, using integration by Simpson's rule:
fn=#(x) exp(polyval(pn,x));
area=quad(fn,-10,10);
fprintf('area evaluated by Simpsons rule : %f \n',area)
and the result is area evaluated by Simpsons rule : 11.483072
Then, with the following code that evaluates the summation in the above formula using the gamma function:
a=pn(1);b=pn(2);c=pn(3);d=pn(4);f=pn(5);
area=0;
result=0;
for n=0:40;
for m=0:40;
for p=0:40;
if(rem(n+p,2)==0)
result=result+ (b^n * c^m * d^p) / ( factorial(n)*factorial(m)*factorial(p) ) *...
gamma( (3*n+2*m+p+1)/4 ) / (-a)^( (3*n+2*m+p+1)/4 );
end
end
end
end
result=result*1/2*exp(f)
and this returns 11.4831, more or less the same result as the quad function. Now my question is whether it is possible to get rid of this nested loop, as I will construct the cumulative distribution function so that I can draw samples from this distribution using the inverse-CDF transform. (For constructing the CDF I will use gammainc, i.e., the incomplete gamma function, instead of gamma.)
I will need to sample from such densities with different polynomial coefficients, and speed is a concern. I can already sample from these densities using Monte Carlo methods, but I would like to see whether exact sampling from the density is possible, in order to speed things up.
Thank you very much in advance.
There are several things one might do. The simplest is to avoid calling factorial. Instead, one can use the relation

factorial(n) = gamma(n+1)

Since gamma seems to be actually faster than a call to factorial, you can save a bit there. Even better, you can work in log space with gammaln, which also protects against overflow for large arguments. Compare the timings:
>> timeit(#() factorial(40))
ans =
4.28681157826087e-05
>> timeit(#() gamma(41))
ans =
2.06671024634146e-05
>> timeit(#() gammaln(41))
ans =
2.17632543333333e-05
Even better, one can do all four calls in a single call to gammaln. For example, think about what this does:

gammaln([(3*n+2*m+p+1)/4,n+1,m+1,p+1])*[1 -1 -1 -1]'

The multiplication by [1 -1 -1 -1]' subtracts the three log-factorials from the log of the gamma term, all in one step. Note that this call has no problem with overflow either, in case your numbers get large. And since gammaln is vectorized, that one call is fast: it costs little more time to compute four values than it does to compute one.
>> timeit(#() gammaln([15 20 40 30]))
ans =
2.73937416896552e-05
>> timeit(#() gammaln(40))
ans =
2.46521943333333e-05
Admittedly, if you use gammaln, you will need a call to exp at the end to recover the final result. Alternatively, you could do it with a single call to gamma. Perhaps like this:
g = gamma([(3*n+2*m+p+1)/4,n+1,m+1,p+1]);
g = g(1)/(g(2)*g(3)*g(4));
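For concreteness, here is one full series term assembled via gammaln (a sketch; n, m, p and the coefficients a, b, c, d are as in the loop above):

q = (3*n + 2*m + p + 1)/4;
lg = gammaln([q, n+1, m+1, p+1]) * [1 -1 -1 -1]';   % log( gamma(q) / (n!*m!*p!) )
term = b^n * c^m * d^p * exp(lg) / (-a)^q;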
Next, you can be more creative in the inner loop on p. Rather than a full loop, coupled with a test to ignore the combinations you don't need, why not just do this?
for p=mod(n,2):2:40
That statement selects only those values of p that would have been used anyway (p must have the same parity as n so that n+p is even), so now you can drop the if statement completely.
All of the above will give you what I'd guess is about a 5x speed increase in your loops. But it still has a set of nested loops. With some effort, you might be able to improve that too.
For example, rather than computing all of those factorials (or gamma functions) many times, do it ONCE. This should work:
a=pn(1);b=pn(2);c=pn(3);d=pn(4);f=pn(5);
area=0;
result=0;
nlim = 40;
facts = factorial(0:nlim);
gammas = gamma((0:(6*nlim+1))/4);
for n=0:nlim
for m=0:nlim
for p=mod(n,2):2:nlim
result = result + (b.^n * c.^m * d.^p) ...
.*gammas(3*n+2*m+p+1 + 1) ...
./ (facts(n+1).*facts(m+1).*facts(p+1)) ...
./ (-a)^( (3*n+2*m+p+1)/4 );
end
end
end
result=result*1/2*exp(f)
In my test on my machine, your triply nested loops required 4.3 seconds to run. My version above produces the same result, yet required only 0.028418 seconds: a speedup of roughly 150 to 1, despite the triply nested loops.
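If you want to push further, here is a hedged, fully vectorized sketch (my own variation, not part of the answer above) that enumerates all (n, m, p) triples at once and sums in log space; it assumes none of b, c, d is exactly zero (zero coefficients would make log(abs(...)) hit -Inf, so their terms would need separate handling):

nlim = 40;
[N, M, P] = ndgrid(0:nlim, 0:nlim, 0:nlim);
keep = mod(N + P, 2) == 0;                 % only even n+p contributes
N = N(keep); M = M(keep); P = P(keep);
q = (3*N + 2*M + P + 1)/4;
% track signs separately so the magnitudes can live in log space
s = sign(b).^N .* sign(c).^M .* sign(d).^P;
logmag = N*log(abs(b)) + M*log(abs(c)) + P*log(abs(d)) ...
       - gammaln(N+1) - gammaln(M+1) - gammaln(P+1) ...
       + gammaln(q) - q*log(-a);
result = exp(f)/2 * sum(s .* exp(logmag))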
Well, without even making changes to your code, you could install an excellent package from Tom Minka at Microsoft called lightspeed, which replaces some built-in MATLAB functions with much faster versions. I know there's a replacement for gammaln(). You'll get nontrivial speed improvements, though I'm not sure how much, and it's straightforward to install.