I am trying to understand the code below, from the MediumLimitCache.cc file, and in particular how the communication range is computed from a loss factor. I know that for an isotropic antenna, FSPL = Pt/Pr = (4*pi*d*f/c)^2, but I cannot see how this formula is implemented in the code.
Could anyone please explain what these functions do? Thank you.
m MediumLimitCache::computeMaxRange(W maxTransmissionPower, W minReceptionPower) const
{
    // TODO: this is NaN by default
    Hz centerFrequency = Hz(par("centerFrequency"));
    double loss = unit(minReceptionPower / maxTransmissionPower).get() / maxAntennaGain / maxAntennaGain;
    return radioMedium->getPathLoss()->computeRange(radioMedium->getPropagation()->getPropagationSpeed(), centerFrequency, loss);
}
The formula is NOT implemented in this code. MediumLimitCache is just an optimization component that needs a maximum-range estimate: it divides both antenna gains out of the smallest tolerable Pr/Pt ratio and hands the resulting loss factor to the configured path-loss model.
You will find the actual formula inside the computeRange() method of that path-loss model (e.g. FreeSpacePathLoss).
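For the free-space case, computeRange() essentially inverts the FSPL formula from the question: from Pr/Pt = (c/(4*pi*d*f))^2 you get d = (c/(4*pi*f))*sqrt(Pt/Pr). A minimal Python sketch of that inversion (the function name and unit conventions are illustrative, not INET's actual API):

```python
import math

def compute_max_range(max_tx_power_w, min_rx_power_w, max_antenna_gain,
                      center_frequency_hz, propagation_speed_mps=299792458.0):
    # Same "loss" the INET code builds: the smallest tolerable Pr/Pt ratio
    # with both antenna gains divided out.
    loss = (min_rx_power_w / max_tx_power_w) / max_antenna_gain ** 2
    # Free-space path loss: Pr/Pt = (c / (4*pi*d*f))^2, solved for d.
    return (propagation_speed_mps / (4 * math.pi * center_frequency_hz)) * math.sqrt(1.0 / loss)
```

For example, 1 W transmit power, a 1e-10 W (-70 dBm) sensitivity and unity gain at 1 GHz give a range of roughly 2.4 km.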
I'm building a random forest with the caret package in R with method = "rf". Every random forest variant in caret seems to tune only mtry, the number of features sampled randomly at each split. I do not understand why the max_depth of each tree is not a tunable parameter (as it is for CART). In my mind, it is a parameter that can limit over-fitting.
For example, my random forest performs much better on the training data than on the test data:
model <- train(
  group ~ ., data = train.data, method = "rf",
  trControl = trainControl("repeatedcv", number = 5, repeats = 10),
  tuneLength = 5
)
> postResample(fitted(model),train.data$group)
Accuracy Kappa
0.9574592 0.9745841
> postResample(predict(model,test.data),test.data$group)
Accuracy Kappa
0.7333333 0.5428571
As you can see, my model is clearly over-fitting. I have tried a lot of different things to address this, but nothing worked: I always get around 0.7 accuracy on the test data and 0.95 on the training data. This is why I want to tune other parameters.
I cannot share my data to reproduce this.
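For comparison, other toolkits do expose tree depth as a tunable hyperparameter. A small Python/scikit-learn sketch of tuning both the caret-style mtry (max_features) and the depth, on synthetic data standing in for train.data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for train.data (assumption: tabular features, factor target).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Tune both the features tried per split (caret's mtry -> max_features)
# and the tree depth, which caret's "rf" method does not expose.
grid = GridSearchCV(
    RandomForestClassifier(n_estimators=50, random_state=0),
    param_grid={"max_features": [2, 4, 8], "max_depth": [3, 5, None]},
    cv=5,
)
grid.fit(X, y)
```

Within caret itself, method = "ranger" at least exposes min.node.size, which bounds tree growth in a similar way.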
I'm using F# with Accord.NET, and I'm trying to perform an optimization using the Nelder-Mead algorithm.
After a week of attempts, trying to follow the examples from the website, I still can't get it to work.
I have not found a way to write the expression that defines the function to optimize.
I wrote a custom function which accepts 9 parameters:
let FunSqEuclid (F:float) (X:float[]) (T:float) (iv:float[]) (atmVol:float) (alpha:float) (beta:float) (volVol:float) (rho:float) =
    let dum01 = VecAlphaSABR F X T atmVol alpha beta volVol rho
    let dum02 = Array.map2 (+) dum01 iv
    let dum03 = dum02.SquareEuclidean()
    dum03
What I need is to optimize this function varying only the "volVol" and "rho" parameters, but keeping constant all the others.
Following examples (in C#), I tried with:
let ObFunc = NonlinearObjectiveFunction(function: () => (FunSqEuclid (F:float) (X:float[]) (T:float) (iv:float[]) (atmVol:float) (alpha:float) (beta:float) (volVol:float) (rho:float)))
using constraints to keep parameters constant, but I get an error on the keyword "function", both for NonlinearObjectiveFunction and NonlinearConstraint.
I read in the documentation that the objective function can be written as a LINQ expression, but I have never used those.
Is there an alternative way to specify the objective function and constraints? Or can you suggest where to find similar LINQ-expression examples usable from F#?
EDIT
I found more information in the examples of the "Extreme Optimization" library. It has a similar approach to Accord.NET for optimization, and there are examples in F#, so, with appropriate adaptations, I understand how it works when the parameters are simple scalar values.
The point is that I'm trying to translate some R code to F#.
The R code performing the optimization is the following:
objective <- function(x){sum( (iv - SABR.BSIV(t, f, K, exp(x[1]), .t1(x[2]), .t2(x[3]), exp(x[4])))^2) }
x <- nlm(objective, c(0.2, 1.0, 0.0, 0.1))
where K and iv are arrays. So I still haven't found a way to pass array arguments to the objective function in Accord.NET.
Can you suggest an approach?
Thanks.
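One general pattern, shown here as a Python/SciPy sketch since the idea is library-independent: instead of constraining the fixed parameters, capture them (arrays included) in a closure so the optimizer only sees the two free variables. model_vols below is a hypothetical stand-in for VecAlphaSABR, and the numbers are made up:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for VecAlphaSABR: any vector-valued model works for the pattern.
def model_vols(F, X, T, atm_vol, alpha, beta, vol_vol, rho):
    return atm_vol + alpha * X ** beta + vol_vol * rho * T

F, T, atm_vol, alpha, beta = 100.0, 1.0, 0.2, 0.1, 0.5
X = np.array([90.0, 100.0, 110.0])   # strike array
iv = np.array([1.15, 1.20, 1.25])    # observed vols (made up)

# Only vol_vol and rho vary; the array arguments and the other scalars are
# captured by the closure, so no equality constraints are needed at all.
def objective(p):
    vol_vol, rho = p
    resid = model_vols(F, X, T, atm_vol, alpha, beta, vol_vol, rho) - iv
    return float(resid @ resid)

result = minimize(objective, x0=[0.3, 0.0], method="Nelder-Mead")
```

In F#/Accord the analogous move is to pass a lambda such as `fun (p: float[]) -> FunSqEuclid F X T iv atmVol alpha beta p.[0] p.[1]` as the objective, rather than fighting the C# `function:` named-argument syntax, which is not valid F#.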
I am relatively new to Modelica (Dymola environment) and I am getting very desperate that I cannot solve such a simple problem as random number generation in Modelica; I hope that you can help me out.
The simple function random produces a random number between 0 and 1 with an input seed seedIn[3] and produces the output seed seedOut[3] for the next time step or event. The call
(z,seedOut) = random(seedIn);
works perfectly fine.
The problem is that I cannot find a way in Modelica to carry this assignment forward over time, using seedOut[3] as the next seedIn[3], which is very frustrating.
My simple program looks like this:
model Randomgenerator
  Real z;
  Integer seedIn[3](start = {1, 23, 131}, fixed = true), seedOut[3];
equation
  (z, seedOut) = random(seedIn);
algorithm
  seedIn := seedOut;
end Randomgenerator;
I have tried nearly all possibilities with algorithm assignments, initial conditions and equations but none of them works. I just simply want to use seedOut in the next time step. One problem seems to be that when entering into the algorithm section, neither the initial conditions nor the values from the equation section are used.
Using the 'sample' and 'reinit' functions the code below will calculate a new random number at the frequency specified in 'sample'. Note the way of defining the "start value" of seedIn.
model Randomgenerator
  Real seedIn[3] = {1, 23, 131};
  Real z;
  Real[3] seedOut;
equation
  (z, seedOut) = random(seedIn);
  when sample(1, 1) then
    reinit(seedIn, pre(seedOut));
  end when;
end Randomgenerator;
The 'pre' function allows use of the previous value of the variable; without it, the output 'z' would have been constant. Two things regarding the 'reinit' function: it requires use of 'when', and it requires 'Real' variables/expressions, hence seedIn and seedOut are now declared as 'Real'.
The simple "random" generator I used was:
function random
  input Real[3] seedIn;
  output Real z;
  output Real[3] seedOut;
algorithm
  seedOut[1] := seedIn[1] + 1;
  seedOut[2] := seedIn[2] + 5;
  seedOut[3] := seedIn[3] + 10;
  z := (0.1*seedIn[1] + 0.2*seedIn[2] + 0.3*seedIn[3])/(0.5*sum(seedIn));
end random;
Surely there are other ways to do this, depending on the application, but at least this gives you something to start with. Hope it helps.
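The seed-threading pattern above — produce a value plus a new seed, then feed the seed back in — is the same in any language. A plain Python sketch of the loop, using the same toy generator:

```python
def toy_random(seed):
    """Mirror of the toy Modelica generator: return (z, next_seed)."""
    z = (0.1 * seed[0] + 0.2 * seed[1] + 0.3 * seed[2]) / (0.5 * sum(seed))
    next_seed = [seed[0] + 1, seed[1] + 5, seed[2] + 10]
    return z, next_seed

seed = [1, 23, 131]
samples = []
for _ in range(5):
    z, seed = toy_random(seed)  # feed seedOut back in, like reinit(seedIn, pre(seedOut))
    samples.append(z)
```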
I'm trying to speed up my code using parfor. The purpose of the code is to slide a 3D square window over a 3D image and, for each m×m×m block, apply a function.
I wrote this code:
function [ o_image ] = SlidingWindow( i_image, i_padSize, i_fun, i_options )
%SLIDINGWINDOW Summary of this function goes here
%   Detailed explanation goes here
o_image = zeros(size(i_image,1), size(i_image,2), size(i_image,3));
i_image = padarray(i_image, i_padSize, 'symmetric');
i_padSize = num2cell(i_padSize);
[m,n,p] = deal(i_padSize{:});
[row,col,depth] = size(i_image);
windowShape = i_options.windowShape;
mask = i_options.mask;
parfor (i = m+1:row-m, i_options.cores)
    temp = i_image(i-m:i+m,:,:);
    for j = n+1:col-n
        for h = p+1:depth-p
            ii = i-m;
            jj = j-n;
            hh = h-p;
            temp = temp(:, j-n:j+n, h-p:h+p);
            o_image(ii,jj,hh) = parfeval(i_fun, temp, windowShape, mask);
        end
    end
end
end
I get one warning and one error that I don't understand how to solve.
The warning says:
the entire array or structure 'i_image' is a broadcast variable.
The error says:
the PARFOR loop can not run due to the way variable 'o_image' is used.
I don't understand how to fix these two things. Any help is greatly appreciated!
As far as I understand, parfeval itself takes care of scheduling your function on the available workers, which is why it doesn't need to be wrapped in parfor (note also that parfeval returns a future, so you would collect its result with fetchOutputs rather than assign it directly into o_image). Assuming you already have an active parpool, changing the outer parfor into a plain for eliminates both problems.
Unfortunately, I can't support my answer with a benchmark or suggest a more fitting solution because your inputs are unknown.
It seems to me that the code can be optimized in other ways, mainly by vectorization. I would suggest you look into the following resources:
This question, for additional info on parfeval.
Examples on how to use bsxfun and permute and benchmarks thereof: ex1, ex2, ex3.
P.S.: The 2nd part of (i = m+1:row-m,i_options.cores) seems out of place...
Where can I find the implementation of barrier function in Matlab?
I am trying to see how the interior-point algorithm is implemented, and this is what I found at the end of fmincon.m:
elseif strcmpi(OUTPUT.algorithm,interiorPoint)
    defaultopt.MaxIter = 1000; defaultopt.MaxFunEvals = 3000; defaultopt.TolX = 1e-10;
    defaultopt.Hessian = 'bfgs';
    mEq = lin_eq + sizes.mNonlinEq + nnz(xIndices.fixed); % number of equalities
    % Interior-point-specific options. Default values for lbfgs memory is 10, and
    % ldl pivot threshold is 0.01
    options = getIpOptions(options,sizes.nVar,mEq,flags.constr,defaultopt,10,0.01);
    [X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = barrier(funfcn,X,A,B,Aeq,Beq,l,u,confcn,options.HessFcn, ...
        initVals.f,initVals.g,initVals.ncineq,initVals.nceq,initVals.gnc,initVals.gnceq,HESSIAN, ...
        xIndices,options,optionFeedback,finDiffFlags,varargin{:});
So I wanted to see what's inside barrier, but failed. Running
edit barrier.m
I got:
The barrier function is defined in a p-file (precisely located in MATLABROOT/toolbox/optim/optim/barrier.p).
Unfortunately the point of p-files is exactly that they are obfuscated, i.e. you cannot read the source code. This is a recurrent question on SO; see this thread for instance.
I'm afraid you cannot read what's inside barrier. Maybe if you ask MathWorks kindly they can give you some information about its contents.