MethodError: no method matching parseNLExpr_runtime - matrix

I get an error with the following Julia code for solving an NLP problem.
using JuMP
using Ipopt
using DataFrames,ConditionalJuMP
m = Model(solver=IpoptSolver())
#importing data
xi=[-1.016713795 0.804447648 0.932137661 1.064136698 -0.963217531
-1.048396778 1.076371484 1.099027177 1.061556926 -0.95185481
-0.980261302 0.271253807 0.184946729 1.062838197 -0.958794505
-0.980703191 0.278820569 0.231132677 1.062967459 -0.959302488
-0.953074503 -0.00768112 0.128808175 1.067743779 -0.978524489
-1.014866259 0.815325963 1.065956208 1.067059122 -0.974682291
-0.995088119 0.550359991 0.845087207 1.066556784 -0.973154887]
xj=xi
pii=[-300
-259.6530828
-284.3708955
-291.3387625
-342.4479859
-356.5031603
-351.0154738]
sample_size=7
bus_num=5
tempsum=0
#define variables
@variable(m, aij[1:2*bus_num, 1:2*bus_num]) # define a 2bus_num*2bus_num matrix
for n = 1:sample_size
    for i = 1:bus_num
        for j = 1:bus_num
            tempsum = (aij[i,j]*xi[n,i]*xj[n,j] - pii[n,2])^2 + tempsum
        end
    end
end
#define constraints
@constraint(m, [i=1:2*bus_num, j=1:2*bus_num], aij[i,j] == aij[j,i])
@constraint(m, [i=1:2*bus_num, j=1:2*bus_num], aij[i,j]*aij[i,j] <= aij[i,i]*aij[j,j])
@constraint(m, [i=1:2*bus_num, j=1:2*bus_num], aij[i,i]*aij[j,j] <= 0.25*(aij[i,i]+aij[j,j])*(aij[i,i]+aij[j,j]))
#define NLP objective function
@NLobjective(m, Min, tempsum)
solve(m)
println("m = ",getobjectivevalue(m))
println("Aij = ", getvalue(aij))
But when I run it, the following error is shown:
MethodError: no method matching parseNLExpr_runtime(::JuMP.Model,
::JuMP.GenericQuadExpr{Float64,JuMP.Variable},
::Array{ReverseDiffSparse.NodeData,1}, ::Int64, ::Array{Float64,1})
The error occurs on @NLobjective(m, Min, tempsum), but I don't know how to modify the code. Can anyone please help me?
Besides, I am also confused about how to add positive definiteness constraints on the solution matrix aij, as I want aij to be a positive definite matrix.

In @NLobjective(m, Min, tempsum), tempsum must be a function that is registered with JuMP.register(***).
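Alternatively, since the objective is just a sum of squares (quadratic), it can be built inside the macro so JuMP can parse it itself. A minimal sketch, assuming the JuMP 0.18-era syntax used in the question and that pii[n] holds the n-th measured value:

using JuMP, Ipopt
m = Model(solver=IpoptSolver())
@variable(m, aij[1:bus_num, 1:bus_num])  # sized to what the objective actually uses
# build the sum of squares inside the macro instead of accumulating a
# GenericQuadExpr in a plain Julia loop and passing it to @NLobjective
@objective(m, Min, sum((aij[i,j]*xi[n,i]*xj[n,j] - pii[n])^2
                       for n = 1:sample_size, i = 1:bus_num, j = 1:bus_num))
solve(m)

Because the objective is quadratic, @objective also accepts a runtime-built expression such as tempsum; only the @NL macros reject expressions they cannot inspect.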

Related

Robust Standard Errors in lm() using stargazer()

I have read a lot about the pain of replicating Stata's easy robust option in R to use robust standard errors. I replicated the following approaches: StackExchange and Economic Theory Blog. They work, but the problem I face arises when I want to print my results using the stargazer function (which prints the .tex code for LaTeX files).
Here is the illustration to my problem:
reg1 <-lm(rev~id + source + listed + country , data=data2_rev)
stargazer(reg1)
This prints the R output as .tex code (non-robust SE). If I want to use robust SEs, I can do it with the sandwich package as follows:
vcov <- vcovHC(reg1, "HC1")
If I now use stargazer(vcov), only the output of the vcovHC function is printed and not the regression output itself.
With the lmtest package it is possible to print at least the estimates, but not the observations, R2, adjusted R2, residual standard error, and the F-statistic.
lmtest::coeftest(reg1, vcov. = sandwich::vcovHC(reg1, type = 'HC1'))
This gives the following output:
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.54923 6.85521 -0.3719 0.710611
id 0.39634 0.12376 3.2026 0.001722 **
source 1.48164 4.20183 0.3526 0.724960
country -4.00398 4.00256 -1.0004 0.319041
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
How can I add or get an output with the following parameters as well?
Residual standard error: 17.43 on 127 degrees of freedom
Multiple R-squared: 0.09676, Adjusted R-squared: 0.07543
F-statistic: 4.535 on 3 and 127 DF, p-value: 0.00469
Did anybody face the same problem and can help me out?
How can I use robust standard errors in the lm function and apply the stargazer function?
You already calculated the robust standard errors, and there's an easy way to include them in the stargazer output:
library("sandwich")
library("plm")
library("stargazer")
data("Produc", package = "plm")
# Regression
model <- plm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp,
             data = Produc,
             index = c("state","year"),
             model = "pooling")
# Adjust standard errors
cov1 <- vcovHC(model, type = "HC1")
robust_se <- sqrt(diag(cov1))
# Stargazer output (with and without RSE)
stargazer(model, model, type = "text",
          se = list(NULL, robust_se))
Solution found here: https://www.jakeruss.com/cheatsheets/stargazer/#robust-standard-errors-replicating-statas-robust-option
Update: I'm not so much into F-tests. People are discussing those issues, e.g. https://stats.stackexchange.com/questions/93787/f-test-formula-under-robust-standard-error
Following http://www3.grips.ac.jp/~yamanota/Lecture_Note_9_Heteroskedasticity ("A heteroskedasticity-robust t statistic can be obtained by dividing an OLS estimator by its robust standard error (for zero null hypotheses). The usual F-statistic, however, is invalid. Instead, we need to use the heteroskedasticity-robust Wald statistic."), should one use a Wald statistic here?
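A hedged sketch of how such a robust Wald statistic could be computed with lmtest::waldtest (reg1 is the model from the question; the intercept-only comparison mirrors the usual overall F-test):

library(lmtest)
library(sandwich)
# robust analogue of the overall F-statistic: Wald test of the full model
# against the intercept-only model, using an HC1 covariance matrix
waldtest(reg1, . ~ 1, vcov = vcovHC(reg1, type = "HC1"), test = "F")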
This is a fairly simple solution using coeftest:
reg1 <-lm(rev~id + source + listed + country , data=data2_rev)
cl_robust <- coeftest(reg1, vcov = vcovCL, type = "HC1", cluster = ~country)
se_robust <- cl_robust[, 2]
stargazer(reg1, reg1, cl_robust, se = list(NULL, se_robust, NULL))
Note that I only included cl_robust in the output as a verification that the results are identical.

How to overcome simulation of ode issue in R?

I'm trying to solve and then optimise the parameter and initial condition values in a system of differential equations. However, I can't run the code due to error messages (this code was working fine when FS was given as an equation). Due to the limited data points, I want to optimise the whole system against the E values (calculated using equation F). I've only made the code work up to the LLode function, so the optimisation doesn't take place at all.
I've managed to solve the error thanks to forum replies (Lyngbakr):
initSim<-ode(initCond, tspan, hormonesode, params, method="ode45",atol = 1e-10, rtol = 1e-10,hmax=NULL,maxsteps=100000000000000000)
Warning message:
In rk(y, times, func, parms, method = "ode45", ...) :
Number of time steps 1 exceeded maxsteps at t = 0
I would appreciate any hint.
Malgosia
My code looks as follows:
tspan<-c(0,1,2,3,4,5)#,6,7,8,9,16,17,18,19)
E<-c(0.303,0.205,0.205,0.381,0.272,0.188)#,0.317,0.274,0.106,0.161,0.947,2.722,4.701,0.24)
df <- data.frame(tspan, E) # assumed: df collects the two vectors above
names(df) <- c("tspan","E")
require(deSolve)
#initial parameter values
#k1<-1.062169#0.370 7.754
k2<-1.908891#-0.00727284 0.022
#k3<-0.321
k4<-2.14
k5<-10.7
#k6<-1.07
A0<-1.38 #15.47
B0<-0.61 #0.298
C0<-0.28
#F0<-0.303#3.28803757 3.434
#define a weight vector outside LLODE function
wts<-sqrt(sqrt(E))
#combine parameters into a vector
params<-c(k2,k4,k5,A0,B0,C0)
names(params)<-c("k2","k4","k5","A0","B0","C0")
#ode function
hormonesode<-function(t,x,params){
A<-x[1]
B<-x[2]
C<-x[3]
D<-x[4]
E<-x[5]
F<-x[6]
# k1<-params[1]
k2<-params[1]
#k3<-params[3]
k4<-params[2]
k5<-params[3]
A0<-params[4]
B0<-params[5]
C0<-params[6]
P<-3.02796-3.1867*cos(0.314159*t)-0.55332*cos(2*0.314159*t)+0.362678*cos(3*0.314159*t)+0.486708*cos(4*0.314159*t)-0.10133*cos(5*0.314159*t)-0.21977*cos(6*0.314159*t)-0.08926*cos(7*0.314159*t)+0.222292*cos(8*0.314159*t)-1.05119*sin(0.314159*t)+0.855633*sin(2*0.314159*t)+0.176677*sin(3*0.314159*t)-0.05658*sin(4*0.314159*t)-0.34108*sin(5*0.314159*t)-0.15718*sin(6*0.314159*t)+0.397642*sin(7*0.314159*t)-0.0986*sin(8*0.314159*t)
FS<-0.1944+0.002017*cos(0.261799*t)+0.009603*cos(2*0.261799*t)+0.01754*cos(3*0.261799*t)+0.106208*cos(4*0.261799*t)+0.020423*cos(5*0.261799*t)+0.015417*cos(6*0.261799*t)+0.01079*cos(7*0.261799*t)+0.115042*cos(8*0.261799*t)+0.008853*sin(0.261799*t)+0.013523*sin(2*0.261799*t)+0.012254*sin(3*0.261799*t)+0.026053*sin(4*0.261799*t)+0.000957*sin(5*0.261799*t)-0.001*sin(6*0.261799*t)+0.002374*sin(7*0.261799*t)+0.026775*sin(8*0.261799*t)
dA<-1.0621*1/(1+(1/5*P)^5)*(1/3*(F^10)/(1+(F)*1/3)^10)-2.14*A;
dB<-75*(((A/5)^10)/(1+(A/5)^10))-8.56*B;
dC<-1.909*FS+0.321*FS*C- 0.749*C;
dD<-0.749*C- 0.749*D+0.214*FS*D^2;
dE<-0.749*D-0.749*E+0.214*B*E^2;
dF<-k2 + k4*D + k5*E-1.07*F;
output<-c(dA, dB, dC, dD, dE, dF)
list(output)
}
#Initial conditions
A0<-2500#1.0038#2.794
B0<-105#25.0061#6.13
C0<-0.018#0.02#0.06126
D0<-0
E0<-0
F0<-0.303#3.28803757 3.434
initCond<-c(A0, B0, C0, D0, E0, F0)
initCond
#run ode with initial guesses
initSim<-ode(initCond, tspan, hormonesode, params, method="ode45",atol = 1e-1, rtol = 1e-1,hmax=NULL, maxsteps=100000)
plot(tspan,initSim[,7], type="l", lty="dashed")
points(tspan,E)
initSim
LLode<-function(params){
A0<-params[4]
B0<-params[5]
C0<-params[6]
D0<-0
E0<-0
F0<-0.303
initCond<-c(A0, B0, C0, D0, E0, F0)
#Run the ODE
odeOut<-ode(initCond,tspan,hormonesode,params,method="ode45",atol = 1e-1, rtol = 1e-1,hmax=NULL, maxsteps=100000)
if(attr(odeOut,"istate")[1]!=0){
# check whether integration succeeded: 'istate' is 2 for a successful
# 'lsoda' run and 0 for 'ode45'; other integrators may use different codes
cat("Integration possibly failed\n")
LL<-.Machine$double.xmax*1e-06 # indicate failure
}else{
y<-odeOut[,7] #measurement variable
wtDiff1<-(y-E)*wts#weighted difference
LL<-as.numeric(crossprod(wtDiff1))#Sum of squares
}
LL
}
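Before handing LLode to the optimiser, it can help to evaluate it once at the initial guess (a quick sanity check, assuming the objects defined above):

LLode(params) # should return a finite sum of squares, not the failure value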
# optimize using optim()
MLoptres<-optim(params,LLode,method="Nelder-Mead",
control=list(trace=0,maxit=500))
MLoptres
require(optimx)
optxres<-optimx(params,LLode,lower=rep(0,6),upper=rep(200,6),
method=c("nmkb","bobyqa"),
control=list(usenumDeriv=TRUE, maxit=500,trace=0))
summary(optxres,order=value)
bestpar<-coef(summary(optxres,order=value)[1,])
cat("best parameters:")
print(bestpar)
#dput(optxres,file='includes/C20bestpar.dput')
bpSim<-ode(initCond,tspan,hormonesode,bestpar,method="ode45")
X11()
plot(tspan,initSim[,7],type="l",lty="dashed")
points(tspan,E)
#points(tspan,MLoptres[,]/k,type="l",lty="twodash")
points(tspan,bpSim[,7],type="l")
title(main="Improved fit using optimx")

Error in setting max features parameter in Isolation Forest algorithm using sklearn

I'm trying to train a dataset with 357 features using the Isolation Forest sklearn implementation. I can successfully train and get results when the max_features parameter is set to 1.0 (the default value).
However, when max_features is set to 2, it gives the following error:
ValueError: Number of features of the model must match the input.
Model n_features is 2 and input n_features is 357
It also gives the same error when the feature count is 1 (int) and not 1.0 (float).
The way I understood it, when the feature count is 2 (int), two features should be considered in creating each tree. Is this wrong? How can I change the max_features parameter?
The code is as follows:
from sklearn.ensemble.iforest import IsolationForest

def isolation_forest_imp(dataset):
    estimators = 10
    samples = 100
    features = 2
    contamination = 0.1
    bootstrap = False
    random_state = None
    verbosity = 0
    estimator = IsolationForest(n_estimators=estimators, max_samples=samples,
                                contamination=contamination,
                                max_features=features,
                                bootstrap=bootstrap, random_state=random_state,
                                verbose=verbosity)
    model = estimator.fit(dataset)
In the documentation it states:
max_features : int or float, optional (default=1.0)
The number of features to draw from X to train each base estimator.
- If int, then draw `max_features` features.
- If float, then draw `max_features * X.shape[1]` features.
So, 2 should mean take two features and 1.0 should mean take all of the features, 0.5 take half and so on, from what I understand.
I think this could be a bug: taking a look at IsolationForest's fit:
# Isolation Forest inherits from BaseBagging and, when _fit is called,
# BaseBagging takes care of the features correctly
super(IsolationForest, self)._fit(X, y, max_samples,
                                  max_depth=max_depth,
                                  sample_weight=sample_weight)
# however, after _fit, decision_function is called with X - the whole
# sample - not taking max_features into account
self.threshold_ = -sp.stats.scoreatpercentile(
    -self.decision_function(X), 100. * (1. - self.contamination))
then:
# when the decision function's _validate_X_predict is called, with X unmodified,
# it calls the base estimator's (dt) _validate_X_predict with the whole X
X = self.estimators_[0]._validate_X_predict(X, check_input=True)
...
# from tree.py:
def _validate_X_predict(self, X, check_input):
    """Validate X whenever one tries to predict, apply, predict_proba"""
    if self.tree_ is None:
        raise NotFittedError("Estimator not fitted, "
                             "call `fit` before exploiting the model.")
    if check_input:
        X = check_array(X, dtype=DTYPE, accept_sparse="csr")
        if issparse(X) and (X.indices.dtype != np.intc or
                            X.indptr.dtype != np.intc):
            raise ValueError("No support for np.int64 index based "
                             "sparse matrices")
    # so, this check fails because X is the original X, without max_features applied
    n_features = X.shape[1]
    if self.n_features_ != n_features:
        raise ValueError("Number of features of the model must "
                         "match the input. Model n_features is %s and "
                         "input n_features is %s "
                         % (self.n_features_, n_features))
    return X
So, I am not sure how you can handle this. Maybe figure out the fraction that leads to just the two features you need, as sketched below, even though I am not sure it'll work as expected.
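A hypothetical version of that workaround (dataset and the other parameters are as in the question; as said, I am not sure it avoids the bug, since the same number of features ends up being drawn either way):

from sklearn.ensemble.iforest import IsolationForest

n_total = dataset.shape[1]  # 357 in the question
estimator = IsolationForest(n_estimators=10, max_samples=100,
                            contamination=0.1,
                            max_features=2.0 / n_total,  # fraction instead of int
                            bootstrap=False, random_state=None, verbose=0)
model = estimator.fit(dataset)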
Note: I am using scikit-learn v.0.18
Edit: as @Vivek Kumar commented, this is an issue and upgrading to 0.20 should do the trick.

Setting up RcppArmadillo on Windows with RStudio

I am trying to set up RcppArmadillo on my Windows system with RStudio. I have successfully installed RcppArmadillo with the command
install.packages("RcppArmadillo")
in R console.
But when I try to compile C++ code with an RcppArmadillo dependency, I get an error like:
g++ -m64 -I"C:/PROGRA~1/R/R-30~1.3/include" -DNDEBUG -I"C:/PROGRA~1/R/R-30~1.3/library/Rcpp/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c colrowStat.cpp -o colrowStat.o
colrowStat.cpp:5:26: fatal error: RcppArmadillo.h: No such file or directory
compilation terminated.
make: *** [colrowStat.o] Error 1
Warning message:
running command 'make -f "C:/PROGRA~1/R/R-30~1.3/etc/x64/Makeconf" -f "C:/PROGRA~1/R/R-30~1.3/share/make/winshlib.mk" SHLIB_LDFLAGS='$(SHLIB_CXXLDFLAGS)' SHLIB_LD='$(SHLIB_CXXLD)' SHLIB="sourceCpp_38187.dll" WIN=64 TCLBIN=64 OBJECTS="colrowStat.o"' had status 2
But the header files are available in path_to_my_documents/R/win-libraries/3.0/RcppArmadillo/Include
I think the include path for compilation does not contain this folder, but I don't know how to add it to the path. I greatly appreciate any help with this problem.
You are doing it wrong. There are many ways to do it, and we documented several of them. What you do here is not one of them.
Try this instead and go from there:
R> library(Rcpp)
R> cppFunction("arma::mat op(arma::vec x) { return(x*x.t()); }",
+ depends="RcppArmadillo")
R> op(1:2)
[,1] [,2]
[1,] 1 2
[2,] 2 4
R>
This is one of the basic examples: take a vector, multiply it by its transpose, and return the resulting outer product matrix.
What you ultimately want is a package, and for that you could do much worse than starting with RcppArmadillo.package.skeleton().
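For instance (a minimal sketch; the package name is arbitrary):

library(RcppArmadillo)
# generates a compilable package skeleton with a working example function
RcppArmadillo.package.skeleton("myPackage")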
Your question is short on details, but if you are on a Windows machine and are using RStudio, here is a fully reproducible example of how to use RcppArmadillo without the inline package (which is not ideal except for very short functions).
As Dirk has pointed out, this advice is available elsewhere -- the Rcpp* ecosystem is bizarrely well-documented, but this might help a novice.
0. Preliminaries:
You should have the following installed:
Rtools
R package devtools
R package Rcpp
R package RcppArmadillo
1. C++ code:
The example is a simple one: computing the OLS estimator for a linear regression model. Here is the C++ file, with one function (fnLinRegRcpp) that takes the design matrices as inputs and returns the OLS coefficient estimates and the model residuals as an Rcpp List:
// LinearRegression.cpp
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

using namespace arma; // use the Armadillo library for matrix computations
using namespace Rcpp;

// [[Rcpp::export]]
List fnLinRegRcpp(vec vY, mat mX) {
  // compute the OLS estimator & model residuals
  vec vBeta = solve(mX.t()*mX, mX.t()*vY);
  vec vResid = vY - mX * vBeta;
  // construct the return object
  List ret;
  ret["beta"] = vBeta;
  ret["resid"] = vResid;
  return ret;
}
// END
Note the use of Rcpp attributes:
// [[Rcpp::depends(RcppArmadillo)]]
to indicate library dependencies on the Armadillo library.
2. R code
Here is an example of the compilation of the C++ code using the sourceCpp function, together with an example of the use of the function, and a comparison of the output to the built-in lm.fit function.
# LinearRegression.R
library(devtools)
library(Rcpp)
library(RcppArmadillo)
Rcpp::sourceCpp("code/LinearRegression.cpp",
showOutput = TRUE,
rebuild = FALSE)
# generate some sample data
iK = 4
iN = 100
mX = cbind(1, matrix(rnorm(iK*iN), iN, iK))
vBeta0 = c(2, 3.5, 0.11, 6.33, 23)
vY = rnorm(iN, mean = mX %*% vBeta0)
# test the function
linReg1 = fnLinRegRcpp(vY, mX)
linReg1$beta # coefficient estimates
# compare the results to the built-in lm.fit function
lm.fit(y = vY, x = mX)$coef # coefficient estimates
# END

Image texture feature using Gabor filter

I have the following Gabor filter code to extract image texture features:
a=imread('image0001.jpg');
a=double(a);
a=a-mean(a(:));
[r,c,l]=size(a);
K=5; S=6;
Uh=0.4;
Ul=0.05;
alpha=(Uh/Ul)^(1/(S-1));
sigmau=(alpha-1)*Uh/((alpha+1)*sqrt(2*log(2)));
sigmav=tan(pi/(2*K))*(Uh-2*log(2)*((sigmau^2)/Uh))/sqrt((2*log(2))-(((2*log(2))^2)*(sigmau^2)/(Uh^2)));
sigmax=1/(2*pi*sigmau);
sigmay=1/(2*pi*sigmav);
b=fft2(a);
[e d]=size(b);
i=1;
G=zeros(r,c,S*K);
IZ=zeros(r,c,S*K);
for m=0:S-1
    for n=0:K-1
        fprintf(1,'.');
        for x=-r/2+1:r/2
            for y=-c/2+1:c/2
                xdash=(alpha^(-m))*((x)*cos(n*pi/K)+(y)*sin(n*pi/K));
                ydash=(alpha^(-m))*((y)*cos(n*pi/K)-(x)*sin(n*pi/K));
                g(r/2+x,r/2+y)=(alpha^(-m))*((1/(2*pi*sigmax*sigmay))*exp(-0.5*(((xdash^2)/(sigmax^2))+((ydash^2)/(sigmay^2)))+0.8i*pi*xdash));
            end
        end
        [rr cc]=size(g);
        G(:,:,i)=g;
        h=fft2(g);
        z=b.*h;
        iz=ifft2(z);
        IZ(:,:,i)=iz;
        FeatureVector(i)=mean(abs(iz(:)));
        i=i+1;
    end
end
fprintf(1,'\n');
%%%%%%%%%
When I run this code I get this error:
Error using ==> times
Matrix dimensions must agree.
Error in ==> ComputeGaborFeatures4 at 37
z=b.*h;
Please, can anyone help me solve this error, or suggest another simple Gabor filter?
The error is due to calling array multiplication (.*) on b and h, which have unequal sizes, because rr doesn't equal r and cc doesn't equal c.
Either you wanted to use Matrix Multiplication (*) or you need to make g and a the same size before calling fft2.
To fix the error, you might change g(r/2+x,r/2+y) to g(r/2+x,c/2+y), so that g is allocated as r-by-c and matches the size of a.
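A minimal sketch of that fix, reusing the variables from the question (only the allocation and the second index of g change):

g = zeros(r, c); % preallocate g to the same size as a (and hence b)
for x = -r/2+1:r/2
    for y = -c/2+1:c/2
        xdash = (alpha^(-m))*(x*cos(n*pi/K) + y*sin(n*pi/K));
        ydash = (alpha^(-m))*(y*cos(n*pi/K) - x*sin(n*pi/K));
        % index the second dimension with c/2+y instead of r/2+y
        g(r/2+x, c/2+y) = (alpha^(-m))*(1/(2*pi*sigmax*sigmay)) ...
            *exp(-0.5*((xdash^2/sigmax^2) + (ydash^2/sigmay^2)) + 0.8i*pi*xdash);
    end
end
h = fft2(g); % now size(h) matches size(b), so z = b.*h works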
