LOOCV cross-validation for an lmer model

I would like to request assistance with R code for leave-one-out cross-validation (LOOCV) of an lmer model, with the folds defined by the random factors in the model. I believe the caret package only handles lm/glm, not linear mixed models, but I might be wrong.
Your assistance will be highly appreciated!
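For illustration, a manual leave-one-out loop has the shape sketched below. This is a rough Python sketch, not the requested R/caret code, using statsmodels' MixedLM as a stand-in for lmer; the column names y, x and group and the example data are placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def loocv_rmse(data):
    # data: a pandas DataFrame with placeholder columns "y", "x" and "group"
    errors = []
    for i in range(len(data)):
        train = data.drop(data.index[i])        # leave observation i out
        test = data.iloc[[i]]
        fit = smf.mixedlm("y ~ x", data=train, groups=train["group"]).fit()
        # predict() here uses the fixed effects only (random effects set to zero)
        pred = np.asarray(fit.predict(test))[0]
        errors.append(float(test["y"].iloc[0]) - float(pred))
    return float(np.sqrt(np.mean(np.square(errors))))

# Hypothetical example data: 20 observations in 4 groups
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "x": rng.normal(size=20),
    "group": np.repeat(["a", "b", "c", "d"], 5),
})
data["y"] = 1.0 + 2.0 * data["x"] + rng.normal(size=20)
print(loocv_rmse(data))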

What loss functions are associated with the distributions in h2o xgboost and gbm?

I need to know which loss functions are used in the h2o gbm and xgboost functions for the gaussian, binomial and multinomial distributions. Unfortunately, my knowledge of Java is very limited and I can't really decipher the source code, and there doesn't seem to be any documentation clarifying which loss function is associated with which distribution. I think I gather from here that it's logloss for binomial and MSE for gaussian, but I can't find anything for multinomial. Does anybody here maybe know the answer?
Thank you for your question. We definitely should provide this information in the documentation. We are working on improving the doc. To answer your question:
The loss function for multinomial classification is the softmax for H2O GBM and XGBoost alike. H2O GBM is implemented based on the paper "Greedy Function Approximation: A Gradient Boosting Machine" (Jerome H. Friedman, 2001); in section 4.6 the author explains nicely how it is calculated and why.
Based on the loss function, a negHalfGradient method is defined, and every distribution implements it individually. For the multinomial distribution (here) the implementation is:
@Override
public double negHalfGradient(double y, double f, int l) {
    return ((int) y == l ? 1f : 0f) - f;
}
Where:
y is the actual response
f is the predicted response in link space
l is a class label (the original labels are converted lexicographically to 0 to (number of classes - 1))
Let me know if you have other questions.
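For intuition only (this is not the H2O source), a small Python sketch of the same idea: the per-class gradient of the softmax log-loss is the one-hot indicator of the true class minus the predicted probability. The scores below are made up.

import numpy as np

def softmax(scores):
    # scores: raw per-class scores (one value per class, in link space)
    z = np.exp(scores - scores.max())
    return z / z.sum()

def neg_half_gradient(y, probs):
    # One-hot indicator of the true class minus the predicted class probabilities,
    # i.e. ((int) y == l ? 1 : 0) - f evaluated for every class label l at once.
    one_hot = np.zeros_like(probs)
    one_hot[int(y)] = 1.0
    return one_hot - probs

scores = np.array([2.0, 0.5, -1.0])      # made-up scores for a 3-class problem
probs = softmax(scores)
print(neg_half_gradient(0, probs))       # positive for the true class, negative elsewhere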

CPLEX: Using a sum within a sum

I am quite new to this platform and I am looking for help with a project I am currently working on. I am having trouble writing a sum inside another sum in CPLEX.
To give some brief information about my problem, here is a small part of my decision variables and my objective function:
dvar boolean y[Amount][Address][Floor][Lane];
minimize sum(i in Amount, j in Address, k in Floor, l in Lane) y[i][j][k][l];
Among the parameters, the only one giving me trouble is the Address parameter. I have it in the following form:
The general formulation is Address[i], and I have Address[1]=40, Address[2]=12, Address[3]=24, etc.
I need to use the Address[i] parameter in my decision variable and objective function. So I definitely need to change the Address index range to Address[i], which means I need another sum in the objective function. The following was my idea:
minimize sum(i in Amount, j in (sum(i in Address[i]), k in Floor, l in Lane) y[i][j][k][l];
But CPLEX does not accept this syntax. It reports "syntax error, unexpected ','", where the ',' in question is the one after "j in (sum(i in Address[i])". I can clearly see that I am not able to write down my idea in this form, and I was wondering whether it is possible to have such a sum inside another sum. I looked online but failed to find sufficient information about my situation.
So, is it possible to use a sum inside another sum?
I am very sorry if this has been asked before, but I could not really find anything sufficient. Thank you for your kind answers and advice.
Regards,
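Not an OPL answer, but to illustrate the summation structure the question is after (the inner index j running up to Address[i], which depends on the outer index i), here is a plain Python sketch with made-up sizes.

# Made-up sizes, purely to show the "sum inside a sum" structure.
Amount, Floor, Lane = 3, 2, 2
Address = [40, 12, 24]        # Address[i] is the j-range for item i

# y[i][j][k][l] as nested lists of 0/1 placeholders (all zero here)
y = [[[[0 for _ in range(Lane)] for _ in range(Floor)]
      for _ in range(Address[i])] for i in range(Amount)]

total = sum(y[i][j][k][l]
            for i in range(Amount)
            for j in range(Address[i])    # the inner bound depends on i
            for k in range(Floor)
            for l in range(Lane))
print(total)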

Inequality solving using Prolog

I am working on solving inequality problems using Prolog. I have found code that solves inequalities of the form ax+b>=0.
The code I have used is as follows:
:- use_module(library(clpr)).

dec_inc(Left, Right) :-
    copy_term(Left-Right, CopyLeft-CopyRight),
    tell_cs(CopyLeft),
    max(CopyRight, Right, Leq),
    tell_cs(Leq).

max([], [], []).
max([E =< _|Ps], [_ =< P1|P1s], [K =< P1|Ls]) :-
    sup(E, K),          % K is the supremum of E under the current constraints
    max(Ps, P1s, Ls).

tell_cs([]).
tell_cs([C|Cs]) :-
    {C},                % post constraint C to the CLP(R) store
    tell_cs(Cs).
For example, when I give {2*X+2>=5}. it gives the correct answer, {X>=1.5}.
But if I enter a fraction like {(X+3)/(3*X+1)>=1}. it gives {1-(3+X)/(1+3.0*X)=<0.0}.
How can I solve this type of inequality (ones that include fractions) to get the final answer?
Please help me.
If there is any learning material I can refer to, please let me know.
The library(clpr) documentation advises that it deals with non-linear constraints only passively, so you're out of luck. You need a more sophisticated algebra system.
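For example, a computer algebra system can solve the fractional inequality from the question directly. A minimal Python/sympy sketch, with sympy standing in for the "more sophisticated algebra system":

from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
# The inequality from the question: (X + 3) / (3*X + 1) >= 1
solution = solve_univariate_inequality((x + 3) / (3*x + 1) >= 1, x)
print(solution)   # -1/3 < x <= 1; the pole at x = -1/3 is excluded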

Algorithm and code in SCILAB for row reduced echelon form

I am a novice Scilab learner, and I know that there is a predefined function rref to produce the row reduced echelon form. I am looking for an algorithm for transforming an m x n matrix into row reduced echelon form and normal form, and hence for finding the rank of a matrix.
Can you please help? Also, since rref is a predefined function in Scilab, how can we get the Scilab code for it? How can I find the code/algorithm behind any function in Scilab?
Thanks for your help.
Help about functions
The help pages of Scilab always provide some information and short examples. You can also look at the help online (rref help).
The examples are shown without their output, but they demonstrate the various uses. A good first approach is to copy-paste the complete example code into a new SciNotes window, save it, and press F5 to see what it does. Then modify or extend the code to suit the behavior you want.
rref & rank
Aren't you looking for the rank function instead? Here is an example using both:
A = [1,2,3;4,5,6;1,2,3]   // rows 1 and 3 are identical, so A is singular
rref(A)                   // row reduced echelon form of A
rank(A)                   // 2
B = [1,2,3;7,5,6;0,8,7]
rref(B)                   // B is non-singular, so its RREF is the identity
rank(B)                   // 3
Source code
Since Scilab is open source, you can find the source code in their git repository; for instance, the rref implementation is here.
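If you want the algorithm itself rather than reading Scilab's source, the standard approach is Gauss-Jordan elimination with partial pivoting. A rough Python/NumPy sketch of that idea follows; Scilab's built-in rref may differ in details such as the pivot tolerance.

import numpy as np

def rref(A, tol=1e-12):
    # Return (R, rank): the reduced row echelon form of A and its rank.
    R = np.array(A, dtype=float)
    m, n = R.shape
    pivot_row = 0
    for col in range(n):
        if pivot_row >= m:
            break
        # Partial pivoting: pick the largest entry in this column at/below pivot_row.
        p = pivot_row + np.argmax(np.abs(R[pivot_row:, col]))
        if abs(R[p, col]) < tol:
            continue                              # no pivot in this column
        R[[pivot_row, p]] = R[[p, pivot_row]]     # swap rows
        R[pivot_row] /= R[pivot_row, col]         # scale the pivot row so the pivot is 1
        for r in range(m):                        # eliminate this column from other rows
            if r != pivot_row:
                R[r] -= R[r, col] * R[pivot_row]
        pivot_row += 1
    return R, pivot_row                           # rank = number of pivots found

R, rank = rref([[1, 2, 3], [4, 5, 6], [1, 2, 3]])
print(R)        # RREF of the singular example matrix A above
print(rank)     # 2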

Is there a way to predict an unknown function's value based on its previous values?

I have values returned by an unknown function, for example:
# this is an easy case, a parabolic function
# but in my case the function is really unknown, as it is tied to process execution time
[0, 1, 4, 9]
Is there a way to predict the next value?
Not necessarily. Your "parabolic function" might be implemented like this:
def mindscrew
  @nums ||= [0, 1, 4, 9, "cat", "dog", "cheese"]
  @nums.pop
end
You can take a guess, but to predict with certainty is impossible.
You can try a neural-network approach. There are plenty of articles you can find with the Google query "neural network function approximation". Many books are also available, e.g. this one.
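As a toy illustration of that approach (certainly not a recommendation for only four data points), a short scikit-learn sketch; the layer sizes, max_iter and random_state are arbitrary example choices.

import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.array([[0], [1], [2], [3]])   # known inputs
y = np.array([0, 1, 4, 9])           # observed outputs
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[4]]))            # the network's (rough) guess for the next value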
If you just want data points
Extrapolation of data outside of known points can be estimated, but you need to accept that the potential differences are much larger than with interpolation of data between known points. Strictly, both can be arbitrarily inaccurate, as the function could do anything crazy between the known points, even if it is a well-behaved continuous function. And if it isn't well-behaved, all bets are already off ;-p
There are a number of mathematical approaches to this (that have direct application to computer science) - anything from simple linear algebra to things like cubic splines; and everything in between.
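For instance, a minimal Python sketch contrasting interpolation with extrapolation via a cubic spline; scipy's CubicSpline is just one concrete choice, and the data is the sequence from the question.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0, 1, 2, 3])
y = np.array([0, 1, 4, 9])
spline = CubicSpline(x, y)   # extrapolates beyond the known points by default
print(spline(1.5))           # interpolation between known points (2.25)
print(spline(4.0))           # extrapolation; it recovers 16 here only because
                             # the data happens to be exactly quadratic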
If you want the function
Getting esoteric; another interesting model here is genetic programming; by evolving an expression over the known data points it is possible to find a suitably-close approximation. Sometimes it works; sometimes it doesn't. Not the language you were looking for, but Jason Bock shows some C# code that does this in .NET 3.5, here: Evolving LINQ Expressions.
I happen to have his code "to hand" (I've used it in some presentations); with something like a => a * a it will find it almost instantly, but it should (in theory) be able to find virtually any method - but without any defined maximum run length ;-p It is also possible to get into a dead end (evolutionary speaking) where you simply never recover...
Use the Wolfram Alpha API :)
Yes. Maybe.
If you have some input and output values, i.e. in your case [0,1,2,3] and [0,1,4,9], you could use response surfaces (basically function fitting, I believe) to 'guess' the actual function (in your case f(x)=x^2). If you let your guessing function be f(x)=c1*x+c2*x^2+c3, there are algorithms that will determine that c1=0, c2=1 and c3=0 given your input and output, and with the resulting function you can predict the next value.
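A concrete sketch of that fitting idea in Python, with numpy.polyfit standing in for a response-surface fit (the data values are the ones from the question):

import numpy as np

x = np.array([0, 1, 2, 3])
y = np.array([0, 1, 4, 9])
coeffs = np.polyfit(x, y, deg=2)   # fit f(x) = c2*x^2 + c1*x + c0
print(np.round(coeffs, 6))         # approximately [1, 0, 0]  ->  f(x) = x^2
print(np.polyval(coeffs, 4))       # predicted next value, approximately 16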
Note that most other answers to this question are valid as well. I am just assuming that you want to fit some function to the data. In other words, I find your question quite vague; please try to pose your questions as completely as possible!
In general, no... unless you know it's a function of a particular form (e.g. polynomial of some degree N) and there is enough information to constrain the function.
e.g. for a more "ordinary" counterexample (see Chuck's answer) of why you can't necessarily assume n^2 without knowing it's a quadratic equation, you could have f(n) = n^4 - 6n^3 + 12n^2 - 6n, which gives f(n) = 0, 1, 4, 9, 40, 145 for n = 0, 1, 2, 3, 4, 5.
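A quick numerical check of that counterexample (coefficients listed from the highest degree down):

import numpy as np
print(np.polyval([1, -6, 12, -6, 0], np.arange(6)))   # [  0   1   4   9  40 145]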
If you do know it's a particular form, there are some options... if the form is a linear combination of basis functions (e.g. f(x) = a + b*cos(x) + c*sqrt(x)) then using least squares can get you the unknown coefficients for the best fit using those basis functions.
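A minimal sketch of that least-squares idea in Python, using the example basis 1, cos(x), sqrt(x) and the data from the question:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0, 9.0])
# One column per basis function: f(x) = a*1 + b*cos(x) + c*sqrt(x)
A = np.column_stack([np.ones_like(x), np.cos(x), np.sqrt(x)])
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b, c)                              # best-fit coefficients for this basis
print(a + b*np.cos(4.0) + c*np.sqrt(4.0))   # prediction at the next input, x = 4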
See also this question.
You can apply statistical methods to try and guess the next answer, but that might not work very well if the function is like this one (in C):
int evil(void) {
    static int e = 0;
    if (50 == e++) {
        e = e * 100;   /* behaves linearly for the first 50 calls, then jumps */
    }
    return e;
}
This function will return nice simple increasing numbers then ... BAM.
That's a hard problem.
You could also look into recurrence relations, which cover the special cases where such a prediction is possible.
