I want to train a neural network to approximate the sine() function.
Currently I use this code and the cerebrum gem:
require 'cerebrum'

input = Array.new
300.times do |i|
  inputH = Hash.new
  inputH[:input] = [i]
  sinus = Math::sin(i)
  inputH[:output] = [sinus]
  input.push(inputH)
end

network = Cerebrum.new

network.train(input, {
  error_threshold: 0.00005,
  iterations: 40000,
  log: true,
  log_period: 1000,
  learning_rate: 0.3
})

res = Array.new
300.times do |i|
  result = network.run([i])
  res.push(result[0])
end

puts "#{res}"
But it does not work: when I run the trained network I get weird output values instead of a part of the sine curve.
So, what am I doing wrong?
Cerebrum is a very basic and slow NN implementation. There are better options in Ruby, such as the ruby-fann gem.
Most likely your problem is that the network is too simple. You have not specified any hidden layers; it looks like the code assigns a default hidden layer with 3 neurons in it for your case.
Try something like:
network = Cerebrum.new({
  learning_rate: 0.01,
  momentum: 0.9,
  hidden_layers: [100]
})
and expect it to take forever to train, plus still not be very good.
Also, your sampling is too sparse relative to the sine period: with one point per whole radian, your 300 points cover nearly 48 full periods, so to the network the data will look mostly like noise and it won't interpolate well between points. A neural network does not somehow figure out "oh, that must be a sine wave" and match to it. Instead it interpolates between the points; the clever bit happens when it does so in multiple dimensions at once, perhaps finding structure that you could not spot so easily with a manual inspection. To give it a reasonable chance of learning something, I suggest you give it much denser points, e.g. where you currently have sinus = Math::sin(i), instead use:
sinus = Math::sin(i.to_f/10)
That still covers almost 5 periods of the sine wave, which should hopefully be enough to prove that the network can learn an arbitrary function.
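If you want a quick sanity check of the denser-sampling advice outside of Ruby, here is a minimal sketch of the same experiment in Python using scikit-learn's MLPRegressor. This is not Cerebrum and not the code above; the layer size and solver are assumptions and may need tuning:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Same idea as Math::sin(i.to_f/10): 300 points spanning about 5 periods.
X = (np.arange(300) / 10.0).reshape(-1, 1)
y = np.sin(X).ravel()

# One hidden layer; lbfgs tends to behave well on small, smooth regression problems.
net = MLPRegressor(hidden_layer_sizes=(100,), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
net.fit(X, y)

pred = net.predict(X)
print("max abs error:", np.abs(pred - y).max())

If the fit comes out poor, increasing max_iter or the hidden layer size usually helps; the point is only that densely sampling a few periods is learnable, whereas one sample per radian over ~48 periods generally is not.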
Related
I'm building a Random Forest with the caret package in R with method = "rf". I see that every type of random forest in caret seems to tune only mtry, which is the number of features selected randomly for each tree. I do not understand why max_depth of each tree is not a tunable parameter (as it is for cart)? In my mind, it is a parameter which can limit over-fitting.
For example, my rf does much better on the training data than on the test data:
model <- train(
  group ~ ., data = train.data, method = "rf",
  trControl = trainControl("repeatedcv", number = 5, repeats = 10),
  tuneLength = 5
)
> postResample(fitted(model),train.data$group)
Accuracy Kappa
0.9574592 0.9745841
> postResample(predict(model,test.data),test.data$group)
Accuracy Kappa
0.7333333 0.5428571
As you can see, my model is clearly over-fitted. I have tried a lot of different things to handle this, but nothing worked: I always get something like 0.7 accuracy on the test data and 0.95 on the training data. This is why I want to optimize other parameters.
I cannot share my data to reproduce this.
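To illustrate the point that limiting tree depth can act as a regularizer: in libraries that expose it directly (for example Python's scikit-learn, used below purely as a sketch with synthetic data, not as a fix for the caret workflow above), max_depth can be put on the tuning grid like any other hyperparameter. All names and values here are invented for the example:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data; the real data from the question is not available.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

param_grid = {
    'max_depth': [3, 5, 10, None],   # None = grow trees fully, as "rf" does by default
    'max_features': ['sqrt', 0.5],   # rough analogue of caret's mtry
}

search = GridSearchCV(RandomForestClassifier(n_estimators=300, random_state=0),
                      param_grid=param_grid, cv=5, scoring='accuracy')
search.fit(X, y)
print(search.best_params_, search.best_score_)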
I want to train a model using the TensorFlow Estimator and want to track multiple metrics during training and evaluation. The metrics I want to track are accuracy and mean intersection-over-union (and my loss).
I managed to figure out how to track the accuracy during training:
if mode == tf.estimator.ModeKeys.TRAIN:
    ...
    accuracy = tf.metrics.accuracy(labels=indices_ground_truth, predictions=indices_prediction, name='acc_op')
    tf.summary.scalar('accuracy', accuracy[1])
and evaluation:
if mode == tf.estimator.ModeKeys.EVAL:
    ...
    accuracy = tf.metrics.accuracy(labels=indices_ground_truth, predictions=indices_prediction)
    eval_metric_ops = {'accuracy': accuracy}
    return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=eval_metric_ops)
For evaluation, the mean intersection over union works the same way. So it's actually:
if mode == tf.estimator.ModeKeys.EVAL:
    ...
    miou = tf.metrics.mean_iou(labels=indices_ground_truth, predictions=indices_prediction, num_classes=13)
    accuracy = tf.metrics.accuracy(labels=indices_ground_truth, predictions=indices_prediction)
    eval_metric_ops = {'miou': miou,
                       'accuracy': accuracy}
    return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=eval_metric_ops)
As far as I know, during training I have to log the update operation (the second return value) rather than the metric value itself; otherwise it returns 0 every time. For a single scalar value like the accuracy that works.
But for the miou, the second return value is the update operation of the confusion matrix used to calculate the miou. That is a [numClass, numClass] tensor. If I try to track it like the accuracy with tf.summary.scalar('miou', miou[1]), it crashes because a [numClass, numClass] tensor is not a scalar.
tf.summary.scalar('miou', miou[0]) gives me 0s every time.
So how can I log the miou to the summary during training?
Here is how I calculate the IoU while training:
mIoU, update_op = tf.contrib.metrics.streaming_mean_iou(predict, raw_gt, num_classes=2, weights=None)
tf.summary.scalar('meanIoU', mIoU)

# update_op accumulates the running confusion matrix; run it together with the train op
confusion_matrix, _ = sess.run([update_op, train_op], feed_dict=feed_dict)

# reading mIoU afterwards gives the mean IoU computed from the accumulated confusion matrix
iou = sess.run(mIoU)
print('iou score = {:.3f}, ({:.3f} sec/step)'.format(iou, duration))
You don't need to track the confusion matrix output to track the IoU on TensorBoard. The above works fine for me. I think what you are missing is actually running the tensors in your session: you need to run update_op (e.g. sess.run(update_op)) first, and then read the metric value with sess.run(mIoU).
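If you want to stay inside the Estimator's model_fn instead of running a session yourself, one possible approach is sketched below (TensorFlow 1.x API, not tested against your exact setup; indices_ground_truth, indices_prediction, num_classes=13, loss and train_op are taken from the question's own code, everything else is an assumption). The idea is to summarize the first return value of tf.metrics.mean_iou and make sure its update op runs as part of each training step:

# Inside model_fn, in the TRAIN branch:
miou, miou_update = tf.metrics.mean_iou(labels=indices_ground_truth,
                                        predictions=indices_prediction,
                                        num_classes=13,
                                        name='miou_op')

# The first return value is the scalar mean IoU computed from the accumulated
# confusion matrix, so that is what goes into the summary.
tf.summary.scalar('miou', miou)

# Group the metric's update op with the train op so the confusion matrix is
# updated on every step; otherwise the summary stays at 0.
train_op = tf.group(train_op, miou_update)

return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

This also explains why tf.summary.scalar('miou', miou[0]) stayed at 0 in your attempt: the update op never ran, so the underlying confusion matrix never changed. Note that, because the metric variables accumulate, the logged value is a running mean over training rather than a per-batch score.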
The following is a snippet of code for Ant Colony Optimization. I've removed whatever I felt was not absolutely necessary to understand the code; the rest I'm not sure about, as I'm unfamiliar with coding in MATLAB. I'm running this algorithm on roughly 500 cities with 500 ants and 1000 iterations, and the code runs extremely slowly compared to other algorithm implementations in MATLAB. For the purposes of my project I simply need the resulting data, not to demonstrate MATLAB coding ability, and my time constraints did not allow me to learn MATLAB from scratch, so I took the algorithm from an online source.
MATLAB recommends preallocating two variables inside a loop because, I believe, they are arrays that change size. However, I do not fully understand the purpose of those two parts of the code, so I haven't been able to do so. I believe both arrays gain a new element on every iteration of the loop, so technically they should both be resettable to zero and could be preallocated at their expected final sizes based on the for-loop condition, but I'm not sure. I've tried preallocating zeros to the two arrays, but it does not seem to fix anything, as MATLAB still shows the "preallocate for speed" recommendation.
I've marked the two variables MATLAB recommends preallocating with comments below. If someone would be kind enough to skim over it and let me know whether it is possible, it would be much appreciated.
x = 10*rand(50,1);
y = 10*rand(50,1);
n = numel(x);
D = zeros(n,n);
for i = 1:n-1
    for j = i+1:n
        D(i,j) = sqrt((x(i)-x(j))^2+(y(i)-y(j))^2);
        D(j,i) = D(i,j);
    end
end
model.n = n;
model.x = x;
model.y = y;
model.D = D;
nVar = model.n;
MaxIt = 100;
nAnt = 50;
Q = 1;
tau0 = 10*Q/(nVar*mean(model.D(:)));
alpha = 1;
beta = 5;
rho = 0.6;
eta = 1./model.D;
tau = tau0*ones(nVar,nVar);
BestCost = zeros(MaxIt,1);
empty_ant.Tour = [];
empty_ant.Cost = [];
ant = repmat(empty_ant,nAnt,1);
BestSol.Cost = inf;
for it = 1:MaxIt
    for k = 1:nAnt
        ant(k).Tour = randi([1 nVar]);
        for l = 2:nVar
            i = ant(k).Tour(end);
            P = tau(i,:).^alpha.*eta(i,:).^beta;
            P(ant(k).Tour) = 0;
            P = P/sum(P);
            r = rand;
            C = cumsum(P);
            j = find(r<=C,1,'first');
            ant(k).Tour = [ant(k).Tour j];
        end
        tour = ant(k).Tour;
        n = numel(tour);
        tour = [tour tour(1)]; % MATLAB recommends preallocation here
        ant(k).Cost = 0;
        for i = 1:n
            ant(k).Cost = ant(k).Cost+model.D(tour(i),tour(i+1));
        end
        if ant(k).Cost < BestSol.Cost
            BestSol = ant(k);
        end
    end
    for k = 1:nAnt
        tour = ant(k).Tour;
        tour = [tour tour(1)];
        for l = 1:nVar
            i = tour(l);
            j = tour(l+1);
            tau(i,j) = tau(i,j)+Q/ant(k).Cost;
        end
    end
    tau = (1-rho)*tau;
    BestCost(it) = BestSol.Cost;
    figure(1);
    tour = BestSol.Tour;
    tour = [tour tour(1)]; % MATLAB recommends preallocation here
    plot(model.x(tour),model.y(tour),'g.-');
end
If you change the size of an array, it has to be copied to a new location in memory. This is not a huge problem for small arrays, but for large arrays it slows down your code immensely. The tour arrays you're using have a fixed size (51, or n+1, in this case), so you should preallocate them as zero arrays. The only thing you do afterwards is append the first element of the tour to the end again, so all you have to do is set the last element of the preallocated array.
Here is what you should change:
x = 10*rand(50,1);
y = 10*rand(50,1);
n = numel(x);
D = zeros(n,n);
for i = 1:n-1
    for j = i+1:n
        D(i,j) = sqrt((x(i)-x(j))^2+(y(i)-y(j))^2);
        D(j,i) = D(i,j);
    end
end
model.n = n;
model.x = x;
model.y = y;
model.D = D;
nVar = model.n;
MaxIt = 1000;
nAnt = 50;
Q = 1;
tau0 = 10*Q/(nVar*mean(model.D(:)));
alpha = 1;
beta = 5;
rho = 0.6;
eta = 1./model.D;
tau = tau0*ones(nVar,nVar);
BestCost = zeros(MaxIt,1);
empty_ant.Tour = zeros(n, 1);
empty_ant.Cost = [];
ant = repmat(empty_ant,nAnt,1);
BestSol.Cost = inf;
for it = 1:MaxIt
    for k = 1:nAnt
        ant(k).Tour = randi([1 nVar]);
        for l = 2:nVar
            i = ant(k).Tour(end);
            P = tau(i,:).^alpha.*eta(i,:).^beta;
            P(ant(k).Tour) = 0;
            P = P/sum(P);
            r = rand;
            C = cumsum(P);
            j = find(r<=C,1,'first');
            ant(k).Tour = [ant(k).Tour j];
        end
        n = numel(ant(k).Tour);
        tour = zeros(n+1,1);       % preallocated once at the final size
        tour(1:n) = ant(k).Tour;
        tour(end) = tour(1);       % only the last element needs to be set
        ant(k).Cost = 0;
        for i = 1:n
            ant(k).Cost = ant(k).Cost+model.D(tour(i),tour(i+1));
        end
        if ant(k).Cost < BestSol.Cost
            BestSol = ant(k);
        end
    end
    for k = 1:nAnt
        tour(1:n) = ant(k).Tour;
        tour(end) = tour(1);
        for l = 1:nVar
            i = tour(l);
            j = tour(l+1);
            tau(i,j) = tau(i,j)+Q/ant(k).Cost;
        end
    end
    tau = (1-rho)*tau;
    BestCost(it) = BestSol.Cost;
    figure(1);
    tour(1:n) = BestSol.Tour;
    tour(end) = tour(1);           % only the last element needs to be set
    plot(model.x(tour),model.y(tour),'g.-');
end
I think that the warning that the MATLAB Editor gives in this case is misplaced. The array is not repeatedly resized, it is just resized once. In principle, tour(end+1)=tour(1) is more efficient than tour=[tour,tour(1)], but in this case you might not notice the difference in cost.
If you want to speed up this code you could think of vectorizing some of its loops, and of reducing the number of indexing operations performed. For example this section:
tour = ant(k).Tour;
n = numel(tour);
tour = [tour tour(1)]; % MATLAB recommends preallocation here
ant(k).Cost = 0;
for i = 1:n
    ant(k).Cost = ant(k).Cost+model.D(tour(i),tour(i+1));
end
if ant(k).Cost < BestSol.Cost
    BestSol = ant(k);
end
could be written as:
tour = ant(k).Tour;
ind = sub2ind(size(model.D), tour, circshift(tour,-1));
ant(k).Cost = sum(model.D(ind));
if ant(k).Cost < BestSol.Cost
    BestSol = ant(k);
end
This rewritten code doesn't have a loop, which usually makes things a little faster, and it also doesn't repeatedly do complicated indexing (ant(k).Cost is two indexing operations, within a loop that will slow you down more than necessary).
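The same vectorization idea carries over to other array languages. Purely as an illustration (Python/NumPy here, with made-up data mirroring D and the tour; this is not part of the MATLAB answer), the loop-free tour cost is a single fancy-indexing expression, playing the role of sub2ind plus circshift:

import numpy as np

# Made-up distance matrix and tour; 0-based indices, unlike MATLAB.
rng = np.random.default_rng(0)
pts = rng.random((50, 2)) * 10
D = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
tour = rng.permutation(50)

# Distance from each city to the next, wrapping back to the start,
# gathered in one vectorized indexing operation.
cost = D[tour, np.roll(tour, -1)].sum()
print(cost)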
There are more opportunities for optimization like these, but rewriting the whole function is outside the scope of this answer.
I have not tried running the code, please let me know if there are any errors when using the proposed change.
public BinomialModelPrediction predictBinomial(RowData data) throws PredictException {
    double[] preds = this.preamble(ModelCategory.Binomial, data);
    BinomialModelPrediction p = new BinomialModelPrediction();
    double d = preds[0];
    p.labelIndex = (int) d;
    String[] domainValues = this.m.getDomainValues(this.m.getResponseIdx());
    p.label = domainValues[p.labelIndex];
    p.classProbabilities = new double[this.m.getNumResponseClasses()];
    System.arraycopy(preds, 1, p.classProbabilities, 0, p.classProbabilities.length);
    if (this.m.calibrateClassProbabilities(preds)) {
        p.calibratedClassProbabilities = new double[this.m.getNumResponseClasses()];
        System.arraycopy(preds, 1, p.calibratedClassProbabilities, 0, p.calibratedClassProbabilities.length);
    }
    return p;
}
E.g.: classProbabilities = [0.82333, 0.276666]
labelIndex = 1
label = true
domainValues = [false, true]
What does this labelIndex signify, and is the order of classProbabilities the same as the order of domainValues? If the order is the same, then the probability of false is 0.82333 and the probability of true is 0.27666, so why is labelIndex showing as 1 and the label as true?
Please help me figure this out.
Like Tom commented, the prediction is not "wrong". You can infer from this that the threshold H2O has chosen is less than 0.27666. You probably have imbalanced training data; otherwise H2O would not have picked a low threshold for classifying a predicted value of 0.27666 as a 1. Does your training set include fewer examples of the positive class than the negative class?
If you don't like that threshold for whatever reason, then you can manually create your own. Just make sure you know how to properly evaluate the effect of using different thresholds on the performance of your model, otherwise I'd recommend just using the default threshold.
The name, "classProbabilities" is a misnomer. These are not actual probabilities, they are predicted values, though people often use the terms interchangeably. Binary classification algorithms produce "predicted values" that look like probabilities when they're between 0 and 1, but unless a calibration process is performed, they are not going to represent the probabilities. Calibration is not necessarily a straight-forward process and there are many techniques. Here's some more info about calibration methods for imbalanced data. In H2O, you can perform calibration using Platt scaling using the calibrate_model option. But this is probably not really necessary to what you're trying to do.
The proper way to use the raw output from a binary classification model is to look only at the predicted value for the positive class (you can simply ignore the predicted value for the negative class). Then you choose a threshold which suits your needs, or you can use the default threshold in H2O, which is chosen to maximize the F1 score. Some other software will use a hardcoded threshold of 0.5, but that will be a terrible choice if your training data does not contain roughly equal numbers of positive and negative examples. If you have only a few positive examples in your training data, then the best threshold will be something much lower than 0.5.
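Concretely, thresholding the positive-class value could look like this (a sketch only; the scores and the 0.2 threshold are invented, and in the Java code above the positive-class value would be classProbabilities[1]):

import numpy as np

# Predicted values for the positive class, one per scored row.
p_true = np.array([0.27666, 0.82, 0.05, 0.40])

threshold = 0.2   # e.g. H2O's max-F1 threshold, or one chosen for your own cost trade-off
predicted_label = (p_true >= threshold).astype(int)
print(predicted_label)   # [1 1 0 1]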
So I'm trying to create a little something, and I have looked all over the place for ways of generating a random number. However, no matter where I test my code, it results in a non-random number. Here is an example I wrote up.
local lowdrops = {"Wooden Sword","Wooden Bow","Ion Thruster Machine Gun Blaster"}
local meddrops = {}
local highdrops = {}
function randomLoot(lootCategory)
  if lootCategory == low then
    print(lowdrops[math.random(3)])
  end
  if lootCategory == medium then
  end
  if lootCategory == high then
  end
end
randomLoot(low)
Wherever I test my code I get the same result. For example, when I test the code here http://www.lua.org/cgi-bin/demo it always ends up with the "Ion Thruster Machine Gun Blaster" and doesn't randomize. For that matter, testing simply
random = math.random (10)
print(random)
gives me 9. Is there something I'm missing?
You need to run math.randomseed() once before using math.random(), like this:
math.randomseed(os.time())
One possible problem is that on some platforms the first number generated may not be very "random". A better solution is therefore to discard a few random numbers before using them for real:
math.randomseed(os.time())
math.random(); math.random(); math.random()
Reference: Lua Math Library