neural network training set - algorithm

My question is about the training set in a supervised artificial neural network (ANN).
The training set, as some of you probably know, consists of pairs (input, desired output).
The training phase itself is the following:
for every pair in the training set
- we input the first value of the pair and calculate the output error, i.e. how far the generated output is from the desired output (the second value of the pair)
- based on that error value we use the backpropagation algorithm to calculate weight gradients and update the weights of the ANN
end for
Now assume that there are pair 1, pair 2, ..., pair m, ... in the training set.
We take pair 1, produce some error, and update the weights; then we take pair 2, etc.
Later we reach pair m, produce some error, and update the weights.
My question is: what if the weight update after pair m cancels out some earlier weight update, or even several of them?
For example, if pair m cancels the weight updates made for pair 1, or pair 2, or both, then although the ANN will produce a reasonable output for input m, it will more or less forget the updates for pairs 1 and 2, and the results for inputs 1 and 2 will be poor.
Then what's the point of training?
Unless we train the ANN with pairs 1 and 2 again, after pair m.

The aim of training a neural network is to end up with weights that give you the desired output for all possible input values. What you're doing here is traversing the error surface as you back-propagate, so that you end up in an area where the error is below the error threshold. Keep in mind that when you backpropagate the error for one set of inputs, it doesn't mean that the neural network automatically recognizes that particular input and immediately produces the exact response when that input is presented again. When you backpropagate, all it means is that you have changed your weights in such a manner that your neural network will get better at recognizing that particular input (that is, the error keeps decreasing).
So if you present pair 1 and then pair 2, it is possible that pair 2 may negate the changes to a certain degree. However, in the long run the neural network's weights will tend towards recognizing all inputs properly. The thing is, you cannot look at the result of a particular training attempt for a particular set of inputs/outputs and be concerned that the changes will be negated. As I mentioned before, when you're training a neural network you are traversing an error surface to find a location where the error is the lowest. Think of it as walking along a landscape that has a bunch of hills and valleys. Imagine that you don't have a map and that you have a special compass that tells you in what direction you need to move, and by what distance. The compass is basically trying to direct you to the lowest point in this landscape. Now this compass doesn't know the landscape well either, and so in trying to send you to the lowest point, it may point you in a slightly wrong direction (i.e., send you some way up a hill), but it will try and correct itself after that. In the long run, you will eventually end up at the lowest point in the landscape (unless you're in a local minimum, i.e., a low point that is not the lowest point).

Whenever you're doing supervised training, you should run several (or even thousands of) rounds through the training dataset. Each such round through the training dataset is called an epoch.
There are also two different ways of updating the parameters of the neural network during supervised training: stochastic training and batch training. Batch training is one loop through the dataset, accumulating the total error over the set, and updating the parameters (weights) only once all the error has been accumulated. Stochastic training is the method you describe, where the weights are adjusted for each (input, desired output) pair.
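As a rough sketch (the helpers backprop, update_weights, zero_gradients and add_gradients are hypothetical placeholders, not a particular library), the two schemes differ only in where the weight update happens:

# Stochastic (online) training: the weights are updated after every single pair.
def stochastic_epoch(net, pairs, learning_rate):
    for x, target in pairs:
        grads = backprop(net, x, target)           # gradients for this one pair
        update_weights(net, grads, learning_rate)  # update immediately

# Batch training: gradients are accumulated over the whole set, one update per epoch.
def batch_epoch(net, pairs, learning_rate):
    total = zero_gradients(net)
    for x, target in pairs:
        total = add_gradients(total, backprop(net, x, target))  # accumulate only
    update_weights(net, total, learning_rate)      # a single update at the end of the loop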
In almost all cases, where the training data set is relatively representative of the general case, you should prefer stochastic training over batch training. Stochastic training beats batch training in 99 of 100 cases! (Citation needed :-)) (Simple XOR training cases and other toy problems are the exceptions.)
Back to your question (which applies to stochastic training): yes, the second pair could indeed adjust the weights in the opposite direction from the first pair. However, it is not very likely that all weights are adjusted in opposite directions for two cases. And since you will run several epochs through the set, the effect diminishes with each epoch. You should also randomize the order of the pairs for each epoch (use some kind of Fisher-Yates shuffle). This diminishes the effect even more.
Next tip: keep a benchmark dataset separate from the training data. After every n epochs of training, benchmark the neural network against this set, i.e. calculate the total error over the pairs in the benchmark dataset. When that error stops decreasing, it's time to stop the training.
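Putting the last two tips together, a minimal training driver could look like the following (Python; random.shuffle is the standard library's Fisher-Yates implementation, while train_one_pair and total_error are hypothetical helpers standing in for your own code):

import random

def train(net, training_pairs, benchmark_pairs, learning_rate, max_epochs):
    best_error = float("inf")
    for epoch in range(max_epochs):
        random.shuffle(training_pairs)                     # new presentation order every epoch
        for x, target in training_pairs:
            train_one_pair(net, x, target, learning_rate)  # stochastic update for this pair
        error = total_error(net, benchmark_pairs)          # total error on the held-out benchmark set
        if error >= best_error:
            break                                          # error stopped decreasing: stop training
        best_error = error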
Good luck!

If you were performing stochastic gradient descent (SGD), then this probably wouldn't happen, because the parameter updates for pair 1 would take effect before the parameter updates for pair 2 were computed. That is why SGD may converge faster.
If you are computing your parameter updates using all your data simultaneously (or even a chunk of it) then these two pairs may cancel each other out. However, that is not a bad thing because, clearly, these two pairs of data points are giving conflicting information. This is why batch backprop is typically considered to be more stable.

Error calculation in backpropagation (gradient descent)

Can someone please give an explanation of the error calculation in backpropagation that is found in many code examples, such as:
error=calculated-target
// then calculate error with respect to each parameter...
Is this the same for squared error and cross-entropy error? How?
Thanks...
I will denote by x an example from the training set, by f(x) the prediction of your network for this particular example, and by g_x the ground truth (label) associated with x.
The short answer is: the root mean squared error (RMSE) is used when you have a network that can exactly, and differentiably, predict the labels that you want. The cross-entropy error is used when your network predicts scores for a set of discrete labels.
To clarify, you usually use the root mean squared error when you want to predict values that can change continuously. Imagine you want your network to predict vectors in R^n. This is the case when, for example, you want to predict surface normals or optical flow. These values change continuously, and ||f(x)-g_x|| is differentiable. You can use backprop and train your network.
Cross-entropy, on the other hand, is useful in classification with discrete labels, for example in image classification. In that case, g_x takes the discrete values c_1, c_2, ..., c_m, where m is the number of classes.
Now, you cannot use RMSE, because if you assume that your network predicts the exact labels (i.e. f(x) in {c_1, ..., c_m}), then ||f(x)-g_x|| is no longer differentiable and you cannot use back-propagation. So, you make a network that does not compute class labels directly, but instead computes a set of scores s_1, ..., s_m, one for each class label. Then you maximize the probability of the correct class by maximizing a softmax function of the scores. This makes the loss function differentiable.
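A small numerical sketch of the two losses (plain NumPy, with made-up values):

import numpy as np

# Regression case: the network predicts a continuous vector, so a squared error is differentiable.
f_x = np.array([0.9, -0.1, 0.3])                 # network prediction f(x)
g_x = np.array([1.0,  0.0, 0.5])                 # ground truth g_x
squared_error = np.sum((f_x - g_x) ** 2)

# Classification case: the network predicts one score per class, not the label itself.
scores = np.array([2.0, 0.5, -1.0])              # s_1, ..., s_m
probs = np.exp(scores) / np.sum(np.exp(scores))  # softmax turns scores into probabilities
true_class = 0                                   # index of the correct label
cross_entropy = -np.log(probs[true_class])       # loss: maximize probability of the correct class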

Exploration Algorithm

Massively edited this question to make it easier to understand.
Given an environment with arbitrary dimensions and arbitrary positioning of an arbitrary number of obstacles, I have an agent exploring the environment with a limited range of sight (obstacles don't block sight). It can move in the four cardinal directions of NSEW, one cell at a time, and the graph is unweighted (each step has a cost of 1). Linked below is a map representing the agent's (yellow guy) current belief of the environment at the instant of planning. Time does not pass in the simulation while the agent is planning.
http://imagizer.imageshack.us/a/img913/9274/qRsazT.jpg
What exploration algorithm can I use to maximise the cost-efficiency of utility, given that revisiting cells is allowed? Each cell holds a utility value. Ideally, I would seek to maximise the sum of the utility of all cells SEEN (not visited) divided by the path length, although if that is too complex for any suitable algorithm then the number of cells seen will suffice. There is a maximum path length, but it is generally in the hundreds or higher. (The actual test environments used on my agent are at least 4x bigger, although theoretically there is no upper bound on the dimensions that can be set, and the maximum path length would thus increase accordingly.)
I consider BFS and DFS to be intractable, A* to be non-optimal given the lack of suitable heuristics, and Dijkstra's inappropriate for generating a single unbroken path. Is there any algorithm you can think of? Also, I need help with loop detection, as I've never done that before; this is my first time allowing revisits.
One approach I have considered is to reduce the map into a spanning tree, except that instead of defining it as a tree that connects all cells, it is defined as a tree that can see all cells. My approach would result in the following:
http://imagizer.imageshack.us/a/img910/3050/HGu40d.jpg
In the resultant tree, the agent can go from a node to any adjacent node that is 0-1 turns away at intersections. This is as far as my thinking has gotten right now. A solution generated using this tree may not be optimal, but it should at least be near-optimal, with far fewer cells being processed by the algorithm, so if that makes the algorithm more likely to be tractable, then I guess that is an acceptable trade-off. I'm still stuck on how exactly to generate a path from this, however.
Your problem is very similar to a canonical Reinforcement Learning (RL) problem, the Grid World. I would formalize it as a standard Markov Decision Process (MDP) and use any RL algorithm to solve it.
The formalization would be:
States s: your NxM discrete grid.
Actions a: UP, DOWN, LEFT, RIGHT.
Reward r: the value of the cells that the agent can see from the destination cell s', i.e. r(s,a,s') = sum(value(seen(s'))).
Transition function: P(s' | s, a) = 1 if s' is not out of bounds and not a black cell, 0 otherwise.
Since you are interested in the average reward, the discount factor is 1 and you have to normalize the cumulative reward by the number of steps. You also said that each step has a cost of one, so you could subtract 1 from the immediate reward r at each time step, but this would not add anything, since you will already average by the number of steps.
Since the problem is discrete the policy could be a simple softmax (or Gibbs) distribution.
As a solving algorithm you can use Q-learning, which guarantees the optimality of the solution provided a sufficient number of samples. However, if your grid is too big (and you said that there is no limit), I would suggest policy search algorithms, like policy gradient or relative entropy policy search (although they guarantee convergence only to local optima). You can find material about Q-learning basically everywhere on the Internet. For a recent survey on policy search, I suggest this.
The cool thing about these approaches is that they encode the exploration in the policy (e.g., the temperature in a softmax policy, the variance in a Gaussian distribution) and will try to maximize the cumulative long-term reward as described by your MDP. So usually you initialize your policy with high exploration (e.g., a completely random policy), and by trial and error the algorithm will make it deterministic and converge to the optimal one (however, sometimes a stochastic policy is also optimal).
The main difference between all the RL algorithms is how they perform the update of the policy at each iteration and manage the exploration-exploitation tradeoff (how much should I explore vs. how much should I exploit the information I already have).
As suggested by Demplo, you could also use Genetic Algorithms (GA), but they are usually slower and require more tuning (elitism, crossover, mutation...).
I have also tried some policy search algorithms on your problem and they seem to work well, although I initialized the grid randomly and do not know the exact optimal solution. If you provide some additional details (a test grid, the maximum number of steps, and whether the initial position is fixed or random) I can test them more precisely.
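For reference, a bare-bones tabular Q-learning update for the grid formalization above might look like the sketch below (Python; the step function, which would wrap your transition and reward logic, is a hypothetical placeholder):

import random
from collections import defaultdict

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT"]

def q_learning(step, start_state, episodes, alpha=0.1, gamma=1.0, epsilon=0.2, max_steps=300):
    Q = defaultdict(float)                          # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        s = start_state
        for _ in range(max_steps):
            if random.random() < epsilon:           # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next, r, done = step(s, a)            # r = sum of values of cells newly seen from s_next
            best_next = max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
            if done:
                break
    return Q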

Neural Network-like Data Structure

So I'm working on a little side-project for the purpose of experimenting with genetic algorithms. The project involves two classes, Critters and Food. The critters gain hunger each tick and lose hunger when they find food. The Critters can move, and the Food is stationary. Each Critter has a genome which is just a string of randomly generated characters. The goal is that after several generations, the Critters will evolve specialized movement patterns that result in optimal food consumption.
Right now, each Critter is governed by a neural network. The neural network is initialized with weights and biases derived from the Critter's genome. The first input into the neural network is [0,0]. The neural network produces two outputs which dictate the direction of the Critter's x and y movement respectively. This output is used as the input for the neural network at the next tick. For example:
1: [0,0]->NN->[.598.., -.234...] // Critter moves right and up
2: [.598...,-.234...]->NN->[-.409...,-.232...] // Critter moves left and up
3: [-.409...,-.232...]->NN-> etc.
The problem is that, regardless of how the weights are initialized, the neural network is finding a sort of "fixed point." That is, after two or three iterations the output and input are practically the same so the Critter always moves in the same direction. Now I'm not training the neural net and I don't really want to. So what I'm looking for is an alternative method of generating the output.
More specifically, let's say I have n random weights generated by the genome. I need a relation determined by those n weights that can map (in the loosest sense of the word) 2 inputs in the range [-1,1] to two outputs in the same range. The main thing is that I want the weights to have a significant impact on the behavior of the function. I don't want it to be something like y=mx+b where we're only changing y and b.
I know that's a pretty vague description. At first I thought the neural network would be perfect, but it seems as though the inputs have virtually no effect on the outputs without training (which is fair, since neural networks are meant to be trained).
Any advice?
Just an idea.
You have f(genome) -> (w_1, w_2, ..., w_n), where f generates w based on the genome.
You could for example use a hash function h and compute [h(w_1, ..., w_(n/2)), h(w_(n/2+1), ..., w_n)].
Normally a hash function should give very distinct outputs for small change in inputs. But not always. You can look for hash functions that are continuous (small change in input, small change in output). This kind of function would be used for similarity search, http://en.wikipedia.org/wiki/Locality_sensitive_hashing might provide some ideas. This way you could actually use the hash directly on the genome.
Otherwise, you could try to split the genome or the weights and give the splits different purposes. Let's say n = 4.
(w_1, w_2) affects x
(w_3, w_4) affects y
You could then compute x as (w_1 + w_2*Random_[-1,1])/2, where Random_[-1,1] is a random number from the interval [-1,1] and assuming that w_i \in [-1,1] for all i.
Similarly for y.
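As a concrete sketch of this n = 4 case (Python; the weights are assumed to already lie in [-1, 1]):

import random

def move(w1, w2, w3, w4):
    # Map four genome-derived weights to an (x, y) step, each component in [-1, 1].
    x = (w1 + w2 * random.uniform(-1, 1)) / 2    # (w_1 + w_2 * Random_[-1,1]) / 2
    y = (w3 + w4 * random.uniform(-1, 1)) / 2    # same construction for y
    return x, y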
Your genetic algorithm would then optimize how fast and how randomly the Critters move in order to optimally find food. If you have more weights (or longer genome), you could try to come up with a fancier function in a similar spirit.
This actually shows that with genetic algorithms the problem solving shifts to finding a good genome representation and a good fitness function, so don't worry if you get stuck on it for a bit.

Question about Backpropagation Algorithm with Artificial Neural Networks -- Order of updating

Hey everyone, I've been trying to get an ANN I coded to work with the backpropagation algorithm. I have read several papers on them, but I'm noticing a few discrepancies.
Here seems to be the super general format of the algorithm:
Give input
Get output
Calculate error
Calculate change in weights
Repeat steps 3 and 4 until we reach the input level
But here's the problem: The weights need to be updated at some point, obviously. However, because we're backpropagating, we need to use the weights of previous layers (ones closer to the output layer, I mean) when calculating the error for layers closer to the input layer. But we already calculated the weight changes for the layers closer to the output layer! So, when we use these weights to calculate the error for layers closer to the input, do we use their old values, or their "updated" values?
In other words, if we were to put the step of updating the weights into my super general algorithm, would it be:
(Updating the weights immediately)
Give input
Get output
Calculate error
Calculate change in weights
Update these weights
Repeat steps 3,4,5 until we reach the input level
OR
(Using the "old" values of the weights)
Give input
Get output
Calculate error
Calculate change in weights
Store these changes in a matrix, but don't change these weights yet
Repeat steps 3,4,5 until we reach the input level
Update the weights all at once using our stored values
In this paper I read, in both abstract examples (the ones based on figures 3.3 and 3.4), they say to use the old values, not to immediately update the values. However, in their "worked example 3.1", they use the new values (even though what they say they're using are the old values) for calculating the error of the hidden layer.
Also, in my book "Introduction to Machine Learning by Ethem Alpaydin", though there is a lot of abstract stuff I don't yet understand, he says "Note that the change in the first-layer weight delta-w_hj, makes use of the second layer weight v_h. Therefore, we should calculate the changes in both layers and update the first-layer weights, making use of the old value of the second-layer weights, then update the second-layer weights."
To be honest, it really seems like they just made a mistake and all the weights are updated simultaneously at the end, but I want to be sure. My ANN is giving me strange results, and I want to be positive that this isn't the cause.
Anyone know?
Thanks!
As far as I know, you should update weights immediately. The purpose of back-propagation is to find weights that minimize the error of the ANN, and it does so by doing a gradient descent. I think the algorithm description in the Wikipedia page is quite good. You may also double-check its implementation in the joone engine.
You are usually backpropagating deltas, not errors. These deltas are calculated from the errors, but they do not mean the same thing. Once you have the deltas for layer n (counting from input to output), you use these deltas and the weights from layer n to calculate the deltas for layer n-1 (one closer to the input). The deltas only have a meaning for the old state of the network, not for the new state, so you should always use the old weights for propagating the deltas back to the input.
Deltas mean in a sense how much each part of the NN has contributed to the error before, not how much it will contribute to the error in the next step (because you do not know the actual error yet).
As with most machine-learning techniques it will probably still work if you use the updated weights, but it might converge more slowly.
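In pseudocode, the "old weights" convention amounts to computing every layer's deltas before any weight is touched (a rough sketch, not a particular library; output_delta, hidden_delta and weight_gradient are hypothetical helpers, and layers is the list of weighted layers from first hidden to output):

def backprop_step(layers, target, learning_rate):
    # 1. Backward pass: compute all deltas using the current (old) weights.
    deltas = [None] * len(layers)
    deltas[-1] = output_delta(layers[-1], target)              # delta at the output layer
    for n in range(len(layers) - 2, -1, -1):
        # the delta for layer n uses layer n+1's deltas and its OLD weights
        deltas[n] = hidden_delta(layers[n], layers[n + 1].weights, deltas[n + 1])

    # 2. Only now apply every layer's weight change computed from its stored delta.
    for layer, delta in zip(layers, deltas):
        layer.weights -= learning_rate * weight_gradient(layer, delta)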
If you simply train it on a single input-output pair my intuition would be to update weights immediately, because the gradient is not constant. But I don't think your book mentions only a single input-output pair. Usually you come up with an ANN because you have many input-output samples from a function you would like to model with the ANN. Thus your loops should repeat from step 1 instead of from step 3.
If we label your two methods as new->online and old->offline, then we have two algorithms.
The online algorithm is good when you don't know how many sample input-output relations you are going to see, and you don't mind some randomness in the way the weights update.
The offline algorithm is good if you want to fit a particular set of data optimally. To avoid overfitting the samples in your data set, you can split it into a training set and a test set. You use the training set to update the weights, and the test set to measure how good a fit you have. When the error on the test set begins to increase, you are done.
Which algorithm is best depends on the purpose of using an ANN. Since you talk about training until you "reach input level", I assume you train until the output exactly matches the target value in the data set. In this case the offline algorithm is what you need. If you were building a backgammon-playing program, the online algorithm would be better because you have an unlimited data set.
In this book, the author talks about how the whole point of the backpropagation algorithm is that it allows you to efficiently compute all the weight updates in one go. In other words, using the "old values" is efficient. Using the new values is more computationally expensive, which is why people use the "old values" to update the weights.

Parameter Tuning for Perceptron Learning Algorithm

I'm having sort of an issue trying to figure out how to tune the parameters for my perceptron algorithm so that it performs relatively well on unseen data.
I've implemented a verified working perceptron algorithm and I'd like to figure out a method by which I can tune the numbers of iterations and the learning rate of the perceptron. These are the two parameters I'm interested in.
I know that the learning rate of the perceptron doesn't affect whether or not the algorithm converges and completes. I'm trying to grasp how to change n. Too high and it'll swing around a lot; too low and it'll take longer.
As for the number of iterations, I'm not entirely sure how to determine an ideal number.
In any case, any help would be appreciated. Thanks.
Start with a small number of iterations (it's actually more conventional to count 'epochs' rather than iterations--'epochs' refers to the number of iterations through the entire data set used to train the network). By 'small' let's say something like 50 epochs. The reason for this is that you want to see how the total error is changing with each additional training cycle (epoch)--hopefully it's going down (more on 'total error' below).
Obviously you are interested in the point (the number of epochs) where the next additional epoch does not cause a further decrease in total error. So begin with a small number of epochs so you can approach that point by increasing the epochs.
The learning rate you begin with should not be too fine or too coarse (obviously that's subjective, but hopefully you have a rough sense for what is a large versus a small learning rate).
Next, insert a few lines of testing code in your perceptron--really just a few well-placed 'print' statements. For each iteration, calculate and show the delta (actual value for each data point in the training data minus predicted value), then sum the individual delta values over all points (data rows) in the training data (I usually take the absolute value of the delta, or you can take the square root of the sum of the squared differences--it doesn't matter too much). Call that summed value "total error"--just to be clear, this is the total error (sum of the error across all nodes) per epoch.
Then, plot the total error as a function of epoch number (i.e., epoch number on the x axis, total error on the y axis). Initially, of course, you'll see the data points in the upper left-hand corner trending down and to the right with a decreasing slope.
Let the algorithm train the network against the training data. Increase the epochs (by, e.g., 10 per run) until you see the curve (total error versus epoch number) flatten--i.e., additional iterations don't cause a decrease in total error.
So the slope of that curve is important and so is its vertical position--i.e., how much total error you have and whether it continues to trend downward with more training cycles (epochs). If, after increasing epochs, you eventually notice an increase in error, start again with a lower learning rate.
The learning rate (usually a fraction between about 0.01 and 0.2) will certainly affect how quickly the network is trained--i.e., it can move you to the local minimum more quickly. It can also cause you to jump over it. So code a loop that trains a network, let's say five separate times, using a fixed number of epochs (and the same starting point) each time but varying the learning rate from, e.g., 0.05 to 0.2, each time increasing the learning rate by 0.05.
One more parameter is important here (though not strictly necessary): 'momentum'. As the name suggests, using a momentum term will help you get an adequately trained network more quickly. In essence, momentum is a multiplier to the learning rate--as long as the error rate is decreasing, the momentum term accelerates the progress. The intuition behind the momentum term is 'as long as you are traveling toward the destination, increase your velocity'. Typical values for the momentum term are 0.1 or 0.2. In the training scheme above, you should probably hold momentum constant while varying the learning rate.
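A rough sketch of that sweep (Python; snapshot, restore, train_one_epoch and total_error are hypothetical helpers standing in for your own perceptron code):

initial_weights = snapshot(network)                  # same starting point for every run
for lr in [0.05, 0.10, 0.15, 0.20]:                  # vary only the learning rate
    restore(network, initial_weights)
    errors = []
    for epoch in range(50):                          # fixed, small number of epochs
        train_one_epoch(network, training_data, lr, momentum=0.1)   # momentum held constant
        errors.append(total_error(network, training_data))          # summed |actual - predicted|
    print(lr, errors)                                # compare the error curves across runs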
About the learning rate not affecting whether or not the perceptron converges - that's not true. If you choose a learning rate that is too high, you will probably get a divergent network. If you change the learning rate during learning, and it drops too fast (i.e., faster than 1/t), you can also get a network that never converges (that's because the sum of n(t) over t from 1 to infinity is finite, which means the weight vector can only change by a finite amount).
Theoretically it can be shown for simple cases that changing n (the learning rate) according to 1/t (where t is the number of presented examples) should work well, but I actually found that in practice, the best way to do this is to find a good high n value (the highest value that doesn't make your learning diverge) and a good low n value (this one is trickier to figure out; it really depends on the data and the problem), and then let n change linearly over time from the high n to the low n.
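That schedule is just a linear interpolation between the two rates over the training run (a minimal sketch; the endpoint values here are made up):

def learning_rate(t, total_steps, n_high=0.5, n_low=0.01):
    # Linearly decay the learning rate from n_high to n_low over total_steps presented examples.
    frac = min(t / float(total_steps), 1.0)
    return n_high + frac * (n_low - n_high)

So learning_rate(0, 1000) returns 0.5, and the rate reaches 0.01 once 1000 examples have been presented.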
The learning rate depends on the typical values of the data; there is no general rule of thumb. Feature scaling is a method used to standardize the range of independent variables or features of the data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.
Normalizing the data to zero mean and unit variance, or to the range 0-1, or to any other standard form, can help in selecting a value for the learning rate. As doug mentioned, a learning rate between 0.05 and 0.2 generally works well.
This will also help the algorithm converge faster.
Source: Juszczak, P.; D. M. J. Tax, and R. P. W. Duin (2002). "Feature scaling in support vector data descriptions". Proc. 8th Annu. Conf. Adv. School Comput. Imaging: 95–10.
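For example, zero-mean/unit-variance standardization (or min-max scaling into [0, 1]) of the input features is only a couple of lines with NumPy (a sketch; X is assumed to be a samples-by-features array with made-up values):

import numpy as np

X = np.array([[150.0, 0.2],
              [200.0, 0.5],
              [120.0, 0.1]])                    # hypothetical raw feature matrix

X_std = (X - X.mean(axis=0)) / X.std(axis=0)                   # zero mean, unit variance per feature
X_01 = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))   # alternative: min-max scaling to [0, 1]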

Resources