Sun 28 Aug 2016 03:03:40 AM UTC, original submission:
The code (and the configuration files) use the names L1 and L2 for error functions: L1 for the Mean Error or ME (the mean of the linear errors) and L2 for the Mean Squared Error or MSE (the mean of the squared errors). These error functions are worthwhile, but they should be called ME and MSE instead.
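For concreteness, here is a minimal Python sketch of the two error functions as just described; the names me() and mse() are illustrative, not taken from the project's code, and "linear error" is taken here to mean the absolute error:

    def me(targets, outputs):
        # Mean Error: the mean of the linear (absolute) errors.
        return sum(abs(t - o) for t, o in zip(targets, outputs)) / len(targets)

    def mse(targets, outputs):
        # Mean Squared Error: the mean of the squared errors.
        return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

    # Example: me([1.0, 0.0], [0.5, 0.5]) -> 0.5; mse([1.0, 0.0], [0.5, 0.5]) -> 0.25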
Everywhere else in the neural network literature, L1 and L2 (and L0) refer to regularization strategies, not error functions. Regularization is applied to try to prevent overfitting. Weight regularization strategies, including clipping, L0, L1, and L2, do this by trying to keep the connection weights from growing too large; in the case of L1 and L2, by adding a penalty term to the training objective.
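As a rough sketch of those penalty terms (not the project's API; lam and the function names are illustrative):

    def l1_penalty(weights, lam):
        # L1 penalty: lam times the sum of the absolute weights.
        return lam * sum(abs(w) for w in weights)

    def l2_penalty(weights, lam):
        # L2 penalty: lam times the sum of the squared weights.
        return lam * sum(w * w for w in weights)

    # total training objective = data error (e.g. MSE) + penalty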
In that convention, L1 subtracts a tiny constant (carrying the sign of the weight) from every weight, and L2 subtracts a tiny fraction of each weight's current value; these are the gradients of the absolute-value and squared penalty terms respectively. L0 penalizes the number of nonzero weights; since it has no usable gradient, it is typically approximated, for example by pruning weights whose magnitude falls below a threshold. The corrections are made after each weight update and should normally be a small fraction of the learning rate as long as the weights stay inside a "reasonable" range.
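A minimal sketch of those per-update corrections, following the usual literature convention; decay and threshold are illustrative parameter names, not options of the project:

    import math

    def l1_step(weights, decay):
        # L1: subtract a tiny constant with the sign of each weight,
        # clamping at zero so small weights do not flip sign.
        return [math.copysign(max(abs(w) - decay, 0.0), w) for w in weights]

    def l2_step(weights, decay):
        # L2: subtract a tiny fraction of each weight's current value.
        return [w - decay * w for w in weights]

    def l0_prune(weights, threshold):
        # One common approximation of L0: zero out weights below a threshold.
        return [0.0 if abs(w) < threshold else w for w in weights]

These corrections are exactly the gradients of the penalty terms sketched earlier, with threshold pruning standing in for the non-differentiable L0 count.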
L1 (and L0 pruning) has the effect of driving the weights of unnecessary connections to exactly zero, which can be very useful in some cases and can serve as a guide to connections that can be eliminated in order to get simpler, faster networks. L2 distributes weight more or less equally among all the weights it can be divided between, and usually makes training more reliable (the sketch below illustrates the contrast). Using L1 and L2 together, as in the elastic net, gives a bit of both.
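A quick, self-contained illustration of that qualitative difference, with illustrative weights and decay values:

    import math

    w_l1 = [0.002, -0.5, 1.0]
    w_l2 = [0.002, -0.5, 1.0]
    for _ in range(100):
        # L1 step: constant-size shrink toward zero, clamped at zero.
        w_l1 = [math.copysign(max(abs(w) - 0.001, 0.0), w) for w in w_l1]
        # L2 step: proportional shrink of every weight.
        w_l2 = [w * (1 - 0.001) for w in w_l2]
    print(w_l1)  # approximately [0.0, -0.4, 0.9]: the tiny weight is driven to exactly zero
    print(w_l2)  # approximately [0.0018, -0.452, 0.905]: everything shrinks, nothing reaches zero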