Gneural Network - Bugs: bug #48927, Terminology issues. L1/L2 regularization vs ME and MSE error functions

 
 

bug #48927: Terminology issues. L1/L2 regularization vs ME and MSE error functions

Submitted by:  Ray Dillinger <rayd>
Submitted on:  Sun 28 Aug 2016 03:03:40 AM UTC  
 
Category: None
Severity: 2 - Minor
Item Group: None
Status: Fixed
Privacy: Public
Assigned to: Ray Dillinger <rayd>
Open/Closed: Closed


Fri 23 Sep 2016 03:51:55 AM UTC, comment #2:

Fixed.

NOTE: THIS IS A BACKWARD-INCOMPATIBLE CHANGE! EXISTING CONFIGURATION SCRIPTS WILL NOT WORK WITH THE MODIFIED CODEBASE.

In standardizing the terminology, I changed the language accepted by the parser and produced by the save function.

The test input files have been updated to match the new language.

Ray Dillinger <rayd>
Project Member, in charge of this item.
Tue 30 Aug 2016 04:08:40 AM UTC, comment #1:

Additionally noting that the code refers to accumulator functions as 'discriminant' functions. 'Discriminant' functions are a type of fitness function that does not use test cases, which is important for reinforcement learning or open-ended evolution. We'll have to add them at some point, and we don't want to confuse things by having something else called by that name.

Also, the current terminology has 'points' instead of 'testing cases' and 'training cases.' There's no use in nonstandard terminology when doing a standard thing.

Ray Dillinger <rayd>
Project Member, in charge of this item.
Sun 28 Aug 2016 03:03:40 AM UTC, original submission:

The code (and the configuration files) use 'L1' and 'L2' as names for error functions, referring to the Mean Error or ME (sum of linear errors) and the Mean Squared Error or MSE (sum of squared errors) respectively. These error functions are worthwhile, but they should be called ME and MSE instead.
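For reference, a minimal C sketch of the two error functions as described, taking 'linear error' to mean the absolute difference; the function names and signatures here are illustrative, not the project's actual API:

    #include <math.h>
    #include <stddef.h>

    /* ME: mean of the linear (absolute) errors over n cases. */
    double mean_error(const double *output, const double *target, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += fabs(output[i] - target[i]);
        return sum / n;
    }

    /* MSE: mean of the squared errors over n cases. */
    double mean_squared_error(const double *output, const double *target, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            double e = output[i] - target[i];
            sum += e * e;
        }
        return sum / n;
    }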

Everywhere else in the neural network literature, L1 and L2 (and L0) refer to update regularization strategies, not error functions. Regularization strategies are applied to try to prevent overfitting. Update regularization strategies, including Clipping, L0, L1, and L2, do this by keeping the connection weights from growing too large.

L0 subtracts a tiny constant from each weight's magnitude. L1 subtracts a tiny fraction of each weight's current value. L2 subtracts a tiny fraction of the square of each weight's value. These subtractions are made after each weight update, and they should normally amount to a small fraction of the learning rate as long as the weights are inside a "reasonable" range.
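Concretely, here is a minimal C sketch of those three post-update adjustments. The mode names and 'decay' parameter are hypothetical, not existing configuration options; the sign handling just keeps each adjustment pointed toward zero, per the descriptions above:

    #include <stddef.h>

    enum reg_mode { REG_L0, REG_L1, REG_L2 };

    /* Shrink every weight toward zero after a weight update.  'decay'
       should be a small fraction of the learning rate. */
    void regularize_weights(double *w, size_t n, enum reg_mode mode, double decay)
    {
        for (size_t i = 0; i < n; i++) {
            double sign = (w[i] >= 0.0) ? 1.0 : -1.0;
            switch (mode) {
            case REG_L0: /* subtract a tiny constant from the magnitude */
                if (sign * w[i] <= decay)
                    w[i] = 0.0;              /* don't overshoot past zero */
                else
                    w[i] -= sign * decay;
                break;
            case REG_L1: /* subtract a tiny fraction of the current value */
                w[i] -= decay * w[i];
                break;
            case REG_L2: /* subtract a tiny fraction of the squared value */
                w[i] -= sign * decay * w[i] * w[i];
                break;
            }
        }
    }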

L0 has the effect of driving the weights of unnecessary connections to zero, which can be very useful in some cases: it serves as a guide to finding connections that can be eliminated to get simpler, faster networks. L2 distributes weight more or less equally among all the weights it can be divided between, and usually makes training more reliable. L1, as you'd expect, does a bit of both.
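As an illustration of that pruning use of L0, a hypothetical helper (not existing code) that counts the connections whose weights have been driven to effectively zero:

    #include <math.h>
    #include <stddef.h>

    /* Count weights that L0 regularization has driven to (effectively)
       zero; these connections are candidates for elimination. */
    size_t count_prunable(const double *w, size_t n, double eps)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            if (fabs(w[i]) < eps)
                count++;
        return count;
    }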

Ray Dillinger <rayd>
Project Member, in charge of this item.

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by rayd (Submitted the item)
  • -unavailable- added by rayd

    Follow 4 latest changes.

    Date                             Changed By  Updated Field  Previous Value => Replaced By
    Fri 23 Sep 2016 04:22:03 AM UTC  rayd        Status         None => Fixed
    Fri 23 Sep 2016 03:51:55 AM UTC  rayd        Assigned to    None => rayd
                                                 Open/Closed    Open => Closed
    Sun 28 Aug 2016 03:03:40 AM UTC  rayd        Carbon-Copy    - => Added -unavailable-
