Bias terms are also regularized. #11
Comments
@zgrkpnr the library doesn't have tests yet. Could you write a bit of code, along with expected and actual results, showing the problem with regularization involving bias terms? It will then be put under unit test.
@smoothdeveloper Are you asking for code for a fix, or do you want me to show that regularizing bias terms causes weak performance? If the latter, it is not necessary. Perceptron theory suggests that bias terms don't depend on input values and have nothing to do with overfitting; therefore, they are excluded from regularization. Moreover, regularization is done in order to obtain a "low bias" model.
@zgrkpnr I was looking for a complete code sample exercising regularization, including bias terms, to look at. From what you describe, the outcome of the test should be identical results, but obtained faster. When a bug is found in a library we tend to write a "non-regression test" for it; tests are run any time a change is made to the library, to make sure we don't reintroduce known issues. With such a sample I could write a test which shows that the implementation is failing; when the implementation gets fixed, the test would pass, adding some safety for later releases.
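Purely as an illustration of the shape such a non-regression test could take (this is not the library's API: `fit_ridge` is a hypothetical stand-in for its regularized trainer, and ridge regression is used only because the expected intercept is easy to reason about), a sketch in Python/NumPy:

```python
import numpy as np

def test_bias_term_is_not_penalized(fit_ridge, lam=100.0):
    """Sketch of a non-regression test for this issue.

    `fit_ridge(X, y, lam)` is assumed to return (weights, bias).
    With zero-mean features and a constant target of 10, a correct
    implementation recovers bias ~= 10 for any penalty strength,
    while an implementation that also penalizes the bias shrinks it
    toward roughly n * 10 / (n + lam), i.e. about 6.7 here.
    """
    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(200, 3))
    X -= X.mean(axis=0)            # centered features carry no signal
    y = np.full(200, 10.0)         # the target is pure intercept

    _, bias = fit_ridge(X, y, lam)
    assert abs(bias - 10.0) < 0.1  # fails if the bias was regularized
```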
Original issue description: This regularization method penalizes bias terms as well. Bias terms should be excluded from being penalized.
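As an editorial aside (not the library's actual code), a minimal NumPy sketch of what excluding the bias from the penalty looks like for an L2-regularized logistic-regression gradient step: the penalty gradient `lam * w` is added to the weight update only, while the bias update uses the data gradient alone.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_step(w, b, X, y, lr=0.1, lam=0.01):
    """One gradient step for L2-regularized logistic regression.

    The penalty (lam / 2) * ||w||^2 covers the weights only; the bias b
    receives the plain data gradient and is never shrunk toward zero.
    """
    n = X.shape[0]
    p = sigmoid(X @ w + b)              # predicted probabilities
    err = p - y                         # residuals
    grad_w = X.T @ err / n + lam * w    # data gradient + penalty gradient
    grad_b = err.mean()                 # data gradient only, no penalty
    return w - lr * grad_w, b - lr * grad_b
```

The same exclusion carries over to the perceptron update mentioned above: the regularizer's shrinkage term simply never touches the bias.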