ENH, TST: Add rtol to classifier and a test (#44)
Conversation
I've edited the original comment to note which issue this PR is related to. A short description that references the connected issue is usually helpful. I'll review this now.
tylerjereddy left a comment:
This looks great, thanks @nray. I added a few minor comments you could do in a follow-up, but not worth blocking on.
@eiviani-lanl the regression test and source changes Navamita added here can be used with only minor modification for your classifier with rtol. That should make it pretty easy for you over on the sklearn side. For the regressor, Navamita merged my equivalent changes recently, so you can pull those in as well if you haven't already.
    def test_rtol_classifier(reg_alpha, rtol, expected_acc, expected_roc):
        # For Moore-Penrose, a large singular value cutoff (rtol)
        # may be required to achieve reasonable results. This test
        # showcases that a default low cut of leads to almost random classification
    -    # showcases that a default low cut of leads to almost random classification
    +    # showcases that a default low cut off leads to almost random classification
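For context, here is a minimal sketch of why the relative singular-value cutoff matters for Moore-Penrose. It uses NumPy's `np.linalg.pinv` directly rather than the PR's estimator, and the matrix below is a made-up illustration, not the PR's test data:

```python
import numpy as np

# Build an ill-conditioned matrix whose smallest singular value is
# effectively numerical noise (1e-12 next to singular values near 1).
rng_u = np.random.default_rng(0)
rng_v = np.random.default_rng(1)
U, _ = np.linalg.qr(rng_u.normal(size=(4, 4)))
V, _ = np.linalg.qr(rng_v.normal(size=(4, 4)))
s = np.array([1.0, 0.5, 0.1, 1e-12])
A = U @ np.diag(s) @ V.T

# With the default tiny cutoff, the noisy singular value is inverted,
# so the pseudoinverse blows up (norm on the order of 1/1e-12).
pinv_default = np.linalg.pinv(A)  # rcond defaults to 1e-15

# A larger relative cutoff discards the noisy direction, giving a
# well-behaved pseudoinverse (norm on the order of 1/0.1).
pinv_cut = np.linalg.pinv(A, rcond=1e-6)

print(np.linalg.norm(pinv_default))
print(np.linalg.norm(pinv_cut))
```

The same effect is what the regression test above exercises: with too low a cutoff, the fitted weights are dominated by noise and classification is near random.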
    X_test_s = scaler.transform(X_test)

    activation = "softmax"
    weight_scheme = "normal"
Might as well just specify the arguments directly in the call to the estimator below, since they're not used for anything else.
Since that's minor, I'll let you do that in a follow-up.
Related to gh-12.
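A minimal sketch of the suggested cleanup. `DummyClassifier` is a hypothetical stand-in for the PR's estimator (its real name is not shown in this diff):

```python
# Hypothetical stand-in for the estimator under test.
class DummyClassifier:
    def __init__(self, activation, weight_scheme):
        self.activation = activation
        self.weight_scheme = weight_scheme

# Before (as in the diff): one-off locals bound first, then passed through.
activation = "softmax"
weight_scheme = "normal"
clf_before = DummyClassifier(activation=activation, weight_scheme=weight_scheme)

# After (reviewer's suggestion): pass the literals directly at the call site,
# since they are not used anywhere else.
clf_after = DummyClassifier(activation="softmax", weight_scheme="normal")
```

Inlining the arguments keeps the test shorter and makes it obvious at a glance which configuration the estimator is exercised with.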