float type mismatch #2475
Describe the bug
float32 inputs get transformed internally into float64 inputs, which leads to RuntimeError: Input type (double) and bias type (float) should be the same.
To Reproduce
Replace FastGradientMethod(estimator=classifier, eps=0.2) with HopSkipJump(classifier=classifier).
Expected behavior
The get_started_pytorch.py example should run with the HopSkipJump attack instead of the FastGradientMethod attack.
System information (please complete the following information):
Possible fix
Through trial and error I tried to find the locations where float32 gets transformed into float64. My working solution so far is below, but there may be places I missed, or a better fix altogether:
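The reporter's patch itself was lost in extraction, so here is a minimal standalone sketch of the underlying dtype mismatch and the float32 cast that resolves it. This is not ART's actual code path; the conv layer and input shapes are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

# PyTorch module parameters default to float32.
conv = nn.Conv2d(1, 3, kernel_size=3)

# NumPy arrays default to float64, so a naive round-trip through
# NumPy (as an attack's internals may do) yields a double tensor.
x64 = np.ones((1, 1, 8, 8))

try:
    conv(torch.from_numpy(x64))  # double input vs. float weights/bias
except RuntimeError as err:
    print(type(err).__name__)   # RuntimeError

# Casting back to float32 before handing the array to the model fixes it.
out = conv(torch.from_numpy(x64.astype(np.float32)))
print(out.dtype)  # torch.float32
```

The same principle applies inside the attack: any intermediate NumPy array must be cast back to the estimator's input dtype (ART keeps float32 as its default) before it is converted to a torch tensor.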