
how does the backward step work? #4

Open
i-chaochen opened this issue Jan 5, 2020 · 1 comment

Comments


i-chaochen commented Jan 5, 2020

Hi, thanks for sharing this solution, @mahyarnajibi @ashafahi.

In your paper you define the backward step as follows:

[Screenshot of the backward step from the paper: x_i = (x̂_i + λβ·b) / (1 + λβ)]

I wonder where this equation comes from. Is there any reference or explanation for it?
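
Here is my own attempt to reverse-engineer it as a proximal step (my derivation, not from the paper, so the factor-of-2 convention may differ from yours):

```latex
% My attempt (not from the paper): treat the backward step as the proximal
% operator of the Frobenius penalty toward the base instance b, applied to
% the forward-step result \hat{x}_i:
x_i = \arg\min_{x} \; \tfrac{1}{2}\,\lVert x - \hat{x}_i \rVert_F^2
      + \lambda\beta\,\lVert x - b \rVert_F^2
% Setting the gradient to zero:
%   (x - \hat{x}_i) + 2\lambda\beta\,(x - b) = 0
%   \Rightarrow \; x_i = \frac{\hat{x}_i + 2\lambda\beta\, b}{1 + 2\lambda\beta}
% i.e. your closed form, up to where the factor of 2 is absorbed into \beta.
```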

In the paper you indicate that this is a proximal update that minimizes the Frobenius distance from the base instance in input space, but as far as I know the Frobenius distance is the following:

[Image: the Frobenius distance ‖A − B‖_F = √( Σ_{i,j} (a_ij − b_ij)² )]
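
(To be concrete, this is the quantity I mean; a quick NumPy check of the definition:)

```python
import numpy as np

# The Frobenius distance written out element-wise vs. NumPy's built-in.
A = np.arange(6.0).reshape(2, 3)
B = np.ones((2, 3))
manual = np.sqrt(((A - B) ** 2).sum())
print(np.isclose(manual, np.linalg.norm(A - B, "fro")))  # True
```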

So how does your backward step minimize the Frobenius distance?
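
Trying to answer my own question numerically: the closed form does look like the minimizer of the combined objective from my derivation above (my own sketch with made-up values, not your code):

```python
import numpy as np

# Check that the closed-form backward step is the minimizer of
#   g(x) = 0.5 * ||x - x_hat||_F^2 + lam * beta * ||x - b||_F^2
rng = np.random.default_rng(0)
x_hat = rng.normal(size=(3, 3))  # result of the forward (gradient) step
b = rng.normal(size=(3, 3))      # base instance
lam, beta = 0.1, 0.25

# Closed-form proximal update (with the factor-of-2 convention above).
x_closed = (x_hat + 2 * lam * beta * b) / (1 + 2 * lam * beta)

# Brute force: plain gradient descent on g(x) should land in the same place.
x = x_hat.copy()
for _ in range(10_000):
    grad = (x - x_hat) + 2 * lam * beta * (x - b)
    x = x - 0.01 * grad

print(np.allclose(x, x_closed))  # True
```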

Thanks!


i-chaochen commented Jan 5, 2020

https://math.stackexchange.com/questions/946911/minimize-the-frobenius-norm-of-the-difference-of-two-matrices-with-respect-to-ma

From the above link I couldn't find any equation similar to yours for minimizing the Frobenius distance.

Also, if I understood correctly, for the poisoning attacks on transfer learning, your input space is the feature representation from Inception-v3 without the last fully connected layer?

So you're actually minimising the distance between the poison instance and the base instance on the output of the Inception-v3 feature representation (without the last fully connected layer).
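
To make sure I'm reading the algorithm right, this is roughly the loop I have in mind (my own sketch with hypothetical names `craft_poison` and `grad_feature_loss`, not your implementation):

```python
def craft_poison(b, t, grad_feature_loss, lam=0.01, beta=0.25, iters=1000):
    """My sketch of the forward-backward iteration as I understand it
    (hypothetical names, not the authors' code).

    b: base instance, t: target instance,
    grad_feature_loss(x, t): gradient w.r.t. x of ||f(x) - f(t)||^2, where
    f is the feature extractor (Inception-v3 without the last FC layer).
    """
    x = b.copy()
    for _ in range(iters):
        # Forward step: gradient descent on the feature-space distance.
        x_hat = x - lam * grad_feature_loss(x, t)
        # Backward step: proximal update toward the base b in input space,
        # using the closed form from the paper.
        x = (x_hat + lam * beta * b) / (1 + lam * beta)
    return x
```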
