In your paper you define the backward step as the following:

$$x_k = \frac{\hat{x}_k + \lambda\beta\, b}{1 + \lambda\beta}$$

where $\hat{x}_k$ is the result of the forward step and $b$ is the base instance.

I wonder where this equation comes from. Is there any reference or explanation for it?

In the paper you indicate that this is a proximal update that minimizes the Frobenius distance from the base instance in input space, but as far as I know the Frobenius distance is the following:

$$\lVert X - B\rVert_F = \sqrt{\sum_{i,j} (X_{ij} - B_{ij})^2}$$

So how does your backward step minimize the Frobenius distance?

Thanks!
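For anyone reading along: assuming the backward step quoted above is the one from Algorithm 1 of the paper, it can be read as the exact minimizer of a proximal objective rather than of the Frobenius distance alone. The exact scaling of the objective below is my reconstruction, not a quote from the paper, but it trades off staying close to the forward-step iterate $\hat{x}_k$ against staying close to the base instance $b$ in input space:

$$x_k \;=\; \arg\min_{x}\; \tfrac{1}{2}\lVert x - \hat{x}_k\rVert_F^2 \;+\; \tfrac{\lambda\beta}{2}\lVert x - b\rVert_F^2$$

Setting the gradient with respect to $x$ to zero gives

$$(x - \hat{x}_k) + \lambda\beta\,(x - b) = 0 \quad\Longrightarrow\quad x_k = \frac{\hat{x}_k + \lambda\beta\, b}{1 + \lambda\beta},$$

which matches the backward step. So the update does not drive the Frobenius distance to $b$ to zero by itself; it is the closed-form solution of a proximal problem that also keeps $x$ near the forward-step result, with $\lambda\beta$ controlling how strongly the poison is pulled back toward the base instance.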
From the link above I couldn't find any equation similar to yours in terms of minimizing the Frobenius distance.

Also, if I understood correctly, for the poisoning attacks on transfer learning, your input space is the feature representation from Inception-v3 without the last fully connected layer?

So you're actually minimizing the distance between the poison instance and the base instance at the output of the feature representation of Inception-v3 without the last fully connected layer.
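To make the forward/backward split concrete, here is a minimal NumPy sketch of the iteration as I understand it. It is illustrative only, not the authors' code: the linear map `f(x) = W @ x` stands in for the Inception-v3 feature extractor, and `lam`, `beta`, and the dimensions are arbitrary. The forward step is a gradient step on the feature-space collision loss, and the backward step is the closed-form proximal pull toward the base instance in input space.

```python
import numpy as np

# Illustrative sketch of forward/backward poison crafting (linear stand-in features).
rng = np.random.default_rng(0)
d_in, d_feat = 32, 8
W = rng.standard_normal((d_feat, d_in))      # stand-in feature extractor f(x) = W @ x

base = rng.standard_normal(d_in)             # base instance b (input space)
target_feat = W @ rng.standard_normal(d_in)  # f(t): target's feature representation

lam, beta = 0.01, 0.25                       # step size and input-space penalty weight
x = base.copy()                              # start the poison at the base instance

for _ in range(200):
    # Forward step: gradient descent on ||f(x) - f(t)||^2 (feature collision loss).
    grad = 2.0 * W.T @ (W @ x - target_feat)
    x_hat = x - lam * grad

    # Backward step: closed-form proximal update toward the base in input space,
    # i.e. the minimizer of 0.5*||x - x_hat||^2 + 0.5*lam*beta*||x - b||^2.
    x = (x_hat + lam * beta * base) / (1.0 + lam * beta)

print("feature-space distance to target:", np.linalg.norm(W @ x - target_feat))
print("input-space distance to base:   ", np.linalg.norm(x - base))
```

Running it shows the intended behaviour of the two steps: the feature-space distance to the target shrinks while the poison stays close to the base instance in input space.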
Hi, thanks for sharing this solution @mahyarnajibi @ashafahi!