
[FIX + Enhancement] FGM and PGD: fix L1 and extend to Lp #2382

Merged
merged 38 commits into Trusted-AI:dev_1.18.0 on Jun 4, 2024

Conversation

@eliegoudout (Contributor) commented Jan 9, 2024

Summary for review

Object of the PR

It corrects the $L^1$ extension of the FGM evasion attack (and PGD) and properly extends them to $L^p$ for any $p\geqslant 1$. The conversation started here, but essentially, I fix the current extension of FGM and PGD to $L^1$ and propose a proper extension to $L^p$ ($p\geqslant 1$), where the added noise is defined by
$$\text{noise direction}:=\left(\frac{\vert\nabla\vert}{\Vert\nabla\Vert_q}\right)^{q/p}\text{sign}(\nabla)$$
with $\frac{1}{p}+\frac{1}{q} = 1$ (and some abuse of notation when $p=1$ or $p=+\infty$). This is optimal since it maximizes $\langle\nabla,\text{noise direction}\rangle$ subject to the constraint $\Vert\text{noise direction}\Vert_p = 1$.
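As a quick check, writing $d$ for the noise direction and using $q/p = q-1$:
$$\Vert d\Vert_p^p=\sum_i\left(\frac{\vert\nabla_i\vert}{\Vert\nabla\Vert_q}\right)^{q}=1,\qquad\langle\nabla,d\rangle=\frac{\sum_i\vert\nabla_i\vert^{1+q/p}}{\Vert\nabla\Vert_q^{q/p}}=\frac{\Vert\nabla\Vert_q^{q}}{\Vert\nabla\Vert_q^{q-1}}=\Vert\nabla\Vert_q,$$
which is exactly the upper bound given by Hölder's inequality over directions of unit $L^p$ norm.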

Previously, for $p=1$, the package used $\text{noise direction} = \frac{\vert\nabla\vert}{\Vert\nabla\Vert_1}$ instead of the optimal $\text{noise direction}=(0,…,0,\text{sign}(\nabla_i),0,…,0),(i=\text{argmax}_j|\nabla_j|)$.
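For illustration, here is a minimal NumPy sketch of this noise direction for a single flattened gradient (a hypothetical helper, not ART's actual implementation, which applies the same idea batch-wise in each framework):

import numpy as np

def lp_noise_direction(grad, p, tol=1e-8):
    """Direction of unit L^p norm maximizing the inner product with `grad` (illustrative sketch)."""
    abs_grad = np.abs(grad)
    if p == 1:
        # Put all the mass on one coefficient of largest magnitude.
        direction = np.zeros_like(grad)
        i = np.argmax(abs_grad)
        direction[i] = np.sign(grad[i])
        return direction
    if np.isinf(p):
        # q = 1: the classic FGSM sign direction.
        return np.sign(grad)
    q = p / (p - 1)  # conjugate exponent, 1/p + 1/q = 1
    q_norm = np.linalg.norm(grad, ord=q)
    if q_norm < tol:
        # Zero gradient: no useful direction.
        return np.zeros_like(grad)
    return np.sign(grad) * (abs_grad / q_norm) ** (q / p)

With $p=2$ this reduces to $\nabla/\Vert\nabla\Vert_2$ and with $p=+\infty$ to $\text{sign}(\nabla)$; in the $p=1$ case, when several coefficients tie for the largest magnitude, this sketch keeps a single one (whichever argmax returns first), matching the numpy/pytorch behaviour described in point 5 below.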

What's changed?

  1. FastGradientMethod and ProjectedGradientDescent now allow norm to be any $p\geqslant 1$, instead of only $1, 2$ or $+\infty$,
  2. In both cases, the $p=1$ case is corrected and should provide better results (if anyone cares to make a comparison, that would be nice),
  3. Since PGD resorts to projection to ensure that $\Vert noise\Vert_p\leqslant\varepsilon$, the related projection/_projection functions had to be updated:
    3.a. They now work for any $p\geqslant 1$ (in fact $p>0$),
    3.b. When $p\neq+\infty$, the projection is simply made by rescaling the vector, which is suboptimal when $p\neq 2$ (see the rescaling sketch after this list). This behaviour was already here when $p=1$.
    3.c. The suboptimal behaviour is made explicit via the addition of the keyword suboptimal=True (by default) and in the __doc__.
    3.d. An "optimal" projection is more computationally intensive to find (via an iterative method) and was not added in this PR (future work?).
    3.e. An optimal implementation was already in the files for $p=1$, so I plugged it into projection from art.utils.py, but it remains inaccessible from the attack classes (no control over the suboptimal keyword).
    3.f. It is now possible to pass a different radius of projection sample-wise (see doc for the way to do it),
    3.g. For a given sample, the special case of imposing feature-wise bounds when $p=+\infty$ was already implemented, i.e. $\vert x_i\vert\leqslant\varepsilon_i$ for every feature $i$. This can be generalized to any $L^p$ norm by working with the condition $\Vert x\Vert_{p, 1/\varepsilon}\leqslant 1$ where $\Vert x\Vert_{p, w}:=\left(\sum_{i}w_i\vert x_i\vert^p\right)^{1/p}$. This extension is also left for future work.
  4. I added a unit test for FGM with $p=10$.
  5. One point of attention: when $p=1$, the implementation I used for $\text{noise direction}$ in numpy and pytorch finds the highest coefficient (in absolute value) in the gradients, sets it to $\pm 1$ and puts all the others to $0$. I had trouble doing this in tensorflow, so I used an approach I believe is slower, which, in the end, splits the $\pm 1$ across all highest coefficients (in absolute value). As a result, when several coefficients of the gradients are maximal in absolute value, the tensorflow implementation gives a different noise direction (but with the same value of $\langle\nabla,\text{noise direction}\rangle$) than the numpy or pytorch implementations. I don't know if this is a problem or not.
  6. Found and hotfixed (7eb30c4) a Momentum Iterative Method computation bug for the NumPy and PyTorch frameworks. TensorFlow is disabled (5c74f1e).
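As referenced in 3.b above, here is a rough NumPy sketch of what the suboptimal projection by rescaling amounts to (a hypothetical helper, not ART's projection signature; the $p=+\infty$ case uses exact element-wise clipping as before):

import numpy as np

def project_by_rescaling(values, eps, p):
    """Shrink each sample onto the L^p ball of radius `eps` (illustrative sketch).

    Rescaling is the exact Euclidean projection only for p = 2; for other finite p
    it is merely a feasible (suboptimal) projection. For p = inf, element-wise
    clipping is exact.
    """
    if np.isinf(p):
        return np.clip(values, -eps, eps)
    flat = values.reshape(len(values), -1)
    norms = np.linalg.norm(flat, ord=p, axis=1, keepdims=True)
    factors = np.minimum(1.0, eps / np.maximum(norms, 1e-12))
    return (flat * factors).reshape(values.shape)

An exact projection for general $p$ would require an iterative solver, which is the future work mentioned in 3.d.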

Please give your feedback

I will be glad to hear what you think. Specifically, I'm not at all an expert in tensorflow and pytorch, so I'm not really sure about the potential type cast issues. I know I had a bit of trouble debugging the code in the first place.

Also, I think there may be potential for improved documentation or unit tests, but I would like to hear what you have to say about this.

All the best!
Élie


Archive

The conversation started here, after noticing that the previous FGM extension to the $L^1$ norm was not optimal and that other $L^p$ norms were not supported.

Tasks:

  • identify changes to make
  • Implement FGM changes
  • Extend projection from art.utils.py. For this:
    • Add suboptimal keyword to allow a first approach for any $p > 0$. Default value is True, which corresponds to the current implementation
    • Test the implemented projection_l1_1 and projection_l1_2 and incorporate them when suboptimal=False for $p=1$.
    • I renamed test_projection to test_projection_norm because it's the only thing it does.
    • Cases $0 < p < 1$ and $1 < p < +\infty$ remain NotImplemented when suboptimal=False for now.
  • Pass FGM tests
  • Implement PGD changes
    • _compute_perturbation changes: $L^1$ fix + general $p$ extension
    • _projection changes: Follow art.utils.projection mods
  • Pass PGD tests
  • Fix Momentum Iterative Method: wrong momentum update.

Edit: As I pointed out, it would be better (imo) to entirely generalize to any $p\in[1, +\infty]$ with
$$\text{noise direction}:=\left(\frac{\vert\nabla\vert}{\Vert\nabla\Vert_q}\right)^{q/p}\text{sign}(\nabla)$$
and the appropriate abuses of notation, which is more natural.


Closes #2381.

@beat-buesser beat-buesser changed the base branch from main to dev_1.18.0 January 10, 2024 21:07
@codecov-commenter commented Jan 10, 2024

Codecov Report

Attention: Patch coverage is 86.60714% with 15 lines in your changes missing coverage. Please review.

Project coverage is 85.66%. Comparing base (5da8bcb) to head (435e6b3).

Additional details and impacted files


@@              Coverage Diff               @@
##           dev_1.18.0    #2382      +/-   ##
==============================================
+ Coverage       85.53%   85.66%   +0.12%     
==============================================
  Files             327      327              
  Lines           29936    29954      +18     
  Branches         5546     5540       -6     
==============================================
+ Hits            25607    25661      +54     
+ Misses           2903     2866      -37     
- Partials         1426     1427       +1     
Files Coverage Δ
...ted_gradient_descent/projected_gradient_descent.py 97.33% <100.00%> (+0.03%) ⬆️
...adient_descent/projected_gradient_descent_numpy.py 88.04% <ø> (ø)
art/attacks/evasion/fast_gradient.py 86.01% <95.00%> (+1.96%) ⬆️
art/utils.py 79.91% <89.47%> (+5.94%) ⬆️
...ient_descent/projected_gradient_descent_pytorch.py 95.86% <91.66%> (+2.38%) ⬆️
...escent/projected_gradient_descent_tensorflow_v2.py 90.57% <73.52%> (-5.64%) ⬇️

... and 5 files with indirect coverage changes

@beat-buesser beat-buesser self-assigned this Jan 11, 2024
@eliegoudout (Contributor Author) commented Jan 14, 2024

I already extended art/attacks/evasion/fast_gradient.py to every $p\geqslant 1$ and changed the $L^1$ test accordingly. But when trying to add another ($p=10$) test, I noticed that FGM uses a projection function.

Can anyone tell me why that is? There should be no projection in vanilla FGM. I would understand for PGD, but it already makes the FGM tests fail.

Thanks!

@beat-buesser (Collaborator) commented Jan 15, 2024

Hi @eliegoudout I think for FGSM the projection to eps should not have any effect if the perturbation is correctly calculated to be of size eps_step. For PGD, the projection will make sure the perturbation stays within eps. Would it be possible to update the projection function to support all p, to also enable all norms in PGD?

@eliegoudout (Contributor Author) commented Jan 15, 2024

I agree that projection shouldn't have any impact but the function is still called, which makes tests fail as long as projection is not extended.

I thought about extending projection, and as it happens, it's a non-trivial problem; the only open-source implementation I found has been stale for 2 years, but it might still be very good. I have not had time to try it yet, but it's on my to-do list.

In any case, it will sadly add a bit of complexity to the projection part and to ART. Is that a problem?

@eliegoudout eliegoudout changed the title [FIX/Enhancement] L1 extension for $FGM$ evasion attack [FIX/Enhancement] Fix L1 and add Lp extension for FGM evasion attack (and potentially PGD) Jan 17, 2024
@eliegoudout (Contributor Author)

Hi,

  • FGM is now fully and properly extended to $1\leqslant p\leqslant +\infty$.
  • Related tests pass (and I added one for $p=10$ for future reference).
  • projection now supports all $p > 0$ in suboptimal mode (i.e. the previous mode, enabled by default with the keyword suboptimal)
  • It has yet to be implemented for $0 < p < 1$ or $1 < p < +\infty$ when suboptimal=False
  • The implementation I linked above for this seems to be completely off. Maybe I broke something, but from what I could try it doesn't work at all.
  • I updated tasks list.
  • I think it is worth mentioning that in the current release, not only is PGD for $L^1$ based on a wrong noise update (which I now fix), but it also uses a simple rescaling for the $L^1$ projection, which is suboptimal even though projection_l1_1/2 are already implemented. Since the projection is not a trivial problem, either the optimal computation turns out to be fast enough to be negligible, or I think it would be a good idea to add the suboptimal (or another name) option to PGD.

Cheers!

@beat-buesser (Collaborator)

Hi @eliegoudout Thank you very much! Adding a bit of complexity is ok if it leads to an accurate calculation. I'll take a closer look soon. I like the suboptimal argument in projection. Do you think this PR is ready for review (it's currently in draft)?

@eliegoudout (Contributor Author)

Thanks for your feedback! If we restrict this PR to FGM, then it is almost ready for review (I would need to remove some "TO DO"s I wrote in the PGD files). On the other hand, if we want to merge PGD in the same PR, I think I can get it ready early next week (or, less likely, by the end of this week).

Adding good projection for every p would still require some more work and can be left for even later.

@eliegoudout (Contributor Author)

Trying to pass the final tests, I now understand (I think) that the ndarray option for eps in projection is to provide feature-wise bounds, not sample-wise bounds as I originally thought. This was not documented, I think, so I will try to reconcile both possibilities (pixel-wise and sample-wise) and document the behaviour.

@eliegoudout (Contributor Author) commented Jan 27, 2024

Regarding my previous message, I think I need some opinions before going further.

The problem:
Currently, projection takes eps either as a scalar or an ndarray, but the behaviour is undocumented.

What I think would be good for the user:
They should be able to use,

  • sample-wise eps (a different eps for each sample), but also,
  • feature-wise eps, which corresponds to projecting onto the unit weighted $L^p$ ball, with the weighted norm defined as
    $$\Vert x\Vert_{p, \varepsilon} = \big(\sum_{i}\frac{1}{\varepsilon_i}\vert x_i\vert^p\big)^{1/p}.$$

Current implementation
Allows both but prioritizes feature-wise, because eps is first broadcast to values.shape. For example, to set sample-wise epsilons for MNIST images, the user has to input eps.shape = (n_samples, 1, 1, 1) or eps.shape = (n_samples, 28, 28, 1), which seems weird to me.

What I think looks better:
I propose the following behaviour, relying on the fact that values.ndim >= 2 since samples are always at least one dimensional (never just scalars):

:param eps: If a scalar or a 1-D array, the sample-wise L_p norm of the ball onto which the projection occurs.
Otherwise, must have the same ndim as values, in which case the projection occurs onto the weighted
unit L_p ball, with weight 1 / eps (currently supported only with infinity norm).

As you can see, this implies that when using feature-wise bounds, the user must provide eps with same eps.ndim as values.ndim.
This is necessary to allow both sample-wise and feature-wise bounds, because in the case where values.shape = (10, 10) for example (10 samples which are 1-D arrays of length 10), how would projection interpret eps.shape = (10,)?
In a way, this is prioritizing sample-wise bounds.

Possible Solutions:

  • The implemented one (looks very weird to me)
  • What I propose above. This seems to be a breaking change, as test test_9a_keras_mnist seems to generate an eps.shape = (28, 28, 1) with values.shape = (10, 28, 28, 1),
  • Add another keyword option such as samplewise, defaulting to False. If False, the current behaviour is maintained; if True, 1-D arrays are treated as in the solution I propose. I think this keyword solution is heavy and quite bad, to be honest.
  • A final option would be to always assume that samples have at least 2 dimensions. In this case, values.ndim >= 3, so we could split the behaviour according to whether eps.ndim <= 1 (sample-wise) or eps.ndim >= 2 (feature-wise). This seems dangerous though, as people might have time series or samples preprocessed into 1-D...

Do you have any opinion? Is the solution I propose really breaking anything?

Thanks!

@eliegoudout (Contributor Author) commented Jan 29, 2024

In the end, I decided to revert to the previous implementation scheme, which prioritizes feature-wise bounds, since it doesn't break anything and may make more sense anyway for attacks designed for batch generation.

I documented the use of eps in projection/_projection as follows:

:param eps: If a scalar, the norm of the L_p ball onto which samples are projected. Equivalently in general,
can be any array of non-negatives broadcastable with values, and the projection occurs onto the
unit ball for the weighted L_{p, w} norm with w = 1 / eps. Currently, for any given sample,
non-uniform weights are only supported with infinity norm. Example: To specify sample-wise scalar,
you can provide eps.shape = (n_samples,) + (1,) * values[0].ndim.
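For example, a small sketch (illustrative shapes only, not taken from ART's tests) of passing one scalar radius per sample under this convention:

import numpy as np

values = np.zeros((10, 28, 28, 1), dtype=np.float32)             # batch of 10 MNIST-like samples
per_sample_eps = np.linspace(0.1, 0.3, num=10, dtype=np.float32)  # one radius per sample

# Reshape to (n_samples, 1, 1, 1) so that eps broadcasts against `values`
# and acts as a single scalar radius per sample.
eps = per_sample_eps.reshape((len(values),) + (1,) * values[0].ndim)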

I started debugging the failing tests, but I have trouble finding the issue for the last two tests. It seems that the $L^1$ computation for PGD is different between numpy and pytorch, but I can't figure out why.
Edit: Found it!

Cheers!

@eliegoudout (Contributor Author)

@beat-buesser I think my PR is ready for review!

@eliegoudout eliegoudout marked this pull request as ready for review January 29, 2024 23:24
@eliegoudout eliegoudout changed the title [FIX/Enhancement] Fix L1 and add Lp extension for FGM evasion attack (and potentially PGD) [FIX + Enhancement] FGM and PGD: fix L1 and extend to Lp Jan 30, 2024
@eliegoudout (Contributor Author) commented Feb 23, 2024

Do you think this PR is ready for review (it's currently in draft)?

It is now ready and I would be grateful for any review :) I also fixed the small casting issue that was remaining, as well as a small oversight.

@eliegoudout (Contributor Author)

@beat-buesser Hello :) Do you think someone can review / merge this PR at some point?

I'm not in a hurry, but I'm worried it gets forgotten/frozen. Thanks!

@beat-buesser (Collaborator)

Hi @eliegoudout Thank you for your patience! I'll review and if possible merge it this week.

@beat-buesser (Collaborator)

@eliegoudout Could you please take a look at passing the DCO check?

@beat-buesser beat-buesser added bug Something isn't working improvement Improve implementation labels Apr 4, 2024
@beat-buesser beat-buesser added this to the ART 1.18.0 milestone Apr 4, 2024
beat-buesser and others added 6 commits April 4, 2024 19:27
Signed-off-by: Beat Buesser <[email protected]>
Signed-off-by: Élie Goudout <[email protected]>
Signed-off-by: Beat Buesser <[email protected]>
Signed-off-by: Élie Goudout <[email protected]>
@eliegoudout (Contributor Author) commented Apr 22, 2024

Okay, very cool, thanks! It seems that these errors occur because I forgot to change the expected value for $\Vert x-x_{\text{adv}}\Vert$ for BIM:

_________________________________ test_images __________________________________
[...]
>           assert np.mean(np.abs(x_train_mnist - x_train_mnist_adv)) == pytest.approx(0.09437845647335052, abs=0.05)
E           assert 0.00080794824 == 0.09437845647335052 ± 5.0e-02
E             comparison failed
E             Obtained: 0.0008079482358880341
E             Expected: 0.09437845647335052 ± 5.0e-02
_____________________________ test_images_targeted _____________________________
[...]
>           assert np.mean(np.abs(x_train_mnist - x_train_mnist_adv)) == pytest.approx(0.08690829575061798, abs=0.05)
E           assert 0.0004436275 == 0.08690829575061798 ± 5.0e-02
E             comparison failed
E             Obtained: 0.0004436275048647076
E             Expected: 0.08690829575061798 ± 5.0e-02

I think it is expected to have a smaller (i.e. better) $\Vert x-x_{\text{adv}}\Vert$. Should I change the tests by simply replacing with the obtained value? Should I keep the same uncertainty? 5.0e-02 seems a bit much. Maybe abs=1e-4 instead?

) -> "torch.Tensor":
"""
Project `values` on the L_p norm ball of size `eps`.

:param values: Values to clip.
:param eps: Maximum norm allowed.
:param norm_p: L_p norm to use for clipping supporting 1, 2, `np.Inf` and "inf".
:param eps: If a scalar, the norm of the L_p ball onto which samples are projected. Equivalently in general,
Review comment (Collaborator):

I think we should add this extended documentation of eps also at the class level, in line 84 of this file and in the similar lines of the TensorFlow and NumPy implementations.

Reply (Contributor Author):

I'm not entirely sure if this is the exact same epsilon. As such, I don't really know what the description should be at the class level, to be honest. I think the only difference is that the class-level eps is potentially split if the method is iterative, but I'm not sure there is no other difference. I would appreciate it if you could provide the wanted doc for this one.

@beat-buesser (Collaborator)

Hi @eliegoudout About the failing test, I think we should take a closer look at why this is happening. In that test, MomentumIterativeMethod is basically running ProjectedGradientDescentNumpy with norm="inf". I think the results for the infinity norm should not change, because of the existing fast, accurate implementation, or?

@beat-buesser (Collaborator)

Hi @eliegoudout Could you please take a look at the failing unit tests? It seems the relative tolerance is too tight for some tests.

@eliegoudout (Contributor Author)

Hi @eliegoudout Could you please take a look at the failing unit tests? It seems the relative tolerance is too tight for some tests.

Thank you for your message. I do intend to get around to this, but I'm struggling to find the necessary time, sorry! I'll do my best :)

@eliegoudout (Contributor Author) commented May 22, 2024

Hi @eliegoudout Could you please take a look at the failing unit tests? It seems the relative tolerance is too tight for some tests.

Done ✅
I realized that I didn't properly run both numpy and pytorch frameworks when setting the test values. I rectified both the expected value and the tolerance (rel=0.01 when I got approx 0.008 while testing).

)
# Update momentum in-place (important).
# The L1 normalization for accumulation is an arbitrary choice of the paper.
grad_2d = tf.reshape(grad, (len(grad), -1)) # pylint: disable=unreachable

Check warning — Code scanning / CodeQL: Unreachable code (Warning) — This statement is unreachable.
@beat-buesser (Collaborator)

Hi @eliegoudout Thank you very much! It looks like we need one more change: the MXNet workflow is still failing at two unit tests and requires a larger tolerance for the updated tests. You don't have to install MXNet; you can use the information in the test logs to adjust the test tolerances:

>           assert np.mean(np.abs(x_train_mnist - x_train_mnist_adv)) == pytest.approx(0.1288, 0.003)
E           assert 0.11162121 == 0.1288 ± 3.9e-04
E             comparison failed
E             Obtained: 0.11162120848894119
E             Expected: 0.1288 ± 3.9e-04

tests/attacks/evasion/test_momentum_iterative_method.py:51: AssertionError
_____________________________ test_images_targeted _____________________________

art_warning = <function art_warning.<locals>._art_warning at 0x7f8eb88c4430>
fix_get_mnist_subset = (array([[[[0., 0., 0., ..., 0., 0., 0.],
         [0., 0., 0., ..., 0., 0., 0.],
         [0., 0., 0., ..., 0., 0., 0....0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32))
image_dl_estimator_for_attack = <function image_dl_estimator_for_attack.<locals>._image_dl_estimator_for_attack at 0x7f8eb88c4f70>

    @pytest.mark.skip_framework("tensorflow")  # See issue #2439
    def test_images_targeted(art_warning, fix_get_mnist_subset, image_dl_estimator_for_attack):
        try:
            (x_train_mnist, y_train_mnist, x_test_mnist, y_test_mnist) = fix_get_mnist_subset
            classifier = image_dl_estimator_for_attack(MomentumIterativeMethod)
    
            attack = MomentumIterativeMethod(classifier, eps=0.3, eps_step=0.1, decay=1.0, max_iter=10)
            x_train_mnist_adv = attack.generate(x=x_train_mnist, y=y_train_mnist)
    
>           assert np.mean(np.abs(x_train_mnist - x_train_mnist_adv)) == pytest.approx(0.1077, 0.01)
E           assert 0.097485796 == 0.1077 ± 1.1e-03
E             comparison failed
E             Obtained: 0.09748579561710358
E             Expected: 0.1077 ± 1.1e-03

After fixing this last item, I can merge this pull request. I think your changes are a significant contribution to ART. Therefore, if you like, you may add your employer's name to the AUTHORS file.

@eliegoudout (Contributor Author)

Hi @eliegoudout Thank you very much! It looks like we need one more change, the MXNet workflow is still failing at two unit tests and requires a larger tolerance for the updated tests. You don't have to install MXNet and can use the information in the test logs to adjust the test tolerances

Okay, indeed I couldn't test those locally and didn't catch them before, thanks. One question: is this high tolerance to be expected? Going from ~1e-3 to ~1e-1 seems really sad, but I'm not at all familiar with this framework so I don't know what to expect, I'll blindly trust you on this.

After fixing this last item, I can merge this pull request.

Great to read!

I think your changes are a significant contribution to ART. Therefore, if you like, you may add your employer's name to the AUTHORS file.

Thank you for this recognition, I'm happy this is perceived as significant. Since I made these contributions in my spare time, I would need to discuss this with my company. Let me get back to you on this before the merge (during next week at the latest).

@beat-buesser (Collaborator)

Hi @eliegoudout Ok, that sounds great. Instead of a single, large tolerance, you could check the framework and set two expected values with an if-block like this:

def test_generate_parallel(art_warning, fix_get_mnist_subset, image_dl_estimator, framework):
    try:
        classifier, _ = image_dl_estimator(from_logits=True)

        if framework == "tensorflow2":

where framework contains the label of the backend used for testing. For example:

if framework == "mxnet":
    expected_value = ...
else:
    expected_value = ...

Note that framework must be added as an argument to the test function.
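Putting those two fragments together, a hedged sketch of what the framework-dependent assertion could look like (the expected values are taken from the CI output quoted above and serve only as an illustration; import paths are assumed to follow ART's existing test layout):

import numpy as np
import pytest

from art.attacks.evasion import MomentumIterativeMethod
from tests.utils import ARTTestException


def test_images(art_warning, fix_get_mnist_subset, image_dl_estimator_for_attack, framework):
    try:
        (x_train_mnist, y_train_mnist, _, _) = fix_get_mnist_subset
        classifier = image_dl_estimator_for_attack(MomentumIterativeMethod)

        attack = MomentumIterativeMethod(classifier, eps=0.3, eps_step=0.1, decay=1.0, max_iter=10)
        x_train_mnist_adv = attack.generate(x=x_train_mnist, y=y_train_mnist)

        # Framework-dependent expected mean perturbation (illustrative values from the logs above).
        if framework == "mxnet":
            expected_value = 0.1116
        else:
            expected_value = 0.1288

        assert np.mean(np.abs(x_train_mnist - x_train_mnist_adv)) == pytest.approx(expected_value, rel=0.01)
    except ARTTestException as e:
        art_warning(e)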

@eliegoudout (Contributor Author) commented May 28, 2024

Instead of a single, large tolerance, you could check the framework and set two expected values with an if-block [...] Note that framework must be added as an argument to the test function.

@beat-buesser Thanks for clarifying the way to go (I'm not proficient with pytest and would not have thought that the signature would be understood out of the box). I've made the changes.

Regarding the AUTHORS, I wrote my name directly for transparency, as I do not think it would have made sense to put my company at this stage.

I hope this is now mergeable :)
Thanks again for all the feedback and the help.

Sincerely.
Élie

@beat-buesser beat-buesser merged commit 435e6b3 into Trusted-AI:dev_1.18.0 Jun 4, 2024
32 of 33 checks passed
@beat-buesser (Collaborator)

Hi @eliegoudout Thank you very much for your first contribution to ART! We will release ART 1.18 in the coming days which will bring your updates to ART's main branch and packages.

Labels: bug (Something isn't working), improvement (Improve implementation)
Development

Successfully merging this pull request may close these issues.

FGM is wrong + extend to all p >= 1
3 participants