[FIX + Enhancement] FGM and PGD: fix L1 and extend to Lp #2382

Merged on Jun 4, 2024 (38 commits). Changes shown from 34 commits.

Commits (38)
05d2923  Update docs (beat-buesser, Dec 27, 2023)
ae4b4f0  Bump version to ART 1.17.0 (beat-buesser, Dec 27, 2023)
a8cb2d8  Improved L1 extension for FGM evasion attack (eliegoudout, Jan 9, 2024)
4e2e837  Properly generalize FGM to all p (eliegoudout, Jan 13, 2024)
a772f0d  Corrected L1 FGM test. L10 supplementary test (expected to find) (eliegoudout, Jan 13, 2024)
930cbac  projection compatible with all p>=1 in suboptimal mode + renamed test… (eliegoudout, Jan 17, 2024)
f29e5ad  FGM tests ok with added p=10. Cleaner implem without (eliegoudout, Jan 17, 2024)
cc091b3  axis=1 (eliegoudout, Jan 20, 2024)
a6581af  projection doc (eliegoudout, Jan 20, 2024)
1c41b71  PGD norm doc (eliegoudout, Jan 20, 2024)
7b149b4  TO DO: projection adaptation (see art.utils.projection) (eliegoudout, Jan 20, 2024)
a143476  PGD torch: fix L1 perturbation and extend to all p>=1 (eliegoudout, Jan 20, 2024)
dffe33d  PGD tf: fix L1 perturbation and extend to all p>=1 (eliegoudout, Jan 21, 2024)
97ed4fe  np.abs instead of built-in abs (eliegoudout, Jan 26, 2024)
2695a1f  PGD torch: applied _projection mods (eliegoudout, Jan 26, 2024)
93e3add  PGD torch: _projection staticmethod (eliegoudout, Jan 26, 2024)
cbe1fa5  PGD tf: applied _projection mods (eliegoudout, Jan 26, 2024)
7c1c93c  PGD: debug for tets (eliegoudout, Jan 26, 2024)
b7d0a6a  projection: back to feature-wise priority (eps broadcasted to samples… (eliegoudout, Jan 29, 2024)
6038995  avoid wrong type casting (eliegoudout, Jan 29, 2024)
12e1347  dont use inplace abs_() on view (eliegoudout, Jan 29, 2024)
d36948c  projection: out casted to input values dtype (eliegoudout, Feb 22, 2024)
0cfe35a  Keep tol for momentum. Better naming grad_2d (eliegoudout, Feb 23, 2024)
e8f0718  Merge branch 'dev_1.18.0' into main (eliegoudout, Apr 5, 2024)
fe7ea47  Review pull/2382#pullrequestreview-1985672896 (eliegoudout, Apr 8, 2024)
f4b5b93  Merge branch 'main' of https://github.com/eliegoudout/adversarial-rob… (eliegoudout, Apr 8, 2024)
96a6c93  fixed typo (norm) (eliegoudout, Apr 10, 2024)
ffd3622  Merge branch 'dev_1.18.0' into main (beat-buesser, Apr 20, 2024)
0d45b97  Merge branch 'dev_1.18.0' into main (beat-buesser, May 6, 2024)
7eb30c4  Fix momentum computation (wrong formula) (eliegoudout, Apr 27, 2024)
5c74f1e  skip momentum iterative tests for tf framework (#2439) (eliegoudout, May 7, 2024)
c441b8d  Merge branch 'main' of https://github.com/eliegoudout/adversarial-rob… (eliegoudout, May 7, 2024)
d5a3178  rectified Momentum Iterative Method test values for test_images_targeted (eliegoudout, May 22, 2024)
0ddafcd  disable unreachable pylint warning after temporary NotImplementedErro… (eliegoudout, May 22, 2024)
baa1039  Merge branch 'dev_1.18.0' into main (beat-buesser, May 22, 2024)
e2666b4  mxnet separate test values (eliegoudout, May 28, 2024)
9e5e0ed  Update AUTHORS (2382#issuecomment-2128144501) (eliegoudout, May 28, 2024)
435e6b3  Merge branch 'main' of https://github.com/eliegoudout/adversarial-rob… (eliegoudout, May 28, 2024)
71 changes: 39 additions & 32 deletions art/attacks/evasion/fast_gradient.py
@@ -84,7 +84,7 @@
Create a :class:`.FastGradientMethod` instance.

:param estimator: A trained classifier.
:param norm: The norm of the adversarial perturbation. Possible values: "inf", np.inf, 1 or 2.
:param norm: The norm of the adversarial perturbation. Possible values: "inf", `np.inf` or a real `p >= 1`.
:param eps: Attack step size (input variation).
:param eps_step: Step size of input variation for minimal perturbation computation.
:param targeted: Indicates whether the attack is targeted (True) or untargeted (False)
@@ -288,16 +288,18 @@

logger.info(
"Success rate of FGM attack: %.2f%%",
rate_best
if rate_best is not None
else 100
* compute_success(
self.estimator, # type: ignore
x,
y_array,
adv_x_best,
self.targeted,
batch_size=self.batch_size,
(
rate_best
if rate_best is not None
else 100
* compute_success(
self.estimator, # type: ignore
x,
y_array,
adv_x_best,
self.targeted,
batch_size=self.batch_size,
)
),
)

@@ -334,8 +336,9 @@

def _check_params(self) -> None:

if self.norm not in [1, 2, np.inf, "inf"]:
raise ValueError('Norm order must be either 1, 2, `np.inf` or "inf".')
norm: float = np.inf if self.norm == "inf" else float(self.norm)
if norm < 1:
raise ValueError('Norm order must be either "inf", `np.inf` or a real `p >= 1`.')

if not (
isinstance(self.eps, (int, float))
@@ -391,9 +394,6 @@
decay: Optional[float] = None,
momentum: Optional[np.ndarray] = None,
) -> np.ndarray:
# Pick a small scalar to avoid division by 0
tol = 10e-8

# Get gradient wrt loss; invert it if attack is targeted
grad = self.estimator.loss_gradient(x, y) * (1 - 2 * int(self.targeted))

@@ -426,32 +426,39 @@

# Apply norm bound
def _apply_norm(norm, grad, object_type=False):
"""Returns an x maximizing <grad, x> subject to ||x||_norm<=1."""
if (grad.dtype != object and np.isinf(grad).any()) or np.isnan( # pragma: no cover
grad.astype(np.float32)
).any():
logger.info("The loss gradient array contains at least one positive or negative infinity.")

grad_2d = grad.reshape(1 if object_type else len(grad), -1)
if norm in [np.inf, "inf"]:
grad = np.sign(grad)
grad_2d = np.ones_like(grad_2d)
elif norm == 1:
if not object_type:
ind = tuple(range(1, len(x.shape)))
else:
ind = None
grad = grad / (np.sum(np.abs(grad), axis=ind, keepdims=True) + tol)
elif norm == 2:
if not object_type:
ind = tuple(range(1, len(x.shape)))
else:
ind = None
grad = grad / (np.sqrt(np.sum(np.square(grad), axis=ind, keepdims=True)) + tol)
i_max = np.argmax(np.abs(grad_2d), axis=1)
grad_2d = np.zeros_like(grad_2d)
grad_2d[range(len(grad_2d)), i_max] = 1
elif norm > 1:
conjugate = norm / (norm - 1)
q_norm = np.linalg.norm(grad_2d, ord=conjugate, axis=1, keepdims=True)
grad_2d = (np.abs(grad_2d) / np.where(q_norm, q_norm, np.inf)) ** (conjugate - 1)
grad = grad_2d.reshape(grad.shape) * np.sign(grad)
return grad

# Add momentum
# Compute gradient momentum
if decay is not None and momentum is not None:
grad = _apply_norm(norm=1, grad=grad)
grad = decay * momentum + grad
momentum += grad
if x.dtype == object:

raise NotImplementedError("Momentum Iterative Method not yet implemented for object type input.")
# Update momentum in-place (important).
# The L1 normalization for accumulation is an arbitrary choice of the paper.
grad_2d = grad.reshape(len(grad), -1)
norm1 = np.linalg.norm(grad_2d, ord=1, axis=1, keepdims=True)
normalized_grad = (grad_2d / np.where(norm1, norm1, np.inf)).reshape(grad.shape)
momentum *= decay
momentum += normalized_grad

# Use the momentum to compute the perturbation, instead of the gradient
grad = momentum

if x.dtype == object:
for i_sample in range(x.shape[0]):
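For reference, the direction computed by the new `_apply_norm` (docstring: "Returns an x maximizing <grad, x> subject to ||x||_norm<=1") has a closed form that can be checked outside ART: for a per-sample gradient g and any p >= 1, a maximizer of <g, x> over ||x||_p <= 1 follows from Hölder's inequality with the conjugate exponent q = p / (p - 1). A minimal NumPy sketch; the helper name `batch_steepest_direction` is illustrative and not part of the PR:

import numpy as np


def batch_steepest_direction(grad: np.ndarray, p: float) -> np.ndarray:
    """Return x maximizing <grad, x> subject to ||x||_p <= 1, independently per sample."""
    grad_2d = grad.reshape(len(grad), -1)
    if np.isinf(p):
        # L_inf ball: move every coordinate by one unit in the direction of its sign.
        out_2d = np.ones_like(grad_2d)
    elif p == 1:
        # L_1 ball: put all the budget on the largest-magnitude coordinate.
        i_max = np.argmax(np.abs(grad_2d), axis=1)
        out_2d = np.zeros_like(grad_2d)
        out_2d[np.arange(len(grad_2d)), i_max] = 1
    else:
        # 1 < p < inf: Hölder equality case, with conjugate exponent q = p / (p - 1).
        q = p / (p - 1)
        q_norm = np.linalg.norm(grad_2d, ord=q, axis=1, keepdims=True)
        out_2d = (np.abs(grad_2d) / np.where(q_norm, q_norm, np.inf)) ** (q - 1)
    return out_2d.reshape(grad.shape) * np.sign(grad)


# Sanity check: the attained inner product equals the dual norm ||g||_q (here p = 2, so q = 2).
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 3, 8))
x = batch_steepest_direction(g, p=2.0)
attained = np.sum((g * x).reshape(4, -1), axis=1)
assert np.allclose(attained, np.linalg.norm(g.reshape(4, -1), ord=2, axis=1))

For p = inf this reduces to the sign of the gradient, for p = 1 to a one-hot vector at the largest-magnitude coordinate, and for p = 2 to the normalized gradient, matching the three branches in the diff above.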
@@ -95,7 +95,9 @@ def __init__(
Create a :class:`.ProjectedGradientDescent` instance.

:param estimator: A trained estimator.
:param norm: The norm of the adversarial perturbation supporting "inf", np.inf, 1 or 2.
:param norm: The norm of the adversarial perturbation, supporting "inf", `np.inf` or a real `p >= 1`.
Currently, when `p` is not infinity, the projection step only rescales the noise, which may be
suboptimal for `p != 2`.
:param eps: Maximum perturbation that the attacker can introduce.
:param eps_step: Attack step size (input variation) at each iteration.
:param random_eps: When True, epsilon is drawn randomly from truncated normal distribution. The literature
@@ -210,8 +212,9 @@ def set_params(self, **kwargs) -> None:

def _check_params(self) -> None:

if self.norm not in [1, 2, np.inf, "inf"]:
raise ValueError('Norm order must be either 1, 2, `np.inf` or "inf".')
norm: float = np.inf if self.norm == "inf" else float(self.norm)
if norm < 1:
raise ValueError('Norm order must be either "inf", `np.inf` or a real `p >= 1`.')

if not (
isinstance(self.eps, (int, float))
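Since the relaxed `_check_params` applies to both FGM and PGD, any real `norm >= 1` now passes validation. A hedged usage sketch, assuming an ART version that includes this PR and a throwaway PyTorch model used only to exercise the new `norm` values (not taken from the PR itself):

import numpy as np
import torch
from art.attacks.evasion import FastGradientMethod, ProjectedGradientDescent
from art.estimators.classification import PyTorchClassifier

# Throwaway model, only to have loss gradients available; not a meaningful classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Any real p >= 1 is now accepted for `norm` (previously only 1, 2, np.inf and "inf").
fgm = FastGradientMethod(estimator=classifier, norm=1.5, eps=0.3)
pgd = ProjectedGradientDescent(estimator=classifier, norm=10, eps=0.3, eps_step=0.05, max_iter=10)

x = np.random.rand(4, 1, 28, 28).astype(np.float32)
x_adv_fgm = fgm.generate(x=x)
x_adv_pgd = pgd.generate(x=x)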
@@ -77,7 +77,9 @@ def __init__(
Create a :class:`.ProjectedGradientDescentCommon` instance.

:param estimator: A trained classifier.
:param norm: The norm of the adversarial perturbation supporting "inf", np.inf, 1 or 2.
:param norm: The norm of the adversarial perturbation, supporting "inf", `np.inf` or a real `p >= 1`.
Currently, when `p` is not infinity, the projection step only rescales the noise, which may be
suboptimal for `p != 2`.
:param eps: Maximum perturbation that the attacker can introduce.
:param eps_step: Attack step size (input variation) at each iteration.
:param random_eps: When True, epsilon is drawn randomly from truncated normal distribution. The literature
@@ -179,8 +181,9 @@ def _set_targets(self, x: np.ndarray, y: Optional[np.ndarray], classifier_mixin:

def _check_params(self) -> None: # pragma: no cover

if self.norm not in [1, 2, np.inf, "inf"]:
raise ValueError('Norm order must be either 1, 2, `np.inf` or "inf".')
norm: float = np.inf if self.norm == "inf" else float(self.norm)
if norm < 1:
raise ValueError('Norm order must be either "inf", `np.inf` or a real `p >= 1`.')

if not (
isinstance(self.eps, (int, float))
@@ -263,7 +266,9 @@ def __init__(
Create a :class:`.ProjectedGradientDescentNumpy` instance.

:param estimator: A trained estimator.
:param norm: The norm of the adversarial perturbation supporting "inf", np.inf, 1 or 2.
:param norm: The norm of the adversarial perturbation, supporting "inf", `np.inf` or a real `p >= 1`.
Currently, when `p` is not infinity, the projection step only rescales the noise, which may be
suboptimal for `p != 2`.
:param eps: Maximum perturbation that the attacker can introduce.
:param eps_step: Attack step size (input variation) at each iteration.
:param random_eps: When True, epsilon is drawn randomly from truncated normal distribution. The literature
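The "may be suboptimal for `p != 2`" caveat in the docstrings above can be made concrete for p = 1: rescaling a perturbation into the eps-ball is feasible but generally lands farther (in Euclidean distance) from the unprojected point than the exact projection does. A small NumPy comparison, using the standard sorting-based L1-ball projection (Duchi et al., 2008) purely for reference; neither function is part of this PR:

import numpy as np


def project_l1_exact(v: np.ndarray, eps: float) -> np.ndarray:
    """Euclidean projection of v onto the L1 ball of radius eps (Duchi et al., 2008)."""
    if np.abs(v).sum() <= eps:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - eps))[0][-1]
    theta = (css[rho] - eps) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)


def project_l1_rescale(v: np.ndarray, eps: float) -> np.ndarray:
    """Rescaling-only 'projection', as done by the suboptimal mode."""
    n = np.abs(v).sum()
    return v * min(1.0, eps / n) if n > 0 else v.copy()


rng = np.random.default_rng(0)
v = rng.normal(size=16)
eps = 1.0
exact, rescaled = project_l1_exact(v, eps), project_l1_rescale(v, eps)
# Both land inside the ball, but the exact projection is closer to v.
assert np.abs(exact).sum() <= eps + 1e-9 and np.abs(rescaled).sum() <= eps + 1e-9
assert np.linalg.norm(v - exact) <= np.linalg.norm(v - rescaled)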
art/attacks/evasion/projected_gradient_descent/projected_gradient_descent_pytorch.py
@@ -78,7 +78,9 @@
Create a :class:`.ProjectedGradientDescentPyTorch` instance.

:param estimator: A trained estimator.
:param norm: The norm of the adversarial perturbation. Possible values: "inf", np.inf, 1 or 2.
:param norm: The norm of the adversarial perturbation, supporting "inf", `np.inf` or a real `p >= 1`.
Currently, when `p` is not infinity, the projection step only rescales the noise, which may be
suboptimal for `p != 2`.
:param eps: Maximum perturbation that the attacker can introduce.
:param eps_step: Attack step size (input variation) at each iteration.
:param random_eps: When True, epsilon is drawn randomly from truncated normal distribution. The literature
@@ -185,7 +187,7 @@
adv_x = x.astype(ART_NUMPY_DTYPE)

# Compute perturbation with batching
for (batch_id, batch_all) in enumerate(
for batch_id, batch_all in enumerate(
tqdm(data_loader, desc="PGD - Batches", leave=False, disable=not self.verbose)
):

@@ -303,11 +305,8 @@
"""
import torch

# Pick a small scalar to avoid division by 0
tol = 10e-8

# Get gradient wrt loss; invert it if attack is targeted
grad = self.estimator.loss_gradient(x=x, y=y) * (1 - 2 * int(self.targeted))
grad = self.estimator.loss_gradient(x=x, y=y) * (-1 if self.targeted else 1)

# Write summary
if self.summary_writer is not None: # pragma: no cover
@@ -331,25 +330,33 @@
if mask is not None:
grad = torch.where(mask == 0.0, torch.tensor(0.0).to(self.estimator.device), grad)

# Apply momentum
# Compute gradient momentum
if self.decay is not None:
ind = tuple(range(1, len(x.shape)))
grad = grad / (torch.sum(grad.abs(), dim=ind, keepdims=True) + tol) # type: ignore
grad = self.decay * momentum + grad
# Accumulate the gradient for the next iter
momentum += grad
# Update momentum in-place (important).
# The L1 normalization for accumulation is an arbitrary choice of the paper.
grad_2d = grad.reshape(len(grad), -1)
norm1 = torch.linalg.norm(grad_2d, ord=1, dim=1, keepdim=True)
normalized_grad = (grad_2d * norm1.where(norm1 == 0, 1 / norm1)).reshape(grad.shape)
momentum *= self.decay
momentum += normalized_grad

# Use the momentum to compute the perturbation, instead of the gradient
grad = momentum

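The momentum block above accumulates g_{t+1} = decay * g_t + grad / ||grad||_1 per sample (the L1 normalization follows the Momentum Iterative Method paper, as the comment notes), replacing the earlier formula fixed in commit 7eb30c4. A standalone PyTorch sketch of the same update, with an illustrative helper name:

import torch


def accumulate_momentum(momentum: torch.Tensor, grad: torch.Tensor, decay: float) -> torch.Tensor:
    """In-place MI-FGM accumulation: momentum <- decay * momentum + grad / ||grad||_1 (per sample)."""
    grad_2d = grad.reshape(len(grad), -1)
    norm1 = torch.linalg.norm(grad_2d, ord=1, dim=1, keepdim=True)
    # An all-zero gradient contributes a zero update instead of dividing by zero.
    safe = torch.where(norm1 == 0, torch.full_like(norm1, float("inf")), norm1)
    momentum.mul_(decay).add_((grad_2d / safe).reshape(grad.shape))
    return momentum


m = torch.zeros(2, 3, 4)
g = torch.randn(2, 3, 4)
accumulate_momentum(m, g, decay=1.0)
# Starting from zero momentum, each sample of m now has unit L1 norm.
assert torch.allclose(m.reshape(2, -1).abs().sum(dim=1), torch.ones(2), atol=1e-6)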

# Apply norm bound
if self.norm in ["inf", np.inf]:
grad = grad.sign()

elif self.norm == 1:
ind = tuple(range(1, len(x.shape)))
grad = grad / (torch.sum(grad.abs(), dim=ind, keepdims=True) + tol) # type: ignore

elif self.norm == 2:
ind = tuple(range(1, len(x.shape)))
grad = grad / (torch.sqrt(torch.sum(grad * grad, axis=ind, keepdims=True)) + tol) # type: ignore
norm: float = np.inf if self.norm == "inf" else float(self.norm)
grad_2d = grad.reshape(len(grad), -1)

if norm == np.inf:
grad_2d = torch.ones_like(grad_2d)

elif norm == 1:
i_max = torch.argmax(grad_2d.abs(), dim=1)
grad_2d = torch.zeros_like(grad_2d)
grad_2d[range(len(grad_2d)), i_max] = 1

elif norm > 1:
conjugate = norm / (norm - 1)
q_norm = torch.linalg.norm(grad_2d, ord=conjugate, dim=1, keepdim=True)
grad_2d = (grad_2d.abs() * q_norm.where(q_norm == 0, 1 / q_norm)) ** (conjugate - 1)

grad = grad_2d.reshape(grad.shape) * grad.sign()

assert x.shape == grad.shape

@@ -448,65 +455,60 @@

return x_adv

@staticmethod
def _projection(
self, values: "torch.Tensor", eps: Union[int, float, np.ndarray], norm_p: Union[int, float, str]
values: "torch.Tensor",
eps: Union[int, float, np.ndarray],
norm_p: Union[int, float, str],
*,
suboptimal: bool = True,
) -> "torch.Tensor":
"""
Project `values` on the L_p norm ball of size `eps`.

:param values: Values to clip.
:param eps: Maximum norm allowed.
:param norm_p: L_p norm to use for clipping supporting 1, 2, `np.Inf` and "inf".
:param eps: If a scalar, the norm of the L_p ball onto which samples are projected. Equivalently in general,
can be any array of non-negatives broadcastable with `values`, and the projection occurs onto the
unit ball for the weighted L_{p, w} norm with `w = 1 / eps`. Currently, for any given sample,
non-uniform weights are only supported with the infinity norm. Example: to specify sample-wise scalars,
you can provide `eps.shape = (n_samples,) + (1,) * values[0].ndim`.
:param norm_p: Lp norm to use for clipping, with `norm_p > 0`. Only 2, `np.inf` and "inf" are supported
with `suboptimal=False` for now.
:param suboptimal: If `True`, simply projects by rescaling to the Lp ball. Fast but may be suboptimal for
`norm_p != 2`. Ignored when `norm_p in [np.inf, "inf"]` because the optimal solution is fast.
Defaults to `True`.
:return: Values of `values` after projection.
"""

Review thread on the `eps` documentation above:

Collaborator: I think we should add this extended documentation of eps also to the class level in line 84 of this file, and to the similar lines in the TensorFlow and NumPy implementations.

Contributor (author): I'm not entirely sure this is the exact same epsilon. As such, I don't really know what the description at the class level should be, to be honest. I think the only difference is that the class-level eps is potentially split if the method is iterative, but I'm not sure there is no other difference. I would appreciate it if you could provide the desired doc for this one.
import torch

# Pick a small scalar to avoid division by 0
tol = 10e-8
values_tmp = values.reshape(values.shape[0], -1)
norm = np.inf if norm_p == "inf" else float(norm_p)
assert norm > 0

if norm_p == 2:
if isinstance(eps, np.ndarray):
raise NotImplementedError(
"The parameter `eps` of type `np.ndarray` is not supported to use with norm 2."
)
values_tmp = values.reshape(len(values), -1) # (n_samples, d)

values_tmp = (
values_tmp
* torch.min(
torch.tensor([1.0], dtype=torch.float32).to(self.estimator.device),
eps / (torch.norm(values_tmp, p=2, dim=1) + tol),
).unsqueeze_(-1)
eps = np.broadcast_to(eps, values.shape)
eps = eps.reshape(len(eps), -1) # (n_samples, d)
assert np.all(eps >= 0)
if norm != np.inf and not np.all(eps == eps[:, [0]]):

raise NotImplementedError(
"Projection onto the weighted L_p ball is currently not supported with finite `norm_p`."
)

elif norm_p == 1:
if isinstance(eps, np.ndarray):
if (suboptimal or norm == 2) and norm != np.inf: # Simple rescaling
values_norm = torch.linalg.norm(values_tmp, ord=norm, dim=1, keepdim=True) # (n_samples, 1)
values_tmp = values_tmp * values_norm.where(

values_norm == 0, torch.minimum(torch.ones(1), torch.Tensor(eps) / values_norm)
)
else: # Optimal
if norm == np.inf: # Easy exact case
values_tmp = values_tmp.sign() * torch.minimum(values_tmp.abs(), torch.Tensor(eps))
elif norm >= 1: # Convex optim

raise NotImplementedError(
"The parameter `eps` of type `np.ndarray` is not supported to use with norm 1."
"Finite values of `norm_p >= 1` are currently not supported with `suboptimal=False`."
)
else: # Non-convex optim
raise NotImplementedError("Values of `norm_p < 1` are currently not supported with `suboptimal=False`")

values_tmp = (
values_tmp
* torch.min(
torch.tensor([1.0], dtype=torch.float32).to(self.estimator.device),
eps / (torch.norm(values_tmp, p=1, dim=1) + tol),
).unsqueeze_(-1)
)

elif norm_p in [np.inf, "inf"]:
if isinstance(eps, np.ndarray):
eps = eps * np.ones_like(values.cpu())
eps = eps.reshape([eps.shape[0], -1]) # type: ignore

values_tmp = values_tmp.sign() * torch.min(
values_tmp.abs(), torch.tensor([eps], dtype=torch.float32).to(self.estimator.device)
)

else:
raise NotImplementedError(
"Values of `norm_p` different from 1, 2 and `np.inf` are currently not supported."
)

values = values_tmp.reshape(values.shape)
values = values_tmp.reshape(values.shape).to(values.dtype)

return values
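For finite `norm_p`, the `suboptimal=True` branch above reduces to rescaling each sample by `min(1, eps / ||v||_p)`, which is the exact Euclidean projection only for p = 2. A standalone sketch of that rule with an illustrative helper name, assuming a scalar `eps`:

import torch


def rescale_to_lp_ball(values: torch.Tensor, eps: float, p: float) -> torch.Tensor:
    """Rescale each sample into the L_p ball of radius eps (exact projection only when p == 2)."""
    values_2d = values.reshape(len(values), -1)
    norms = torch.linalg.norm(values_2d, ord=p, dim=1, keepdim=True)
    # Samples already inside the ball are left untouched; all-zero samples stay zero.
    factor = torch.clamp(eps / torch.where(norms == 0, torch.ones_like(norms), norms), max=1.0)
    return (values_2d * factor).reshape(values.shape)


v = torch.randn(5, 3, 8)
out = rescale_to_lp_ball(v, eps=0.5, p=1.5)
assert torch.all(torch.linalg.norm(out.reshape(5, -1), ord=1.5, dim=1) <= 0.5 + 1e-5)

The static `_projection` method in the diff additionally accepts per-sample `eps` arrays and handles the `np.inf` case exactly by clipping, as shown above.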