
Add zero-checks to axpy-like operations #1573

Draft · upsj wants to merge 1 commit into base: develop
Conversation

Member

@upsj upsj commented Mar 18, 2024

Operations like $\alpha A x + 0 \cdot y$ may propagate NaNs from $y$ to the output despite the zero coefficient. This can be avoided by explicitly checking the beta scaling factors for zero.

TODO:

  • add reference tests for matrices
  • add generic tests for Dense

This prevents NaNs from polluting the output
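
A minimal standalone sketch of the pattern this PR describes (the real kernels live in the Ginkgo backends and use helpers like is_zero, as quoted in the review comments below; this free function and its names are illustration-only assumptions):

#include <cstddef>
#include <vector>

// Sketch: compute y = alpha * Ax + beta * y, skipping the read of y entirely
// when beta == 0, so NaNs (e.g. from uninitialized memory) cannot propagate
// into the result through the zero coefficient.
template <typename ValueType>
void axpy_like(ValueType alpha, const std::vector<ValueType>& Ax,
               ValueType beta, std::vector<ValueType>& y)
{
    const bool beta_is_zero = (beta == ValueType{0});
    for (std::size_t i = 0; i < y.size(); ++i) {
        const auto scaled_y = beta_is_zero ? ValueType{0} : beta * y[i];
        y[i] = alpha * Ax[i] + scaled_y;
    }
}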
@upsj upsj self-assigned this Mar 18, 2024
@ginkgo-bot ginkgo-bot added reg:testing This is related to testing. mod:cuda This is related to the CUDA module. mod:openmp This is related to the OpenMP module. mod:reference This is related to the reference module. type:matrix-format This is related to the Matrix formats mod:hip This is related to the HIP module. mod:dpcpp This is related to the DPC++ module. labels Mar 18, 2024
Collaborator

@fritzgoebel fritzgoebel left a comment


LGTM

@MarcelKoch MarcelKoch self-requested a review April 5, 2024 09:27
@yhmtsai yhmtsai self-requested a review April 5, 2024 09:33
@MarcelKoch MarcelKoch added this to the Ginkgo 1.8.0 milestone Apr 5, 2024
Member

@MarcelKoch MarcelKoch left a comment


lgtm in general. Maybe there are still some static_casts missing to make it compile.

Comment on lines +383 to +385
[&beta_val](const type& x) {
    return is_zero(beta_val) ? zero(beta_val) : beta_val * x;
});
Member


minor performance comment: you could try switching the lambda and zero check, i.e.

is_zero(beta_val)
    ? [&beta_val](const type& x) { return zero(beta_val); }
    : [&beta_val](const type& x) { return beta_val * x; }

But this might not work, since the two branches of the ?: operator have different types. It might also increase compile times, since the kernel might be compiled twice.
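
One way around the mismatched closure types would be to branch once outside the call and hand each branch its own lambda, accepting the second instantiation mentioned above (a standalone sketch; run_transform is a stand-in for the actual kernel launch, not Ginkgo code):

#include <cstddef>

// Stand-in for whatever applies the functor element-wise in the real kernel.
template <typename Type, typename Functor>
void run_transform(const Type* in, Type* out, std::size_t n, Functor fn)
{
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = fn(in[i]);
    }
}

template <typename Type>
void scale(const Type* in, Type* out, std::size_t n, Type beta_val)
{
    // The zero check happens once per call instead of once per element,
    // but each branch instantiates run_transform with a different functor.
    if (beta_val == Type{0}) {
        run_transform(in, out, n, [](const Type&) { return Type{0}; });
    } else {
        run_transform(in, out, n,
                      [beta_val](const Type& x) { return beta_val * x; });
    }
}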

arithmetic_type result = c->at(row, j);
result *= beta_val;

becomes

arithmetic_type result =
    is_zero(beta_val) ? zero(beta_val) : beta_val * c->at(row, j);
Member


Suggested change
is_zero(beta_val) ? zero(beta_val) : beta_val * c->at(row, j);
is_zero(beta_val) ? zero(beta_val) : beta_val * static_cast<arithmetic_type>(c->at(row, j));

to make it compile
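
For context, this kind of cast is typically needed when the stored value type does not convert implicitly to arithmetic_type, as in mixed-precision kernels; a standalone illustration of such a situation (storage_type here is invented for the example and is not Ginkgo's actual type):

#include <cstdio>

// Hypothetical storage type with only an explicit conversion to double,
// mimicking a setup where beta_val * c->at(row, j) would not compile
// without the explicit static_cast suggested above.
struct storage_type {
    float data;
    explicit operator double() const { return static_cast<double>(data); }
};

using arithmetic_type = double;

arithmetic_type scale_entry(arithmetic_type beta_val, storage_type entry)
{
    // beta_val * entry is ill-formed here; the cast makes the conversion explicit.
    return beta_val == 0.0
               ? 0.0
               : beta_val * static_cast<arithmetic_type>(entry);
}

int main()
{
    std::printf("%f\n", scale_entry(2.0, storage_type{1.5f}));  // prints 3.000000
}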

@yhmtsai
Member

yhmtsai commented Apr 15, 2024

Is there any reason to avoid propagating NaNs? If there is no performance penalty, I think propagating NaN makes it easier to see that the algorithm did not work out due to some arithmetic error.

@MarcelKoch
Member

@yhmtsai if you are computing, for example, y = 0.5 * A * x + 0.0 * y, then propagating NaNs from y is unnecessary, since it is mathematically equivalent to just leaving y out. This can easily happen if y is not initialized before that computation.
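
A few lines of standalone C++ illustrate the point being debated here (illustration only, not code from the PR):

#include <cstdio>
#include <limits>

int main()
{
    const double nan = std::numeric_limits<double>::quiet_NaN();
    // IEEE 754: multiplying NaN by zero still yields NaN ...
    std::printf("0.0 * NaN = %f\n", 0.0 * nan);  // nan
    // ... so y = alpha * (A * x) + beta * y with beta == 0 keeps the NaN from
    // an uninitialized y unless beta is checked explicitly, as this PR does.
    const double alpha = 0.5, Ax = 2.0, beta = 0.0, y = nan;
    const double unchecked = alpha * Ax + beta * y;                      // nan
    const double checked = alpha * Ax + (beta == 0.0 ? 0.0 : beta * y);  // 1.0
    std::printf("unchecked = %f, checked = %f\n", unchecked, checked);
}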

@yhmtsai
Member

yhmtsai commented Apr 15, 2024

No, 0 * NaN should be NaN, not zero, so it is not mathematically equal to just leaving it out.
Yes, it might happen with uninitialized memory, but I would say the memory should either be properly initialized, or the call that does not touch the uninitialized part should be used. (For us, we could probably provide A->apply(alpha, x, y) for y = alpha * A * x.)

@yhmtsai
Member

yhmtsai commented Apr 15, 2024

I know that current vendor libraries usually treat 0 as "do not touch" because of BLAS.
I am not sure whether the other routines follow the same rule.

@upsj
Member Author

upsj commented Apr 15, 2024

0 * NaN should be NaN not zero

That makes calculations more fragile; we already do similar things (special cases for zero) inside our solver kernels.

@MarcelKoch
Member

@yhmtsai we treat 0 * NaN = 0 only in the case of axpy-like operations, so there will still be NaN propagation in normal SpMV, dot products, etc. But for these axpy-like operations, users will not care about the IEEE standard: for them, initializing y by setting it to 0 and multiplying it with 0 should be equivalent.

@tcojean tcojean modified the milestones: Ginkgo 1.8.0, Ginkgo 1.9.0 May 31, 2024
@MarcelKoch MarcelKoch modified the milestones: Ginkgo 1.9.0, Ginkgo 1.10.0 Dec 9, 2024