Conversation

@simonbyrne (Contributor)

This fixes the onnxscript export for the irfft function.

Fixes onnx/onnx#5920, and adds support for the changes in onnx/onnx#7574 and microsoft/onnxruntime#27028.

Most of the diff is due to the onnx_opset generated code changes from onnx/onnx#5920. That can be removed if you would prefer.

Signed-off-by: Simon Byrne <[email protected]>
Review comment on the changed lines:

```python
axis=dimension,
inverse=True,
onesided=False,
onesided=True,
```
Collaborator

Is there a way to implement this with how the DFT op is currently defined? I think the current implementation has issues with normalization, but I couldn't figure out what went wrong. Your expertise is appreciated!

Given that the onnx update is not merged and implemented yet, we are not able to incorporate the change into torchlib at the moment, as we treat it as UB (unless the current behavior is actually correct in runtimes?).

@simonbyrne (Contributor Author) — Jan 16, 2026

I think the current implementation may be UB itself: from what I can tell, it treats the n//2+1-length matrix as if it were actually length n. Most of the time this means you will be accessing unset memory and getting small values (e.g. 1e-300), so it is roughly equivalent to scaling most of the factors by 1/2 (except for the 0th frequency, so the offset remains the same).

@simonbyrne (Contributor Author)

Actually, I take that back: since it passes the FFT length, it won't be UB, but it will still zero-pad (and hence give the wrong answer).
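A minimal numpy sketch (illustration only, not the torchlib code) of the failure mode described above: handing the n//2+1 onesided bins to a full-length inverse DFT zero-pads them, which is not the same as reconstructing the conjugate-symmetric back half.

```python
import numpy as np

# Illustration only (numpy, not the torchlib implementation): feeding the
# n//2 + 1 onesided bins to a full-length inverse DFT zero-pads them,
# rather than rebuilding the mirrored (conjugate-symmetric) half.
n = 8
x = np.random.default_rng(0).standard_normal(n)
X = np.fft.rfft(x)                 # onesided spectrum, length n//2 + 1 = 5

wrong = np.fft.ifft(X, n=n).real   # zero-pads X to length n: wrong answer
right = np.fft.irfft(X, n=n)       # rebuilds the mirrored half: recovers x

print(np.allclose(right, x))       # True
print(np.allclose(wrong, x))       # False
```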

@simonbyrne (Contributor Author)

It would be possible to implement the correct behavior with what we have currently (you would need to extend the array and reverse/conjugate the values), but it would be suboptimal. If the patch to ONNX is accepted, what would be the timeline for getting it updated here?
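A sketch of the workaround described above, in numpy with a hypothetical helper name: extend the onesided array with the reversed conjugate of its interior bins to rebuild the full spectrum, then run an ordinary full-length inverse DFT.

```python
import numpy as np

def irfft_via_full_dft(X: np.ndarray, n: int) -> np.ndarray:
    """Hypothetical sketch: emulate irfft using only a full-length DFT.

    X holds the n//2 + 1 onesided bins; bins 1..ceil(n/2)-1 are mirrored
    (reversed and conjugated) to fill the back half of the spectrum.
    """
    tail = np.conj(X[1:(n + 1) // 2])[::-1]
    full = np.concatenate([X, tail])
    return np.fft.ifft(full, n=n).real

x = np.arange(8.0)
X = np.fft.rfft(x)                 # length 5
print(np.allclose(irfft_via_full_dft(X, 8), np.fft.irfft(X, 8)))  # True
```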

Collaborator

When the patch is accepted, depending on consensus it may or may not need to go into a new opset. If it goes into a new opset, we will need to update the pytorch code; if it is just a clarification of the old opset, it can be accepted here directly.

Comment on lines +1113 to +1121:

```python
def Pad(
    self,
    data: T_Pad,
    pads: INT64,
    constant_value: Optional[T_Pad] = None,
    axes: Optional[Tind_Pad] = None,
    *,
    mode: str = "constant",
) -> T_Pad:
```

Check warning (Code scanning / CodeQL): Signature mismatch in overriding method

This method requires at least 3 positional arguments, whereas overridden Opset1.Pad requires 2.
This method requires at least 3 positional arguments, whereas overridden Opset2.Pad requires 2.
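For readers unfamiliar with this CodeQL check, a minimal illustration (hypothetical classes, not the onnxscript ones) of why an override that demands extra positional arguments gets flagged: code written against the base class breaks when handed an instance of the subclass.

```python
class Base:
    def pad(self, data):           # callers may invoke pad with one argument
        return data

class Derived(Base):
    # Override now *requires* a second positional argument, so any code
    # written against Base.pad breaks when handed a Derived instance.
    def pad(self, data, pads):
        return data + pads

def run(op: Base):
    return op.pad([1, 2])

print(run(Base()))                 # [1, 2]
try:
    run(Derived())
except TypeError as e:
    print("TypeError:", e)         # missing required positional argument
```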

```python
def Reshape(self, data: T_Reshape, shape: INT64, *, allowzero: int = 0) -> T_Reshape:
```

Check warning (Code scanning / CodeQL): Signature mismatch in overriding method

This method requires 3 positional arguments, whereas overridden Opset1.Reshape requires 2.

```python
def Unsqueeze(self, data: T_Unsqueeze, axes: INT64) -> T_Unsqueeze:
```

Check warning (Code scanning / CodeQL): Signature mismatch in overriding method

This method requires 3 positional arguments, whereas overridden Opset1.Unsqueeze requires 2.
This method requires 3 positional arguments, whereas overridden Opset11.Unsqueeze requires 2.
codecov bot commented Jan 16, 2026

Codecov Report

❌ Patch coverage is 20.93023% with 136 lines in your changes missing coverage. Please review.
✅ Project coverage is 11.72%. Comparing base (e45724c) to head (b9462d2).

| Files with missing lines | Patch % | Lines |
|---|---|---|
| onnxscript/onnx_opset/_impl/opset25.py | 6.14% | 107 Missing ⚠️ |
| ...nnx_opset/_impl/opset_ai_onnx_preview_training1.py | 63.88% | 13 Missing ⚠️ |
| onnxscript/onnx_opset/_impl/opset26.py | 26.66% | 11 Missing ⚠️ |
| onnxscript/onnx_opset/__init__.py | 16.66% | 5 Missing ⚠️ |
Additional details and impacted files
```
@@             Coverage Diff             @@
##             main    #2770       +/-   ##
===========================================
- Coverage   70.45%   11.72%   -58.73%     
===========================================
  Files         228      219        -9     
  Lines       27252    21906     -5346     
  Branches     2759     2227      -532     
===========================================
- Hits        19201     2569    -16632     
- Misses       7099    19328    +12229     
+ Partials      952        9      -943     
```


@github-advanced-security bot (Contributor) left a comment

lintrunner found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

Development

Successfully merging this pull request may close these issues.

Clarify DFT behavior when inverse=True
