[QNN EP] Add FusedMatMul operator support #27044
base: main
Conversation
Add support for the FusedMatMul operator in the QNN execution provider. FusedMatMul is a contrib operator in the com.microsoft domain that computes `alpha * op(A) * op(B)`, where `op()` applies the optional transposes controlled by the node's attributes.

Implementation details:
- Added a `FusedMatMulOpBuilder` class that decomposes FusedMatMul into:
  1. Optional (batch) transpose of the inputs
  2. A MatMul operation
  3. Optional scaling of the result by `alpha`
- Handles the `transA`, `transB`, and `alpha` attributes
- Supports higher-rank tensors and different data types

Added comprehensive tests:
- Basic functionality tests with various configurations
- Tests for both CPU and HTP backends
- QDQ (quantize-dequantize) tests for 8-bit and 16-bit precision

Motivation and Context

QNN HTP does not support FusedMatMul natively, so we decompose it into QNN-supported operators to improve inference time for customer models that contain the FusedMatMul operator.
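As a quick reference for reviewers, below is a minimal standalone sketch of the 2-D semantics the decomposition implements (`Y = alpha * op(A) * op(B)`). It is illustrative only: the actual builder expresses these steps as QNN graph ops (transpose, MatMul, and presumably an elementwise multiply for the scale) rather than loops.

```cpp
#include <cstddef>
#include <vector>

// Reference semantics of com.microsoft FusedMatMul for 2-D, row-major inputs:
// Y = alpha * op(A) * op(B), where op() optionally transposes its argument.
// Plain C++ sketch for reviewers; not the PR's actual QNN builder code.
std::vector<float> FusedMatMulRef(const std::vector<float>& A, size_t a_rows, size_t a_cols,
                                  const std::vector<float>& B, size_t b_rows, size_t b_cols,
                                  float alpha, bool trans_a, bool trans_b) {
  const size_t M = trans_a ? a_cols : a_rows;  // rows of op(A)
  const size_t K = trans_a ? a_rows : a_cols;  // shared inner dimension
  const size_t N = trans_b ? b_rows : b_cols;  // cols of op(B)
  std::vector<float> Y(M * N, 0.0f);
  for (size_t m = 0; m < M; ++m) {
    for (size_t n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (size_t k = 0; k < K; ++k) {
        const float a = trans_a ? A[k * a_cols + m] : A[m * a_cols + k];
        const float b = trans_b ? B[n * b_cols + k] : B[k * b_cols + n];
        acc += a * b;
      }
      Y[m * N + n] = alpha * acc;  // alpha scaling fused into the result
    }
  }
  return Y;
}
```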
@edgchen1 and @yuslepukhin

/azp run Linux QNN CI Pipeline, Windows ARM64 QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows GPU Doc Gen CI Pipeline

Azure Pipelines successfully started running 4 pipeline(s).
Pull request overview
This PR adds support for the FusedMatMul operator in the QNN execution provider by decomposing it into QNN-supported operations (MatMul, optional transpose, and optional alpha scaling).
Changes:
- Added FusedMatMulOpBuilder class that decomposes FusedMatMul into MatMul with optional batch transpose and alpha scaling operations
- Added comprehensive test coverage for FusedMatMul on both CPU and HTP backends, including QDQ tests
- Modified existing context tests to use FusedGemm instead of FusedMatMul (unrelated to the main purpose)
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| onnxruntime/test/providers/qnn/qnn_ep_context_test.cc | Changed test model from FusedMatMul to FusedGemm with adjusted tensor shapes |
| onnxruntime/test/providers/qnn/fused_matmul_op_test.cc | New comprehensive test suite for FusedMatMul operator with various configurations |
| onnxruntime/test/contrib_ops/fused_matmul_op_test.cc | Added QNN EP to exclusion list for existing tests |
| onnxruntime/core/providers/qnn/builder/opbuilder/fused_matmul_op_builder.cc | New operator builder implementation decomposing FusedMatMul into QNN operations |
| onnxruntime/core/providers/qnn/builder/op_builder_factory.h | Added function declaration for FusedMatMul builder |
| onnxruntime/core/providers/qnn/builder/op_builder_factory.cc | Registered FusedMatMul operator builder |
onnxruntime/core/providers/qnn/builder/opbuilder/fused_matmul_op_builder.cc
TensorInfo input_info_0{};
TensorInfo input_info_1{};
ORT_RETURN_IF_ERROR(qnn_model_wrapper.GetTensorInfo(inputs[0], input_info_0));
ORT_RETURN_IF_ERROR(qnn_model_wrapper.GetTensorInfo(inputs[1], input_info_1));
nit: these don't seem to be used
return Status::OK();
}

Status FusedMatMulOpBuilder::ProcessInputs(QnnModelWrapper& qnn_model_wrapper, const NodeUnit& node_unit,
Could FusedMatMulOpBuilder use the default implementation in the base class (BaseOpBuilder::ProcessInputs)?
Status BaseOpBuilder::ProcessInputs(QnnModelWrapper& qnn_model_wrapper,
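For concreteness, a sketch of what deferring to the base class could look like if FusedMatMul's inputs need no special preprocessing. The signature is paraphrased from the quoted declaration and may not match the PR exactly:

```cpp
// Sketch only. If the inputs need no special handling, the override can
// simply forward to (or be deleted in favor of) the base implementation,
// which walks node_unit.Inputs() and registers each tensor with QNN.
Status FusedMatMulOpBuilder::ProcessInputs(QnnModelWrapper& qnn_model_wrapper,
                                           const NodeUnit& node_unit,
                                           const logging::Logger& logger,
                                           std::vector<std::string>& input_names,
                                           bool do_op_validation) const {
  return BaseOpBuilder::ProcessInputs(qnn_model_wrapper, node_unit, logger,
                                      input_names, do_op_validation);
}
```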
QnnTensorWrapper matmul_output_tensor(matmul_output_name,
                                      QNN_TENSOR_TYPE_NATIVE,
                                      output_info.qnn_data_type,
                                      QnnQuantParamsWrapper(),
Just curious - does a default quant param work with a quantized data type (like QNN_DATATYPE_SFIXED_POINT_8)? Does the backend compute the quantization params dynamically?
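If the intermediate tensor does need real quantization params, the pattern used elsewhere in the QNN builders is to copy the recorded params instead of default-constructing them. A sketch, assuming `TensorInfo` exposes them as `quant_param` (an assumption about this code path, not a confirmed detail of the PR):

```cpp
// Sketch: propagate the recorded quantization params for the intermediate
// tensor rather than a default-constructed (undefined) QnnQuantParamsWrapper.
// Assumes output_info.quant_param holds the params for the node's output.
QnnTensorWrapper matmul_output_tensor(matmul_output_name,
                                      QNN_TENSOR_TYPE_NATIVE,
                                      output_info.qnn_data_type,
                                      output_info.quant_param.Copy()
                                      /* remaining args as in the PR */);
```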