
Conversation

@tirupath-qti
Contributor

Description

Add support for the FusedMatMul operator in the QNN execution provider.
FusedMatMul is a contrib operator in the Microsoft domain that performs
a fused matrix multiplication with optional bias addition and activation.

Implementation details:

  • Added FusedMatMulOpBuilder class that decomposes FusedMatMul into:
    1. MatMul operation
    2. Optional bias addition
    3. Optional activation (Relu, Sigmoid, Tanh, Gelu)
  • Handles various attributes: transA, transB, alpha, and activation
  • Supports higher-rank tensors and different data types (a reference sketch of the operator's semantics follows this list)
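
For reference, FusedMatMul as described above computes Y = activation(alpha * op(A) * op(B)), where op(X) transposes X when the corresponding trans attribute is set. Below is a minimal, self-contained 2-D sketch in plain C++; FusedMatMulRef and the relu flag (standing in for the optional activation) are illustrative names, not the actual ORT or QNN EP code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Reference semantics of FusedMatMul for 2-D, row-major inputs:
//   Y = activation(alpha * op(A) * op(B)), op(X) = X^T if trans* is set.
std::vector<float> FusedMatMulRef(const std::vector<float>& A, size_t a_rows, size_t a_cols,
                                  const std::vector<float>& B, size_t b_rows, size_t b_cols,
                                  bool transA, bool transB, float alpha, bool relu) {
  const size_t M = transA ? a_cols : a_rows;  // rows of op(A)
  const size_t K = transA ? a_rows : a_cols;  // shared inner dimension
  const size_t N = transB ? b_rows : b_cols;  // cols of op(B)
  assert((transB ? b_cols : b_rows) == K);
  std::vector<float> Y(M * N, 0.0f);
  for (size_t m = 0; m < M; ++m) {
    for (size_t n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (size_t k = 0; k < K; ++k) {
        const float a = transA ? A[k * a_cols + m] : A[m * a_cols + k];
        const float b = transB ? B[n * b_cols + k] : B[k * b_cols + n];
        acc += a * b;
      }
      const float scaled = alpha * acc;             // alpha scaling
      Y[m * N + n] = relu ? std::max(scaled, 0.0f)  // optional activation
                          : scaled;
    }
  }
  return Y;
}
```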

Added comprehensive tests:

  • Basic functionality tests with various configurations
  • Tests for both CPU and HTP backends
  • QDQ (Quantize-Dequantize) tests for 8-bit and 16-bit precision (see the quantization round-trip sketch after this list)
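
For context on the QDQ tests: QDQ models wrap float tensors in QuantizeLinear/DequantizeLinear pairs, and the standard per-tensor affine round trip for the 8-bit unsigned case looks like the sketch below (illustrative helpers, not the actual test code; 16-bit works the same way with a wider integer type).

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// QuantizeLinear: q = clamp(round(x / scale) + zero_point, 0, 255).
// Note: the ONNX spec uses round-half-to-even; std::lround is a close
// illustrative stand-in here.
uint8_t QuantizeU8(float x, float scale, int32_t zero_point) {
  const int32_t q = static_cast<int32_t>(std::lround(x / scale)) + zero_point;
  return static_cast<uint8_t>(std::clamp(q, 0, 255));
}

// DequantizeLinear: x' = (q - zero_point) * scale.
float DequantizeU8(uint8_t q, float scale, int32_t zero_point) {
  return static_cast<float>(static_cast<int32_t>(q) - zero_point) * scale;
}
```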

Motivation and Context

Since QNN HTP doesn't support FusedMatMul natively, we decompose it into QNN-HTP-supported operators to improve the inference time of customer models that contain the FusedMatMul operator.

@tirupath-qti
Contributor Author

@edgchen1 and @yuslepukhin
If possible, can we get this reviewed and tracked for the 1.24 release? This is needed to enable a customer model.

@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline,Windows ARM64 QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

Contributor

Copilot AI left a comment

Pull request overview

This PR adds support for the FusedMatMul operator in the QNN execution provider by decomposing it into QNN-supported operations (MatMul, optional transpose, and optional alpha scaling).

Changes:

  • Added FusedMatMulOpBuilder class that decomposes FusedMatMul into MatMul with optional batch transpose and alpha scaling operations (a minimal sketch of this staged decomposition follows this list)
  • Added comprehensive test coverage for FusedMatMul on both CPU and HTP backends, including QDQ tests
  • Modified existing context tests to use FusedGemm instead of FusedMatMul (unrelated to the main purpose)
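
To make the staged decomposition concrete, a minimal sketch on plain row-major buffers follows; Transpose2D, MatMul2D, and ScaleInPlace are hypothetical helpers mirroring the op sequence described above (transpose an input when its trans attribute is set, run a plain MatMul, then apply alpha as a separate elementwise multiply), not the QNN EP API.

```cpp
#include <cstddef>
#include <vector>

// Transpose a rows x cols row-major matrix.
std::vector<float> Transpose2D(const std::vector<float>& X, size_t rows, size_t cols) {
  std::vector<float> Y(X.size());
  for (size_t r = 0; r < rows; ++r)
    for (size_t c = 0; c < cols; ++c)
      Y[c * rows + r] = X[r * cols + c];
  return Y;
}

// Plain MatMul on (M x K) and (K x N) row-major matrices.
std::vector<float> MatMul2D(const std::vector<float>& A, const std::vector<float>& B,
                            size_t M, size_t K, size_t N) {
  std::vector<float> Y(M * N, 0.0f);
  for (size_t m = 0; m < M; ++m)
    for (size_t k = 0; k < K; ++k)
      for (size_t n = 0; n < N; ++n)
        Y[m * N + n] += A[m * K + k] * B[k * N + n];
  return Y;
}

// Alpha scaling as its own elementwise step, matching the decomposed graph.
void ScaleInPlace(std::vector<float>& Y, float alpha) {
  for (float& y : Y) y *= alpha;
}
```

With these pieces, FusedMatMul(A, B) reduces to transposing whichever inputs have their trans attribute set, calling MatMul2D, then calling ScaleInPlace with alpha.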

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.

File summary:
  • onnxruntime/test/providers/qnn/qnn_ep_context_test.cc: Changed test model from FusedMatMul to FusedGemm with adjusted tensor shapes
  • onnxruntime/test/providers/qnn/fused_matmul_op_test.cc: New comprehensive test suite for the FusedMatMul operator with various configurations
  • onnxruntime/test/contrib_ops/fused_matmul_op_test.cc: Added QNN EP to the exclusion list for existing tests
  • onnxruntime/core/providers/qnn/builder/opbuilder/fused_matmul_op_builder.cc: New operator builder implementation decomposing FusedMatMul into QNN operations
  • onnxruntime/core/providers/qnn/builder/op_builder_factory.h: Added function declaration for the FusedMatMul builder
  • onnxruntime/core/providers/qnn/builder/op_builder_factory.cc: Registered the FusedMatMul operator builder


Comment on lines +85 to +88
TensorInfo input_info_0{};
TensorInfo input_info_1{};
ORT_RETURN_IF_ERROR(qnn_model_wrapper.GetTensorInfo(inputs[0], input_info_0));
ORT_RETURN_IF_ERROR(qnn_model_wrapper.GetTensorInfo(inputs[1], input_info_1));
Contributor

nit: doesn't seem to be used

return Status::OK();
}

Status FusedMatMulOpBuilder::ProcessInputs(QnnModelWrapper& qnn_model_wrapper, const NodeUnit& node_unit,
Contributor

Could FusedMatMulOpBuilder use the default implementation in the base class (BaseOpBuilder::ProcessInputs)?

Status BaseOpBuilder::ProcessInputs(QnnModelWrapper& qnn_model_wrapper,

QnnTensorWrapper matmul_output_tensor(matmul_output_name,
QNN_TENSOR_TYPE_NATIVE,
output_info.qnn_data_type,
QnnQuantParamsWrapper(),
Contributor

Just curious - does a default quant param work with a quantized data type (like QNN_DATATYPE_SFIXED_POINT_8)? Does the backend compute the quantization params dynamically?
