
Converting ONNX to NNP opset version error #1170

Open
Hrithik2212 opened this issue Feb 18, 2023 · 2 comments


Hrithik2212 commented Feb 18, 2023

Hi,
I have been trying to build a downsized U-Net model to deploy on the Spresense board.

I have been using PyTorch. To deploy on the Spresense board, the model file needs to be in .nnb format.

So I use the following steps to generate an .nnb model file.

PyTorch code to generate the .onnx file

    import torch

    # UNet is the downsized U-Net defined earlier
    model = UNet()
    dummy_input = torch.randn(1, 1, 256, 256)
    input_names = ["actual_input"]
    output_names = ["output"]
    torch.onnx.export(model,
                      dummy_input,
                      "/content/unetv0.onnx",
                      verbose=False,
                      input_names=input_names,
                      output_names=output_names,
                      export_params=True,
                      opset_version=13)
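
As a quick sanity check of the exported file (a minimal sketch, assuming the onnx and onnxruntime packages are installed; the input name matches the export call above):

    import numpy as np
    import onnx
    import onnxruntime as ort

    # Structural validation of the exported graph
    onnx.checker.check_model(onnx.load("/content/unetv0.onnx"))

    # One dummy inference to confirm the float32 model runs before quantization
    sess = ort.InferenceSession("/content/unetv0.onnx")
    dummy = np.random.randn(1, 1, 256, 256).astype(np.float32)
    outputs = sess.run(None, {"actual_input": dummy})
    print([o.shape for o in outputs])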

ONNX code to quantize the model to int8

    import onnx
    from onnxruntime.quantization import quantize_dynamic, QuantType

    model_fp32 = "/content/unetv0.onnx"
    model_quant = "/content/unetv0_qaunt.onnx"
    quantized_model = quantize_dynamic(model_fp32, model_quant)
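
To see what the quantizer actually changed (a rough sketch; dynamic quantization typically rewrites MatMul/Conv-style nodes into integer variants, which matters for the error below):

    from collections import Counter
    import onnx

    # Compare operator counts in the float32 and quantized graphs
    fp32_ops = Counter(n.op_type for n in onnx.load(model_fp32).graph.node)
    int8_ops = Counter(n.op_type for n in onnx.load(model_quant).graph.node)
    print("ops added by quantization:", int8_ops - fp32_ops)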

Convert the opset version of the ONNX model to 13

    import onnx
    from onnx import version_converter

    model_path = "/content/unetv0_qaunt.onnx"
    original_model = onnx.load(model_path)
    converted_model = version_converter.convert_version(original_model, 13)
    onnx.save(converted_model, "/content/unetv0_qaunt13.onnx")
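
To confirm the conversion actually took effect, the declared opset imports of the converted model can be printed (a small check on the converted_model object from the block above):

    # The default domain (empty string) should now report version 13
    for opset in converted_model.opset_import:
        print(opset.domain or "ai.onnx", opset.version)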

CLI command to convert the ONNX file to an NNP file

    !nnabla_cli convert /content/unetv0_qaunt.onnx /content/output.nnp
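
For the Spresense deployment itself, the resulting .nnp still has to be turned into .nnb; a sketch of that final step, assuming nnabla_cli infers the output format from the .nnb extension:

    !nnabla_cli convert /content/output.nnp /content/output.nnb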

ISSUE


When I run the ONNX to NNP conversion command without quantization it works; however, if I quantize the model I get the following opset error:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/cli/cli.py", line 147, in cli_main
        return_value = args.func(args)
      File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/cli/convert.py", line 111, in convert_command
        nnabla.utils.converter.convert_files(args, args.files, output)
      File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/commands.py", line 255, in convert_files
        nnp = _import_file(args, ifiles)
      File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/commands.py", line 41, in _import_file
        return OnnxImporter(*ifiles).execute()
      File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/onnx/importer.py", line 4449, in execute
        return self.onnx_model_to_nnp_protobuf()
      File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/onnx/importer.py", line 4426, in onnx_model_to_nnp_protobuf
        raise ValueError(
    ValueError: Unsupported opset from domain ai.onnx.m
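
To narrow down which layers come from the domain named in the error, a small inspection sketch over the quantized model (using only the onnx Python API):

    import onnx

    m = onnx.load("/content/unetv0_qaunt.onnx")
    # Opset domains the model declares
    print([(op.domain or "ai.onnx", op.version) for op in m.opset_import])
    # Nodes registered under a non-default domain are the likely culprits
    for node in m.graph.node:
        if node.domain not in ("", "ai.onnx"):
            print(node.domain, node.op_type, node.name)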

@TomonobuTsujikawa
Contributor

Thank you for reporting!
Currently, we do not support importing quantized ONNX models into NNP, because some of the ONNX layers cannot be converted.
But please let us look into which layers are causing this error.

@TomonobuTsujikawa
Contributor

nnabla does not support importing quantized ONNX models, and there is no information on when we will be able to support it.
