Transform Larq dialect ops to Xcore #693
-
This sounds good. We'd be very happy to accept a PR adding this build target.
Unfortunately I think in the current setup there is no easy way of avoiding this conversion.
-
Hi,
I'm working on a project to transform Larq dialect ops into Xcore dialect ops, for deployment on the XMOS xcore architecture.
There are two concerns I wanted to discuss:
First, there is a small patch we need to apply to be able to include and build against the compute-engine repo, and also so that the lce_ops.td file can be included from the XMOS project. Here's the patch: https://github.com/xmos/ai_tools/blob/5f7412fe0e71e0f904dd539773f15fd51b54b899/experimental/xformer/patches/build_fix.patch
It would be good if this patch could be included in the compute-engine repo; I'm happy to put up a PR for this. Similar changes might be needed when the compute-engine repo moves up to TensorFlow 2.7.
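For context, until the patch lands upstream we apply it at fetch time. A rough sketch of how that looks in a Bazel WORKSPACE, using the standard `patches` attribute of `http_archive` (the repo name, commit placeholder, and patch label below are hypothetical, not our exact setup):

```python
# WORKSPACE sketch (illustrative values only)
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "lce",  # hypothetical external-repo name for larq/compute-engine
    urls = ["https://github.com/larq/compute-engine/archive/<commit>.zip"],
    strip_prefix = "compute-engine-<commit>",
    # Bazel applies these patches to the fetched sources before building,
    # so lce_ops.td becomes includable from the XMOS project.
    patches = ["//patches:build_fix.patch"],
    patch_args = ["-p1"],
)
```

Merging the fix upstream would let us drop the `patches` line entirely.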
The second concern is a more general question about transforming Larq dialect ops. Currently, we use the LCE Python interface to convert a Keras model to a TFLite model containing Larq ops such as LceQuantize and LceBconv2d. We then load this model and perform the necessary graph transformations to the Xcore dialect in our MLIR transformer.
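The conversion step above can be sketched as follows. This is a minimal illustration assuming the larq-compute-engine Python API (`lce.convert_keras_model`); the import is deferred behind an injectable `convert` parameter so the surrounding logic can be exercised without TensorFlow installed.

```python
def keras_to_tflite(model, convert=None):
    """Convert a Keras model with Larq layers to TFLite flatbuffer bytes.

    convert: injectable converter for testing; defaults to the real
    LCE converter (assumes larq-compute-engine is installed).
    """
    if convert is None:
        import larq_compute_engine as lce
        convert = lce.convert_keras_model
    # Returns the serialized .tflite model -- by this point the Larq ops
    # have already been lowered to TFL custom ops (the problem below).
    return convert(model)
```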
The issue is that by the time we load the model, the Larq ops have already been converted to TFL custom ops.
We would have to convert the custom ops back to Larq dialect ops by parsing their flexbuffer data and replacing them with new Larq ops. This seems quite error-prone: flexbuffers are schema-less, so there is no way to ensure type safety if the ops change.
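To make the fragility concrete, here is a toy sketch of the parsing side. Custom-op options travel as a schema-less flexbuffer map, modeled here as a plain dict; the attribute names are hypothetical. Because lookups are stringly typed, a renamed or retyped attribute on the producer side silently degrades to a default rather than failing loudly:

```python
def parse_bconv2d_options(options):
    """Recover (hypothetical) LceBconv2d attributes from a schema-less map.

    Nothing verifies the keys or value types -- if the producer renames
    "stride_height" to "stride_h", this silently returns the default.
    """
    return {
        "stride_h": options.get("stride_height", 1),
        "stride_w": options.get("stride_width", 1),
        "padding": options.get("padding", "VALID"),
    }
```

A flatbuffer-schema-defined ops table, by contrast, would catch such mismatches at build time.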
I'm interested in knowing if there is a better way of doing this. Is there a way to avoid this conversion so that the ops remain in the Larq dialect? Would we have to add the Larq ops to the TFL flatbuffer schema?
Thanks for your time!