[POC][OV] Support OpenVINO as Keras 3 backend #19727
base: master
Conversation
Signed-off-by: Kazantsev, Roman <[email protected]>
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Thanks for the PR!
@@ -289,6 +291,9 @@ def __init__(
         self._convert_input_args = True
         # Whether to allow non-tensors as positional arguments in `call()`.
         self._allow_non_tensor_positional_args = False
+        if backend.backend() == "openvino":
This should not be backend-specific.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hi @fchollet, please advise how to avoid this code. Keras core requires the object to be of KerasTensor type when calling op-by-op. How can we relax this check and allow passing an OV-typed object into the ops wrappers?
All you need is for backend.is_tensor(x) to return True for the types you consider to be tensors.
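For illustration, the suggestion above could be sketched as a backend-level `is_tensor` that also recognizes the backend's native output type. This is only a sketch: `FakeOVOutput` is a hypothetical stand-in for `openvino.runtime.Output`, not the real class.

```python
import numpy as np

class FakeOVOutput:
    """Hypothetical stand-in for openvino.runtime.Output."""
    pass

def is_tensor(x):
    # Treat both NumPy arrays and the backend's native outputs as tensors,
    # so Keras op wrappers accept them without conversion.
    return isinstance(x, (np.ndarray, FakeOVOutput))

print(is_tensor(np.zeros(3)))    # True
print(is_tensor(FakeOVOutput())) # True
print(is_tensor("not a tensor")) # False
```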
@fchollet, I redefined it for my backend, but openvino.runtime.Output does not have a dtype attribute, so an exception is raised here: https://github.com/keras-team/keras/blob/master/keras/src/dtype_policies/dtype_policy.py#L200
We retrieve the dtype via the method get_element_type(). I think backend.standardize_dtype should accept x instead of x.dtype.
How do you suggest I resolve this exception? Thanks in advance for your help.
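The proposal above (have standardize_dtype accept the tensor itself rather than x.dtype) could look roughly like this. Everything here is a hypothetical sketch: `FakeOVOutput` and `FakeElementType` stand in for the real OpenVINO classes, and the type-name mapping is illustrative only.

```python
import numpy as np

class FakeElementType:
    """Hypothetical stand-in for an OpenVINO element type object."""
    def __init__(self, name):
        self._name = name
    def get_type_name(self):
        return self._name

class FakeOVOutput:
    """Stand-in for openvino.runtime.Output: no .dtype, only get_element_type()."""
    def __init__(self, type_name):
        self._type = FakeElementType(type_name)
    def get_element_type(self):
        return self._type

# Illustrative mapping from OV element-type names to Keras dtype strings.
OV_TO_KERAS = {"f32": "float32", "i32": "int32"}

def standardize_dtype(x):
    # Accept the tensor itself: if it exposes get_element_type(), use it;
    # otherwise fall back to the usual dtype-based path.
    if hasattr(x, "get_element_type"):
        return OV_TO_KERAS[x.get_element_type().get_type_name()]
    return str(np.dtype(getattr(x, "dtype", x)))

print(standardize_dtype(FakeOVOutput("f32")))        # float32
print(standardize_dtype(np.zeros(3, dtype="int32"))) # int32
```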
Codecov Report
Attention: see patch coverage details below.
Additional details and impacted files@@ Coverage Diff @@
## master #19727 +/- ##
==========================================
- Coverage 81.98% 81.05% -0.93%
==========================================
Files 514 525 +11
Lines 47120 47738 +618
Branches 7401 7455 +54
==========================================
+ Hits 38633 38696 +63
- Misses 6701 7251 +550
- Partials 1786 1791 +5
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Hi @rkazants, can you please resolve the conflicts? Thank you!
I am working on this PR and trying to resolve the comments. Best regards,
Hi @rkazants, any update on this PR? Thank you!
This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you. |
I am just back from vacation; I plan to continue next week.
This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you. |
Hi @rkazants, any update on this PR? Thank you!
@fchollet, please review; all other comments are addressed.
@fchollet, thanks!
Hi @fchollet, looking forward to your review :) Thanks in advance,
@fchollet, kind reminder about the review. Thank you,
Hi @fchollet, any update? Please let me know if you have any concerns. Thanks,
Sorry, only seeing this now. It looks better! It seems the set of ops currently implemented is quite small, however. What kinds of end-to-end workflows are currently working? What level of op coverage do you think you can achieve?
Hi @fchollet, currently we support only an initial set of operations. In the future, from my quick glance, we will be able to cover a solid number of operations, because the OpenVINO opset is quite well developed: it supports a lot of TF, PyTorch, and ONNX models from different hubs (TF, ONNX, HF, PyTorch, etc.) for inference, and we continuously develop it to cover the TF, PyTorch, and ONNX opsets. You can find all OV operations here: https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets/operation-specs.html

Let me know how many operations I should cover in this PR to get it merged. That way we will have base OpenVINO support here and can continue op coverage in future PRs. For the base OpenVINO support, I propose to support the operations required by the BERT model without preprocessor from Keras Hub, for example, and demonstrate that it works. Example below. Will it be sufficient for merge?

import os
os.environ["KERAS_BACKEND"] = "openvino"

import numpy as np
import keras_nlp

features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}

# Load a BERT model.
classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_base_en_uncased",
    num_classes=2,
    preprocessor=None,
)
classifier.predict(features)

In the description of this PR, I demonstrated very simple model inference. Please check.
We will have to run only a very small subset of unit tests on CI for OpenVINO, since we cannot skip every test that isn't passing (there are too many). So we would basically have to create an OpenVINO-specific integration test that checks precisely the end-to-end workflows that are intended to be supported. Are you going to add support for CV workflows? I would recommend also having support for conv/pooling ops.
Yes, I will add separate integration tests for OpenVINO to cover the currently supported ops. For this PR, let me enable MobileNetV3 from KerasCV, which will require supporting conv/pooling operations. Summarizing, my to-do list for this PR looks as follows:
@fchollet, will that be fine and sufficient to merge this PR? Thanks a lot,
Yes, that sounds good to me!
Details: Support OpenVINO as a Keras 3 backend. This is an inference-only backend. To switch it on, define the environment variable as follows:

os.environ["KERAS_BACKEND"] = "openvino"

or use set_backend. Here is an example of how it works:

Install OpenVINO:

pip install openvino -U
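One detail worth noting about the environment-variable switch above: Keras reads KERAS_BACKEND once, at import time, so the variable must be set before importing keras. A minimal sketch of just that ordering (the actual inference step additionally assumes an OpenVINO-enabled Keras build is installed):

```python
import os

# Keras reads KERAS_BACKEND at import time, so set it before `import keras`;
# setting it after the import has no effect on the active backend.
os.environ["KERAS_BACKEND"] = "openvino"

# (then: `import keras` picks up the "openvino" backend,
#  assuming a Keras build with this PR's backend is installed)
print(os.environ["KERAS_BACKEND"])
```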