
Commit 1d6a066

updated main and samples readme (#135)
1 parent 13bfbc5 commit 1d6a066

6 files changed: +213 −14 lines

Readme.md

Lines changed: 32 additions & 14 deletions
@@ -169,21 +169,9 @@ By default, torch-ort-infer depends on PyTorch 1.12 and ONNX Runtime OpenVINO EP
 
   - `python -m torch_ort.configure`
 
-## Verify your installation
-
-Once you have created your environment, execute the following steps to validate that your installation is correct.
-
-1. Clone this repo
-
-   - `git clone git@github.com:pytorch/ort.git`
-   <br/><br/>
-2. Install extra dependencies
-
-   - `pip install wget pandas transformers`
-   <br/><br/>
-3. Run the inference script
+## Samples
 
-   - `python ./ort/torch_ort_inference/tests/bert_for_sequence_classification.py`
+To see OpenVINO™ integration with Torch-ORT in action, see [demos](/torch_ort_inference/demos), which show how to run inference on some of the most popular Deep Learning models.
 
 ## Add ONNX Runtime for PyTorch to your PyTorch inference script
 
@@ -212,12 +200,42 @@ If no provider options are specified by user, OpenVINO™ Execution Provider is
 backend = "CPU"
 precision = "FP32"
 ```
+
 For more details on APIs, see [usage.md](/torch_ort_inference/docs/usage.md).
 
 ### Note
 
 Support for Intel® MyriadX VPU is experimental in this preview.
 
+## Code Sample
+
+Below is an example of how you can leverage OpenVINO™ integration with Torch-ORT in a simple NLP use case.
+A pretrained [BERT model](https://huggingface.co/textattack/bert-base-uncased-CoLA) fine-tuned on the CoLA dataset from the HuggingFace model hub is used to predict the grammatical correctness of a given input text.
+
+```python
+from transformers import AutoTokenizer, AutoModelForSequenceClassification
+import numpy as np
+from torch_ort import ORTInferenceModule
+
+tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-CoLA")
+model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-CoLA")
+
+# Wrap the model in ORTInferenceModule to prepare it for inference
+# using the OpenVINO Execution Provider on CPU
+model = ORTInferenceModule(model)
+
+text = "Replace me by any text you'd like."
+encoded_input = tokenizer(text, return_tensors="pt")
+output = model(**encoded_input)
+
+# Post processing
+logits = output.logits
+logits = logits.detach().cpu().numpy()
+# Predictions: index of the highest logit per input
+pred = np.argmax(logits, axis=1).flatten()
+print("Grammar correctness label (0=unacceptable, 1=acceptable)")
+print(pred)
+```
+
 ## License
 
 This project has an MIT license, as found in the [LICENSE](LICENSE) file.
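
If you want something other than the CPU/FP32 defaults shown above, provider options can be overridden when wrapping the model. A minimal sketch, assuming the `OpenVINOProviderOptions` API described in [usage.md](/torch_ort_inference/docs/usage.md):

```python
from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

# Target the OpenVINO GPU backend with FP16 precision instead of
# the CPU/FP32 defaults listed above
provider_options = OpenVINOProviderOptions(backend="GPU", precision="FP16")
model = ORTInferenceModule(model, provider_options=provider_options)
```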

torch_ort_inference/demos/bert.md

Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@ (new file)

# BERT for Sequence Classification

This demo shows how to use Intel® OpenVINO™ integration with Torch-ORT to check grammar in text with the ONNX Runtime OpenVINO Execution Provider.

We use the sequence classification model [textattack/bert-base-uncased-CoLA](https://huggingface.co/textattack/bert-base-uncased-CoLA) from the HuggingFace model hub. This model is built on the BERT architecture and trained to check grammar.

## Model Metadata

| Domain | Application | Industry | Framework | Input Data Format |
| ------ | ----------- | -------- | --------- | ----------------- |
| NLP | Sequence Classification | General | PyTorch | Text |

## Prerequisites

- Ubuntu 18.04, 20.04
- Python* 3.7, 3.8 or 3.9

## Install in a local Python environment

1. Upgrade pip

   - `pip install --upgrade pip`
   <br/><br/>
2. Install torch-ort-infer with OpenVINO dependencies

   - `pip install torch-ort-infer[openvino]`
   <br/><br/>
3. Run the post-installation script

   - `python -m torch_ort.configure`

## Verify your installation

Once you have created your environment, execute the following steps to validate that your installation is correct.

1. Clone this repo

   - `git clone https://github.com/pytorch/ort.git`
   <br/><br/>
2. Install extra dependencies

   - `pip install wget pandas transformers`
   <br/><br/>
3. Run the inference script with default options

   - `python ./ort/torch_ort_inference/demos/bert_for_sequence_classification.py`
   <br/><br/>

**Note**: OpenVINOExecutionProvider is enabled with CPU and FP32 by default.

## Run demo with custom options

```
usage: bert_for_sequence_classification.py [-h] [--pytorch-only] [--input INPUT] [--input-file INPUT_FILE] [--provider PROVIDER] [--backend BACKEND] [--precision PRECISION]

PyTorch BERT Sequence Classification Example

optional arguments:
  -h, --help               show this help message and exit
  --pytorch-only           disables ONNX Runtime inference
  --input INPUT            input sentence, put it in quotes
  --input-file INPUT_FILE  path to input file in .tsv format
  --provider PROVIDER      ONNX Runtime Execution Provider
  --backend BACKEND        OpenVINO target device (CPU or GPU)
  --precision PRECISION    OpenVINO target device precision (FP16 or FP32)
```

**Note**: Default options and inputs are selected if no arguments are given.
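
For example, a hypothetical invocation that supplies a custom sentence while keeping the default CPU/FP32 target (flags as listed above):

```
python ./ort/torch_ort_inference/demos/bert_for_sequence_classification.py --input "User input is valid not." --backend CPU --precision FP32
```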

## Expected output

```
OpenVINOExecutionProvider is enabled with CPU and FP32 by default.
Input not provided! Using default input...

Number of sentences: 2
Grammar correctness label (0=unacceptable, 1=acceptable)

'This is a BERT sample.' : 1
'User input is valid not.' : 0

Average inference time: 25.2306ms
Total Inference time: 50.4613ms
```

**Note**: This demo performs a warm-up run and then measures inference time on the subsequent runs. The execution time of the first run is generally higher than that of later runs because it includes the inline conversion to ONNX and many one-time graph transformation and optimization steps.
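
As a rough illustration of that measurement pattern (a sketch only, reusing `model` and `encoded_input` from the Readme's code sample, not the demo's actual code):

```python
import time

# Warm-up run: the first forward pass pays the one-time cost of
# ONNX export and graph transformations/optimizations
model(**encoded_input)

# Time only the subsequent runs
runs = 2
start = time.perf_counter()
for _ in range(runs):
    model(**encoded_input)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Total inference time: {elapsed_ms:.4f}ms")
print(f"Average inference time: {elapsed_ms / runs:.4f}ms")
```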

For more details on APIs, see [usage.md](/torch_ort_inference/docs/usage.md).

torch_ort_inference/tests/bert_for_sequence_classification.py renamed to torch_ort_inference/demos/bert_for_sequence_classification.py

File renamed without changes.
torch_ort_inference/demos/plane.jpg

9.98 MB (new binary image file; preview not shown)

torch_ort_inference/demos/resnet.md

Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@ (new file)

# ResNet-50 Image Classification

This demo shows how to use Intel® OpenVINO™ integration with Torch-ORT to classify objects in images with the ONNX Runtime OpenVINO Execution Provider.

We use the image classification model [ResNet-50](https://pytorch.org/vision/stable/models/generated/torchvision.models.resnet50.html#torchvision.models.resnet50) from Torchvision and [ImageNet labels](https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt) to classify objects. The labels file contains the 1,000 categories that were used in the ImageNet competition.

## Data

The input to the model is a 224 x 224 image (an airplane in our case), and the output is a list of estimated class probabilities.

<p align="center" width="100%"> <img src="plane.jpg" alt="drawing" height="300" width="400"/></p>

## Model Metadata

| Domain | Application | Industry | Framework | Training Data | Input Data Format |
| ------ | ----------- | -------- | --------- | ------------- | ----------------- |
| Vision | Image Classification | General | PyTorch | [ImageNet](http://www.image-net.org/) | Image (RGB/HWC) |

## Prerequisites

- Ubuntu 18.04, 20.04
- Python* 3.7, 3.8 or 3.9

## Install in a local Python environment

1. Upgrade pip

   - `pip install --upgrade pip`
   <br/><br/>
2. Install torch-ort-infer with OpenVINO dependencies

   - `pip install torch-ort-infer[openvino]`
   <br/><br/>
3. Run the post-installation script

   - `python -m torch_ort.configure`

## Verify your installation

Once you have created your environment, execute the following steps to validate that your installation is correct.

1. Clone this repo

   - `git clone https://github.com/pytorch/ort.git`
   <br/><br/>
2. Install extra dependencies

   - `pip install wget Pillow torchvision`
   <br/><br/>
3. Run the inference script

   - `python ./ort/torch_ort_inference/demos/resnet_image_classification.py --input-file ./ort/torch_ort_inference/demos/plane.jpg`
   <br/><br/>

**Note**: OpenVINOExecutionProvider is enabled with CPU and FP32 by default.

## Run demo with custom options

```
usage: resnet_image_classification.py [-h] [--pytorch-only] [--labels LABELS] --input-file INPUT_FILE [--provider PROVIDER] [--backend BACKEND] [--precision PRECISION]

PyTorch Image Classification Example

optional arguments:
  -h, --help               show this help message and exit
  --pytorch-only           disables ONNX Runtime inference
  --labels LABELS          path to labels file
  --input-file INPUT_FILE  path to input image file
  --provider PROVIDER      ONNX Runtime Execution Provider
  --backend BACKEND        OpenVINO target device (CPU, GPU or MYRIAD)
  --precision PRECISION    OpenVINO target device precision (FP16 or FP32)
```

**Note**: Some default options are selected if no arguments are given.
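
For example, a hypothetical invocation that switches the target to the GPU backend with FP16 precision (flags as listed above):

```
python ./ort/torch_ort_inference/demos/resnet_image_classification.py --input-file ./ort/torch_ort_inference/demos/plane.jpg --backend GPU --precision FP16
```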

## Expected Output

For the input image of an airplane, you should see output similar to:

```
Labels , Probabilities:
airliner 0.9133861660957336
wing 0.08387967199087143
airship 0.001151240081526339
warplane 0.00030989135848358274
projectile 0.0002502237621229142
```

Here, the network classifies the image as an airliner, with a high score of 0.91.

**Note**: This demo performs a warm-up run and then measures inference time on the subsequent runs. The execution time of the first run is generally higher than that of later runs because it includes the inline conversion to ONNX and many one-time graph transformation and optimization steps.

For more details on APIs, see [usage.md](/torch_ort_inference/docs/usage.md).
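
To make the pipeline concrete, here is a minimal sketch of the preprocessing and top-5 readout described above, assuming standard Torchvision ImageNet preprocessing and a local `imagenet_classes.txt` labels file (illustrative only, not the demo script itself):

```python
import torch
from PIL import Image
from torchvision import models, transforms
from torch_ort import ORTInferenceModule

# Standard ImageNet preprocessing: resize, center-crop to 224 x 224, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
batch = preprocess(Image.open("plane.jpg").convert("RGB")).unsqueeze(0)

# Wrap a pretrained ResNet-50 for inference with the OpenVINO Execution Provider
model = ORTInferenceModule(models.resnet50(pretrained=True).eval())

with torch.no_grad():
    probabilities = torch.nn.functional.softmax(model(batch)[0], dim=0)

# Print the five most likely ImageNet labels with their probabilities
labels = [line.strip() for line in open("imagenet_classes.txt")]
top5 = torch.topk(probabilities, 5)
for prob, idx in zip(top5.values, top5.indices):
    print(labels[int(idx)], prob.item())
```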

torch_ort_inference/tests/resnet_image_classification.py renamed to torch_ort_inference/demos/resnet_image_classification.py

File renamed without changes.
