-
@Snitte I'm not familiar with an 'extract' function. Can you cite a source? We've written tutorials that cover most of the common YOLO use cases here: Tutorials.
-
@Snitte export.py should work identically under Windows, and export tests are part of the CI tests run on the 3 major operating systems, including Windows. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.8 environment, clone the latest repo (code changes daily), and reinstall the requirements.txt dependencies.

Requirements: Python 3.8 or later with all requirements.txt dependencies installed. To install run: $ pip install -r requirements.txt

Environments: YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status: If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing. These tests evaluate proper operation of basic YOLOv5 functionality, including training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu.
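To actually make use of an ONNX export outside the repo, one option is the onnxruntime package. Below is a minimal sketch, assuming export.py produced yolov5s.onnx at the default 640x640 input size; the file name and image path are placeholders, onnxruntime and opencv-python must be installed separately, and the raw output still needs confidence filtering and NMS (which detect.py normally handles).

```python
# Minimal sketch: inference on a YOLOv5 ONNX export with onnxruntime.
# File name, image path, and 640x640 size are assumptions from default export settings.
import cv2
import numpy as np
import onnxruntime as ort

# The providers argument is available in recent onnxruntime releases; omit it on older ones.
session = ort.InferenceSession("yolov5s.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # typically "images" for YOLOv5 exports

# Preprocess: BGR -> RGB, resize to the export resolution, scale to 0-1, NCHW layout.
img = cv2.imread("data/images/zidane.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640)).astype(np.float32) / 255.0
img = np.ascontiguousarray(np.transpose(img, (2, 0, 1))[None])  # (1, 3, 640, 640)

# Run the network; the result is raw predictions that still require NMS post-processing.
outputs = session.run(None, {input_name: img})
print(outputs[0].shape)  # e.g. (1, 25200, 85) for a 640x640 COCO model
```

Note that this only shows how to get raw predictions out of the exported graph; the decoding and NMS steps are unchanged from regular PyTorch inference.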
-
I am perhaps not too bright, or I have been running in circles for too long.
I have been trying to speed up my inference and have found that the export function might be the key. However, I cannot figure out how to make use of the ONNX or TorchScript models. I have tried installing and running NVIDIA TensorRT but came up short, as it apparently does not run on Windows.
If any of you have a "for dummies" guide or the like, it would be much appreciated!
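For the TorchScript side, a rough sketch of loading an export with torch.jit.load is below. The file name is an assumption (adjust it to whatever export.py actually wrote for your weights), and the output structure varies between repo versions, so inspect it before writing any post-processing.

```python
# Rough sketch: loading a YOLOv5 TorchScript export for inference.
# "yolov5s.torchscript.pt" is an assumed file name from a default export.
import torch

model = torch.jit.load("yolov5s.torchscript.pt", map_location="cpu")
model.eval()

# Dummy 640x640 NCHW input; swap in a real preprocessed image tensor.
dummy = torch.zeros(1, 3, 640, 640)
with torch.no_grad():
    out = model(dummy)

# The output layout differs between export versions (a single tensor, or a tuple
# whose first element is the prediction tensor), so inspect it before adding NMS.
print(type(out))
```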