integrations/tflite/ #8856
Replies: 16 comments 61 replies
-
I get the error "Mobile SSD models are expected to have exactly 4 outputs, found 1" when trying to integrate the model on Android. I added metadata for the input. How am I supposed to handle this, and can you provide an example?
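For context, a hedged sketch (the function name, threshold, and sample values below are mine, not from any Android API): that error typically means the TFLite Task Library's detector expects SSD-style models with four output tensors (locations, classes, scores, count), while YOLOv8's TFLite export has a single `[1, 4 + nc, anchors]` tensor. One workaround is to run the plain interpreter and build the four arrays yourself:

```python
def split_yolo_output(preds, conf_thres=0.25):
    """Turn rows of [cx, cy, w, h, class scores...] into SSD-style outputs.

    Returns (locations, classes, scores, num_detections).
    """
    locations, classes, scores = [], [], []
    for row in preds:
        box, cls_scores = row[:4], row[4:]
        best = max(range(len(cls_scores)), key=lambda i: cls_scores[i])
        if cls_scores[best] > conf_thres:
            locations.append(box)
            classes.append(best)
            scores.append(cls_scores[best])
    return locations, classes, scores, len(scores)

# Tiny synthetic tensor: nc = 2 classes, 3 anchors.
# One confident detection of class 1, two low-confidence anchors:
preds = [
    [0.5, 0.5, 0.2, 0.2, 0.05, 0.90],
    [0.1, 0.1, 0.1, 0.1, 0.01, 0.02],
    [0.9, 0.9, 0.1, 0.1, 0.03, 0.01],
]
locations, classes, scores, count = split_yolo_output(preds)
```

In a real app the `preds` rows would come from transposing the model's raw `[1, 4 + nc, anchors]` output; a confidence filter plus NMS would then replace the Task Library's built-in postprocessing.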
-
I'm getting the error "TensorFlow SavedModel: export failure ❌ 49.4s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined" when I try to export my YOLOv8 weights to TFLite format. Any idea how to solve this?
-
I am trying to export the default YOLOv8 model (yolov8n.pt) to TFLite, but after running this code:

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # build a new model from YAML

# Export the model
model.export(format='tflite')
```

I get an error. The trace log is below:

```
What you should do instead is wrap
ERROR: input_onnx_file_path: yolov8n.onnx
```
-
Hey, I've hit a problem. My conda list includes absl-py 2.1.0. When I convert the .pt to .tflite, this happens:

```
PyTorch: starting from 'E:\Lady_YOLO\YAML\yolov8n.pt' with input shape (1, 3, 160, 128) BCHW and output shape(s) (1, 84, 420) (6.2 MB)
```

This is my code, and the model is the official one:

```python
from ultralytics import YOLO
```
-
I am trying to run a YOLOv8 model with TFLite in C++ on a Linux ARM v8l device. May I know if TFLite can support this, and which version supports it? Would appreciate any help.
-
Hi, I trained a YOLOv8 model on a custom dataset containing 26 classes, but when I convert the model to TFLite I noticed that it gives [1, 30, 8400] as output, and this is what caused errors when using my model with Flutter. The error:
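A note on that shape, with a tiny illustrative decode (the values below are synthetic, and the helper name is mine): [1, 30, 8400] is the expected layout [1, 4 + nc, anchors], i.e. 4 box coordinates plus your 26 class scores for each of 8400 anchors, so the Flutter side has to decode this layout rather than expect a different one:

```python
# Assumed layout: dim 1 = 4 box coords (cx, cy, w, h) + nc class scores.
output_shape = (1, 30, 8400)
nc = output_shape[1] - 4               # -> 26 classes, matching the dataset

def cxcywh_to_xyxy(box):
    """Convert a center-format box to corner format for drawing or NMS."""
    cx, cy, w, h = box
    return [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]

corners = cxcywh_to_xyxy([0.5, 0.5, 0.2, 0.4])
```

A Flutter plugin that hard-codes a different class count (e.g. 80 for COCO) will mis-parse this tensor, which is one common source of the kind of error described above.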
-
Hello
-
Hi, the following is the content of my requirements.txt: and I am using the following code for the conversion:

```python
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("current_best4.pt")

# Export the model to TFLite format
model.export(format="tflite", int8=True, data='./datasets/data.yaml')  # creates 'yolov8n_float32.tflite'

# Load the exported TFLite model
tflite_model = YOLO("yolov8n_int8.tflite")

# Run inference
results = tflite_model("./frame_365.jpg")
results.show()
```

Any help will be greatly appreciated.
-
```python
import cv2
# ...
cap.release()
```

For this code I'm getting this error:

```
TensorFlow SavedModel: export failure ❌ 47.0s: No module named 'tensorflow_lite_support'
```
-
Hi! I'm trying to convert my model from .pt to .tflite but I get this error:

```
Ultralytics YOLOv8.2.73 🚀 Python-3.8.19 torch-2.1.2+cpu CPU (11th Gen Intel Core(TM) i5-1145G7 2.60GHz)
PyTorch: starting from 'runs\classify\train\weights\last.pt' with input shape (1, 3, 64, 64) BCHW and output shape(s) (1, 4) (2.8 MB)
TensorFlow SavedModel: starting export with tensorflow 2.13.0...
```

I have already tried to install the requirements indicated there, but I get a compatibility error. Can you help me please?
-
I trained a custom YOLOv8-seg model and converted it to TFLite, but when I imported it into my Android Studio project it says it's an invalid model. What should I do? The model works properly in Colab and segments the object in the image.
-
When exporting the pre-trained model to TFLite:

My export is crashing because the process tries to allocate 26 GB of RAM (exceeding the available limit). As I understand it, the data parameter in the export is used to verify the quantization (int8 param). When using the default coco8.yaml file, the process complains that the verification is not optimal (it wants at least 1000 images, but coco8 provides just 4). So to enhance this, I thought: why not use the entire COCO dataset? The question being: am I correct that using the entire COCO dataset will make the quantization better, or should I just stick with the default? And if the answer is yes, use the entire COCO set, how do I overcome the huge RAM allocation? Is there a dataset with, say, 1000 images, instead of the 120K+ images of the entire COCO set?
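Not an official Ultralytics utility, just a sketch of one possible approach: carve a fixed-size calibration subset out of the full dataset so the int8 export never sees the 120K+ images at once. The helper name, paths, and defaults below are all made up for illustration:

```python
import random
import shutil
from pathlib import Path

def make_calib_subset(src_dir, dst_dir, n=1000, seed=0):
    """Copy a reproducible random sample of up to n images into dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    images = sorted(src.glob("*.jpg"))
    random.Random(seed).shuffle(images)   # fixed seed -> same subset each run
    chosen = images[:min(n, len(images))]
    for img in chosen:
        shutil.copy(img, dst / img.name)
    return len(chosen)
```

A dataset YAML pointing at the subset directory could then be passed as the `data` argument of `model.export(format="tflite", int8=True, data=...)`, keeping calibration around the suggested 1000 images without the full-COCO memory footprint.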
-
Issue with Incorrect Predictions from Quantized YOLOv8m-obb Model (TFLite) in TensorFlow Framework I have exported my YOLOv8m-obb model to TFLite format with INT8 quantization enabled, using an image size of 640x640 and a data.yaml for my dataset. When I use the quantized model for inference with the Ultralytics framework (Oriented Bounding Boxes), the predictions are correct. However, when I use the same model in the TensorFlow framework, I encounter several issues with the output:
I suspect there might be an issue with how the quantization parameters (scale and zero-point) are being applied in TensorFlow, or possibly with the way I'm handling the model's output or the way I exported the model. I would appreciate guidance on how to correctly handle the quantized model in TensorFlow and resolve the issues with incorrect predictions.
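For anyone hitting the same mismatch, a minimal sketch of the affine int8 scheme TFLite documents, real = (quantized − zero_point) × scale; the scale and zero_point values below are invented for illustration, and in practice must be read from the tensor's quantization parameters (e.g. `interpreter.get_output_details()[0]["quantization"]`):

```python
# Illustrative quantization parameters -- NOT from a real model:
scale, zero_point = 0.02, -5

def quantize(real):
    """real -> int8: round(real / scale) + zero_point, clamped to int8."""
    q = round(real / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q):
    """int8 -> real: (q - zero_point) * scale."""
    return (q - zero_point) * scale

values = [0.0, 0.5, 1.0]
roundtrip = [dequantize(quantize(v)) for v in values]
```

If predictions that are correct under Ultralytics go wrong in raw TensorFlow, a common culprit is applying these two transforms with swapped, missing, or per-tensor-mismatched parameters on the input or output side.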
-
Thank you for your feedback! Unfortunately, the issue still persists. I
tested the model directly on the Ultralytics framework, and it works
perfectly, making predictions without any problems. This confirms that the
model is correctly trained and exported.
Regarding my implementation, I thoroughly checked the following steps:
1. *Image preprocessing*: resizing and normalization are done correctly,
quantizing with the formula quantized = real / scale + zero_point.
2. *Dequantization of outputs*: I applied the formula dequantized =
(quantized − zero_point) × scale, which seems consistent with the TFLite
specification.
Despite these checks, the predictions obtained through my implementation
remain inconsistent or incorrect. I can’t pinpoint the root cause of this
issue. Do you have any additional suggestions or insights to help me
resolve this?
Thank you in advance for your assistance!
On Wed, Dec 25, 2024 at 20:12, Glenn Jocher ***@***.***> wrote:
… Your implementation looks solid, but the issues you're encountering may
arise from input preprocessing or output interpretation. Ensure the
quantization parameters (scale and zero_point) are consistently applied
during preprocessing and postprocessing. For oriented bounding boxes (OBB),
verify that the output tensor format aligns with the model's
specifications, and refer to the TFLite export guide
<https://docs.ultralytics.com/integrations/tflite/> for additional
insights. Let us know if the problem persists!
-
hi there @glenn-jocher I am trying to convert the YOLOv10 model to int8 with TFLite format, but the same error keeps occurring. I've tried the solutions described in this thread but nothing seems to work. I am using this for the conversion:

```python
from ultralytics import YOLO

model = YOLO("current_best4.pt")
model.export(format="tflite", int8=True, data='./datasets/data.yaml', imgsz=640)  # creates 'yolov8n_float32.tflite'

tflite_model = YOLO("yolov8n_int8.tflite")
results = tflite_model("./frame_365.jpg")
results.show()
```

and the error I am receiving is this:

I have approx 5k images for calibration, so that's not the issue, and all the packages are up to date as well. Can you please suggest what solution I should opt for?
-
integrations/tflite/
Explore how to improve your Ultralytics YOLOv8 model's performance and interoperability using the TFLite export format suitable for edge computing environments.
https://docs.ultralytics.com/integrations/tflite/