my model is optimizing the weights and giving me the option of preview and deployment #732
Comments
👋 Hello @PrakharJoshi54321, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more.
If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix. If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details, etc., so that we might provide the most helpful response. We try to respond to all issues as promptly as possible. Thank you for your patience!
@PrakharJoshi54321 Hello! The "Optimizing weights" process can take a while. Let's wait for a bit to see if the process finishes successfully. If the process fails, could you share your model ID (URL) so I can investigate?
Hello @PrakharJoshi54321, Thank you for providing the details and the screenshot. It looks like your model has completed the training process but encountered an issue during the weight optimization phase. Let's address this step by step.
For more detailed guidance, you can refer to the Ultralytics HUB Models Documentation. If the issue persists, please provide any error messages or logs you encounter, and we can further investigate the problem. Thank you for your patience and cooperation. The YOLO community and the Ultralytics team are here to help you!
@PrakharJoshi54321 It looks like your model didn’t successfully upload the weights, which is why Ultralytics HUB is asking you to resume training from the last checkpoint (62). I suggest resuming training as recommended in the UI.
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(

while cap.isOpened():

cap.release()

comment: ultralytics is just amazing, any help will be appreciated
Check if the speed is greater than 50 km/hr, and store the vehicle number, speed, and track ID in the Excel sheet.
Hello @PrakharJoshi54321, Thank you for your kind words about Ultralytics! We're thrilled to hear that you're enjoying using our tools. Let's enhance your script to store vehicle information in an Excel sheet when the speed exceeds 50 km/hr. Here's an updated version of your script that includes this functionality:
import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
# Load the models
speed_model = YOLO("yolov8n.pt") # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt') # Model for number plate detection
# Path to the video file
video_path = 'video.mp4' # Replace with your video file path
# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
line_pts = [(0, h // 2), (w, h // 2)] # Update line points based on video resolution
# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
reg_pts=line_pts,
names=speed_model.model.names,
view_img=True,
)
# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Error reading frame from video.")
break
# Speed detection and tracking
results = speed_model(im0)
if results:
print(f"Tracks detected: {len(results)}")
else:
print("No tracks detected in this frame.")
# Ensure tracks have valid data
for result in results:
for box in result.boxes:
x1, y1, x2, y2 = map(int, box.xyxy[0])
print(f"Vehicle detected at: {x1, y1, x2, y2}")
cropped_image = im0[y1:y2, x1:x2]
# Perform number plate detection
plate_results = plate_model(cropped_image)
for plate_result in plate_results:
plate_boxes = plate_result.boxes.xyxy.numpy()
if len(plate_boxes) == 0:
print("No number plate detected in this vehicle bounding box.")
for plate_box in plate_boxes:
px1, py1, px2, py2 = map(int, plate_box)
plate_cropped_image = cropped_image[py1:py2, px1:px2]
# Convert the cropped image to a format suitable for OCR
plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(plate_cropped_image_rgb)
# Use Tesseract to extract text
plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
print(f'Detected Number Plate: {plate_text}')
# Draw the bounding box for the plate and add the text
cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
# Write the frame with detections and speed estimation
im0, speeds = speed_obj.estimate_speed(im0, results)
video_writer.write(im0)
# Store vehicle information if speed exceeds 50 km/hr
for track_id, speed in speeds.items():
if speed > 50:
vehicle_data = vehicle_data.append({
"Track ID": track_id,
"Vehicle No": plate_text,
"Speed (km/hr)": speed
}, ignore_index=True)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

This script will now store the vehicle number, speed, and track ID in an Excel sheet if the speed exceeds 50 km/hr. If you encounter any issues or have further questions, please let us know. The YOLO community and the Ultralytics team are always here to help!
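Editor's note: `DataFrame.append` was deprecated in pandas 1.4 and removed in pandas 2.0, so the accumulation step above fails on current pandas. A minimal sketch of an equivalent using `pd.concat`, assuming the same `vehicle_data` columns as in the script:

import pandas as pd

# Build a one-row frame and concatenate it onto the running table;
# this replaces the removed DataFrame.append call.
new_row = pd.DataFrame([{
    "Track ID": track_id,
    "Vehicle No": plate_text,
    "Speed (km/hr)": speed,
}])
vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)

For many rows it is faster to collect the dicts in a Python list inside the loop and build the DataFrame once after the loop finishes.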
This code is throwing an error, as the function here is not returning two values while you are telling me to store the result in two variables. How is this possible? "im0, speeds = speed_obj.estimate_speed(im0, results)"
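Editor's note: the objection is valid for some ultralytics releases, where `SpeedEstimator.estimate_speed` returns only the annotated frame. A hedged workaround that assumes nothing about the return shape beyond checking it at runtime (a drop-in for the loop body above):

# Handle either return convention: a (frame, speeds) tuple or the frame alone.
result = speed_obj.estimate_speed(im0, results)
if isinstance(result, tuple) and len(result) == 2:
    im0, speeds = result
else:
    im0, speeds = result, {}  # no speeds dict exposed by this version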
pro.zip
Please do this for me; all efforts will be appreciated.
Hello @PrakharJoshi54321, Thank you for sharing your project files and providing details about your requirements. Let's address the integration of your number plate detection model and the speed tracking functionality, ensuring that vehicle information is stored in an Excel sheet when the speed exceeds 50 km/hr. First, let's correct the issue with the `estimate_speed` return values:
import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
# Load the models
speed_model = YOLO("yolov8n.pt") # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt') # Model for number plate detection
# Path to the video file
video_path = 'video.mp4' # Replace with your video file path
# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
line_pts = [(0, h // 2), (w, h // 2)] # Update line points based on video resolution
# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
reg_pts=line_pts,
names=speed_model.model.names,
view_img=True,
)
# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Error reading frame from video.")
break
# Speed detection and tracking
results = speed_model(im0)
if results:
print(f"Tracks detected: {len(results)}")
else:
print("No tracks detected in this frame.")
# Ensure tracks have valid data
for result in results:
for box in result.boxes:
x1, y1, x2, y2 = map(int, box.xyxy[0])
print(f"Vehicle detected at: {x1, y1, x2, y2}")
cropped_image = im0[y1:y2, x1:x2]
# Perform number plate detection
plate_results = plate_model(cropped_image)
for plate_result in plate_results:
plate_boxes = plate_result.boxes.xyxy.numpy()
if len(plate_boxes) == 0:
print("No number plate detected in this vehicle bounding box.")
for plate_box in plate_boxes:
px1, py1, px2, py2 = map(int, plate_box)
plate_cropped_image = cropped_image[py1:py2, px1:px2]
# Convert the cropped image to a format suitable for OCR
plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(plate_cropped_image_rgb)
# Use Tesseract to extract text
plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
print(f'Detected Number Plate: {plate_text}')
# Draw the bounding box for the plate and add the text
cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
# Write the frame with detections and speed estimation
im0, speeds = speed_obj.estimate_speed(im0, results)
video_writer.write(im0)
# Store vehicle information if speed exceeds 50 km/hr
for track_id, speed in speeds.items():
if speed > 50:
vehicle_data = vehicle_data.append({
"Track ID": track_id,
"Vehicle No": plate_text,
"Speed (km/hr)": speed
}, ignore_index=True)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

This script now correctly handles the return values from the `estimate_speed` method. If you encounter any further issues or have additional questions, please let us know. The YOLO community and the Ultralytics team are here to support you!
Is it working in your system? Please share a snip and the detailed process; it's my college project.
Do the correct OCR.
Hello @PrakharJoshi54321, Thank you for reaching out! To assist you effectively, we need to ensure a few things.
Regarding your OCR integration, here’s a refined approach to ensure accurate OCR detection.
Here’s an example of how you can preprocess the image and configure Tesseract:
import cv2
import pytesseract
from PIL import Image
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
def preprocess_image(image):
# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply thresholding
_, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
return thresh
def extract_text_from_image(image):
# Preprocess the image
preprocessed_image = preprocess_image(image)
# Convert to PIL Image
pil_image = Image.fromarray(preprocessed_image)
# Use Tesseract to extract text
text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
return text
# Example usage
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

This example demonstrates how to preprocess the image before passing it to Tesseract for OCR. You can adjust the preprocessing steps based on your specific requirements. If you continue to face issues, please share the minimal reproducible example, and we’ll be happy to assist you further. The YOLO community and the Ultralytics team are here to help!
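Editor's note: a further hedged refinement that often helps Tesseract on small plate crops is upscaling the crop and restricting the recognized character set. The whitelist below is an assumption about the plate format; adjust it to your region:

import cv2
import pytesseract
from PIL import Image

def extract_plate_text(image):
    # Upscale small crops so characters reach a size Tesseract handles well
    image = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 7 treats the crop as a single text line; the whitelist limits
    # output to characters expected on a plate (assumed alphanumeric format)
    config = '--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    return pytesseract.image_to_string(Image.fromarray(thresh), config=config).strip()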
I am taking 5 km/hr for testing and it is showing me this:
Vehicle detected at: (815, 196, 871, 255)
0: 640x608 1 0, 116.3ms
# Write the frame with detections and speed estimation
packages in environment at C:\Users\cairuser1\miniconda3\envs\speedss:
Name         Version   Build          Channel
asttokens    2.4.1     pyhd8ed1ab_0   conda-forge
(list of packages)
Hello @PrakharJoshi54321, Thank you for providing the detailed list of packages in your environment. It looks like you're encountering an issue with the `estimate_speed` method. Let's work through it step by step.

Step 1: Verify Package Versions
First, ensure that you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

Step 2: Minimum Reproducible Example
To help us diagnose the issue more effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

Step 3: Correcting the `estimate_speed` Usage
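Editor's note: before upgrading, it is worth confirming which versions the interpreter actually resolves. A quick sketch (`ultralytics.checks()` prints a fuller environment summary):

import torch
import ultralytics

# Print the versions in the active environment to rule out a stale install
print("torch:", torch.__version__)
print("ultralytics:", ultralytics.__version__)
ultralytics.checks()  # OS, Python, CUDA, and package status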
Traceback (most recent call last):
provide me fast please
resolve this fast please
Hello @PrakharJoshi54321, Thank you for your patience. Let's address the issue you're facing with the `estimate_speed` method.

Step 1: Verify Package Versions
First, ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

Step 2: Correcting the `estimate_speed` Usage
Is this correct?
Hello @PrakharJoshi54321, Thank you for reaching out! Let's address your issue step by step to ensure we provide the best possible support.

Step 1: Minimum Reproducible Example
To help us diagnose the issue effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details. Having a reproducible example is crucial for us to investigate and resolve the issue efficiently.

Step 2: Verify Package Versions
Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk
Using the most recent versions helps ensure that any known bugs are fixed and you have access to the latest features and improvements.

Step 3: Correcting the `estimate_speed` Usage
No number plate detected in this vehicle bounding box.
0: 640x608 1 0, 122.0ms
0: 640x608 1 0, 133.6ms
both the codes are giving the same result
import cv2

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

def preprocess_image(image):

def extract_text_from_image(image):

while cap.isOpened():

cap.release()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

# Example usage of preprocess_image and extract_text_from_image functions
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

This one is running but not doing the right OCR, and not showing the speed and bounding box.
Is it necessary to wait for the whole video to complete?
Hello @PrakharJoshi54321, Thank you for reaching out! To effectively address your question, it would be helpful to understand the specific context of your use case. However, I can provide some general guidance on handling video processing with YOLO models.

Real-Time Processing
If your goal is to process video frames in real-time, you do not need to wait for the entire video to complete. You can process each frame as it is read from the video stream. Here's a basic example of how you can achieve this:
import cv2
from ultralytics import YOLO
# Load the model
model = YOLO("yolov8n.pt")
# Path to the video file
video_path = 'video.mp4' # Replace with your video file path
# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"
while cap.isOpened():
success, frame = cap.read()
if not success:
break
# Perform detection on the current frame
results = model(frame)
# Process results (e.g., draw bounding boxes)
for result in results:
for box in result.boxes:
x1, y1, x2, y2 = map(int, box.xyxy[0])
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
# Display the frame with detections
cv2.imshow('Frame', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()

Batch Processing
If you prefer to process the entire video at once, you can read all frames into memory, process them, and then save the results. This approach might be useful for post-processing tasks where real-time performance is not critical.

Importance of Reproducible Example
To provide more specific assistance, it would be helpful if you could share a minimum reproducible example of your code. This will allow us to better understand your setup and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

Verify Package Versions
Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

We hope this helps! If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊
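Editor's note: the batch approach described above has no accompanying code in the thread. A minimal sketch, assuming the whole clip fits in memory (Ultralytics models accept a list of frames and return one result per frame):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("video.mp4")

# Read every frame into memory first (only viable for short clips)
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Run inference on the whole batch in one call
results = model(frames)
for frame, result in zip(frames, results):
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)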
import cv2

# Path to Tesseract executable
# Load the models
# Path to the video file
# Initialize video capture
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Video writer
line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
# DataFrame to store vehicle information

def preprocess_image(image):

def extract_text_from_image(image):

while cap.isOpened():
    # Speed detection and tracking
    results = speed_model(im0)
    if results:

    # Initialize plate_text to an empty string for each frame
    plate_text = ""

    # Ensure tracks have valid data
    for result in results:

    # Write the frame with detections and speed estimation
    result = speed_obj.estimate_speed(im0, results)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):

# Save the vehicle data to an Excel file

# Example usage of preprocess_image and extract_text_from_image functions
image = cv2.imread('path_to_image.jpg')
Hello @PrakharJoshi54321, Thank you for sharing your code and detailed explanation. Let's address the issues you're facing with OCR accuracy and speed estimation.

1. Importance of a Reproducible Example
To better assist you, it would be helpful to have a minimum reproducible example. This allows us to replicate the issue on our end and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

2. Verify Package Versions
Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

3. Improving OCR Accuracy
To improve OCR accuracy, consider additional preprocessing steps. Here's an enhanced version of your preprocess_image function:
def preprocess_image(image):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
return thresh
def extract_text_from_image(image):
preprocessed_image = preprocess_image(image)
pil_image = Image.fromarray(preprocessed_image)
text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
return text

4. Handling Speed Estimation
The result of `estimate_speed` needs to be unpacked and validated before use:
# Write the frame with detections and speed estimation
result = speed_obj.estimate_speed(im0, results)
im0, speeds = result # Unpack the result
video_writer.write(im0)
# Ensure speeds is a dictionary
if isinstance(speeds, dict):
# Store vehicle information if speed exceeds 50 km/hr
for track_id, speed in speeds.items():
if speed > 5:
vehicle_data = vehicle_data.append({
"Track ID": track_id,
"Vehicle No": plate_text,
"Speed (km/hr)": speed
}, ignore_index=True)
else:
print("Speeds is not a dictionary. Please check the output of estimate_speed function.") 5. Real-Time ProcessingIf you want to process video frames in real-time, you do not need to wait for the entire video to complete. You can process each frame as it is read from the video stream. Example CodeHere’s a refined version of your script incorporating the above suggestions: import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
# Load the models
speed_model = YOLO("yolov8n.pt") # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt') # Model for number plate detection
# Path to the video file
video_path = 'video.mp4' # Replace with your video file path
# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
line_pts = [(0, h // 2), (w, h // 2)] # Update line points based on video resolution
# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
reg_pts=line_pts,
names=speed_model.model.names,
view_img=True,
)
# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])
def preprocess_image(image):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
return thresh
def extract_text_from_image(image):
preprocessed_image = preprocess_image(image)
pil_image = Image.fromarray(preprocessed_image)
text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
return text
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Error reading frame from video.")
break
# Speed detection and tracking
results = speed_model(im0)
if results:
print(f"Tracks detected: {len(results)}")
else:
print("No tracks detected in this frame.")
# Initialize plate_text to an empty string for each frame
plate_text = ""
# Ensure tracks have valid data
for result in results:
for box in result.boxes:
x1, y1, x2, y2 = map(int, box.xyxy[0])
print(f"Vehicle detected at: {x1, y1, x2, y2}")
cropped_image = im0[y1:y2, x1:x2]
# Perform number plate detection
plate_results = plate_model(cropped_image)
for plate_result in plate_results:
plate_boxes = plate_result.boxes.xyxy.numpy()
if len(plate_boxes) == 0:
print("No number plate detected in this vehicle bounding box.")
for plate_box in plate_boxes:
px1, py1, px2, py2 = map(int, plate_box)
plate_cropped_image = cropped_image[py1:py2, px1:px2]
# Extract text using OCR
plate_text = extract_text_from_image(plate_cropped_image)
print(f'Detected Number Plate: {plate_text}')
# Draw the bounding box for the plate and add the text
cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
# Write the frame with detections and speed estimation
result = speed_obj.estimate_speed(im0, results)
im0, speeds = result # Unpack the result
video_writer.write(im0)
# Ensure speeds is a dictionary
if isinstance(speeds, dict):
# Store vehicle information if speed exceeds 50 km/hr
for track_id, speed in speeds.items():
if speed > 5:
vehicle_data = vehicle_data.append({
"Track ID": track_id,
"Vehicle No": plate_text,
"Speed (km/hr)": speed
}, ignore_index=True)
else:
print("Speeds is not a dictionary. Please check the output of estimate_speed function.")
cap.release()
video_writer.release()
cv2.destroyAllWindows()
# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

We hope this helps! If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊
0: 640x608 1 0, 124.8ms
Hello @PrakharJoshi54321, Thank you for reaching out and providing details about the issue you're encountering. Let's work together to resolve this!

Importance of a Reproducible Example
To better understand and diagnose the problem, it would be extremely helpful if you could provide a minimum reproducible example of your code. This allows us to replicate the issue on our end and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details on how to create one.

Verify Package Versions
Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

Handling the `estimate_speed` Output
I have already provided the .pt file; please provide me the full folder of the project.
Hello @PrakharJoshi54321, Thank you for reaching out! We appreciate your interest in our project. To provide you with the best possible assistance, it would be extremely helpful if you could share a minimum reproducible example of your code. This will allow us to better understand the issue and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details on how to create one.

Additionally, please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

Regarding your request for the full folder of the project, we encourage users to build and customize their own projects based on the provided models and documentation. This approach allows for greater flexibility and understanding of the underlying processes. If you have any specific questions or need further assistance with your code, feel free to share more details here. The YOLO community and the Ultralytics team are here to support you! 😊
import cv2

# Setup logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# Configuration
CONFIDENCE_THRESHOLD = 0.5

# Prompt for video file path
SOURCE_VIDEO_PATH = input("Enter the path to the video file (e.g., vehicles.mp4): ")
if not os.path.exists(SOURCE_VIDEO_PATH):

TARGET_VIDEO_PATH = f"result_{os.path.basename(SOURCE_VIDEO_PATH)}"

# Source and Target ROIs (you might need to adjust these based on your video)
SOURCE = np.array([
TARGET_WIDTH = 25
TARGET = np.array([

# Initialize video processing
try:

except Exception as e:
finally:

2.mp4 has a resolution of 3840×2160 and a frame rate of 25, and it is detecting the speed and number plate properly. But another video, 1.mp4, has a resolution of 1280×720 and a frame rate of 30, and it is not detecting the speed and number plate; it gives results like this -
Hello @PrakharJoshi54321, Thank you for sharing your detailed code and observations. It’s great to see the effort you’ve put into your project! Let's address the issues you're encountering with different video resolutions and frame rates.

Importance of a Reproducible Example
To better diagnose and resolve the issue, it would be extremely helpful if you could provide a minimum reproducible example. This allows us to replicate the issue on our end and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details on how to create one.

Verify Package Versions
Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:
pip install --upgrade torch ultralytics hub-sdk

Addressing the Issue
It appears that the difference in video resolution and frame rate might be affecting the detection and tracking performance. The main suggestions are to normalize detection coordinates to the source resolution and to adjust the model input resolution, as shown below.
Example Code Adjustments
Here’s an example of how you might adjust the normalization and logging to better handle different video resolutions:
# Normalize points to the original resolution
normalized_points = np.array([[x / MODEL_RESOLUTION * original_width, y / MODEL_RESOLUTION * original_height] for x, y in points])
# Debug: Print normalized points and transformed points
logging.info(f"Normalized Points: {normalized_points}")
# Calculate the detections position inside the target RoI
transformed_points = view_transformer.transform_points(points=normalized_points).astype(int)
# Debug: Print transformed points
logging.info(f"Transformed Points: {transformed_points}")
# Store detections position
for tracker_id, [_, y] in zip(detections.tracker_id, transformed_points):
coordinates[tracker_id].append(y)

Handling Different Video Resolutions
Ensure that your code dynamically adjusts to different video resolutions. Here’s an example of how you might handle this:
# Adjust model resolution based on input video resolution
if original_width <= 1280:
MODEL_RESOLUTION = 640
elif original_width <= 1920:
MODEL_RESOLUTION = 960
else:
MODEL_RESOLUTION = 1280
# Initialize Models with adjusted resolution
speed_model = YOLO(MODEL_NAME)
number_plate_model = YOLO(NUMBER_PLATE_MODEL_NAME)

Conclusion
We hope these suggestions help improve the consistency of your results across different video resolutions and frame rates. If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊
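Editor's note: in the snippet above, MODEL_RESOLUTION is computed but never applied. A hedged way to wire it in is the `imgsz` inference argument (a sketch; `frame` and CONFIDENCE_THRESHOLD follow the names used earlier in this thread):

# Run inference at the resolution selected above; imgsz controls the
# model's input size and conf filters low-confidence detections.
results = speed_model(frame, imgsz=MODEL_RESOLUTION, conf=CONFIDENCE_THRESHOLD)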
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐ |