I was running tests on a few sample images, and one particular image triggers the OpenCV error "error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'" when I feed its file path directly into easyocr.Reader.readtext, instead of first converting the photo to a numpy array with cv2.imread.
Upon some investigation, I found that this is because the width of one of the detected boxes is negative (x_max is smaller than x_min) in the 'get_image_list' function of utils.py.
However, when I first convert the photo into a numpy array and pass that to easyocr.Reader.readtext, the same error does not occur.
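For reference, this is roughly how I am calling it in both cases (the file name below is just a placeholder for the problem image):

```python
import cv2
import easyocr

reader = easyocr.Reader(['en'])

# Case 1: pass the file path straight to readtext -- this is where the
# cv2.resize assertion is raised for the problem image.
result_from_path = reader.readtext('sample.jpg')

# Case 2: decode to a numpy array first -- this runs without error.
img = cv2.imread('sample.jpg')
result_from_array = reader.readtext(img)
```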
I have compared both input types (image file path vs. numpy array) for the other photos, and none of them run into the same error.
So I am wondering: is it normal behaviour for different input types to generate different detection boxes? If it is, should some further check be added on the box width?
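If such a check makes sense, I imagine something along these lines (just a rough sketch in my own words, not the actual utils.py code, and assuming the [x_min, x_max, y_min, y_max] box layout used there):

```python
def filter_valid_boxes(boxes):
    """Keep only boxes [x_min, x_max, y_min, y_max] with positive extent."""
    valid = []
    for x_min, x_max, y_min, y_max in boxes:
        if x_max > x_min and y_max > y_min:
            valid.append((x_min, x_max, y_min, y_max))
        # Boxes with x_max <= x_min (or y_max <= y_min) would otherwise
        # produce an empty crop and trigger the !ssize.empty() assertion
        # in cv2.resize.
    return valid
```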