Omniparser crashes after processing 7 images. #99

Open
techsparkling opened this issue Dec 4, 2024 · 6 comments

@techsparkling

2024-12-04 23:44:04 finish processing
2024-12-04 23:44:04
2024-12-04 23:44:04 image 1/1 /usr/src/app/imgs/saved_image_demo.png: 384x640 122 0s, 31.8ms
2024-12-04 23:44:04 Speed: 3.9ms preprocess, 31.8ms inference, 2.3ms postprocess per image at shape (1, 3, 384, 640)
2024-12-04 23:44:33 finish processing
2024-12-04 23:44:33
2024-12-04 23:44:33 image 1/1 /usr/src/app/imgs/saved_image_demo.png: 384x640 119 0s, 31.7ms
2024-12-04 23:44:33 Speed: 4.1ms preprocess, 31.7ms inference, 2.1ms postprocess per image at shape (1, 3, 384, 640)
2024-12-04 23:45:10 /usr/src/app/entrypoint.sh: line 8: 7 Killed python ./gradio_demo.py

Why does this occur? I am running OmniParser locally via Docker.

@abrichr

abrichr commented Dec 4, 2024

Just guessing: looks like it's running out of memory. You can confirm with docker stats <container_id>. Try adjusting the Docker resources with e.g. --memory="4g".
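
If you want to watch this programmatically rather than eyeballing docker stats, a minimal sketch using the Docker SDK for Python (pip install docker) looks roughly like this; the container name "omniparser" is a placeholder for your actual container name or ID:

```python
# Sketch: poll a container's memory usage via the Docker SDK for Python,
# roughly equivalent to watching `docker stats`.
# Assumes `pip install docker` and a running container named "omniparser"
# (placeholder -- substitute your container's name or ID).
import time

import docker

client = docker.from_env()
container = client.containers.get("omniparser")  # placeholder name

while True:
    stats = container.stats(stream=False)  # one-shot stats snapshot
    mem = stats.get("memory_stats", {})
    used_mb = mem.get("usage", 0) / 1024 ** 2
    limit_mb = mem.get("limit", 0) / 1024 ** 2
    print(f"memory: {used_mb:.0f} MiB / {limit_mb:.0f} MiB")
    time.sleep(5)
```

If the reported usage climbs steadily with each processed image and the process is then Killed, that confirms the OOM hypothesis.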

@techsparkling
Author

[Screenshot attached]

What do I do? Is there any way to control this?

@techsparkling
Author

@abrichr
Sir, I would greatly appreciate your assistance with this issue. I have been stuck on it for quite some time now. Despite carefully following all the instructions in #52, my setup continues to run out of memory (OOM) after processing 3-4 images, even on a g4dn.xl EC2 instance.

I also tried enabling swap, but after doing so, the instance stopped sending or receiving requests. Any guidance on how to resolve this would be immensely helpful. Thank you in advance.

@abrichr

abrichr commented Dec 5, 2024

@techsparkling from your screenshot it appears you are running multiple containers. Can you please clarify exactly the steps you took to arrive in this situation? As it stands you haven't provided enough information to resolve your issue. Please be as detailed as possible, otherwise it will be impossible for others to help you.

@techsparkling
Author

@abrichr No, sir, there is only one container running.

I followed the steps mentioned in #52:

  • Cloned the GitHub repository.
  • Created a .env file and added the required variables.
  • Executed deploy.py start.
  • I verified that the GitHub Actions ran successfully.

After that, I used client.py to send images. However, after sending 5 images, the model crashes due to an out-of-memory (OOM) issue.
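
As a general debugging direction, a common PyTorch memory-hygiene pattern is to run inference without gradient tracking and to release cached memory after each request. This is only a sketch, not OmniParser's actual code; run_models is a placeholder for whatever gradio_demo.py invokes per image:

```python
# Sketch of a generic PyTorch memory-hygiene pattern, not OmniParser's actual code.
# `run_models` stands in for whatever gradio_demo.py calls per image.
import gc

import torch


def parse_image(image, run_models):
    try:
        # Inference only: skip autograd bookkeeping, which otherwise keeps
        # activations alive and grows memory with every request.
        with torch.no_grad():
            return run_models(image)
    finally:
        # Drop unreferenced Python objects and return cached CUDA blocks
        # after each image so usage stays flat across requests.
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```

If memory still grows with a pattern like this in place, the growth is more likely from accumulating Python objects (e.g. stored images or results) than from the CUDA cache, and a hard --memory limit plus a container restart policy is the blunt fallback.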

@omprakashnarayan

I also had the same issue. It ran fine once I upgraded from a 32 GB to a 128 GB RAM server; it was then stable at around 90 GB of RAM utilisation.
