I encountered a problem when running python remwm.py. It seems that the lama model is not supported. #14
Comments
Please update your repository to the latest version. Let me know if you have any further questions!
You can also manually check all available models detected by iopaint:

```python
from iopaint.download import scan_models

models = scan_models()
print("Available models for download:")
for model in models:
    print(f"- {model.name}")
```

Save this as list_models.py and run it with python list_models.py.
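If you want remwm.py to fail with a clearer message before iopaint raises its NotImplementedError, you could add a small guard based on the same idea. The sketch below is self-contained for illustration: it takes a plain list of model names instead of calling scan_models(), and the hard-coded list simply mirrors the `['cv2']` from the traceback.

```python
def ensure_model_available(requested: str, available: list[str]) -> None:
    """Raise a descriptive error if `requested` is not among the detected models."""
    if requested not in available:
        raise RuntimeError(
            f"Model '{requested}' is not downloaded. "
            f"Detected models: {available}. "
            f"Download it first, then rerun the script."
        )

# Example with the model list reported on the failing machine:
detected = ["cv2"]
try:
    ensure_model_available("lama", detected)
except RuntimeError as e:
    print(e)
```

In the real script you would pass `[m.name for m in scan_models()]` as the `available` argument.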
Yeah, I got a similar error too, and I had to install extra libraries and PyTorch directly from the information in the installation instructions. I finally managed somehow, but this time I got a lama error:

```
$ python remwm.py "[SANSUR]/frame7.jpg" "[SANSUR]/frame7_processed.jpg" --overwrite --max-bbox-percent=10 --force-format=JPG
C:[SANSUR]\miniconda3\Lib\site-packages\iopaint\model\ldm.py:279: FutureWarning:
```
@utanmaz Are you in China? Could you try to run the
@D-Ogi No, I am not from China. I tried that code; is this what you want?

```
C:[SANSUR]>python
```

Also, the GUI app is dysfunctional; it doesn't even give an error message, completely dysfunctional. @D-Ogi, do you have Telegram? If you do, could you give it to me so we can discuss this error easily?
I encountered a problem when running python remwm.py. It seems that the lama model is not supported.

```
/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/iopaint/model/ldm.py:279: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  @torch.cuda.amp.autocast()
Using device: cuda
/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
2025-01-15 09:09:39.508 | INFO | __main__:main:110 - Florence-2 Model loaded
2025-01-15 09:09:39.511 | INFO | iopaint.model_manager:init_model:46 - Loading model: lama
Traceback (most recent call last):
  File "/home/ubuntu/watermark/WatermarkRemover-AI/remwm.py", line 170, in <module>
    main()
  File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
  File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
  File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
  File "/home/ubuntu/watermark/WatermarkRemover-AI/remwm.py", line 113, in main
    model_manager = ModelManager(name="lama", device=device)
  File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/iopaint/model_manager.py", line 39, in __init__
    self.model = self.init_model(name, device, **kwargs)
  File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/iopaint/model_manager.py", line 48, in init_model
    raise NotImplementedError(
NotImplementedError: Unsupported model: lama. Available models: ['cv2']
```
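The final error means iopaint could not find the lama model weights locally, so only the dependency-free cv2 inpainter is offered. One way to make a script like remwm.py degrade gracefully is to catch NotImplementedError and fall back to a model that is available. The sketch below mocks the manager's behavior rather than importing iopaint, so the class name, the AVAILABLE list, and load_with_fallback are all illustrative, not iopaint's real API:

```python
class MockModelManager:
    """Minimal stand-in for iopaint's ModelManager, for illustration only."""

    AVAILABLE = ["cv2"]  # mirrors "Available models: ['cv2']" from the traceback

    def __init__(self, name: str, device: str = "cpu"):
        if name not in self.AVAILABLE:
            raise NotImplementedError(
                f"Unsupported model: {name}. Available models: {self.AVAILABLE}"
            )
        self.name = name
        self.device = device


def load_with_fallback(preferred: str, fallback: str = "cv2") -> MockModelManager:
    """Try the preferred model; fall back if its weights are not present."""
    try:
        return MockModelManager(preferred)
    except NotImplementedError:
        print(f"'{preferred}' unavailable, falling back to '{fallback}'")
        return MockModelManager(fallback)


manager = load_with_fallback("lama")
print(manager.name)  # → cv2
```

A fallback like this keeps the batch run going, but note that cv2's inpainting quality is much lower than lama's, so the proper fix is still to download the lama weights.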