
I encountered a problem when running python remwn.py. It seems that the lama model is not supported. #14

Open
jianjiu99999 opened this issue Jan 15, 2025 · 6 comments

@jianjiu99999

I encountered a problem when running python remwn.py. It seems that the lama model is not supported.
/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/iopaint/model/ldm.py:279: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
@torch.cuda.amp.autocast()
Using device: cuda
/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
2025-01-15 09:09:39.508 | INFO | main:main:110 - Florence-2 Model loaded
2025-01-15 09:09:39.511 | INFO | iopaint.model_manager:init_model:46 - Loading model: lama
Traceback (most recent call last):
File "/home/ubuntu/watermark/WatermarkRemover-AI/remwm.py", line 170, in <module>
main()
File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 1082, in main
rv = self.invoke(ctx)
File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/ubuntu/watermark/WatermarkRemover-AI/remwm.py", line 113, in main
model_manager = ModelManager(name="lama", device=device)
File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/iopaint/model_manager.py", line 39, in __init__
self.model = self.init_model(name, device, **kwargs)
File "/home/ubuntu/watermark/watermark/lib/python3.10/site-packages/iopaint/model_manager.py", line 48, in init_model
raise NotImplementedError(
NotImplementedError: Unsupported model: lama. Available models: ['cv2']

@D-Ogi
Owner

D-Ogi commented Jan 15, 2025

Please update your repository to the latest version, as the environment.yml now correctly specifies Python 3.12. The issue you encountered may have been due to using an outdated or misconfigured environment or missing model files. Follow these steps to recreate the Conda environment and ensure the required model is available:

  1. Remove the existing environment (if already created):

    conda deactivate
    conda env remove -n py312aiwatermark
  2. Pull the latest changes from the repository:

    git pull
  3. Recreate and activate the Conda environment:

    conda env create -f environment.yml
    conda activate py312aiwatermark

    or

    . ./setup.sh --activate
  4. Ensure the lama model is available:
    The lama model must be downloaded for the script to work. To do this:

    • Run the following command to download the model:
      python -m iopaint.download lama
    • This should automatically fetch the necessary files and make the model available for use; if the download fails, see the mirror instructions in step 6.
  5. Run the script again:

    python remwm.py
  6. For users in China:
    If you are located in China, network restrictions might prevent downloading models directly. Use the Hugging Face Mirror ( https://hf-mirror.com/ ) for downloads:

    • Set the HF_ENDPOINT environment variable:
      export HF_ENDPOINT=https://hf-mirror.com
    • Add this to your ~/.bashrc file for persistence:
      echo "export HF_ENDPOINT=https://hf-mirror.com" >> ~/.bashrc
      source ~/.bashrc
    • Then, rerun the model download command:
      python -m iopaint.download lama

The error you encountered (NotImplementedError: Unsupported model: lama) likely resulted from missing model files or an outdated environment. These steps will ensure everything is properly configured and compatible with Python 3.12.
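To verify that the checkpoint actually landed where iopaint looks for it, here is a minimal sketch. It assumes the default torch hub cache location (`~/.cache/torch`, or `$TORCH_HOME` if set) and the `big-lama.pt` filename; the helper name is hypothetical, not part of iopaint's API:

```python
import os
from pathlib import Path

def lama_checkpoint_path() -> Path:
    """Hypothetical helper: where torch hub caches the big-lama checkpoint."""
    cache = Path(os.environ.get("TORCH_HOME", str(Path.home() / ".cache" / "torch")))
    return cache / "hub" / "checkpoints" / "big-lama.pt"

if __name__ == "__main__":
    path = lama_checkpoint_path()
    if path.is_file():
        print(f"Found lama checkpoint at {path}")
    else:
        print(f"Missing: {path} -- run 'python -m iopaint.download lama' first")
```

If the file is missing, rerunning the download command from step 4 should fix it.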

Let me know if you have any further questions!

@D-Ogi
Owner

D-Ogi commented Jan 15, 2025

You can also manually check all the models iopaint detects by running a small script inside the Conda environment:

from iopaint.download import scan_models

# Print every model iopaint has detected locally
models = scan_models()
print("Available models for download:")
for model in models:
    print(f"- {model.name}")

Save this as list_models.py and run it:

python list_models.py

@D-Ogi D-Ogi self-assigned this Jan 15, 2025
@utanmaz

utanmaz commented Jan 16, 2025

Yeah, I got a similar error too, and I had to install extra libraries and PyTorch directly, following the installation instructions. I finally managed somehow, but this time I got a lama error.

$ python remwm.py "[SANSUR]/frame7.jpg" "[SANSUR]/frame7_processed.jpg" --overwrite --max-bbox-percent=10 --force-format=JPG

C:[SANSUR]\miniconda3\Lib\site-packages\iopaint\model\ldm.py:279: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
@torch.cuda.amp.autocast()
C:[SANSUR]\miniconda3\Lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
2025-01-16 21:55:33.121 | INFO | main:main:110 - Florence-2 Model loaded
2025-01-16 21:55:33.131 | INFO | iopaint.model_manager:init_model:46 - Loading model: lama
Using device: cuda
Traceback (most recent call last):
File "C:[SANSUR]\WatermarkRemover-AI\remwm.py", line 170, in <module>
main()
File "C:[SANSUR]\miniconda3\Lib\site-packages\click\core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:[SANSUR]\miniconda3\Lib\site-packages\click\core.py", line 1082, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:[SANSUR]\miniconda3\Lib\site-packages\click\core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:[SANSUR]\miniconda3\Lib\site-packages\click\core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:[SANSUR]\WatermarkRemover-AI\remwm.py", line 113, in main
model_manager = ModelManager(name="lama", device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:[SANSUR]\miniconda3\Lib\site-packages\iopaint\model_manager.py", line 39, in __init__
self.model = self.init_model(name, device, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:[SANSUR]\miniconda3\Lib\site-packages\iopaint\model_manager.py", line 48, in init_model
raise NotImplementedError(
NotImplementedError: Unsupported model: lama. Available models: ['cv2']

@D-Ogi
Owner

D-Ogi commented Jan 16, 2025

@utanmaz Are you in China? Could you try to run the list_models.py script I pasted above?

@utanmaz

utanmaz commented Jan 16, 2025

@D-Ogi No, I am not from China, and I tried that code.

Is this what you want?

C:[SANSUR]>python
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

from iopaint.download import scan_models
C:[SANSUR]\AppData\Local\Programs\Python\Python310\lib\site-packages\iopaint\model\ldm.py:279: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
@torch.cuda.amp.autocast()

Also, the GUI app is dysfunctional; it doesn't even give an error message. Completely dysfunctional. @D-Ogi, do you have Telegram? If you do, could you share it so we can discuss this error more easily?

@D-Ogi
Owner

D-Ogi commented Jan 18, 2025

Download:
https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt
and save it in /home/username/.cache/torch/hub/checkpoints/
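The manual download can also be scripted. A minimal sketch, assuming the default torch hub cache location (`~/.cache/torch`, overridable via `$TORCH_HOME`); the function names are illustrative, not part of iopaint:

```python
import os
import urllib.request
from pathlib import Path

BIG_LAMA_URL = ("https://github.com/Sanster/models/releases/download/"
                "add_big_lama/big-lama.pt")

def checkpoint_dest() -> Path:
    # torch hub caches checkpoints under $TORCH_HOME (default ~/.cache/torch)
    cache = Path(os.environ.get("TORCH_HOME", str(Path.home() / ".cache" / "torch")))
    return cache / "hub" / "checkpoints" / "big-lama.pt"

def download_big_lama() -> Path:
    dest = checkpoint_dest()
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.is_file():
        # Large file; urlretrieve streams it to disk in one call
        urllib.request.urlretrieve(BIG_LAMA_URL, dest)
    return dest
```

Calling `download_big_lama()` once should leave `big-lama.pt` where iopaint's model manager expects to find it.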
