
Error when using LLM LoRA to avoid censorship #8

Open
miaaiart opened this issue Dec 1, 2024 · 0 comments

miaaiart commented Dec 1, 2024


```
*************************** WD LLM CAPTION ***************************
*************************** Author: DukeG ****************************
***** GitHub: https://github.com/fireicewolf/wd-llm-caption-cli ******

To create a public link, set share=True in launch().
2024-12-01 10:42:06,834 - logger.py[line:47] - WARNING: save_log not enable or log file path not exist, log will only output in console.
2024-12-01 10:42:06,835 - caption.py[line:81] - INFO: Set log level to "INFO"
2024-12-01 10:42:06,835 - download.py[line:80] - INFO: Using config: I:\wd-llm-caption-cli\venv\lib\site-packages\wd_llm_caption\configs\default_llama_3.2V.json
2024-12-01 10:42:06,836 - download.py[line:110] - INFO: Models will be stored in I:\wd-llm-caption-cli\models\Llama-3.2-11B-Vision-Instruct.
2024-12-01 10:42:08,626 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download chat_template.json...
2024-12-01 10:42:08,627 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download config.json...
2024-12-01 10:42:08,628 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download generation_config.json...
2024-12-01 10:42:08,628 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download preprocessor_config.json...
2024-12-01 10:42:08,628 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download tokenizer.json...
2024-12-01 10:42:08,629 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download tokenizer_config.json...
2024-12-01 10:42:08,629 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download special_tokens_map.json...
2024-12-01 10:42:08,630 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download model.safetensors.index.json...
2024-12-01 10:42:08,631 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download model-00001-of-00005.safetensors...
2024-12-01 10:42:08,631 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download model-00002-of-00005.safetensors...
2024-12-01 10:42:08,631 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download model-00003-of-00005.safetensors...
2024-12-01 10:42:08,632 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download model-00004-of-00005.safetensors...
2024-12-01 10:42:08,632 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download model-00005-of-00005.safetensors...
2024-12-01 10:42:08,632 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download adapter_config.json...
2024-12-01 10:42:08,633 - download.py[line:190] - INFO: skip_local_file_exist is Enable, Skipping download adapter_model.safetensors...
2024-12-01 10:42:09,180 - inference.py[line:195] - INFO: Loading LLM Llama-3.2-11B-Vision-Instruct with GPU...
2024-12-01 10:42:09,181 - inference.py[line:213] - INFO: LLM dtype: torch.float16
2024-12-01 10:42:09,182 - inference.py[line:227] - INFO: LLM 4bit quantization: Enabled
2024-12-01 10:42:09,183 - inference.py[line:290] - WARNING: I:\wd-llm-caption-cli\models\Llama-3.2-11B-Vision-Instruct\llm\chat_template.json already patched.
2024-12-01 10:42:09,183 - inference.py[line:304] - WARNING: I:\wd-llm-caption-cli\models\Llama-3.2-11B-Vision-Instruct\patch\adapter_config.json already patched.
```
```
Traceback (most recent call last):
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\gradio\queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\gradio\blocks.py", line 2019, in process_api
    result = await self.call_function(
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\gradio\blocks.py", line 1566, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\gradio\utils.py", line 865, in wrapper
    response = f(*args, **kwargs)
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\wd_llm_caption\gui.py", line 620, in caption_models_load
    caption_init.load_models(args)
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\wd_llm_caption\caption.py", line 225, in load_models
    self.my_llm.load_model()
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\wd_llm_caption\utils\inference.py", line 306, in load_model
    self.llm = MllamaForConditionalGeneration.from_pretrained(self.llm_patch_path,
  File "I:\wd-llm-caption-cli\venv\lib\site-packages\transformers\modeling_utils.py", line 3558, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory I:\wd-llm-caption-cli\models\Llama-3.2-11B-Vision-Instruct\patch.
```

I hit this error when using the LLM LoRA to avoid the censored model. The patch (adapter) files are already downloaded, but loading still asks for more files that exist neither on Hugging Face nor on ModelScope.
