
When fine-tuning with PiSSA, an error is raised when converting to LoRA after training completes #6331

Open
therealoliver opened this issue Dec 13, 2024 · 0 comments
Labels
pending This problem is yet to be addressed


@therealoliver

Reminder

  • I have read the README and searched the existing issues.

System Info

Traceback (most recent call last):
  File "/usr/local/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/mnt/workspace/LLaMA-Factory/src/llamafactory/cli.py", line 112, in main
    run_exp()
  File "/mnt/workspace/LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/mnt/workspace/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 100, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer.py", line 2122, in train
    return inner_training_loop(
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer.py", line 2628, in _inner_training_loop
    self.control = self.callback_handler.on_train_end(args, self.state, self.control)
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 471, in on_train_end
    return self.call_event("on_train_end", args, state, control)
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 518, in call_event
    result = getattr(callback, event)(
  File "/mnt/workspace/LLaMA-Factory/src/llamafactory/train/callbacks.py", line 171, in on_train_end
    model.save_pretrained(
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 341, in save_pretrained
    output_state_dict = save_mutated_as_lora(
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 279, in save_mutated_as_lora
    self.load_adapter(
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 1112, in load_adapter
    self.add_adapter(adapter_name, peft_config)
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 873, in add_adapter
    self.base_model.inject_adapter(self.base_model.model, adapter_name)
  File "/usr/local/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 431, in inject_adapter
    self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key)
  File "/usr/local/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 224, in _create_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 346, in _create_new_module
    raise ValueError(
ValueError: Target module ModuleDict(
  (default): Identity()
  (pissa_init): Identity()
) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.

Reproduction

Below is the test script:

bf16: true
cutoff_len: 2048
dataset: mire_train
dataset_dir: data
ddp_timeout: 180000000
do_train: true
finetuning_type: lora
flash_attn: fa2
gradient_accumulation_steps: 2
learning_rate: 5.0e-06
logging_steps: 2
lora_alpha: 16
lora_dropout: 0
lora_rank: 128
lora_target: all
lr_scheduler_type: cosine
max_grad_norm: 1.0
max_samples: 8
model_name_or_path: Qwen/Qwen2-VL-2B-Instruct
model_revision: master
num_train_epochs: 1.0
optim: adamw_torch
output_dir: saves/Qwen2-VL-2B-Instruct/lora/train_short_sample_for_pissa_debug
packing: false
per_device_train_batch_size: 2
pissa_convert: true
pissa_init: true
plot_loss: true
preprocessing_num_workers: 16
report_to: none
save_steps: 2
stage: sft
template: qwen2_vl
warmup_steps: 2

Expected behavior

No response

Others

I tried changing lora_target, but it still didn't help.

@github-actions github-actions bot added the pending This problem is yet to be addressed label Dec 13, 2024