
ValueError: The following model_kwargs are not used by the model: ['skip_special_tokens'] #6403

Closed
paolovic opened this issue Dec 20, 2024 · 2 comments
Labels: invalid This doesn't seem right

Comments


Reminder

  • I have read the README and searched the existing issues.

System Info

llamafactory-cli env

  • llamafactory version: 0.9.2.dev0
  • Platform: Linux-4.18.0-553.27.1.el8_10.x86_64-x86_64-with-glibc2.28
  • Python version: 3.11.10
  • PyTorch version: 2.5.1+cu124 (GPU)
  • Transformers version: 4.46.1
  • Datasets version: 3.1.0
  • Accelerate version: 1.0.1
  • PEFT version: 0.12.0
  • TRL version: 0.9.6
  • GPU type: NVIDIA L40S-48C
  • Bitsandbytes version: 0.45.0

Reproduction

Hi @hiyouga,
unfortunately #6391 is not fixed.

This is my checked-out commit:

LLaMA-Factory]$ git log --pretty=format:'%H' -n 1
ffbb4dbdb09ba799af1800c78b2e9d669bccd24b

Traceback (most recent call last):
  File "/environments/llama_factory_uat/bin/torchrun", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py",
line 355, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/run.py", line 919, in main
    run(args)
  File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/run.py", line 910, in run
    elastic_launch(
  File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/packages/LLaMA-Factory/src/llamafactory/launcher.py FAILED

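For reference, the ValueError in the title is raised by transformers' generation-kwarg validation: model.generate() rejects any keyword argument the model cannot consume, and skip_special_tokens is a tokenizer.decode() option rather than a generation option. A minimal sketch outside LLaMA-Factory that reproduces the message (the model name "gpt2" is illustrative, not the code path used here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

try:
    # skip_special_tokens is not a generation argument, so generate()'s
    # kwarg validation raises before any decoding happens.
    model.generate(**inputs, skip_special_tokens=True)
except ValueError as err:
    print(err)  # The following model_kwargs are not used by the model: ['skip_special_tokens'] ...

# The flag belongs to decoding instead:
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the ChildFailedError above is only torchrun's launcher-side wrapper; the worker traceback that would show where skip_special_tokens gets injected is not part of the pasted log.
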
Expected behavior

No response

Others

No response

github-actions bot added the pending This problem is yet to be addressed label Dec 20, 2024
hiyouga (Owner) commented Dec 20, 2024

I cannot find a valid traceback in your log

hiyouga closed this as completed Dec 20, 2024
hiyouga added the invalid This doesn't seem right label and removed the pending This problem is yet to be addressed label Dec 20, 2024
paolovic (Author) commented

Hi @hiyouga,
sorry, I will correct this and create a new ticket.
Thank you in advance!
