vLLM single-node multi-GPU inference fails with "Some keys are not used by the HfArgumentParser: ['vllm_config']" #6330

Closed
QiQingY opened this issue Dec 13, 2024 · 1 comment
Labels
solved This problem has been already solved

Comments

QiQingY commented Dec 13, 2024

When I run inference on Qwen2.5_72B_Instruct with the latest recommended vllm_infer.py script, it fails with "Some keys are not used by the HfArgumentParser: ['vllm_config']". My launch command is:

python vllm_infer.py \
 --model_name_or_path /llm_base_model/Qwen/Qwen2___5-72B-Instruct \
 --adapter_name_or_path saves/QueryToPath/20241213_103756/Qwen2___5-72B-Instruct_adapter/ \
 --dataset QueryToPath_iid_test_sft_sample

The full error output is:

/opt/conda/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/opt/conda/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN3c1017RegisterOperatorsD1Ev'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
> /root/LLaMA-Factory/src/llamafactory/hparams/parser.py(372)get_infer_args()
-> model_args, data_args, finetuning_args, generating_args = _parse_infer_args(args)
(Pdb) c
Traceback (most recent call last):
  File "vllm_infer.py", line 144, in <module>
    fire.Fire(vllm_infer)
  File "/opt/conda/lib/python3.8/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/lib/python3.8/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/lib/python3.8/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "vllm_infer.py", line 58, in vllm_infer
    model_args, data_args, _, generating_args = get_infer_args(
  File "/root/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 372, in get_infer_args
    model_args, data_args, finetuning_args, generating_args = _parse_infer_args(args)
  File "/root/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 152, in _parse_infer_args
    return _parse_args(parser, args)
  File "/root/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 57, in _parse_args
    return parser.parse_dict(args)
  File "/opt/conda/lib/python3.8/site-packages/transformers/hf_argparser.py", line 377, in parse_dict
    raise ValueError(f"Some keys are not used by the HfArgumentParser: {sorted(unused_keys)}")
ValueError: Some keys are not used by the HfArgumentParser: ['vllm_config']
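
For context on why this is rejected: the traceback ends in transformers' HfArgumentParser.parse_dict, which raises ValueError whenever the argument dict contains a key that none of the registered dataclasses declares as a field. The latest vllm_infer.py passes a vllm_config entry, so an older installed llamafactory, whose argument dataclasses predate that field, will reject it. Below is a minimal, self-contained sketch of that behavior; DemoArguments is a hypothetical stand-in, not the real llamafactory dataclass.

# Minimal sketch of HfArgumentParser.parse_dict rejecting an undeclared key.
# DemoArguments is illustrative only, not llamafactory's ModelArguments.
from dataclasses import dataclass
from typing import Optional

from transformers import HfArgumentParser

@dataclass
class DemoArguments:
    model_name_or_path: Optional[str] = None  # declared field -> accepted

parser = HfArgumentParser(DemoArguments)

# Every key maps to a declared field: parses fine.
(ok_args,) = parser.parse_dict({"model_name_or_path": "Qwen/Qwen2.5-72B-Instruct"})

# "vllm_config" is not a field of any registered dataclass, so this raises:
# ValueError: Some keys are not used by the HfArgumentParser: ['vllm_config']
(bad_args,) = parser.parse_dict(
    {"model_name_or_path": "Qwen/Qwen2.5-72B-Instruct", "vllm_config": "{}"}
)

parse_dict does accept allow_extra_keys=True to silently ignore unknown keys, but the intended fix here is to align the installed package with the script, as the reply below says.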
github-actions bot added the pending (This problem is yet to be addressed) label Dec 13, 2024

hiyouga (Owner) commented Dec 14, 2024

update llamafactory
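
That is, update the installed llamafactory package so that it matches the vllm_infer.py script being run; newer releases accept the vllm_config argument. A quick, illustrative way to check which installation is actually being imported, assuming the package exposes __version__ as recent releases do:

# Illustrative sanity check: confirm which llamafactory install is imported
# and whether it is a recent enough release.
import llamafactory

print(llamafactory.__version__)  # installed package version
print(llamafactory.__file__)     # path of the package actually being imported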

hiyouga closed this as completed Dec 14, 2024
hiyouga added the solved (This problem has been already solved) label and removed the pending (This problem is yet to be addressed) label Dec 14, 2024