Below are the HF settings for the Llama 3.2 evaluation:
lm_eval --model hf \
--model_args pretrained=/home/jovyan/data-vol-1/models/meta-llama__Llama3.2-1B-Instruct,dtype=auto \
--tasks leaderboard \
--batch_size auto \
--output_path result/meta-llama__Llama3.2-1B-Instruct_hf.json
Passed argument batch_size = auto:1. Detecting largest batch size
Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)
Determined largest batch size: 4
Running loglikelihood requests: 8%|█████▊ | 12011/152671 [01:23<11:20, 206.83it/s]
This shows not only loglikelihood requests but also generation requests.
Is something wrong with my evaluation settings? I am using a single A100 80GB GPU.
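To look at the two request types separately, each subtask family in the group can also be run on its own. A minimal sketch, assuming leaderboard_ifeval is one of the generative subtasks bundled in the leaderboard group in this harness version:

# Run a single generative subtask in isolation; its progress bar should
# show only generation ("generate_until") requests, not loglikelihood ones.
lm_eval --model hf \
--model_args pretrained=/home/jovyan/data-vol-1/models/meta-llama__Llama3.2-1B-Instruct,dtype=auto \
--tasks leaderboard_ifeval \
--batch_size auto \
--output_path result/meta-llama__Llama3.2-1B-Instruct_ifeval.json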
Thanks for the help.
Below are the vLLM settings for the Llama 3.2 evaluation.
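A minimal sketch of that invocation, assuming the same model on a single GPU; tensor_parallel_size, gpu_memory_utilization, and the output filename here are illustrative values rather than the exact flags from the run:

# Same model and task set, served through the vLLM backend instead of HF transformers.
lm_eval --model vllm \
--model_args pretrained=/home/jovyan/data-vol-1/models/meta-llama__Llama3.2-1B-Instruct,dtype=auto,tensor_parallel_size=1,gpu_memory_utilization=0.8 \
--tasks leaderboard \
--batch_size auto \
--output_path result/meta-llama__Llama3.2-1B-Instruct_vllm.json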