Allow AdapterModels to have custom tokens #306
Conversation
Hi! You'll also need to run ruff to fix the style.
Re your comment,
I rebased, reformatted, and created a smaller demo to trigger the adding/removing token messages: https://colab.research.google.com/drive/1iZhdL-HTuStKGZ3ezspd53eDt_lhXbvm
Unfortunately I can't test due to a Torch/TorchVision compatibility issue. If I install and run lighteval on CoLab I get this error:
After upgrading torch and torchvision, I get:
Hi! Yes, you can't use the latest PyTorch; it has a breaking change.
OK ✔️ I changed how the dependencies are installed and was able to run lighteval on the larger and smaller tokenizers. Updated CoLab, same URL: https://colab.research.google.com/drive/1iZhdL-HTuStKGZ3ezspd53eDt_lhXbvm
At the end of the notebook, I have added the error if I leave out
I added a commit to this PR to change the line to
PEFT has a feature for adapters to add tokens to a model: https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb
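For context, a minimal sketch of that pattern from the linked notebook (the model name, token strings, and `modules_to_save` entries below are illustrative placeholders, not taken from this PR; module names vary by architecture):

```python
# Sketch of the PEFT "additional tokens" pattern from the linked notebook.
# Model name, token strings, and module names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "my-org/my-base-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(base_name)
tokenizer.add_special_tokens({"additional_special_tokens": ["<DOMAIN>", "</DOMAIN>"]})

model = AutoModelForCausalLM.from_pretrained(base_name)
# Grow the embedding matrix so the new token ids are valid.
model.resize_token_embeddings(len(tokenizer))

# Train the resized embedding/lm_head tensors alongside the LoRA weights so
# they are saved with the adapter (module names differ per architecture).
peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    modules_to_save=["embed_tokens", "lm_head"],
)
model = get_peft_model(model, peft_config)
```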
When using an AdapterModel with new tokens in LightEval, the script fails because `AdapterModel._create_auto_tokenizer` always uses the base model path, without checking `config.tokenizer`.
Notebook with error: https://colab.research.google.com/drive/1AMJ6_MiZGFTBf8KdRn-zj7soKyZrzpbf?usp=sharing
This PR would create the tokenizer from `config.tokenizer or config.base_model` and run `base.resize_token_embeddings(...)` before `PeftModel.from_pretrained(base, adapter_weights)`.
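In rough Python, the intent looks like this (a paraphrase, not the literal diff; `config` and `adapter_weights` are stand-ins for lighteval's actual values, and the repo names are placeholders):

```python
# Paraphrase of the fix, not the literal diff.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

class _Config:  # stand-in for lighteval's adapter model config
    base_model = "my-org/base-model"  # placeholder
    tokenizer = "my-org/adapter-with-new-tokens"  # placeholder; may be None

config = _Config()
adapter_weights = "my-org/adapter-with-new-tokens"  # placeholder adapter path

# Prefer the adapter's tokenizer (with the added tokens) when configured.
tokenizer = AutoTokenizer.from_pretrained(config.tokenizer or config.base_model)

base = AutoModelForCausalLM.from_pretrained(config.base_model)
if len(tokenizer) != base.get_input_embeddings().weight.shape[0]:
    # Without this resize, loading an adapter trained with extra tokens
    # fails with a size mismatch on the embedding/lm_head tensors.
    base.resize_token_embeddings(len(tokenizer))

model = PeftModel.from_pretrained(base, adapter_weights)
```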
This is based on my fix for llm-evaluation-harness: EleutherAI/lm-evaluation-harness#1828
Notes:
- "You can't create a model without either a list of model_args or a model_config_path when model_config_path was submitted" #298 (cherry-picked here): their one-line fix passed `model_config_path` to `AdapterModel`, and this is necessary for most advanced models.
- `adapter_weights` and `delta_weights` should be optional, so I do not need `delta_weights: false` in my config (see the example config below).
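For illustration, a hypothetical config under this PR (the shape loosely follows lighteval's example PEFT model config; the repo names and the exact placement of the `tokenizer` field are assumptions):

```yaml
# Hypothetical config sketch; repo names and the tokenizer field's
# placement are assumptions, not taken from this PR.
model:
  base_params:
    model_args: "pretrained=my-org/adapter-with-new-tokens"  # placeholder
  merged_weights:
    adapter_weights: true              # no "delta_weights: false" needed
    base_model: "my-org/base-model"    # placeholder
  tokenizer: "my-org/adapter-with-new-tokens"  # placeholder
```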