Support env HF_ENDPOINT? #416
Comments
You can download the model locally and then use TEI to load the local model.

model=/path/to/model/weights
volume=/path/to/model
docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.5 --model-id /data/weights
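The local-download workflow above can be sketched end to end. This is a sketch, not a confirmed recipe from the maintainers: it assumes huggingface-cli (shipped with the huggingface_hub library) is installed, and the model id BAAI/bge-large-en-v1.5 and the /path/to/model paths are placeholders.

```shell
# Download the weights locally first (optionally through a mirror endpoint).
export HF_ENDPOINT=https://hf-mirror.com        # optional: use a mirror
huggingface-cli download --resume-download BAAI/bge-large-en-v1.5 \
    --local-dir /path/to/model/weights

# Then mount the local copy into the TEI container and load it by path,
# so the container never needs to reach the Hugging Face Hub itself.
model=/path/to/model/weights
volume=/path/to/model
docker run --gpus all -p 8080:80 -v $volume:/data --pull always \
    ghcr.io/huggingface/text-embeddings-inference:1.5 --model-id /data/weights
```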
Hi @nbroad1881, could you give an example for
@r0kk, if it is blocked, then there is nothing that can be done until it gets unblocked. If you have a local copy, use the commands I posted above.
Feature request

Is it possible to support a Hugging Face mirror website, for example via the env var HF_ENDPOINT? The huggingface_hub library has an environment variable HF_ENDPOINT that redirects model downloads to a mirror:

export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download gpt2 --local-dir gpt2

https://hf-mirror.com/
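For illustration, the mechanism behind HF_ENDPOINT can be sketched in shell: tools that honor it fall back to https://huggingface.co when the variable is unset. The resolve_url function below is hypothetical (it is not part of huggingface_hub); the /{repo}/resolve/{revision}/{file} URL shape is modeled on the Hub's download URLs.

```shell
# Hypothetical helper showing how an HF_ENDPOINT-aware tool could build a
# download URL: use $HF_ENDPOINT if set, else the default Hub endpoint.
unset HF_ENDPOINT   # start from a clean environment for the demo
resolve_url() {
    local endpoint="${HF_ENDPOINT:-https://huggingface.co}"
    echo "${endpoint}/$1/resolve/main/$2"
}

resolve_url gpt2 config.json
# -> https://huggingface.co/gpt2/resolve/main/config.json

HF_ENDPOINT=https://hf-mirror.com resolve_url gpt2 config.json
# -> https://hf-mirror.com/gpt2/resolve/main/config.json
```

The same fallback pattern is why setting HF_ENDPOINT in the container environment is enough for libraries that read it, without changing any download code.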
Motivation
It's difficult to download models from within China; using an HF mirror would accelerate model downloads.
Your contribution
Moral support