I want to use
But I only have several RTX 3090 cards. How should I configure spacy-llm to perform inference across multiple GPUs?
Answered by rmitsch, Jan 29, 2024
Answer selected by svlandeg
Hey @yileitu,

spacy-llm wraps transformers for all open-source models. AFAIK you'll need accelerate for multi-GPU inference, see here. This workflow is unfortunately not supported by spacy-llm at the moment.
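For context, a model section in a spacy-llm `config.cfg` looks roughly like the sketch below (the task, labels, and model name are illustrative assumptions, not taken from this thread). Note that the model block exposes no setting for sharding weights across devices, which is why accelerate-style multi-GPU loading can't currently be expressed here:

```ini
# Hypothetical spacy-llm pipeline config sketch.
[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORG", "LOCATION"]

[components.llm.model]
# The model is loaded through transformers internally;
# there is no config key here to distribute it over multiple GPUs.
@llm_models = "spacy.Llama2.v1"
name = "Llama-2-7b-hf"
```

With plain transformers, multi-GPU sharding is usually requested via `device_map="auto"` (which requires accelerate), but spacy-llm does not surface that option in its config.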