How to use leann in a no-GPU environment? #78
Replies: 2 comments
Yeah, that's a really good point; we hadn't thought about that before. Using a local LM Studio instance to provide an endpoint for computing the embeddings is a cool idea. Currently we have three methods for computing embeddings:
I guess LM Studio should be similar to Ollama, but we have only tested them on the same server. It should be easy to add a remote endpoint; maybe you can raise an issue to request that feature and we can add it ASAP.
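Since LM Studio (like Ollama) exposes an OpenAI-compatible API, talking to it from another machine is mostly a matter of pointing an HTTP client at its `/v1/embeddings` route. Here is a minimal sketch of what that remote call could look like; the host, port, and model name are assumptions for illustration, not leann configuration options:

```python
import json
import urllib.request

def build_embeddings_request(texts, model):
    """Build the JSON body for an OpenAI-compatible embeddings call."""
    return {"model": model, "input": texts}

def embed_remote(texts,
                 base_url="http://192.168.1.50:1234/v1",  # hypothetical LAN host
                 model="nomic-embed-text"):               # hypothetical model name
    """POST texts to a remote OpenAI-compatible server, return the vectors."""
    body = json.dumps(build_embeddings_request(texts, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-compatible responses put each vector under data[i]["embedding"]
    return [item["embedding"] for item in data["data"]]
```

The request/response shape is the standard OpenAI embeddings format, which is why the same sketch should work against LM Studio, Ollama's OpenAI-compat layer, or any similar server once the base URL is swapped in.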
Here is my solution, if you have a gaming desktop in your household. :)
It's a guide for anyone who wants high-performance compute without the cloud bill: How I Secretly Turned Our Family Gaming PC into an AI Powerhouse
I do my dev work in a VM, especially when using Claude Code, in case it decides to transmit the contents of my filesystem back to Anthropic for some reason. I tried to run leann to index my codebase and it was so slow I gave up; it looked like it was going to need 20 hours.
Is it possible to leverage a local LLM on my network? I run LM Studio on my M4 MacBook Pro, which provides an OpenAI-compatible endpoint. Is there a way to have leann utilize a remote device for creating the embeddings?
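One reason indexing over the network can crawl is sending one chunk per request, so per-call latency dominates. A simple mitigation, independent of whichever embedding server ends up being used, is to batch chunks before each remote call. A minimal sketch (the batch size is an assumption to tune for the remote machine):

```python
def batched(items, batch_size=64):
    """Yield fixed-size batches from a list of text chunks, last one may be short."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Usage idea: send each batch as one embeddings request instead of one
# request per chunk, amortizing network round-trip cost.
# for batch in batched(chunks, batch_size=64):
#     vectors.extend(embed_over_network(batch))  # hypothetical helper
```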