[I don't know how to PR a wiki page, so I've opened a discussion.]
I remembered seeing in a couple of issues (#144, #204) that some people want to use a local LLM.
Figuring out llama.cpp is pretty complicated for most people, but there's an easier way: Ollama.
You basically just need to open the Ollama application and click translate in Calibre.
Ollama exposes an API on a single, unified port, which makes running local LLMs much easier.
Here is the config for doing that.
I've used qwen:7b as an example. You only need to put the correct model name in the data field, matching the name shown when you run ollama list in the terminal.
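For reference, here's a minimal sketch of what the custom-engine JSON can look like. It assumes the plugin's custom translation engine format described in the wiki (the name/languages/request/response fields and the `<source>`/`<target>`/`<text>` placeholders) and Ollama's default /api/generate endpoint on port 11434; double-check the field names and placeholders against the wiki for your plugin version, and adjust the language list and prompt wording to taste.

```json
{
    "name": "Ollama",
    "languages": {
        "source": {"English": "English"},
        "target": {"Chinese": "Chinese"}
    },
    "request": {
        "url": "http://localhost:11434/api/generate",
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "data": {
            "model": "qwen:7b",
            "prompt": "Translate the following text from <source> to <target>, and output only the translation: <text>",
            "stream": false
        }
    },
    "response": "response['response']"
}
```

Swap qwen:7b for whichever model ollama list shows, and keep "stream": false so Ollama returns a single JSON object whose response field holds the translation; with streaming enabled it returns one JSON object per line and the response expression above won't find the text.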
A couple of other things to mention:
Ollama doesn't support concurrent requests yet, though the developers have said support is coming soon, so you have to set the concurrency value to 1 in the plugin settings.
Also remember to set a higher timeout for the HTTP request, since local LLMs respond slowly.
If you want to use a model that Ollama doesn't provide directly, please don't ask how to do that here.
```
PS C:\Users\PPP> ollama list
NAME                               ID              SIZE      MODIFIED
TowerInstruct-0.1-7B-Q3KM:latest   8640708c21ac    3.3 GB    3 days ago
deepseek-llm:7b                    9aab369a853b    4.0 GB    11 days ago
llama2-chinese:latest              cee11d703eee    3.8 GB    11 days ago
llama3:8b                          71a106a91016    4.7 GB    2 days ago
qwen:7b                            2091ee8c8d8f    4.5 GB    11 days ago
wizardlm2:latest                   c9b1aff820f2    4.1 GB    27 hours ago
yi:latest                          a86526842143    3.5 GB    11 days ago
```
The config is only lightly tweaked from the plugin author's notes in the wiki and issues, so all credit goes to the plugin author; I'm basically just pointing out that Ollama makes this easier.