This library aims to provide a high-level interface to run large language models in Godot, following Godot's node-based design principles.
```gdscript
@onready var llama_context = %LlamaContext

func _ready():
	var messages = [
		{ "sender": "system", "text": "You are a pirate chatbot who always responds in pirate speak!" },
		{ "sender": "user", "text": "Who are you?" }
	]
	var prompt = ChatFormatter.apply("llama3", messages)
	var completion_id = llama_context.request_completion(prompt)
	while true:
		var response = await llama_context.completion_generated
		print(response["text"])
		if response["done"]: break
```
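As a sketch of an alternative to the polling loop above, the `completion_generated` signal can be connected to a callback instead of being awaited. This assumes only the API already shown in the example (`request_completion`, the `completion_generated` signal, and the `text`/`done` fields on the emitted response); the callback name is illustrative.

```gdscript
@onready var llama_context = %LlamaContext

func _ready():
	# Connect the signal once instead of awaiting it in a loop.
	llama_context.completion_generated.connect(_on_completion_generated)
	var prompt = ChatFormatter.apply("llama3", [
		{ "sender": "user", "text": "Who are you?" }
	])
	llama_context.request_completion(prompt)

func _on_completion_generated(response):
	# Called once per generated chunk until the completion is done.
	print(response["text"])
	if response["done"]:
		print("Completion finished.")
```

This style fits Godot's event-driven node model when the requesting code does not need to block on the result.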
- Platform and compute backend support:

  | Platform | CPU | Metal | Vulkan | CUDA |
  |----------|-----|-------|--------|------|
  | macOS    | ✅  | ✅    | ❌     | ❌   |
  | Linux    | ✅  | ❌    | ✅     | 🚧   |
  | Windows  | ✅  | ❌    | 🚧     | 🚧   |

- Asynchronous completion generation
- Support for any language model that llama.cpp supports, in GGUF format
- GGUF files are Godot resources
- Chat completions support via a dedicated Jinja2 templating library written in Zig
- Grammar support
- Multimodal models support
- Embeddings
- Vector database using LibSQL
- Download Zig v0.13.0 from https://ziglang.org/download/
- Clone the repository:

  ```shell
  git clone --recurse-submodules https://github.com/hazelnutcloud/godot-llama-cpp.git
  ```
- Copy the `godot-llama-cpp` addon folder in `godot/addons` to your Godot project's `addons` folder:

  ```shell
  cp -r godot-llama-cpp/godot/addons/godot-llama-cpp <your_project>/addons
  ```
- Build the extension and install it in your Godot project:

  ```shell
  cd godot-llama-cpp
  zig build --prefix <your_project>/addons/godot-llama-cpp
  ```
- Enable the plugin in your Godot project settings.
- Add the `LlamaContext` node to your scene.
- Run your Godot project.
- Enjoy!
This project is licensed under the MIT License - see the LICENSE file for details.