Description
Hello GaiaNet Team,
I'm reporting a recurring incompatibility issue with models based on the Nemotron and Gemma architectures. When attempting to load them, the gaianet start command fails with a backend error.
1. Models Affected
The issue occurs with the following GGUF models:
- `gaianet/NVIDIA-Nemotron-Nano-9B-v2-GGUF` (and other quantizations)
- `Tobivictor/NVIDIA-Nemotron-Nano-12B-v2-GGUF`
- `Tobivictor/embeddinggemma-300m-GGUF`
- `gaianet/embeddinggemma-300m-GGUF`
2. The Error Message
In all cases, the embedding or chat server fails to start with exactly this error:
[error] llama_core::graph in crates/llama-core/src/graph.rs:219: Backend Error: WASI-NN Backend Error: Caller module passed an invalid argument
Error: Operation("Backend Error: WASI-NN Backend Error: Caller module passed an invalid argument")
3. System Details
- Operating System: macOS Sequoia 15.5
- GaiaNet CLI Version: GaiaNet CLI Tool v0.5.4
4. Steps to Reproduce
The error can be reproduced consistently with the following steps:
1. Run `gaianet stop`.
2. Configure the node for one of the affected models using `gaianet config --chat-url ...` or `gaianet config --embedding-url ...`.
3. Manually update the `chat_name` or `embedding_name` in `~/gaianet/config.json`.
4. Run `gaianet init`.
5. Run `gaianet start` and observe the error during the "Starting downstream servers" phase.
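For reference, the steps above can be sketched as the following command sequence. The GGUF download URL and the `chat_name` value are illustrative placeholders only (I have not verified the exact filename in the repository); substitute any of the affected models.

```shell
# Sketch of the reproduction steps (URL and model name are illustrative).
gaianet stop

# Point the chat model at a Nemotron GGUF (example URL, not verified):
gaianet config \
  --chat-url https://huggingface.co/gaianet/NVIDIA-Nemotron-Nano-9B-v2-GGUF/resolve/main/NVIDIA-Nemotron-Nano-9B-v2-Q5_K_M.gguf

# Manually set the matching name in ~/gaianet/config.json, e.g.:
#   "chat_name": "NVIDIA-Nemotron-Nano-9B-v2"

gaianet init
gaianet start   # fails here with the WASI-NN "invalid argument" error
```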
5. Important Context: What Works
My GaiaNet installation and system are working perfectly with other model architectures. The following models run without any issues:
- Chat Models: `Qwen3-4B-Instruct`, `Qwen2.5-Omni-7B`, `Jan-v1-4B`, `Qwen3-Coder-30B-A3B-Instruct`
- Embedding Models: `Nomic-embed-text-v1.5`
6. Suggested Cause
Since multiple other model architectures load and run correctly on the same installation, the most likely cause is that the LlamaEdge engine version bundled with GaiaNet does not yet include full support for the Nemotron and Gemma architectures.
Could the development team please look into updating the underlying llama.cpp dependency to a more recent version? I'm happy to provide more logs or assist with further testing.
Thank you for your hard work on this project!