Is there any way we can get support for Codestral Mamba? Codestral Mamba isn't supported by llama.cpp (nor by Ollama), so I'm checking here whether there is another viable way to support it.
First you'd need to deploy the model somehow; neither Emacs nor gptel can help with that. Once it's running, gptel can support it as long as the deployment exposes an HTTP API endpoint.
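As a sketch of what that could look like: if Codestral Mamba were deployed behind an OpenAI-compatible HTTP API (Mistral's announcement mentions deployment via their own tooling, and several inference servers expose this API style), gptel could be pointed at it with `gptel-make-openai`. The host, port, and model name below are assumptions about a hypothetical local deployment, not a tested setup.

```elisp
;; Sketch: assumes an OpenAI-compatible server already serving
;; Codestral Mamba locally at port 8000. "codestral-mamba" is a
;; placeholder for whatever model name your server advertises.
(gptel-make-openai "local-codestral"
  :host "localhost:8000"
  :protocol "http"                      ;; plain HTTP for a local server
  :endpoint "/v1/chat/completions"
  :stream t
  :models '(codestral-mamba))
```

After evaluating this, the backend should appear in gptel's model menu alongside the built-in ones.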
Repository owner locked and limited conversation to collaborators on Jan 2, 2025.
Codestral, while having great output quality, is really slow. Codestral Mamba looks to improve on speed without sacrificing result quality: https://mistral.ai/news/codestral-mamba/