
Conversation

bittoby (Contributor) commented Jan 23, 2026

Describe Changes

  • Show embedding models in Settings → Model Providers → Llama.cpp (previously hidden)
  • Add a distinctive amber "Embedding" badge to identify embedding models
  • Filter embedding models out of the chat model dropdown, since they cannot be used for chat (see the sketch after this list)
  • Add a tooltip: "Embedding Model (for RAG/vectors, not chat)"
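
Below is a rough sketch of the intended behavior, not the actual Jan source: the `LocalModel` type, the `isEmbeddingModel` heuristic, and the builder functions are illustrative assumptions.

```ts
// Minimal sketch only; type and helper names are assumptions, not Jan's API.

interface LocalModel {
  id: string
  name: string
  // Hypothetical metadata field that marks a model as embedding-only.
  capability?: 'chat' | 'embedding'
}

// Heuristic: trust explicit metadata first, then fall back to common
// embedding-model name patterns (e.g. all-MiniLM, bge-*).
function isEmbeddingModel(model: LocalModel): boolean {
  if (model.capability === 'embedding') return true
  return /embed|minilm|bge-/i.test(model.name)
}

// Provider settings list: keep every model, but attach a badge and a
// tooltip for embedding models instead of hiding them.
function buildSettingsList(models: LocalModel[]) {
  return models.map((m) => ({
    ...m,
    badge: isEmbeddingModel(m) ? 'Embedding' : undefined,
    tooltip: isEmbeddingModel(m)
      ? 'Embedding Model (for RAG/vectors, not chat)'
      : undefined,
  }))
}

// Chat dropdown: embedding models are excluded because they cannot chat.
function buildChatDropdown(models: LocalModel[]): LocalModel[] {
  return models.filter((m) => !isEmbeddingModel(m))
}
```

The key point is that the settings list keeps every model visible (badged and tooltipped), while only the chat dropdown filters embedding models out.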

Problem

Embedding models (e.g., all-MiniLM-L6-v2, bge-small-en) were imported successfully but never appeared in the UI. When users tried to import them again, they got a "model already exists" error, which caused confusion.
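
For context, a hedged sketch of why the error appeared even though nothing showed up in the UI: the imported file is already on disk, so a duplicate check rejects the re-import, while the settings view filtered embedding models out before rendering. The names below are illustrative, not Jan's actual API.

```ts
// Illustrative only; function name and duplicate check are assumptions.
function importModel(filePath: string, installed: Set<string>): void {
  const id = filePath.split('/').pop() ?? filePath
  if (installed.has(id)) {
    // The model is already on disk; it was only hidden from the settings
    // list, so users saw this error and assumed the first import failed.
    throw new Error('model already exists')
  }
  installed.add(id)
}
```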

Fixes Issues

Self Checklist

  • Added relevant comments, especially in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

bittoby commented Jan 23, 2026

@louis-jan @urmauur Could you please review this PR?



Development

Successfully merging this pull request may close these issues.

bug: Can not show loaded embedding model
