Add Fuel1 provider with active models #978

Open

dk-blackfuel wants to merge 1 commit into anomalyco:dev from blackfuel-ai:add-fuel1-provider

Conversation

@dk-blackfuel

Summary

This PR adds the Fuel1 inference platform as a new provider to the Models.dev database.

Added Provider: Fuel1

  • Provider ID: fuel1
  • API Endpoint: https://api.fuel1.ai/v1 (OpenAI-compatible)
  • Documentation: https://docs.fuel1.ai
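For context, the provider entry in a Models.dev-style TOML file might look roughly like the sketch below. The file path, field names, and environment variable are assumptions modeled on existing provider patterns, not taken from this PR's diff:

```toml
# Hypothetical providers/fuel1/provider.toml — field names are assumptions
# based on existing Models.dev provider entries, not this PR's actual diff.
name = "Fuel1"
api = "https://api.fuel1.ai/v1"  # OpenAI-compatible endpoint
doc = "https://docs.fuel1.ai"
env = ["FUEL1_API_KEY"]          # assumed name for the API key variable
```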

Models Added (9 total)

Embeddings:

  • BGE Multilingual Gemma2 (BAAI) - 9B multilingual embedding model
  • BGE Multilingual Gemma2 (EU) - EU-hosted variant

Chat Models:

  • GPT OSS 120B (OpenAI) - 120B-parameter model with function calling
  • GPT OSS 120B (EU) - EU-hosted variant
  • Kimi K2.5 (Moonshot AI) - 1T-parameter MoE multimodal model with vision and video support
  • Llama 3.3 70B Instruct (Meta) - instruction-tuned 70B model
  • MiniMax M2.5 (MiniMax) - 230B-parameter MoE coding model
  • Qwen3.5-397B-A17B (Qwen) - 397B-parameter MoE multimodal model
  • Qwen3-VL-30B-A3B-Instruct (Qwen) - vision-language model

Configuration Details

  • All models include pricing (per 1M tokens USD)
  • Context window limits specified
  • Modality support (text/image/video)
  • Feature flags (tool_call, reasoning, structured_output, etc.)
  • Open weights status indicated
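To make these details concrete, a single model entry might look like the sketch below, assuming the usual Models.dev layout of cost, limit, and modalities tables. All numbers are placeholders, not the actual prices or limits from this PR:

```toml
# Hypothetical providers/fuel1/models/gpt-oss-120b.toml — structure and
# numbers are illustrative assumptions, not this PR's actual configuration.
name = "GPT OSS 120B"
tool_call = true           # feature flags named in the summary above
reasoning = true
structured_output = true
open_weights = true

[cost]                     # USD per 1M tokens (placeholder values)
input = 0.10
output = 0.40

[limit]                    # context window limits (placeholder values)
context = 131072
output = 32768

[modalities]
input = ["text"]
output = ["text"]
```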

Validation

✅ All configurations validated with bun validate
✅ Follows Models.dev TOML schema
✅ Consistent with existing provider patterns

