The current list of compatible LLMs in `l2p/llm/utils/llm.yaml` is outdated and lacks support for the latest state-of-the-art models. I would like to see support added for the following:
## New Models

### OpenAI (5-series)

- `gpt-5.2`: general-purpose flagship (released Dec 2025).
- `gpt-5.2-pro`: high-reasoning variant with configurable effort.
- `gpt-5.3-codex`: specialized agentic coding model (released Feb 2026).

### Google (Gemini 3-series)

- `gemini-3.0-pro`: multimodal model with a large context window.
- `gemini-3.0-flash`: low-latency variant.
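
For concreteness, below is a rough sketch of what the new entries could look like. It assumes the existing `llm.yaml` maps each model name to a provider plus a `model_params` block; the field names (`provider`, `model_params`, `reasoning_effort`) and parameter values are placeholders that would be aligned with the actual schema in the PR.

```yaml
# Illustrative sketch only: field names and values are assumptions,
# to be matched against the existing llm.yaml schema.
gpt-5.2:
  provider: openai
  model_params:
    temperature: 1.0
    top_p: 1.0

gpt-5.2-pro:
  provider: openai
  model_params:
    temperature: 1.0
    top_p: 1.0
    reasoning_effort: medium   # assumed knob for the configurable-effort variant

gpt-5.3-codex:
  provider: openai
  model_params:
    temperature: 1.0
    top_p: 1.0

gemini-3.0-pro:
  provider: google
  model_params:
    temperature: 1.0
    top_p: 0.95

gemini-3.0-flash:
  provider: google
  model_params:
    temperature: 1.0
    top_p: 0.95
```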
## Proposal

I am happy to submit a PR for this.

Quick question on the implementation: is it acceptable to copy the same `model_params` (e.g., `temperature`, `top_p`) from the existing configurations as defaults for these new models, or should they use different values?
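
If copying the existing defaults is the preferred route, one option would be to define them once and reuse them with a YAML anchor, as sketched below. Again, the layout and values are assumptions for illustration, not the repository's actual configuration.

```yaml
# Purely illustrative: defines one shared set of defaults via a YAML anchor
# and reuses it for the new models. Parameter values are placeholders.
default_model_params: &default_model_params
  temperature: 1.0
  top_p: 1.0

gpt-5.2:
  provider: openai              # assumed field name
  model_params: *default_model_params

gemini-3.0-flash:
  provider: google              # assumed field name
  model_params: *default_model_params
```

This keeps the defaults in one place; any model that needs different values could simply replace the alias with an explicit `model_params` block.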