gpt-5-mini not being recognized as valid model argument even though it works when set to default model in CLI #423

@LaishGlenberg

Description

EDIT: It seems the issue is caused by my VPN: when it is active, the fallback happens; when it is disabled, gpt-5-mini is used as expected.

I keep trying to pass gpt-5-mini as the model argument to the SDK session, but it keeps falling back to my default CLI model, meaning the argument is not being recognized as a valid model. I've tried both gpt-5-mini and gpt-5 mini, and both resulted in a fallback. I know gpt-5-mini is offered in the CLI because it's listed in /models. When I set it as my default, the CLI asked me to choose a reasoning mode, which gave me the idea to try gpt-5-mini-medium, but that also resulted in a fallback. With gpt-5-mini set as my default, I was able to use it through the SDK, and the usage event logs confirm it runs under the ID "gpt-5-mini":

[event] { "type": "assistant.usage", "data": { "model": "gpt-5-mini", "inputTokens": 9155, "outputTokens": 125, "cacheReadTokens": 1536, "cacheWriteTokens": 0, "cost": 0, "duration": 7120, "initiator": "user", "apiCallId": ...
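The silent fallback described above can be detected from these usage events: compare the model reported in the event payload against the model you requested. A minimal sketch, assuming only the `assistant.usage` payload shape shown in the log (the SDK's actual event API is not shown here, so this just parses the JSON):

```python
import json

# Illustrative assistant.usage event, shaped like the log above
# (token counts copied from the log; other fields omitted).
raw_event = (
    '{"type": "assistant.usage", "data": {"model": "gpt-5-mini", '
    '"inputTokens": 9155, "outputTokens": 125}}'
)

def served_model(event_json: str) -> str:
    """Return the model ID that actually served the request."""
    event = json.loads(event_json)
    return event["data"]["model"]

requested = "gpt-5-mini"
actual = served_model(raw_event)
if actual != requested:
    # Requested model was silently replaced by the default model.
    print(f"silent fallback: requested {requested!r}, got {actual!r}")
else:
    print(f"model argument honored: {actual}")
```

Logging this comparison on every session makes the fallback visible immediately, instead of only showing up in token costs or response quality.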

So I can only use it when it's set as my default model, since passing "gpt-5-mini" as the model argument always falls back to the default. I'm not sure if this is a bug or what's going on. FYI, gpt-4.1 works fine as the model argument.
