EDIT: The issue seems to be caused by my VPN: with it active, the fallback happens; with it disabled, gpt-5-mini is used as expected.
I keep passing gpt-5-mini as the model argument to the SDK session, but for some reason it keeps falling back to my default model from the CLI, meaning it isn't being recognized as a valid model. I've tried both "gpt-5-mini" and "gpt-5 mini", and both resulted in a fallback. I know gpt-5-mini is offered in the CLI because it's listed in /models; when I set it as my default, the CLI asked me to choose a reasoning mode, which gave me the idea to try "gpt-5-mini-medium", but that also fell back. With gpt-5-mini set as my default, I was able to use it through the SDK, and I can confirm from the usage event logs that it runs under the ID "gpt-5-mini":
[event] { "type": "assistant.usage", "data": { "model": "gpt-5-mini", "inputTokens": 9155, "outputTokens": 125, "cacheReadTokens": 1536, "cacheWriteTokens": 0, "cost": 0, "duration": 7120, "initiator": "user", "apiCallId": ...
So I can only use it when it's set as my default model, since the actual model argument "gpt-5-mini" always falls back to the default. Not sure if this is a bug or what's going on. FYI, gpt-4.1 works fine as the model argument.
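For anyone else hitting this, a quick way to catch the silent fallback is to compare the model you requested against the model reported in the usage event. This is a minimal sketch assuming a usage event shaped like the log above (`type` and `data.model` fields); `detectFallback` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: detect a silent model fallback by comparing the
// model requested for the session against the model reported back in
// the assistant.usage event (field names taken from the log above).
interface UsageEvent {
  type: string;
  data: { model: string };
}

function detectFallback(requested: string, event: UsageEvent): boolean {
  // A mismatch means the session silently fell back to another model.
  return event.type === "assistant.usage" && event.data.model !== requested;
}

const event: UsageEvent = {
  type: "assistant.usage",
  data: { model: "gpt-5-mini" },
};

console.log(detectFallback("gpt-5-mini", event)); // → false (no fallback)
console.log(detectFallback("gpt-4.1", event));    // → true (fell back)
```

Logging this on every usage event would have surfaced the VPN-dependent fallback immediately instead of requiring a manual check of the default model.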