
Conversation

sharon-wang (Member)

Description from @georgestagg's original PR below 👇



Addresses #9226.

I think this broke when the Code OSS language model proposed API changed
to introduce `prepareLanguageModelChat()`.

These changes defer selection of the model ID until the last moment, in
`provideLanguageModelChatResponse()` and other methods, rather than
storing a specific model ID on a class property such as `this._config`
or `this.model`, since a single language model provider instance can now
handle multiple model IDs.
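
As an illustration of the deferred approach, here is a minimal sketch. The class name, parameter shapes, and progress payload below are assumptions for illustration, not the exact Code OSS proposed API; the point is only that the model ID comes from the per-call arguments rather than from provider state.

```typescript
import * as vscode from 'vscode';

// Hypothetical provider sketch (names and signatures are assumptions, not
// the exact proposed API): the model ID is read from the per-call
// arguments instead of being cached on the instance, so a single provider
// instance can serve requests for any of its registered models.
class DeferredModelProvider {
	// Deliberately no `this.model` or `this._config.model` field here.

	async provideLanguageModelChatResponse(
		model: { id: string },                       // assumed: resolved model info passed per call
		messages: vscode.LanguageModelChatMessage[],
		options: { modelOptions?: Record<string, unknown> },
		progress: vscode.Progress<{ part: string }>, // assumed progress payload shape
		token: vscode.CancellationToken,
	): Promise<void> {
		if (token.isCancellationRequested) {
			return;
		}
		// Resolve the model ID at the last moment, from this call's arguments.
		const modelId = model.id;
		console.log(`[Assistant] sending request with model: ${modelId}`);
		// ...forward `modelId` to the Anthropic SDK / Vercel AI SDK here...
	}
}
```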

## QA

* Open Positron.
* Open Assistant and select a non-default model for your provider, e.g.
Claude 4 Opus.
* Send a request, watching the Assistant log output pane.
* Ensure that the right model is selected by checking the model ID
reported in the logs.
* Repeat for the Anthropic and AWS Bedrock providers, to cover both
the Anthropic SDK and Vercel AI SDK code paths.
* We should probably also check that the `maxOutputTokens` config setting
still works, since I've had to tweak that code; a sketch of a
per-request lookup follows this list.
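
Along the same lines, here is a hedged sketch of what a per-request `maxOutputTokens` lookup might look like once the model ID is no longer stored on the instance. The configuration section and keys (`positron.assistant`, `maxOutputTokensByModel`, `maxOutputTokens`) are hypothetical names for illustration, not Positron's actual settings.

```typescript
import * as vscode from 'vscode';

// Hypothetical lookup (setting names are assumptions): resolve the
// output-token cap for the model a given request targets, at request
// time, instead of caching a single value when the provider is built.
function resolveMaxOutputTokens(modelId: string, fallback = 4096): number {
	const config = vscode.workspace.getConfiguration('positron.assistant');
	// A per-model override map, falling back to a global setting, then a default.
	const perModel = config.get<Record<string, number>>('maxOutputTokensByModel') ?? {};
	return perModel[modelId] ?? config.get<number>('maxOutputTokens') ?? fallback;
}
```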

<img width="950" height="63" alt="Screenshot 2025-08-29 at 14 50 24"
src="https://github.com/user-attachments/assets/82585446-7dac-46cd-9255-fc187647ddb4"
/>
<img width="945" height="59" alt="Screenshot 2025-08-29 at 14 50 51"
src="https://github.com/user-attachments/assets/364d1270-9f03-44af-84de-d19d7ed63761"
/>
sharon-wang merged commit c3ee3a3 into main on Aug 29, 2025 (9 checks passed).
sharon-wang deleted the assistant/use-requested-model-port-to-main branch on Aug 29, 2025 at 18:03.
github-actions bot locked and limited conversation to collaborators on Aug 29, 2025.