Conversation

@timtmok (Contributor) commented Dec 3, 2025

Addresses #10818

This PR used Vercel's MCP server to assist with the migration to v5.

  • Removed the temperature workaround
  • Updated the field names that changed
  • Fixed the responses to match the v5 format

Release Notes

New Features

  • Update ai-sdk to v5

Bug Fixes

  • N/A

QA Notes

I did some light testing of tool calling across various providers. The chat response handling is where things changed the most.

github-actions bot commented Dec 3, 2025

E2E Tests 🚀
This PR will run tests tagged with: @:critical


@timtmok timtmok marked this pull request as ready for review December 3, 2025 17:03
@timtmok timtmok requested a review from sharon-wang December 3, 2025 17:03

@sharon-wang (Member) left a comment

Looking good so far with Amazon Bedrock, OpenAI, and OpenRouter via Custom Provider 👍 The getPlot tool is working with Amazon Bedrock, so I wonder if our plot image handling is compatible with Anthropic models but not OpenAI models?

Providers that don't support the responses endpoint aren't working, though -- I think we'll need some way to override the chat endpoint to /chat/completions for those providers.
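A rough sketch of what that override could look like with the v5 OpenAI provider, where the default factory targets /responses and .chat() targets /chat/completions (supportsResponsesEndpoint is a hypothetical helper, not something in this PR):

import { createOpenAI } from '@ai-sdk/openai';

function makeLanguageModel(baseURL: string, apiKey: string, modelId: string, providerName: string) {
	const provider = createOpenAI({ baseURL, apiKey });
	// Fall back to the /chat/completions endpoint for providers that
	// don't implement the newer /responses endpoint.
	return supportsResponsesEndpoint(providerName)
		? provider(modelId)
		: provider.chat(modelId);
}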

@sharon-wang sharon-wang requested a review from wch December 3, 2025 22:42
@timtmok timtmok requested a review from sharon-wang December 4, 2025 19:54
* AI SDK 5 supports images in tool results via the 'content' output type with 'media' parts.
*/
- function getPlotToolResultToAiMessage(part: vscode.LanguageModelToolResultPart2): ai.CoreUserMessage {
+ function getPlotToolResultToAiMessage(part: vscode.LanguageModelToolResultPart2): ai.ToolModelMessage {
Contributor

Just wondering: is this function still needed? I think that convertToolResultToAiMessageExperimentalContent should be able to handle tool results with images.

Contributor Author

I removed it and tested the change. Various models and providers were able to handle the tool output without this function. It also fixed the problem with GPT models on Snowflake, where they couldn't use the getPlot tool output.

Comment on lines 82 to 85
if (cacheBreakpoint && bedrockCacheBreakpoint) {
	cacheBreakpoint = false;
	markBedrockCacheBreakpoint(toolMessage);
}
Contributor

This probably doesn't have to be addressed just yet, but if/when Anthropic goes through this code path (instead of the separate Anthropic provider code), then we'll want to support their cache breakpoint markers as well.
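In case it's useful later: a minimal sketch of what that could look like, assuming AI SDK 5's per-message providerOptions carry Anthropic's cache-control marker (the function name is hypothetical, mirroring the existing markBedrockCacheBreakpoint):

function markAnthropicCacheBreakpoint(message: ai.ModelMessage): void {
	// Anthropic providers read cache breakpoints from providerOptions
	message.providerOptions = {
		...message.providerOptions,
		anthropic: { cacheControl: { type: 'ephemeral' } },
	};
}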

Comment on lines 131 to 134
// Fix empty role field
if (choice.delta.role === '') {
	choice.delta.role = 'assistant';
}
Contributor

Is this check here because of Snowflake, or other providers that don't quite adhere to the OpenAI completions format?

Member

Yeah, this was originally added for Snowflake, but it's possible other providers may omit this as well.

}
}

transformedLines.push(`data: ${JSON.stringify(data)}`);
Contributor

Using the types from OpenAI, I suggest extracting a lot of the code above into a function that takes as input something close to a ChatCompletionChunk, but whose type takes into account the possibly-missing fields. The function would return something that actually is a ChatCompletionChunk.

Then you can really lean into type checking to make sure all the cases are taken care of.

For example, if you define this type, then the role could be "":

type PossiblyBrokenChatCompletionChunk = Omit<OpenAI.ChatCompletionChunk, 'choices'> & {
	choices: Array<Omit<OpenAI.ChatCompletionChunk['choices'][0], 'delta'> & {
		delta: Omit<OpenAI.ChatCompletionChunk['choices'][0]['delta'], 'role'> & {
			role?: OpenAI.ChatCompletionChunk['choices'][0]['delta']['role'] | ''
		}
	}>
};

I know that's kind of awkward, but it does work. It could probably be made a bit simpler and easier to read by referencing the types by name instead of by path. (I would just ask an LLM to add in the other possibly-empty fields as well.)
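For example, with named aliases for the nested pieces, the same type could read:

type ChunkChoice = OpenAI.ChatCompletionChunk['choices'][0];
type ChunkDelta = ChunkChoice['delta'];

type PossiblyBrokenChatCompletionChunk = Omit<OpenAI.ChatCompletionChunk, 'choices'> & {
	choices: Array<Omit<ChunkChoice, 'delta'> & {
		delta: Omit<ChunkDelta, 'role'> & {
			role?: ChunkDelta['role'] | ''
		}
	}>
};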

Next, you'd modify the type guard function above to have the signature:

export function isPossiblyBrokenChatCompletionChunk(obj: unknown): obj is PossiblyBrokenChatCompletionChunk {
    // body of this function can remain the same
}

Then you could add a function with this signature:

function fixPossiblyBrokenChatCompletionChunk(chunk: PossiblyBrokenChatCompletionChunk): OpenAI.ChatCompletionChunk {
	// Implementation goes here
}

Then up above you'd use:

if (isPossiblyBrokenChatCompletionChunk(data)) {
	const fixedChunk = fixPossiblyBrokenChatCompletionChunk(data);
	transformedLines.push(`data: ${JSON.stringify(fixedChunk)}`);
}
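
For what it's worth, the fix function could be as small as this sketch (assuming the role is the only field that needs backfilling; other possibly-missing fields would get the same treatment):

function fixPossiblyBrokenChatCompletionChunk(chunk: PossiblyBrokenChatCompletionChunk): OpenAI.ChatCompletionChunk {
	return {
		...chunk,
		choices: chunk.choices.map((choice) => ({
			...choice,
			delta: {
				...choice.delta,
				// Backfill the role that some providers omit or send as ''
				role: choice.delta.role ? choice.delta.role : 'assistant',
			},
		})),
	};
}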

Contributor Author

I've refactored it so there's one function to check whether it's a possibly broken chat completion chunk and another to fill in any missing fields.

@sharon-wang (Member) left a comment

I can log in with Snowflake now ❄️ ✅

I'm not sure if this is my local state, but I lost the Snowflake icon in the models popup:

[screenshot: models popup without the Snowflake icon]

The icon is there in a release build of 2026.01.0+24, so I'm not sure whether it's my dev build state or whether something has changed.

Comment on lines +58 to +68
// Transform tools to be compatible with OpenAI-compatible providers
// Some providers don't support the 'strict' field in tool function definitions
if (requestBody.tools && Array.isArray(requestBody.tools)) {
	log.debug(`[${providerName}] Request contains ${requestBody.tools.length} tools: ${requestBody.tools.map((t: any) => t.function?.name || t.name).join(', ')}`);
	for (const tool of requestBody.tools) {
		if (tool.function && tool.function.strict !== undefined) {
			delete tool.function.strict;
			log.debug(`[${providerName}] Removed 'strict' field from tool: ${tool.function.name}`);
		}
	}
	log.trace(`[${providerName}] Tools payload: ${JSON.stringify(requestBody.tools)}`);
Member

Thoughts on consolidating the empty input schema handling that is currently in models.ts in AILanguageModel.provideLanguageModelChatResponse() with the handling here, so that all the tool modifications are in one place?

Contributor Author

Maybe moving it to utils.ts for generic handling? Although I removed it, there was some model-specific handling. I'm good either way, as long as models.ts doesn't grow too large; it's already an all-in-one kind of file.

I really hope that providers adhere to a standard API. This special handling feels like a side effect of providers not implementing the API.
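Something like this hypothetical helper in utils.ts could cover both in one pass (normalizeToolsForProvider isn't a name from this PR, and the empty-schema default is an assumption about what the models.ts handling does):

function normalizeToolsForProvider(tools: any[]): void {
	for (const tool of tools) {
		if (!tool.function) {
			continue;
		}
		// Some providers reject the 'strict' field in function definitions
		if (tool.function.strict !== undefined) {
			delete tool.function.strict;
		}
		// Some providers reject tools with a missing or empty input schema
		if (!tool.function.parameters) {
			tool.function.parameters = { type: 'object', properties: {} };
		}
	}
}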

@timtmok (Contributor Author) commented Dec 10, 2025

I tried these latest changes with different models and providers, and things look good. A bonus: Snowflake now works with GPT models calling the GetPlot tool. The response is brief, but the model is able to understand the image.

[screenshot: GPT-5 response using the GetPlot tool output]

The response above is from GPT-5; GPT-4.1 was similar.

[screenshot: GPT-4.1 response]

The Snowflake icon looks okay for me as well, and these changes shouldn't have affected it.

@timtmok timtmok requested review from sharon-wang and wch December 10, 2025 18:40
* @param obj The object to check
* @returns True if the object appears to be a ChatCompletionChunk (possibly malformed)
*/
export function isPossiblyBrokenChatCompletionChunk(obj: unknown): obj is PossiblyBrokenChatCompletionChunk {
Contributor

I think that this type guard isn't correct for the definition of PossiblyBrokenChatCompletionChunk above. With that interface definition, even an empty object, {}, could count as a PossiblyBrokenChatCompletionChunk.

But this function requires it to be an object and have:

  • id: string
  • choices: Array<unknown>
  • object: "chat.completion.chunk"

With this mismatch between the type and the type guard, the type guard could tell you something is not a PossiblyBrokenChatCompletionChunk when in fact it is, and that could be a source of bugs in the future.

The type and the type guard should match (or at least be much closer). I suggest updating the type to include essential things like the id, choices, and object -- whatever things we can reasonably assume that an OpenAI-compatible implementation would include.
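Concretely, something along these lines (a sketch; the exact set of required fields is whatever we can assume any OpenAI-compatible provider sends):

interface PossiblyBrokenChatCompletionChunk {
	id: string;
	object: 'chat.completion.chunk';
	choices: Array<{
		delta: {
			// '' is what some providers send instead of a real role
			role?: OpenAI.ChatCompletionChunk['choices'][0]['delta']['role'] | '';
		};
	}>;
}

export function isPossiblyBrokenChatCompletionChunk(obj: unknown): obj is PossiblyBrokenChatCompletionChunk {
	if (typeof obj !== 'object' || obj === null) {
		return false;
	}
	const chunk = obj as Record<string, unknown>;
	return typeof chunk.id === 'string'
		&& chunk.object === 'chat.completion.chunk'
		&& Array.isArray(chunk.choices);
}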

}
// Check if it's a possibly broken chunk and fix it
// Otherwise, keep the original line
if (isPossiblyBrokenChatCompletionChunk(data)) {
Contributor

Just a follow on to the note above about the type and type guard mismatch: if isPossiblyBrokenChatCompletionChunk() actually matched the PossiblyBrokenChatCompletionChunk type, then it would always enter this if block and never go into the else. (Assuming that data is a JS object, as opposed to something like a string or null.)

