Added new multi-model prompt executor for Azure multi-model workflows #1374
I created a multi-model executor for Azure use cases.
Azure OpenAI Service works differently from other AI providers: you deploy a specific model and then address that deployment directly. An example base URI for a deployment:
https://<your-project>.openai.azure.com/openai/deployments/gpt-4o/
The model parameter in the request is ignored and the response always comes from the deployed model.
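To illustrate the consequence of this, each model you want to call on Azure needs its own deployment URL, and therefore its own client. The project name and values below are placeholders, not real endpoints:

```kotlin
// Sketch only: on Azure, the deployment name is part of the base URL,
// so the model named in the request body has no effect. Using two
// models means configuring two distinct clients. URLs are placeholders.
val miniBaseUrl = "https://my-project.openai.azure.com/openai/deployments/gpt-4.1-mini/"
val gpt4oBaseUrl = "https://my-project.openai.azure.com/openai/deployments/gpt-4o/"
```

This is why a provider-keyed executor is not enough here: both clients belong to the same provider but must be selected per model.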
Motivation and Context
I want to use 2 different models in my strategy/action, e.g. gpt-4.1-mini for small, simple actions and gpt-4o to summarize and return the result to the user.
I can create a subgraph with the small model (gpt-4.1-mini) as a parameter, but on the Azure provider the model from the request is ignored; only the model baked into the client's base URI is used. So I have to create multiple OpenAI deployments on Azure and multiple AI clients in koog.
Breaking Changes
There are no breaking changes. I added a new executor for specific cases.
Created a new multi-model prompt executor, similar to MultiLLMPromptExecutor, but keyed in the map by LLMModel instead of LLMProvider.
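A minimal sketch of the dispatch idea, keying clients by model rather than by provider. All type and function names here are illustrative stand-ins, not the actual koog API:

```kotlin
// Illustrative sketch: these types stand in for the real client and
// executor abstractions in koog.
data class LLMModel(val id: String)

fun interface LLMClient {
    fun execute(prompt: String): String
}

// Routes each request to the client configured for that exact model,
// instead of grouping clients by provider. On Azure, each map entry
// would wrap a client pointing at a different deployment base URL.
class MultiModelPromptExecutor(
    private val clients: Map<LLMModel, LLMClient>,
) {
    fun execute(prompt: String, model: LLMModel): String {
        val client = clients[model]
            ?: error("No client configured for model ${model.id}")
        return client.execute(prompt)
    }
}
```

With this shape, a strategy can switch between gpt-4.1-mini and gpt-4o per node by passing a different LLMModel key, without touching client configuration.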