Azure OpenAI Service Provider with configurable model deployment #336
This PR introduces support for Azure OpenAI as a service provider within the application. Models that are missing from the user's Azure deployments are skipped gracefully, so the application continues functioning without interruption.
Key updates include:
Configuration: Added Azure OpenAI configuration parameters to config.toml (example provided in sample.config.toml). Users must specify their Azure OpenAI endpoint, model deployment details, and API key.
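As an illustration, the Azure OpenAI section of `config.toml` might look like the sketch below. The section and key names here are hypothetical; the authoritative names and structure are in `sample.config.toml`.

```toml
# Hypothetical sketch — consult sample.config.toml for the actual key names.
[MODELS.AZURE_OPENAI]
API_KEY = "your-azure-openai-api-key"
ENDPOINT = "https://your-resource.openai.azure.com"
API_VERSION = "2024-02-15-preview"          # Azure OpenAI REST API version
DEPLOYMENTS = ["gpt-4o", "gpt-35-turbo"]    # names of your model deployments
```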
Model Loading: The implementation attempts to load available Azure OpenAI models, logging warnings for any missing or failed models while still proceeding with the available ones.
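The skip-on-failure behavior can be sketched as follows. All identifiers here (`AzureDeployment`, `loadAvailableModels`) are illustrative, not the PR's actual names; the point is that a missing or broken deployment produces a warning rather than an exception.

```typescript
// Hypothetical sketch of graceful model loading: deployments that cannot be
// resolved are logged and skipped, and the remaining models are returned.

interface AzureDeployment {
  name: string;          // model name shown in the UI
  deploymentId?: string; // Azure deployment ID; missing means unusable
}

function loadAvailableModels(deployments: AzureDeployment[]): string[] {
  const available: string[] = [];
  for (const d of deployments) {
    if (!d.deploymentId) {
      // Warn and continue instead of throwing, so one bad entry
      // does not prevent the rest of the models from loading.
      console.warn(`Azure OpenAI model "${d.name}" has no deployment; skipping.`);
      continue;
    }
    available.push(d.name);
  }
  return available;
}
```

With this shape, a config listing both a valid and an invalid deployment still yields the valid model, matching the "proceed with the available ones" behavior described above.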
Security Considerations: The API key is visible in the Perplexica UI/Backend (e.g., in the UI Settings and when accessing http://127.0.0.1:3001/api/models). Users should be cautious and ensure that access to the UI/Backend is secured.
Please ensure that your Azure OpenAI endpoint and deployment details are correctly configured, and be aware of the potential visibility of sensitive keys in the application interface.