
GitHub Actions fail when using Azure OpenAI's gpt-4 #1015

Open
no-yan opened this issue Jul 1, 2024 · 15 comments

@no-yan

no-yan commented Jul 1, 2024

Problem

The GitHub Action should run when a pull request is created, but it fails and leaves the comment "Failed to generate code suggestions for PR".

The error first occurred on June 22 and has occurred on every run since.

No changes were made to the code or the model before the error began occurring.

Environments

Steps to reproduce:

  1. Deploy a model via Azure OpenAI
  2. Create .github/workflows/action.yaml and set the OPENAI_KEY secret:
on:
  pull_request:
    types: [opened, reopened, ready_for_review]
  issue_comment:
jobs:
  pr_agent_job:
    if: ${{ github.event.sender.type != 'Bot' }}
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      contents: write
    name: Run pr agent on every pull request, respond to user comments
    steps:
      - name: PR Agent action step
        id: pragent
        uses: Codium-ai/pr-agent@main
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CONFIG.MODEL: "gpt-4-1106-preview"
          OPENAI.API_TYPE: "azure"
          OPENAI.API_VERSION: "2023-05-15"
          OPENAI.API_BASE: "#########"
          OPENAI.DEPLOYMENT_ID: "##########"
  3. Open a PR

Expected behavior

The Review, Describe, and Suggest commands run on the PR.

Actual behavior

The error messages on GitHub are as follows:

Comment on GitHub

Failed to generate code suggestions for PR

Stack trace

LiteLLM:ERROR: main.py:399 - litellm.acompletion(): Exception occured - litellm.APIConnectionError: 'datetime.date' object has no attribute 'split'

Details

05:26:25 - LiteLLM:ERROR: main.py:399 - litellm.acompletion(): Exception occured - litellm.APIConnectionError: 'datetime.date' object has no attribute 'split'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 871, in completion
    optional_params = get_optional_params(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 3198, in get_optional_params
    optional_params = litellm.AzureOpenAIConfig().map_openai_params(
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/azure.py", line 181, in map_openai_params
    api_version_times = api_version.split("-")
AttributeError: 'datetime.date' object has no attribute 'split'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7164, in exception_type
    message=f"{exception_provider} APIConnectionError - {message}",
UnboundLocalError: local variable 'exception_provider' referenced before assignment

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 871, in completion
    optional_params = get_optional_params(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 3198, in get_optional_params
    optional_params = litellm.AzureOpenAIConfig().map_openai_params(
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/azure.py", line 181, in map_openai_params
    api_version_times = api_version.split("-")
AttributeError: 'datetime.date' object has no attribute 'split'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7164, in exception_type
    message=f"{exception_provider} APIConnectionError - {message}",
UnboundLocalError: local variable 'exception_provider' referenced before assignment

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 370, in acompletion
    init_response = await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 625, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 2577, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7225, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: 'datetime.date' object has no attribute 'split'

Additional information

  • This workflow was previously functioning without any issues, and there were no changes made to the code or model before the errors began occurring. The same error has occurred simultaneously across multiple repositories and multiple models.

  • The error persists even after downgrading the actions to the latest release (v0.22) or v0.2.

  • When I run the same configuration values in a Python script in a local environment, it operates without any problems. I am using the same API Key.

CLI Program and requirements.txt

from pr_agent import cli
from pr_agent.config_loader import get_settings
import litellm
litellm.set_verbose=True

def main():
    # Fill in the following values
    provider = "github" # GitHub provider
    user_token = "######"  # GitHub user token
    openai_key = "#######"  # OpenAI key
    pr_url = "#######"      # PR URL, for example 'https://github.com/Codium-ai/pr-agent/pull/809'
    command = "/review" # Command to run (e.g. '/review', '/describe', '/ask="What is the purpose of this PR?"', ...)

    model="gpt-4-1106-preview"
    api_type = "azure"
    api_version = '2023-05-15'  # Check Azure documentation for the current API version
    api_base = ""  # The base URL for your Azure OpenAI resource. e.g. "https://<your resource name>.openai.azure.com"
    deployment_id = ""  # The deployment name you chose when you deployed the engine

    # Setting the configurations
    get_settings().set("CONFIG.git_provider", provider)
    get_settings().set("CONFIG.model", model)
    get_settings().set("openai.key", openai_key)
    get_settings().set("openai.api_type", api_type)
    get_settings().set("openai.api_version", api_version)
    get_settings().set("openai.api_base", api_base)
    get_settings().set("openai.deployment_id", deployment_id)


    get_settings().set("github.user_token", user_token)

    # Run the command. Feedback will appear in GitHub PR comments
    cli.run_command(pr_url, command)


if __name__ == '__main__':
    main()
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.7.0
anthropic==0.21.3
anyio==4.4.0
async-timeout==4.0.3
atlassian-python-api==3.41.4
attrs==23.2.0
azure-core==1.30.2
azure-devops==7.1.0b3
azure-identity==1.15.0
blinker==1.7.0
boto3==1.33.6
botocore==1.33.13
Brotli==1.1.0
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
ConfigArgParse==1.7
cryptography==42.0.8
decorator==5.1.1
Deprecated==1.2.14
distlib==0.3.8
distro==1.9.0
dnspython==2.6.1
dynaconf==3.2.4
email_validator==2.2.0
exceptiongroup==1.2.1
fastapi==0.111.0
fastapi-cli==0.0.4
filelock==3.13.4
Flask==3.0.2
Flask-Cors==4.0.0
Flask-Login==0.6.3
frozenlist==1.4.1
fsspec==2024.6.0
gevent==24.2.1
geventhttpclient==2.0.11
gitdb==4.0.11
GitPython==3.1.32
google-api-core==2.17.1
google-auth==2.28.2
google-cloud-aiplatform==1.38.0
google-cloud-bigquery==3.25.0
google-cloud-core==2.4.1
google-cloud-resource-manager==1.12.3
google-cloud-storage==2.10.0
google-crc32c==1.5.0
google-resumable-media==2.7.0
googleapis-common-protos==1.63.0
greenlet==3.0.3
grpc-google-iam-v1==0.13.1
grpcio==1.64.1
grpcio-status==1.62.2
gunicorn==20.1.0
h11==0.14.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.23.4
idna==3.6
ijson==3.3.0
importlib_metadata==8.0.0
iniconfig==2.0.0
isodate==0.6.1
itsdangerous==2.1.2
Jinja2==3.1.2
jmespath==1.0.1
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
litellm==1.40.17
locust==2.24.0
loguru==0.7.2
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
msal==1.29.0
msal-extensions==1.2.0
msgpack==1.0.8
msrest==0.7.1
multidict==6.0.5
numpy==2.0.0
oauthlib==3.2.2
openai==1.35.1
orjson==3.10.5
packaging==24.1
pipenv==2023.12.1
platformdirs==4.2.0
pluggy==1.5.0
portalocker==2.10.0
pr-agent @ git+https://github.com/Codium-ai/pr-agent@96ededd12ad463c8f7794dce7f660d74fa2f8c97
proto-plus==1.24.0
protobuf==4.25.3
psutil==5.9.8
py==1.11.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.22
pydantic==2.8.0
pydantic_core==2.20.0
PyGithub==1.59.1
Pygments==2.18.0
PyJWT==2.8.0
PyNaCl==1.5.0
pytest==7.4.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-gitlab==3.15.0
python-multipart==0.0.9
PyYAML==6.0.1
pyzmq==25.1.2
referencing==0.35.1
regex==2024.5.15
requests==2.31.0
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
retry==0.9.2
rich==13.7.1
roundrobin==0.0.4
rpds-py==0.18.1
rsa==4.9
s3transfer==0.8.2
shapely==2.0.4
shellingham==1.5.4
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
starlette==0.37.2
starlette-context==0.3.6
tenacity==8.2.3
tiktoken==0.7.0
tokenizers==0.19.1
tomli==2.0.1
tqdm==4.66.4
typer==0.12.3
typing_extensions==4.12.2
ujson==5.8.0
urllib3==2.0.7
uvicorn==0.22.0
uvloop==0.19.0
virtualenv==20.25.3
watchfiles==0.22.0
websockets==12.0
Werkzeug==3.0.1
wrapt==1.16.0
yarl==1.9.4
zipp==3.19.2
zope.event==5.0
zope.interface==6.2

  • I couldn't find a way to set litellm.set_verbose=True from GitHub Actions. I would appreciate it if the developers could provide guidance on setting this log level, if possible.
  • Since the CLI is operational, there may be some issues with the integration between GitHub Actions and Azure OpenAI.
@mrT23
Collaborator

mrT23 commented Jul 1, 2024

Hi @no-yan

I am not sure how to address this.
In general, the GitHub Action works; I just validated it:
Codium-ai/codium-code-examples#42 (comment)

I thought maybe the error was something specific to LiteLLM with Azure OpenAI.
But you say the CLI is working, with the same account.

  1. Are you sure your CLI is working with the latest litellm version we use?
  2. Do you have access to other providers, just to try and see if they work?
  3. Do other commands (describe, review) work?

@no-yan
Author

no-yan commented Jul 2, 2024

@mrT23 Thank you for your response!

Working with LiteLLM v1.40.17

Are you sure your CLI is working with the latest litellm version we use?

Yes, I’ve confirmed that the CLI works with LiteLLM v1.40.17. Specifically, I have confirmed the describe, review, and improve commands.

I used the following method to install it:

$ pip install git+https://github.com/Codium-ai/pr-agent@main
$ pip list | grep -e litellm -e pr-agent
litellm                       1.40.17
pr-agent                      0.2.2

At the time of the initial report, I was using v1.31.10, which is the LiteLLM version used by PR Agent v0.2.2 (the latest release).

Fine with OpenAI API

Do you have access to other providers, just to try and see if they work?

Yes. I tested it using the OpenAI API in the same verification repository, and it was successful.

Here is my configuration:

        id: pragent
        uses: Codium-ai/pr-agent@main
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY_NOT_AZURE }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CONFIG.MODEL: "gpt-4-0613"

CLI works with other commands

Do other commands (describe, review) work?

Yes, they worked without any issues. The describe and review commands were confirmed to work with both LiteLLM v1.31.10 (dependency of PR Agent v0.2.2) and v1.40.17 (dependency of the main branch).

@no-yan
Author

no-yan commented Jul 2, 2024

Added requirements.txt to Issue description

@hariprasadiit

hariprasadiit commented Jul 2, 2024

The error seems to be related to the API_VERSION env variable.
litellm splits it because some versions include a suffix such as -preview. When we provide just the date as the version, which is the case for gpt-4o, the string is parsed into a date by default, and the split function is not available on it.
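The failure mode described above can be reproduced in isolation. This is a minimal sketch, independent of pr-agent and litellm, showing why a version string survives the split while a coerced date does not:

```python
import datetime

# A version string with a suffix splits fine:
api_version = "2023-05-15-preview"
parts = api_version.split("-")
print(parts)  # ['2023', '05', '15', 'preview']

# But if a config loader has already coerced the bare date
# "2023-05-15" into a datetime.date, the same call blows up:
api_version = datetime.date(2023, 5, 15)
try:
    api_version.split("-")
except AttributeError as err:
    print(err)  # 'datetime.date' object has no attribute 'split'
```

The second print matches the message at the top of the stack trace in the issue description.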

@mrT23
Collaborator

mrT23 commented Jul 2, 2024

@hariprasadiit, thanks for the feedback!

@no-yan try this, and share if it helps

Note also that you can specify the model via a configuration file; maybe that way litellm will be able to "digest" the data.

@JerzyBobrowski

@hariprasadiit @mrT23 I can say it helped in my case :)

@no-yan
Author

no-yan commented Jul 8, 2024

@hariprasadiit @mrT23 Thank you for your feedback.

I changed the CLI to pass API_VERSION via an environment variable, which helped me identify the fix from the error message. I also figured out how to make it work with GitHub Actions.

First, I changed the method of passing the API version to the PR Agent via an environment variable.

Changes Made

Failed Method:

// fail❌
+ os.environ['OPENAI.API_VERSION'] = '2023-05-15'
- api_version = '2023-05-15'
- get_settings().set("openai.api_version", api_version)

Successful Method:

However, changing it to the following method succeeded:

// Success🟢
- os.environ['OPENAI.API_VERSION'] = '2023-05-15'
+ os.environ['OPENAI_API_VERSION'] = '2023-05-15'

Similarly, I confirmed that the following code works in GitHub Actions. 

        uses: Codium-ai/pr-agent@main
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CONFIG.MODEL: "gpt-4-0613"
          OPENAI.API_TYPE: "azure"
-         OPENAI.API_VERSION: "2023-05-15"
+         OPENAI_API_VERSION: "2023-05-15"
          OPENAI.API_BASE: ${{ vars.API_ENDPOINT }}
          OPENAI.DEPLOYMENT_ID: ${{ vars.DEPLOYMENT_ID }}
+         AZURE_API_VERSION: "2023-05-15"

Adding the AZURE_API_VERSION environment variable was also necessary for the solution.
Additionally, applying this fix ensured it works with v0.22.

Note

I suspect there are two potential causes for this issue:

  1. Changed environment variable keys: With Update requirements.txt #989, the environment variable keys read by LiteLLM changed. Specifically, when Azure OpenAI was separated from OpenAI, the environment variables for Azure OpenAI started pointing to different values.
  2. Unspecified Docker image version: The Dockerfile does not pin the image version in releases, so it always uses the latest image regardless of the specific release being built.

The first cause appears to have been introduced by #989, which was merged one day before we encountered these errors.

@hariprasadiit

hariprasadiit commented Jul 9, 2024

@no-yan are you able to get it working for gpt-4o? I'm getting a 404 Resource not found error.

Update: I got it working. I was using the wrong values for the API version.

@mark-hingston

mark-hingston commented Jul 11, 2024

@no-yan are you able to get it working for gpt-4o? I'm getting a 404 Resource not found error.

Update: I got it working. I was using the wrong values for the API version.

@hariprasadiit are you able to share your config. I'm still getting a 404 with the following config (using gpt-4o):

on:
  pull_request:
    types: [opened, reopened, ready_for_review]
  issue_comment:

jobs:
  pr_agent_job:
    if: ${{ github.event.sender.type != 'Bot' }}
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      contents: write
    name: Run PR Agent on every pull request, respond to user comments
    steps:
      - name: PR Agent action step
        id: pragent
        uses: Codium-ai/pr-agent@main
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_ACTION_CONFIG_AUTO_REVIEW: 'true'
          GITHUB_ACTION_CONFIG_AUTO_DESCRIBE: 'true'
          GITHUB_ACTION_CONFIG_AUTO_IMPROVE: 'true'
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          OPENAI_API_TYPE: 'azure'
          OPENAI_API_BASE: '********'
          OPENAI_DEPLOYMENT_ID: '********'
          OPENAI_API_VERSION: '2024-06-01'
          AZURE_API_VERSION: '2024-06-01'

@hariprasadiit

@no-yan are you able to get it working for gpt-4o? I'm getting a 404 Resource not found error.
Update: I got it working. I was using the wrong values for the API version.

@hariprasadiit are you able to share your config. I'm still getting a 404 with the following config (using gpt-4o):

on:
  pull_request:
    types: [opened, reopened, ready_for_review]
  issue_comment:

jobs:
  pr_agent_job:
    if: ${{ github.event.sender.type != 'Bot' }}
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      contents: write
    name: Run PR Agent on every pull request, respond to user comments
    steps:
      - name: PR Agent action step
        id: pragent
        uses: Codium-ai/pr-agent@main
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_ACTION_CONFIG_AUTO_REVIEW: 'true'
          GITHUB_ACTION_CONFIG_AUTO_DESCRIBE: 'true'
          GITHUB_ACTION_CONFIG_AUTO_IMPROVE: 'true'
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          OPENAI_API_TYPE: 'azure'
          OPENAI_API_BASE: '********'
          OPENAI_DEPLOYMENT_ID: '********'
          OPENAI_API_VERSION: '2024-06-01'
          AZURE_API_VERSION: '2024-06-01'

@mark-hingston I'm using API version 2024-02-01 along with below additional ENV vars apart from OPENAI ones

AZURE_API_TYPE: 'azure'
AZURE_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
AZURE_API_VERSION: ${{ vars.AZURE_API_VERSION }}
AZURE_API_BASE: 'https://${{ vars.AZURE_DEPLOYMENT_ID }}.openai.azure.com'

@mark-hingston

mark-hingston commented Jul 11, 2024

@hariprasadiit When I pass in the env with just the AZURE variables I get the following:

12:03:21 - LiteLLM:ERROR: main.py:399 - litellm.acompletion(): Exception occured - Error code: 401 - {'error': {'message': 'Incorrect API key provided: dummy_key. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 378, in acompletion
    response = await init_response
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/openai.py", line 887, in acompletion
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/openai.py", line 872, in acompletion
    response = await openai_aclient.chat.completions.create(
  File "/usr/local/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1283, in create
    return await self._post(
  File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1805, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1503, in request
    return await self._request(
  File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1599, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: dummy_key. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

It seems to be trying to use the OpenAI API instead of the Azure OpenAI instance endpoint.

After setting the variables up I've tried using:

env:
          AZURE_API_TYPE: 'azure'
          AZURE_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          AZURE_API_VERSION: ${{ vars.AZURE_API_VERSION }}
          AZURE_API_BASE: 'https://${{ vars.AZURE_DEPLOYMENT_ID }}.openai.azure.com'

and:

env:
          OPENAI.API_TYPE: 'azure'
          AZURE_API_TYPE: 'azure'
          AZURE_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          AZURE_API_VERSION: ${{ vars.AZURE_API_VERSION }}
          AZURE_API_BASE: 'https://${{ vars.AZURE_DEPLOYMENT_ID }}.openai.azure.com'

Both seem to be trying to connect to OpenAI and not Azure OpenAI.

@hariprasadiit

@mark-hingston I included all the possible env vars and formats, and it is working; I didn't test which combination works.

          OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI.API_TYPE: 'azure'
          OPENAI_API_TYPE: 'azure'
          OPENAI_API_VERSION: ${{ vars.AZURE_API_VERSION }}
          OPENAI.API_BASE: 'https://${{ vars.AZURE_DEPLOYMENT_ID }}.openai.azure.com'
          OPENAI_API_BASE: 'https://${{ vars.AZURE_DEPLOYMENT_ID }}.openai.azure.com'
          OPENAI.DEPLOYMENT_ID: ${{ vars.AZURE_DEPLOYMENT_ID }}
          OPENAI_DEPLOYMENT_ID: ${{ vars.AZURE_DEPLOYMENT_ID }}
          CONFIG.MODEL: ${{ vars.CONFIG_MODEL }}
          CONFIG.MAX_MODEL_TOKEN: 64000
          CONFIG_MODEL: ${{ vars.CONFIG_MODEL }}
          CONFIG_MAX_MODEL_TOKEN: 64000
          AZURE_API_TYPE: 'azure'
          AZURE_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          AZURE_API_VERSION: ${{ vars.AZURE_API_VERSION }}
          AZURE_API_BASE: 'https://${{ vars.AZURE_DEPLOYMENT_ID }}.openai.azure.com'

@mark-hingston

@hariprasadiit Awesome thanks, that worked for me.

@buddhamangler-cbre

the error seems to be related API_VERSION env variable. litellm is splitting it since some versions include -preview or others. When we provide just the date as version, which is the case for gpt-4o, the date string is parsed to date by default and split function is not available on it.

This is what is happening: the settings object is automatically parsing the value as a datetime. I haven't dug into why that is the case, but that appears to be what is happening here. When the LiteLLMAIHandler sets the api_version, it is already a datetime, while litellm expects a string.

litellm.api_version = get_settings().openai.api_version

@saquino0827

the error seems to be related API_VERSION env variable. litellm is splitting it since some versions include -preview or others. When we provide just the date as version, which is the case for gpt-4o, the date string is parsed to date by default and split function is not available on it.

This is what is happening, the settings object is automatically parsing the value as a datetime. I haven't dug into why that is the case, but that appears to be what is happening here. When the LiteLLMAIHandler sets the api_version, it is already a datetime and litellm is expecting a string.

litellm.api_version = get_settings().openai.api_version

Agreed, adding the env variables AZURE_API_VERSION: '2023-03-15-preview' and OPENAI_API_VERSION: '2023-03-15' has fixed this issue for me.
