[Bug]: openai.BadRequestError: Error code: 400. Invalid value for 'content': expected a string, got null #17555

Open
upchunk opened this issue Jan 20, 2025 · 3 comments · May be fixed by #17556
Labels
bug (Something isn't working) · triage (Issue needs to be triaged/prioritized)

Comments

@upchunk
Contributor

upchunk commented Jan 20, 2025

Bug Description

OpenAI does not accept a ChatMessage whose content is null; the API only accepts a string.

This happens especially when an agent tool returns None or an empty string, for example a RetrieverTool whose retrieval comes back empty.
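
As a rough illustration (not taken from the report), the difference between a payload the API rejects and one it accepts might look like the following simplified message dicts; the role and field names are only an example of where a null content can appear:

# Hypothetical, simplified message dicts illustrating the failure mode.
# The chat completions endpoint expects 'content' to be a string here, so a
# null content (e.g. from a tool call that returned nothing) triggers a 400.
bad_message = {"role": "tool", "tool_call_id": "call_1", "content": None}  # rejected: null content
ok_message = {"role": "tool", "tool_call_id": "call_1", "content": ""}     # accepted: empty string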

Version

0.12.9 to 0.12.11 (llama-index-llms-openai 0.3.10 to 0.3.13)

Steps to Reproduce

1. Create an agent with a RetrieverTool.
2. Add postprocessors to filter out nodes (I use LLMRerank).
3. Adjust the filter so that 0 nodes remain in the end (see the sketch below).
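
A minimal reproduction sketch along these lines, assuming a prebuilt VectorStoreIndex named index and llama-index 0.12.x APIs (exact constructor arguments may differ by version):

# Hedged reproduction sketch; `index` is a prebuilt VectorStoreIndex (not shown).
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.postprocessor import LLMRerank
from llama_index.core.tools import RetrieverTool
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")

retriever_tool = RetrieverTool.from_defaults(
    retriever=index.as_retriever(similarity_top_k=5),
    # Rerank so aggressively that every node is filtered out and the tool
    # returns an empty result.
    node_postprocessors=[LLMRerank(llm=llm, top_n=0)],
)

agent = OpenAIAgent.from_tools([retriever_tool], llm=llm)

# With an empty tool output, the follow-up request to OpenAI carries a message
# whose content is null and fails with the 400 error shown below.
print(agent.chat("a question whose retrieved nodes all get filtered out"))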

Relevant Logs/Tracebacks

Traceback (most recent call last):
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 367, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\chat_engine\types.py", line 234, in awrite_response_to_history       
    async for chat in self.achat_stream:
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\llms\callbacks.py", line 88, in wrapped_gen
    async for x in f_return_val:
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\llms\openai\base.py", line 724, in gen
    async for response in await aclient.chat.completions.create(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\resources\chat\completions.py", line 1720, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\_base_client.py", line 1849, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\_base_client.py", line 1543, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\_base_client.py", line 1644, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null.", 'type': 'invalid_request_error', 'param': 'messages.[3].content', 'code': None}}
INFO:     127.0.0.1:51858 - "POST /query HTTP/1.1" 200 OK
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 409, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\errors.py", line 187, in __call__
    raise exc
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\base.py", line 185, in __call__
    with collapse_excgroups():
         ^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\_utils.py", line 82, in collapse_excgroups
    raise exc
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\base.py", line 188, in __call__
    await response(scope, wrapped_receive, send)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\base.py", line 222, in __call__
    async for chunk in self.body_iterator:
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\base.py", line 179, in body_stream
    raise app_exc
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\base.py", line 149, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\middleware\base.py", line 185, in __call__
    with collapse_excgroups():
         ^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\_utils.py", line 82, in collapse_excgroups
    raise exc
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\responses.py", line 255, in wrap
    await func()
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\starlette\responses.py", line 244, in stream_response
    async for chunk in self.body_iterator:
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\app\index_server.py", line 295, in gen
    async for token in my_query.async_response_gen():
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\chat_engine\types.py", line 310, in async_response_gen
    raise self.exception
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 367, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\chat_engine\types.py", line 234, in awrite_response_to_history       
    async for chat in self.achat_stream:
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\core\llms\callbacks.py", line 88, in wrapped_gen
    async for x in f_return_val:
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\llama_index\llms\openai\base.py", line 724, in gen
    async for response in await aclient.chat.completions.create(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\resources\chat\completions.py", line 1720, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\_base_client.py", line 1849, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\_base_client.py", line 1543, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\habi6\Desktop\PROJECT\aidoc-llm-service\.venv\Lib\site-packages\openai\_base_client.py", line 1644, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null.", 'type': 'invalid_request_error', 'param': 'messages.[3].content', 'code': None}}
@upchunk added the bug (Something isn't working) and triage (Issue needs to be triaged/prioritized) labels on Jan 20, 2025

dosubot bot commented Jan 20, 2025

To resolve the openai.BadRequestError with error code 400 due to an invalid value for 'content' when using the RetrieverTool with an empty result in version 0.12.11 of llama-index-llms-openai, ensure that the content field is not set to an empty string. You can handle this by setting content_txt to None if it is an empty string:

content_txt = None if content_txt == "" else content_txt

This line ensures that if content_txt is an empty string, it is set to None instead, which prevents the error when sending the message to OpenAI [1].


@upchunk
Contributor Author

upchunk commented Jan 20, 2025

No, OpenAI cannot receive a None (null) value; their API only accepts strings. An empty string "" is allowed.
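
For illustration only (this is not the code from the linked PR), the fix direction is to coerce a None content to an empty string when building the OpenAI payload; the helper name below is hypothetical:

# Hypothetical helper, not the actual PR code: OpenAI rejects null content,
# so replace None with "" before handing the message to the client.
def to_openai_message_dict(message) -> dict:
    return {
        "role": message.role.value,
        "content": message.content if message.content is not None else "",
    }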

@upchunk
Contributor Author

upchunk commented Jan 20, 2025

I have already fixed this in my PR; please check #17556.
