
Solution to "InvalidStateError" and Handling of Closed Channels in OpenAI's AssistantLLMStream #1113

Open
JeisonNovoa opened this issue Nov 20, 2024 · 2 comments
Labels
question Further information is requested

Comments

@JeisonNovoa

Hello everyone,

Recently, we encountered an error while using the OpenAI Assistants API in our project. Upon investigation, we noticed that several discussions in the community highlighted similar issues experienced by other developers. I’d like to share our experience and the solution we implemented, hoping it will be helpful to those facing the same problem.

Error Description:

When interacting with the assistant, the program intermittently threw the following error:

asyncio.exceptions.InvalidStateError: invalid state

This error occurred in the _main_task method of the AssistantLLMStream class within the "assistant_llm.py" file, specifically when trying to set the result of a Future that had already been completed:

self._done_future.set_result(None)

Additionally, exceptions related to attempts to send data through a closed channel (ChanClosed) were also observed, causing data flow interruptions and the assistant to stop responding.

Cause of the Error:

  1. State of the Future: The InvalidStateError occurs when attempting to modify the state of a Future that has already been completed or canceled. In our case, the Future self._done_future could already be finished when set_result(None) was called.

  2. Improper Exception Handling: Exceptions in asynchronous methods were not being properly managed, leading to attempts to send data through a closed channel and additional exceptions that impacted the program's stability.
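The first cause can be reproduced in isolation with nothing but the standard library. The sketch below (names are ours, not from livekit-agents) shows that calling set_result() a second time on an already-completed Future raises asyncio.InvalidStateError:

```python
import asyncio

async def main() -> str:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    fut.set_result(None)  # the Future is now done
    try:
        # A second set_result() on a completed Future raises
        # asyncio.InvalidStateError, mirroring the bug described above.
        fut.set_result(None)
    except asyncio.InvalidStateError:
        return "InvalidStateError"
    return "no error"

print(asyncio.run(main()))  # prints "InvalidStateError"
```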

Solution Implemented:

  1. Check the State of the Future Before Modifying It:

We added a verification step to ensure that the Future was not already completed before calling set_result(None):

finally:
    if not self._done_future.done():
        self._done_future.set_result(None)

This prevents attempts to modify a Future in an invalid state, avoiding the InvalidStateError.
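The same guard can be factored into a small helper. This is our own sketch (set_result_safe is a name we made up, not part of livekit-agents or asyncio); note that done() also returns True for cancelled Futures, so the guard covers both completion and cancellation:

```python
import asyncio

def set_result_safe(fut: "asyncio.Future[None]") -> bool:
    """Resolve the Future only if it is still pending.

    Returns True if the result was set, False if the Future was
    already done (completed or cancelled).
    """
    if fut.done():
        return False
    fut.set_result(None)
    return True

async def main() -> None:
    fut = asyncio.get_running_loop().create_future()
    assert set_result_safe(fut) is True   # first call resolves it
    assert set_result_safe(fut) is False  # second call is a safe no-op

asyncio.run(main())
```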

  2. Proper Exception Handling and Channel Closure:

In the _main_task method, we added an except block to capture any exceptions during execution, set self._exception, and close the event channel self._event_ch. This ensures that the stream iterator is aware of the exception and handles the flow appropriately.

except Exception as e:
    logger.error(f"Unexpected error in _main_task: {e}")
    self._exception = e
    self._event_ch.close()

Additionally, in the on_text_delta method of the EventHandler class, we handled exceptions while sending data through the channel, ensuring the program flow is not interrupted if the channel is closed:

async def on_text_delta(self, delta: TextDelta, snapshot: Text):
    try:
        assert self.current_run is not None
        self._event_ch.send_nowait(
            llm.ChatChunk(
                request_id=self.current_run.id,
                choices=[
                    llm.Choice(
                        delta=llm.ChoiceDelta(role="assistant", content=delta.value)
                    )
                ],
            )
        )
    except utils.aio.ChanClosed:
        # The channel is closed, no action needed
        pass
    except Exception as e:
        logger.error(f"Exception in on_text_delta: {e}")
        self._llm_stream._exception = e
        self._llm_stream._event_ch.close()

Results:

After applying these changes, the InvalidStateError stopped occurring, and the assistant now functions correctly, responding consistently to user interactions. Proper exception handling and validation of asynchronous object states improved the stability and robustness of the application.

@JeisonNovoa JeisonNovoa added the question Further information is requested label Nov 20, 2024

ftsef commented Nov 21, 2024

Thanks for your effort. It works without any exceptions or misbehavior so far.


fhrzn commented Dec 1, 2024

By any chance, have you ever faced an issue where the assistant executes a tool but the generated response doesn't reach LiveKit?
