
[Question]: ValueError: loop argument must agree with lock #17528

Open
1 task done
chiyingyunhua opened this issue Jan 16, 2025 · 3 comments
Labels
question Further information is requested

Comments

@chiyingyunhua

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

Python: 3.10.0
llama-index: 0.12.11

The code is:

```python
import asyncio
import os
from typing import Any, List

from llama_index.core.agent.react import ReActChatFormatter, ReActOutputParser
from llama_index.core.agent.react.types import (
    ActionReasoningStep,
    ObservationReasoningStep,
)
from llama_index.core.llms.llm import LLM
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools.types import BaseTool
from llama_index.core.workflow import (
    Context,
    Workflow,
    StartEvent,
    StopEvent,
    step,
)
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import ToolSelection, ToolOutput, FunctionTool
from llama_index.core.workflow import Event

import nest_asyncio

nest_asyncio.apply()


class PrepEvent(Event):
    pass


class InputEvent(Event):
    input: list[ChatMessage]


class ToolCallEvent(Event):
    tool_calls: list[ToolSelection]


class FunctionOutputEvent(Event):
    output: ToolOutput


class ReActAgent(Workflow):
    def __init__(
        self,
        *args: Any,
        llm: LLM | None = None,
        tools: list[BaseTool] | None = None,
        extra_context: str | None = None,
        **kwargs: Any,
    ) -> None:
        super().__init__(*args, **kwargs)
        self.tools = tools or []

        self.llm = llm or OpenAI()

        self.memory = ChatMemoryBuffer.from_defaults(llm=llm)
        self.formatter = ReActChatFormatter(context=extra_context or "")
        self.output_parser = ReActOutputParser()
        self.sources = []

    @step
    async def new_user_msg(self, ctx: Context, ev: StartEvent) -> PrepEvent:
        # clear sources
        self.sources = []

        # get user input
        user_input = ev.input
        user_msg = ChatMessage(role="user", content=user_input)
        self.memory.put(user_msg)

        # clear current reasoning
        await ctx.set("current_reasoning", [])

        return PrepEvent()

    @step
    async def prepare_chat_history(
        self, ctx: Context, ev: PrepEvent
    ) -> InputEvent:
        # get chat history
        chat_history = self.memory.get()
        current_reasoning = await ctx.get("current_reasoning", default=[])
        llm_input = self.formatter.format(
            self.tools, chat_history, current_reasoning=current_reasoning
        )
        return InputEvent(input=llm_input)

    @step
    async def handle_llm_input(
        self, ctx: Context, ev: InputEvent
    ) -> ToolCallEvent | StopEvent:
        chat_history = ev.input

        response = await self.llm.achat(chat_history)

        try:
            reasoning_step = self.output_parser.parse(response.message.content)
            (await ctx.get("current_reasoning", default=[])).append(
                reasoning_step
            )
            if reasoning_step.is_done:
                self.memory.put(
                    ChatMessage(
                        role="assistant", content=reasoning_step.response
                    )
                )
                return StopEvent(
                    result={
                        "response": reasoning_step.response,
                        "sources": [*self.sources],
                        "reasoning": await ctx.get(
                            "current_reasoning", default=[]
                        ),
                    }
                )
            elif isinstance(reasoning_step, ActionReasoningStep):
                tool_name = reasoning_step.action
                tool_args = reasoning_step.action_input
                return ToolCallEvent(
                    tool_calls=[
                        ToolSelection(
                            tool_id="fake",
                            tool_name=tool_name,
                            tool_kwargs=tool_args,
                        )
                    ]
                )
        except Exception as e:
            (await ctx.get("current_reasoning", default=[])).append(
                ObservationReasoningStep(
                    observation=f"There was an error in parsing my reasoning: {e}"
                )
            )

        # if no tool calls or final response, iterate again
        return PrepEvent()

    @step
    async def handle_tool_calls(
        self, ctx: Context, ev: ToolCallEvent
    ) -> PrepEvent:
        tool_calls = ev.tool_calls
        tools_by_name = {tool.metadata.get_name(): tool for tool in self.tools}

        # call tools -- safely!
        for tool_call in tool_calls:
            tool = tools_by_name.get(tool_call.tool_name)
            if not tool:
                (await ctx.get("current_reasoning", default=[])).append(
                    ObservationReasoningStep(
                        observation=f"Tool {tool_call.tool_name} does not exist"
                    )
                )
                continue

            try:
                tool_output = tool(**tool_call.tool_kwargs)
                self.sources.append(tool_output)
                (await ctx.get("current_reasoning", default=[])).append(
                    ObservationReasoningStep(observation=tool_output.content)
                )
            except Exception as e:
                (await ctx.get("current_reasoning", default=[])).append(
                    ObservationReasoningStep(
                        observation=f"Error calling tool {tool.metadata.get_name()}: {e}"
                    )
                )

        # prep the next iteration
        return PrepEvent()


async def main():
    # set the OpenAI API key
    os.environ["OPENAI_API_KEY"] = "your_openai_api_key"

    def add(x: int, y: int) -> int:
        """Useful function to add two numbers."""
        return x + y

    def multiply(x: int, y: int) -> int:
        """Useful function to multiply two numbers."""
        return x * y

    tools = [
        FunctionTool.from_defaults(add),
        FunctionTool.from_defaults(multiply),
    ]
    llm = OpenAI(api_base='http://xxx:8090/v1')

    agent = ReActAgent(
        llm=llm, tools=tools, timeout=120, verbose=True
    )

    ret = await agent.run(input="Hello!")


if __name__ == "__main__":
    asyncio.run(main())
```

The error is:

```
Traceback (most recent call last):
  File "C:\workfile\code\python\test_llama_index\llama_workflow\react_agent.py", line 204, in <module>
    asyncio.run(main())
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\site-packages\nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\site-packages\nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\asyncio\futures.py", line 201, in result
    raise self._exception
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\asyncio\tasks.py", line 232, in __step
    result = coro.send(None)
  File "C:\workfile\code\python\test_llama_index\llama_workflow\react_agent.py", line 200, in main
    ret = await agent.run(input="Hello!")
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 321, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\site-packages\llama_index\core\workflow\workflow.py", line 393, in run
    ctx, run_id = self._start(
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\site-packages\llama_index\core\workflow\workflow.py", line 184, in _start
    ctx = Context(self, stepwise=stepwise)
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\site-packages\llama_index\core\workflow\context.py", line 44, in __init__
    self._step_condition: asyncio.Condition = asyncio.Condition(
  File "C:\Users\fuhuaz\AppData\Local\miniconda3\envs\py310\lib\asyncio\locks.py", line 234, in __init__
    raise ValueError("loop argument must agree with lock")
ValueError: loop argument must agree with lock
```

@chiyingyunhua chiyingyunhua added the question Further information is requested label Jan 16, 2025

dosubot bot commented Jan 16, 2025

Hey @chiyingyunhua! I'm here to help you out with any bugs, answer your questions, and guide you on how to become a contributor. Let's tackle this together!

The error you're encountering, ValueError: loop argument must agree with lock, typically occurs when there's a mismatch between the event loop being used and the one expected by the asyncio.Condition or other asyncio synchronization primitives.

Here are a few steps you can take to resolve this issue:

  1. Ensure Compatibility with nest_asyncio: The nest_asyncio library is used to allow nested event loops, which is useful in environments like Jupyter notebooks. However, it can sometimes cause issues with event loop management. Make sure that nest_asyncio.apply() is called before any event loop operations are performed.

  2. Check Event Loop Usage: Ensure that the event loop being used is consistent throughout your application. You can explicitly set the event loop using asyncio.set_event_loop() if needed.

  3. Python Version Compatibility: Since you're using Python 3.10, ensure that all libraries, including nest_asyncio and any other dependencies, are compatible with this version.

  4. Avoid Explicit Loop Arguments: If you are passing the loop argument explicitly to any asyncio functions or classes, try removing it. The default behavior should automatically use the current event loop.

  5. Update Libraries: Ensure that all your libraries, including llama_index and nest_asyncio, are up to date. There might be bug fixes or improvements that address this issue.

By following these steps, you should be able to resolve the ValueError related to the event loop. If the problem persists, consider checking for any specific issues related to the llama_index version you are using.
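As a minimal sketch of points 2 and 4 above: the safest pattern is to create asyncio synchronization primitives only while the loop that will use them is running, so the primitive and its lock are bound to the same loop. The `Worker` class below is hypothetical, purely for illustration, and is not LlamaIndex code.

```python
import asyncio


class Worker:
    def __init__(self) -> None:
        # Defer creating the Condition until a loop is actually running,
        # so the primitive is bound to the loop that will use it.
        self._cond: asyncio.Condition | None = None

    async def run(self) -> str:
        if self._cond is None:
            # Created inside the running loop; no explicit loop argument.
            self._cond = asyncio.Condition()
        async with self._cond:
            # notify_all() requires the lock to be held, which it is here.
            self._cond.notify_all()
        return "ok"


print(asyncio.run(Worker().run()))
```

Creating such primitives at import time (or passing a `loop=` argument, which was removed in Python 3.10) is what typically produces loop-mismatch errors like this one.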


@logan-markewich
Collaborator

I've never seen this before, to be honest. Maybe remove the nest_asyncio call; that's only needed for notebooks. Running as a pure script, it'll be fine.
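To sketch that suggestion (a trimmed-down, hypothetical script, with the agent setup elided): in a plain script, `asyncio.run()` creates and manages its own event loop, so `nest_asyncio.apply()` can simply be dropped.

```python
import asyncio


async def main() -> str:
    # Build the agent and `await agent.run(...)` here; no nest_asyncio
    # call is needed because this is not running inside another loop.
    return "done"


if __name__ == "__main__":
    # asyncio.run() creates, runs, and closes its own event loop.
    print(asyncio.run(main()))
```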

@chiyingyunhua
Author

> I've never seen this before tbh. Maybe remove the nest asyncio call, that's only needed for notebooks. Running as a pure script it'll be fine

No, even after removing the nest_asyncio call, the error still exists.
