Fix AssistantAgent Tool Call Behavior #4602
Conversation
I think this warrants unit tests.
We can split disabling parallel tool calling for handoff into a separate PR if the requirements for that are still under-specified.
I added unit tests. I think this PR is now just for fixing the repeated tool calls; I will leave the handoffs for another PR. I think it is crucial to merge this first, before the other stuff.
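The fix being discussed limits the agent to a single tool-call iteration. A minimal sketch of that behavior (not the actual AutoGen implementation; the model and tool here are hypothetical stand-ins) and of the kind of unit test that pins it down:

```python
# Illustrative sketch of "at most one tool-call iteration": the agent makes
# one model call; if the model returns tool calls, it executes them once and
# returns a summary of the results instead of looping back to the model.
from dataclasses import dataclass
from typing import Callable, Dict, List, Union

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class Text:
    content: str

def run_agent(model: Callable[[List[str]], Union[Text, List[ToolCall]]],
              tools: Dict[str, Callable[..., str]],
              task: str) -> str:
    """One LLM call; tool calls are executed exactly once, never re-queried."""
    reply = model([task])
    if isinstance(reply, Text):
        return reply.content
    # Exactly one tool-call iteration: execute and summarize, no second model call.
    results = [tools[c.name](**c.args) for c in reply]
    return "\n".join(results)

# Stub model that always asks for a tool call; the counter lets a test
# assert the model was only consulted once.
calls = {"n": 0}
def stub_model(history):
    calls["n"] += 1
    return [ToolCall(name="add", args={"a": 1, "b": 2})]

out = run_agent(stub_model, {"add": lambda a, b: str(a + b)}, "add 1 and 2")
```

A unit test would assert both that the tool result comes back and that the model was invoked exactly once, which is what distinguishes this from the old repeat-until-text loop.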
…/_assistant_agent.py Co-authored-by: Eric Zhu <[email protected]>
PR on hold. Ideally AssistantAgent only makes one LLM call with tools (if any). The return type of the message should depend on the tool call and allow other agents to easily convert that message to a string. Other agents and teams should be updated after AssistantAgent is updated.
I have updated the code so that we always limit tool calls to one iteration. I have updated the examples too. @husseinmozannar @victordibia please verify the behavior works in other scenarios.
Behavior looks fine in a few sample teams I tried (multiple AssistantAgents in a round robin with varying access to tools), and I tried the video surfer.
* 1 tool call iteration default
* handoff first
* return_only_response
* add and remove tools
* print out tool calls
* pass checks
* fix issues
* add test
* add unit tests
* remove extra print
* Update python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py (Co-authored-by: Eric Zhu <[email protected]>)
* documentation and none max_tools_calls
* Always limit # tool call to 1
* Update notebooks for the changing behavior of assistant agent.
* Merge branch 'main' into assistant_Agent_tools
* add reflect_on_tool_use parameter to format the tool call result
* wip
* wip
* fix pyright
* Add unit tests
* Merge remote-tracking branch 'origin/main' into assistant_Agent_tools
* Update with custom formatting of tool call summary
* format
* Merge branch 'main' into assistant_Agent_tools
Hi @husseinmozannar, I'm trying to understand the full intent of this change: all tool results are now returned in TextMessage (vs. only in ToolCallResultMessage), which makes it difficult to differentiate tool call results from LLM responses.
Regarding "by default": was this intended to be optional behavior? It was useful to be able to easily differentiate tool call responses to the client from the client's response to the caller. Thanks.
Hey! There was a problem with the previous version of AssistantAgent. When GPT-4 decides to call a tool, it returns only the tool call and no other response. The previous version of AssistantAgent called the LLM as many times as needed so that the final LLM response was not a tool call, i.e. a string. Referencing the API doc.
Moreover, the inner messages of the final TextMessage will contain the tool calls and results. Does this make more sense now?
Thanks, I understand the intent. The challenge for me is that both tool call results and normal responses are now returned as :class:`~autogen_agentchat.messages.TextMessage` with no differentiating characteristics, making it difficult to tell a TextMessage from the agent apart from the tool call response (or summary) from the tool without tracking additional state (e.g. did I just receive a ToolCallResultMessage before this message? If so, the next TextMessage is the tool response, not from the LLM). I think it would be better not to mix the types. There is already ToolCallResultMessage, which was previously the unambiguous way to identify tool results; now tool results come in two forms (ToolCallResultMessage and another copy, in a different format, as TextMessage), and it requires extra logic to determine whether a TextMessage contains the tool results or comes from the LLM. Can we create a clear message type for the new messages (e.g. ToolCallResultSummaryMessage) or some other disambiguating way to determine the type of message?
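The disambiguation being asked for can be sketched with plain dataclasses: a dedicated subtype for tool-call summaries lets clients branch on type instead of tracking conversational state. The class names below mirror the discussion but are simplified stand-ins, not the actual autogen_agentchat types.

```python
# Sketch: a distinct message type for tool-call summaries so receivers can
# use isinstance() instead of remembering whether a ToolCallResultMessage
# immediately preceded the current message.
from dataclasses import dataclass

@dataclass
class TextMessage:
    source: str
    content: str

@dataclass
class ToolCallSummaryMessage(TextMessage):
    """Same payload as TextMessage, but its type marks it as a tool result."""

def describe(msg: TextMessage) -> str:
    # Check the subclass first; it would also match the base type.
    if isinstance(msg, ToolCallSummaryMessage):
        return f"tool result from {msg.source}"
    return f"model reply from {msg.source}"
```

Because the summary type subclasses TextMessage, existing code that only handles TextMessage keeps working, while new code gains an unambiguous type check.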
@husseinmozannar I think it's a valid point and we should create a new type of chat message for this. It can be important for orchestration or termination conditions. When inner messages are not emitted, we need to rely on typing to figure out what happened.
@husseinmozannar, I think I'm finding more side effects of this change. Using a simple RoundRobin with two agents:
With the tool results now being returned as a TextMessage, I'm seeing the speaker move from writer to editor immediately after receiving the TextMessage tool response; so rather than the writer receiving the tool reply and using it to write the paper, the editor takes over prematurely. Before: task -> writer runs tool -> writer writes paper -> editor provides feedback -> writer <-> editor ... -> editor approves. Now:
We observed this effect as well. One way to fix this is by setting the `reflect_on_tool_use` parameter, so the agent makes one more model inference on the tool result before responding. Alternatively, set a custom tool call summary format.
Resolves #4514. Adds `reflect_on_tool_use` to optionally reformat the tool call result using a model inference.
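A minimal sketch of the assumed `reflect_on_tool_use` behavior (the function and prompt below are illustrative stand-ins, not the AutoGen implementation): when enabled, the tool result is fed back to the model for one extra inference that rephrases it; when disabled, the raw result is returned as the summary.

```python
# Sketch of the reflect_on_tool_use option: one optional extra model
# inference to reformat the raw tool output before replying.
from typing import Callable

def respond(tool_result: str,
            model: Callable[[str], str],
            reflect_on_tool_use: bool) -> str:
    if reflect_on_tool_use:
        # One additional model call that rephrases the tool output.
        return model(f"Summarize this tool result for the user: {tool_result}")
    # Otherwise the raw tool result is returned directly as the summary.
    return tool_result

# Stub model that just wraps its prompt, so the extra inference is visible.
echo_model = lambda prompt: f"summary({prompt})"
```

With `reflect_on_tool_use=False` the caller sees the bare tool output; with `True` the output passes through the model once, which is the behavior this PR makes configurable.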