Made the message role of ReAct observation configurable #17521
Conversation
This assumes that the LLM you are using supports a tool role though? If your LLM supports a tool role already, you should be using FunctionCallingAgent instead of ReAct anyways? 🤔
I'm not sure that a model supporting the tool role also supports function/tool calling. Things are a bit complicated in my situation. I use Llama 3.1 and 3.3 hosted by vLLM in my company. They both support the tool role, but I can't make vLLM produce correct tool-calling responses. The problem might be related to the tool parser plugin, and I currently have no idea how to fix it. Anyway, I think it's better to set the message role to the appropriate one. If legacy models don't support the tool role, should I make it configurable?
@jamesljlster the vLLM issue sounds like the real issue. Assuming vLLM is launched in OpenAI-compatible server mode, it should be straightforward.
I've launched vLLM in OpenAI-compatible server mode with Docker. The chat function works very well, but tool calling just doesn't work. It always produces the tool calls in

I believe the ReAct agent is designed to work with any LLM that has reasoning ability. When a new model arrives, the inference server projects may not provide full functionality for the model immediately (for example, chat works but tool calling is not yet ready). At that moment, the ReAct agent may be the best choice. I may add a default argument
@jamesljlster yea sure, I'd rather it be configurable.
@logan-markewich I've made the message role of observations configurable. I would like the observation role parameter to be documented in https://docs.llamaindex.ai/en/stable/api_reference/agent/react/#llama_index.core.agent.react.ReActChatFormatter, so I use a Pydantic Field to annotate it. However, I met a problem building the documentation. The error is not related to my changes. How can I resolve it?
Never mind! I'll use the latest release to develop the documentation. The problem does not appear in the latest release code base.
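To illustrate the approach discussed above, here is a minimal, self-contained sketch of how a message role might be exposed as a documented, configurable Pydantic field. The class name `ReActChatFormatterSketch` and the stand-in `MessageRole` enum are illustrative stand-ins, not the actual llama-index implementation; the real formatter lives in `llama_index.core.agent.react`.

```python
from enum import Enum

from pydantic import BaseModel, Field


class MessageRole(str, Enum):
    """Stand-in for llama-index's MessageRole enum (illustration only)."""

    USER = "user"
    TOOL = "tool"


class ReActChatFormatterSketch(BaseModel):
    """Sketch of a formatter with a configurable observation role."""

    # Annotating with Field(description=...) lets the generated API
    # reference surface the parameter's purpose and default.
    observation_role: MessageRole = Field(
        default=MessageRole.USER,
        description="Message role used for tool-call observations.",
    )


# Callers can opt into the tool role when their model supports it.
fmt = ReActChatFormatterSketch(observation_role=MessageRole.TOOL)
```

The key point is that the default preserves the legacy behavior (`user`), while models that support a tool role can opt in explicitly.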
@logan-markewich I think this pull request is finished. Would you review it for me? |
@logan-markewich May I request your feedback on my latest changes? |
Description
This pull request makes the message role of ReAct observations configurable.
Original title: Changed the message role of ReAct observation to tool
The purpose of this pull request was changed after discussing it with @logan-markewich; please refer to the conversation below.
I am developing a chatbot with the ReAct agent. Sometimes the chatbot gives strange responses to the user. After observing with a tracer, I believe the problem is related to the inappropriate message role of the tool message (observation), making the ReAct agent chat with itself:
After setting the observation message role to tool, stability improved.

Related pull request: #17273 (the above test was made after #17273 was merged)
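The mechanism described above can be sketched as follows. This is a simplified illustration of how a ReAct loop might build its observation message, with `format_observation` and the plain `ChatMessage` dataclass being hypothetical stand-ins rather than llama-index's actual internals.

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    role: str
    content: str


def format_observation(tool_output: str, observation_role: str = "user") -> ChatMessage:
    # Historically, observations were emitted with the "user" role, which
    # can make the agent appear to be chatting with itself. When the model
    # supports a tool role, passing observation_role="tool" attributes the
    # observation to the tool instead.
    return ChatMessage(role=observation_role, content=f"Observation: {tool_output}")


# With the configurable role, the observation is attributed to the tool.
msg = format_observation("25 results found", observation_role="tool")
```

The default keeps the legacy `user` role, so existing behavior is unchanged unless the caller opts in.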
Test model: https://ollama.com/library/qwen2.5
Tracer: https://docs.arize.com/phoenix/tracing/integrations-tracing/llamaindex
Test code:
New Package?

Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?

Version Bump?

Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull-request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
I ran make format; make lint to appease the lint gods