Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_iY0nNb7BlKHtLUT3BRmK28r8", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}} #23733
Replies: 6 comments 8 replies
-
To resolve the error indicating that an assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id', you need to ensure that each tool call in the assistant message is followed by a corresponding tool message that responds to that 'tool_call_id'. Here's an example of how you can structure your messages to include the necessary tool messages:

from langchain_core.messages import AIMessage, ToolMessage
# Example assistant message with tool calls (LangChain ToolCall format)
assistant_message = AIMessage(
    content='',
    tool_calls=[
        {
            'name': 'YourToolFunctionName',
            'args': {'arg1': 'value1', 'arg2': 'value2'},
            'id': 'call_iY0nNb7BlKHtLUT3BRmK28r8',
        }
    ]
)

# Corresponding tool message responding to the tool call
tool_message = ToolMessage(
    content='{"result":"Your tool result"}',
    name='YourToolFunctionName',
    tool_call_id='call_iY0nNb7BlKHtLUT3BRmK28r8',
)

# Ensure the tool message directly follows the assistant message
messages = [assistant_message, tool_message]

Additionally, you can define tools using the @tool decorator:

from langchain_core.tools import tool
@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

tools = [add, multiply]

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
llm_with_tools = llm.bind_tools(tools)

When invoking the model, ensure that tool calls are properly handled and the tool results are passed back to the model:

from langchain_core.messages import HumanMessage, ToolMessage
query = "What is 3 * 12? Also, what is 11 + 49?"
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)
for tool_call in ai_msg.tool_calls:
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
tool_output = selected_tool.invoke(tool_call["args"])
messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
messages This ensures that the tool messages are correctly appended and the model can process them accordingly [1][2][3][4][5]. |
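To close the loop, the assembled history can then be passed back to the model for a final answer (a minimal continuation of the snippet above; final_response is my own naming):

# Every tool_call_id now has a matching ToolMessage, so this call succeeds
final_response = llm_with_tools.invoke(messages)
print(final_response.content)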
-
I had this problem because my call_model() function was handling tool call messages incorrectly. My prompt template was turning messages of type tool into type human.
Obviously this is not correct. I should have been leveraging ("placeholder", "{messages}") to send all of my messages, since the placeholder preserves the type of each message. I should have just used the placeholder as intended:
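For reference, a minimal sketch of that prompt template (the system prompt text is illustrative):

from langchain_core.prompts import ChatPromptTemplate

# ("placeholder", "{messages}") expands the message list as-is,
# preserving tool/ai/human message types instead of coercing them
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),  # illustrative
    ("placeholder", "{messages}"),
])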
Just thought I would post this in case it gives anyone guidance on solving the same error.
-
I changed the thread and it worked.
-
I observed an interesting phenomenon: I switched the underlying model from 4o-mini to 4o, and the issue disappeared. This is quite fascinating.
-
Last year, while working with Langgraph, I encountered an issue related to how tool calls and their corresponding outputs were handled. Here's a breakdown of the problem and its solution:

Problem Description

When a tool call is made, the OpenAI-compatible API expects the next message to have the tool role:

{
    "role": "tool",
    "content": "tool output"
}

However, in cases where multiple tool calls occur, the API expects one such message per tool call:

{
    "role": "tool",
    "content": "tool output"
}
{
    "role": "tool",
    "content": "tool output"
}

The differentiation is managed through tool_call_id.

Issue Encountered

In my case, the model emitted several tool calls in a single assistant message, but my code sent back fewer tool outputs than there were tool calls.
This mismatch between the number of tool calls and tool outputs raised an error.

Solution

To resolve this, I updated my code to ensure that multiple tool calls are handled sequentially rather than in parallel. The fix was implemented as follows:

llm.bind_tools(tools, parallel_tool_calls=False)

By disabling parallel tool calls, I ensured that each assistant message contains at most one tool call, so every tool call is answered by exactly one tool output before the next model invocation.
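As an illustration of that invariant (this helper is my own sketch, not from the thread), a sanity check before re-invoking the model could look like:

from langchain_core.messages import AIMessage, ToolMessage

def check_tool_responses(messages):
    """Raise if any tool_call_id lacks a matching ToolMessage."""
    answered = {m.tool_call_id for m in messages if isinstance(m, ToolMessage)}
    for m in messages:
        if isinstance(m, AIMessage):
            for call in m.tool_calls:
                if call["id"] not in answered:
                    raise ValueError(f"Missing tool response for {call['id']}")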
Key Takeaway

For any scenario involving tool calls, make sure every tool_call_id in an assistant message receives a corresponding tool message before the conversation is sent back to the model.

By implementing the above fix, the issue was resolved, and I haven't encountered it again since.
-
I spent months with that error, doing workarounds. I used Runnables with message history, backed by Redis, to store my messages. But it turned out the problem was simple: we were using that RunnableMemory wrapper in the agent nodes, but not in the tool nodes. As a consequence, the tool result was never uploaded to Redis, so whenever the RunnableMemory did an invoke, it retrieved the messages from the store and found the AI message requesting the tool call, but the tool result wasn't there. So we did the following:
We created a wrapper for the tool node that, once the tool call is resolved, appends the result to the message history. That solved our problem at the root :).
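For anyone hitting the same thing, a minimal sketch of such a wrapper (PersistingToolNode and the history argument are my own naming, assuming a LangGraph ToolNode and a chat message history store such as RedisChatMessageHistory):

from langgraph.prebuilt import ToolNode

class PersistingToolNode:
    """Runs the tools, then appends each ToolMessage to an external history."""

    def __init__(self, tools, history):
        self._node = ToolNode(tools)
        self._history = history  # e.g. a RedisChatMessageHistory instance

    def __call__(self, state):
        result = self._node.invoke(state)
        # Persist every tool result so later invokes see the complete history
        for message in result["messages"]:
            self._history.add_message(message)
        return result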
-
Checked other resources
Commit to Help
Example Code
Description
Does anybody know what error this is?
Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_iY0nNb7BlKHtLUT3BRmK28r8", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}}
System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-qdrant==0.1.0
langchain-text-splitters==0.2.1