Completed with meta-llama/llama-stack-client-python#121
-- moving over discussions from #955
Problem
We want to standardize the steps users follow to build a ReACT agent that can dynamically interleave generating thoughts with taking task-specific actions.
The current agent orchestration loop requires ad hoc logic to intercept agent outputs and parse the output messages into the ReACT framework (example). This proposal covers changes to the Llama Stack client SDKs and server APIs to make building a ReACT agent more ergonomic.
Proposed Solution
We want the flexibility to configure custom prompts and custom output parsers in the agent loop execution.
The current agent loop with custom tools keeps looping and calling tools until there is no further tool call. In the ReACT framework, an action typically maps to a tool call, so we can reuse the agent loop and add parsing logic immediately after the agent's output to populate the "action" into a ToolCall and enable ReACT.
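A minimal sketch of what such a parsing step could look like. This assumes the conventional ReACT text format (`Thought:` / `Action:` / `Action Input:` lines) and a simplified stand-in `ToolCall` type; the actual Llama Stack types and parser hook would differ:

```python
import json
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToolCall:
    # Simplified stand-in for Llama Stack's ToolCall type (assumption).
    tool_name: str
    arguments: dict


def react_output_parser(output_text: str) -> Optional[ToolCall]:
    """Parse a ReACT-style completion into a ToolCall.

    Expects the conventional ReACT format:
        Thought: <reasoning>
        Action: <tool name>
        Action Input: <JSON arguments>

    Returns None when no Action is present, i.e. the agent produced a
    final answer and the loop should terminate instead of calling a tool.
    """
    action = re.search(r"Action:\s*(.+)", output_text)
    if not action:
        return None
    action_input = re.search(r"Action Input:\s*(\{.*\})", output_text, re.DOTALL)
    args = json.loads(action_input.group(1)) if action_input else {}
    return ToolCall(tool_name=action.group(1).strip(), arguments=args)
```

Running this right after each agent output lets the existing loop treat a ReACT "action" exactly like any other tool call, with a `None` result signaling the end of the loop.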
Current Agent Types Summary
- An `Agent` instance is defined by an `AgentConfig`.
- `Agent` instances can be categorized into several classes:
  - Pass tool response as next turn (built-in tools and custom tools differ)
  - `force_retrieval=?`
  - `output_parser=react_output_parser`; pass tool response as next turn
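Under this proposal, configuring a ReACT agent might look like the following sketch. The `instructions` and `output_parser` fields shown here are illustrative of the proposed configuration surface, not an existing Llama Stack API, and the prompt text and tool group names are placeholders:

```python
# Hypothetical sketch of the proposed AgentConfig knobs; field names and
# values are illustrative, not the actual Llama Stack API.
REACT_SYSTEM_PROMPT = (
    "Answer the question by interleaving Thought, Action, and Action Input "
    "lines; emit a Final Answer line when done."
)

agent_config = {
    "model": "Llama3.1-8B-Instruct",
    "instructions": REACT_SYSTEM_PROMPT,     # custom ReACT prompt (proposed)
    "output_parser": "react_output_parser",  # maps Action -> ToolCall (proposed)
    "toolgroups": ["builtin::websearch"],    # tools the Action step can invoke
}
```

The intent is that swapping the prompt and the parser is enough to turn the stock agent loop into a ReACT loop, without any out-of-band interception of messages.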
Proof of Concept Implementation