Decouple model_context from AssistantAgent #4681
Conversation
@microsoft-github-policy-service agree
Copilot reviewed 1 out of 1 changed files in this pull request and generated no suggestions.
Comments skipped due to low confidence (1):

- `python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py:317`, `raise ValueError("No tools are available.")`: the error message "No tools are available." could be more descriptive. Consider changing it to "No tools or handoff tools are available for execution."
@victordibia please take a look at this and see the implications for #4438. @akurniawan thanks for the PR; since this is very closely related to the aforementioned PR, please change this to DRAFT state and we will need to coordinate the work.
Thanks @akurniawan, @ekzhu, @jackgerrits. Current PR: from my understanding, this PR does the following:
@akurniawan, can you confirm the above is correct? Also, if possible, can you update the PR description with an example usage, e.g., as below?
PR #4438 (Memory in AgentChat) focuses on
My feeling is that there are differences - while
Thoughts welcome.
My feeling is that there can be two things:
I think that the current model context interface is not quite complete, because a model context should also include tools or tool schemas, something a memory module should also provide augmentation for. In this regard, the model context object could be transient: the agent only creates one on demand when model inference is performed.
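As a sketch of that idea (all names here are hypothetical, not the actual autogen-core API): a transient context is built per inference call, carries tool schemas alongside messages, and memory modules can augment both before the model sees them.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Protocol


@dataclass
class InferenceContext:
    """Transient snapshot of everything the model sees at inference time."""
    messages: List[Any] = field(default_factory=list)
    tool_schemas: List[Dict[str, Any]] = field(default_factory=list)


class ContextAugmenter(Protocol):
    """E.g. a memory module that can add both messages and tool schemas."""
    def augment(self, ctx: InferenceContext) -> InferenceContext: ...


def build_inference_context(
    history: List[Any],
    tools: List[Dict[str, Any]],
    augmenters: List["ContextAugmenter"],
) -> InferenceContext:
    # Created on demand, just before model inference, then discarded.
    ctx = InferenceContext(messages=list(history), tool_schemas=list(tools))
    for a in augmenters:
        ctx = a.augment(ctx)
    return ctx
```

Under this framing, the agent never holds a long-lived context object; memory is just another augmenter in the pipeline.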
@victordibia yes, that's correct, and I have updated the PR description as suggested.
@akurniawan thanks.
@victordibia and I have discussed and we think this PR can be marked as Ready for Review.
@akurniawan I think this PR is almost at the finish line. I believe we just need to resolve the few remaining issues. Would you like us to take over and get it done?
@akurniawan I am going to push some changes.
Hey, sorry, I was not available yesterday. I can complete the changes you suggested today.
It seems you have already changed it, thanks for the help!
You are welcome 😃 Join our office hours and Discord to discuss more.
Looks good, thanks.
to add memory, we can do something like:

```python
...
# Inner messages.
inner_messages: List[AgentEvent | ChatMessage] = []
# Transform the model context for each "memory source".
if memory and isinstance(memory, list):
    for m in memory:
        self._model_context = await m.transform(self._model_context, transform=True)
```
@ekzhu , what do you think?
We can either use a list or the ChatCompletionContext without anything breaking, right?
Yes, that's what I was thinking. The model context is essentially the input, or the "working set", that the model consumes.
Right now no one is using this new constructor argument, so it will not be a breaking change.
Perhaps the memory can directly mutate the context. Right now the code snippet just shows a direct update. We want to be prepared for a scenario in which a remote model context is hosted on a service, e.g., the OpenAI Assistant API's thread.
Makes sense ... we can discuss this on the memory PR once this is merged. |
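The "remote model context" concern above can be illustrated with a self-contained toy: a context whose state lives behind a service API rather than in a local list, so callers (including memory) must mutate it through the interface instead of replacing a local message list. All names below are hypothetical stand-ins, not the real autogen or OpenAI APIs.

```python
import asyncio
from typing import Dict, List


class FakeThreadService:
    """Stand-in for a remote service such as an assistant-thread API."""
    def __init__(self) -> None:
        self._threads: Dict[str, List[str]] = {}

    async def append(self, thread_id: str, message: str) -> None:
        self._threads.setdefault(thread_id, []).append(message)

    async def list_messages(self, thread_id: str) -> List[str]:
        return list(self._threads.get(thread_id, []))


class RemoteChatCompletionContext:
    """Context whose storage is remote: mutation only via the interface."""
    def __init__(self, service: FakeThreadService, thread_id: str) -> None:
        self._service = service
        self._thread_id = thread_id

    async def add_message(self, message: str) -> None:
        await self._service.append(self._thread_id, message)

    async def get_messages(self) -> List[str]:
        return await self._service.list_messages(self._thread_id)


async def main() -> List[str]:
    svc = FakeThreadService()
    ctx = RemoteChatCompletionContext(svc, thread_id="t1")
    await ctx.add_message("user: hi")
    await ctx.add_message("[memory] note")  # memory mutates via the API too
    return await ctx.get_messages()


print(asyncio.run(main()))  # ['user: hi', '[memory] note']
```

This is why a memory API that mutates the context through its methods composes better than one that returns a rebuilt local list.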
Why are these changes needed?
This is to decouple model_context from AssistantAgent to provide flexibility in augmenting incoming messages to the agent. This could be useful when we have a specific template for an agent that is part of a GroupChat; discussion here: #4668
Usage example
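Since the merged constructor signature is not shown in this thread, here is a self-contained toy sketch of the decoupling itself (hypothetical names throughout): the agent no longer owns its context policy; a `model_context` implementing a small interface is injected, and swapping a buffered context changes what the model sees without touching the agent.

```python
import asyncio
from typing import List, Protocol


class ChatCompletionContext(Protocol):
    """Interface the agent depends on, instead of a concrete buffer."""
    async def add_message(self, message: str) -> None: ...
    async def get_messages(self) -> List[str]: ...


class BufferedContext:
    """Keeps only the last `buffer_size` messages visible to the model."""
    def __init__(self, buffer_size: int) -> None:
        self._buffer_size = buffer_size
        self._messages: List[str] = []

    async def add_message(self, message: str) -> None:
        self._messages.append(message)

    async def get_messages(self) -> List[str]:
        return self._messages[-self._buffer_size:]


class Agent:
    """The context policy is injected, not hard-coded in the agent."""
    def __init__(self, model_context: ChatCompletionContext) -> None:
        self._model_context = model_context

    async def on_message(self, message: str) -> List[str]:
        await self._model_context.add_message(message)
        # The messages handed to the model are shaped by the injected context.
        return await self._model_context.get_messages()


async def main() -> List[str]:
    agent = Agent(model_context=BufferedContext(buffer_size=2))
    for msg in ["m1", "m2", "m3"]:
        visible = await agent.on_message(msg)
    return visible


print(asyncio.run(main()))  # ['m2', 'm3']
```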
Related issue number
Checks