Hello there, thank you for defining this new protocol! 🤗
I see a parallel between the A2A protocol and KServe's Open Inference Protocol, which enables abstractions like InferenceService that completely abstract away the underlying ML framework or inference server. I strongly believe we could do the same with this protocol by defining an A2AServer as a Custom Resource Definition to standardize agent deployment.
Random ideas:
Framework Compatibility: Various agent libraries (CrewAI, Google ADK, PydanticAI) could implement A2A-compliant servers, or developers could wrap their code in an API similar to the A2A examples.
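To make the "wrap their code in an API" idea concrete, here is a minimal sketch of how any framework-specific agent callable could be adapted to an A2A-style task handler. The request/response shapes below are simplified stand-ins inspired by the A2A examples, not the exact A2A schema, and `my_agent` is a placeholder for whatever CrewAI/ADK/PydanticAI code a developer already has:

```python
# Illustrative sketch only: the task shapes are simplified stand-ins,
# not the actual A2A wire format.
import uuid


def my_agent(prompt: str) -> str:
    """Placeholder for any framework-specific agent (CrewAI, ADK, PydanticAI, ...)."""
    return f"echo: {prompt}"


def handle_task(request: dict, agent=my_agent) -> dict:
    """Wrap an arbitrary agent callable behind an A2A-style task endpoint."""
    text = request["message"]["parts"][0]["text"]
    return {
        "id": request.get("id", str(uuid.uuid4())),
        "status": {"state": "completed"},
        "artifacts": [{"parts": [{"type": "text", "text": agent(text)}]}],
    }
```

The point is that the adapter layer is thin: an A2A-compliant server in each framework would mostly be this translation plus streaming/long-running-task plumbing.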
Agent Abstraction: A2A servers could abstract two types of agents:
Simple agents (LLM + Tools + Prompt)
Complex agents (represented as interaction graphs, like those built with CrewAI or LangGraph)
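As a strawman for discussion, the two agent types above could map to two spec shapes of the hypothetical A2AServer resource. Everything here (the API group, field names, runtimes) is invented for illustration, written as Python dicts so the shape check is runnable:

```python
# Hypothetical A2AServer resource shapes -- none of these fields exist yet;
# they are a strawman for what a CRD schema could encode.

SIMPLE_AGENT = {
    "apiVersion": "a2a.example.dev/v1alpha1",  # hypothetical group/version
    "kind": "A2AServer",
    "metadata": {"name": "support-bot"},
    "spec": {
        "type": "simple",           # LLM + tools + prompt
        "model": "some-llm",
        "systemPrompt": "You are a support agent.",
        "tools": ["search", "ticketing"],
    },
}

COMPLEX_AGENT = {
    "apiVersion": "a2a.example.dev/v1alpha1",
    "kind": "A2AServer",
    "metadata": {"name": "researcher"},
    "spec": {
        "type": "graph",            # interaction graph, e.g. CrewAI/LangGraph
        "runtime": "langgraph",
        "image": "registry.example.com/agents/researcher:1.2.0",
    },
}


def validate(resource: dict) -> bool:
    """Minimal shape check an admission webhook or controller might perform."""
    spec = resource.get("spec", {})
    if spec.get("type") == "simple":
        return {"model", "systemPrompt"} <= spec.keys()
    if spec.get("type") == "graph":
        return {"runtime", "image"} <= spec.keys()
    return False
```

Simple agents could be fully declarative (the controller runs a generic server image), while graph agents would point at a user-built image, similar to how KServe handles built-in versus custom predictors.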
Gateway Exposure: A2A servers running in the cluster could be exposed behind a Gateway with an automatically populated .well-known/agent.json based on deployed and ready agents.
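A sketch of the gateway-side aggregation: collect cards only from A2AServer resources whose Ready condition is True. The card fields (`name`, `url`) follow the spirit of A2A's agent card but are not an exact reproduction, and serving an aggregated document under one `.well-known` path is itself part of the proposal, not existing A2A behavior:

```python
# Hypothetical gateway logic: build the advertised agent list from
# "deployed and ready" A2AServer resources.

def is_ready(resource: dict) -> bool:
    """True if the resource has a Ready=True condition, Kubernetes-style."""
    conditions = resource.get("status", {}).get("conditions", [])
    return any(c["type"] == "Ready" and c["status"] == "True" for c in conditions)


def aggregate_cards(resources: list[dict], gateway_host: str) -> list[dict]:
    """Only ready agents are published behind the gateway."""
    return [
        {
            "name": r["metadata"]["name"],
            "url": f"https://{gateway_host}/agents/{r['metadata']['name']}",
        }
        for r in resources
        if is_ready(r)
    ]
```

Since the A2AServer objects live in etcd anyway, the gateway controller gets this "registry" view for free by watching the custom resources, which is also the idea behind the next point.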
Built-in Registry: Utilize etcd as an out-of-the-box agent registry since A2A servers would be represented as Kubernetes resources.
Kubernetes Ecosystem Benefits (the most important): Take advantage of Kubernetes features for reliability and efficiency:
Multiple agent replicas
Scale-to-zero capabilities
Service mesh integration (mTLS, advanced traffic management, ...)
Advanced deployment strategies (crucial, since small prompt changes can drastically alter agent behavior)
...
I would be greatly honored if these ideas resonate with you and you would like to discuss them further. Please feel free to share your thoughts if you believe this project is worth developing. 😊
Not everything is crystal clear in my head yet, which is why I started developing a proof of concept of such a system to help me clarify it.
Note: I've heard about kagent, which is designed to build agents on Kubernetes, but I believe we should focus our discussion on "How do we standardize the way we deploy agents on Kubernetes?", since many frameworks already exist to build agents and the A2A protocol defines the standards that let them all "talk" to each other.