-
@gNtino Love this framing of “intent” as the coordination layer—it aligns closely with some thinking I’ve been doing around how privacy context could be understood and enforced across agents. In the same way human teams operate within a shared intent (e.g., a project goal) and implicit norms for appropriate data use, I think agents need a similar contextual awareness: what behavior is aligned with purpose, acceptable within privacy and compliance boundaries, and still effective. Curious if others have explored how “intent” could be used to scope what agents are allowed to do—not just technically, but ethically and legally. How do we give agents the right context for purpose limitation, especially when multiple agents are collaborating?
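To make the purpose-limitation idea a bit more concrete, here is a rough sketch. It is purely illustrative: `Intent`, `DataRequest`, and the policy check are made-up names for this comment, not anything defined in A2A.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    # Hypothetical shared-intent object: the goal plus the data-use
    # purposes the user has consented to for that goal.
    goal: str
    allowed_purposes: set[str] = field(default_factory=set)

@dataclass
class DataRequest:
    # A collaborating agent asks for a piece of data and declares why.
    requesting_agent: str
    field_name: str
    declared_purpose: str

def permit(intent: Intent, request: DataRequest) -> bool:
    """Purpose-limitation check: release data only if the declared
    purpose is covered by the consent attached to the shared intent."""
    return request.declared_purpose in intent.allowed_purposes

project = Intent(
    goal="Prepare the Q3 compliance report",
    allowed_purposes={"compliance_reporting", "access_audit"},
)

print(permit(project, DataRequest("audit-agent", "access_logs",
                                  "access_audit")))      # True
print(permit(project, DataRequest("marketing-agent", "access_logs",
                                  "ad_targeting")))      # False
```

The harder questions are then who attaches `allowed_purposes` to the intent in the first place, and how that consent propagates when the intent is delegated across collaborating agents.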
-
This kind of sounds like something semantic routing would typically cover. Intent, however, is tricky to get right unless it's coupled with some form of RL, so the system groks mismatched outcomes and learns and adjusts accordingly. To give a bad example: an agent gets an Uber Agent Car to take me to "Clive's Cuts" for a haircut, and I arrive at "Clive's Cuts" the Steakhouse, where I end up with a 12oz ribeye rather than a trim to my noggin. In a traditional semantic routing system, this might just be logged as an error. With the RL component, however, the system would apply a penalty to this routing decision, effectively learning that despite potential semantic similarities (both establishments named "Clive's", both involve "cuts"), the context and intent are fundamentally different. Over time, these penalties would train the system to better distinguish between superficially similar but functionally distinct destinations or actions.

The RL aspect allows the system to not just recognize patterns, but to actively improve its routing decisions based on real-world outcomes, moving beyond pure semantic similarity to incorporate practical consequences and user satisfaction into its decision-making process. I guess this would be Yelp for agents lol. Apologies if I am taking this off-track; it's a great point and I love seeing discussions like this 🙏
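As a toy illustration of that penalty loop (not any particular framework's API; the similarity numbers, score table, and learning rate are all invented), the routing decision could blend semantic similarity with an outcome score that gets knocked down whenever the result mismatches the intent:

```python
from collections import defaultdict

# Toy outcome-weighted router: candidates are ranked by semantic
# similarity plus a learned score that is penalised whenever the
# real-world outcome mismatches the intent. All numbers are made up.

semantic_similarity = {
    ("haircut", "Clive's Cuts (steakhouse)"): 0.83,  # superficially similar
    ("haircut", "Clive's Cuts (barber)"): 0.81,
}

outcome_score = defaultdict(float)   # learned adjustment per (intent, dest)
LEARNING_RATE = 0.5

def route(intent: str) -> str:
    candidates = [dest for (i, dest) in semantic_similarity if i == intent]
    return max(candidates,
               key=lambda d: semantic_similarity[(intent, d)]
                             + outcome_score[(intent, d)])

def feedback(intent: str, dest: str, reward: float) -> None:
    """reward = +1 for a satisfied user, -1 for a ribeye instead of a trim."""
    outcome_score[(intent, dest)] += LEARNING_RATE * reward

choice = route("haircut")                # picks the steakhouse (0.83)
feedback("haircut", choice, reward=-1.0) # user did not want a ribeye
print(route("haircut"))                  # now picks the barber
```

After a single bad outcome the steakhouse drops below the barber, which is roughly the "Yelp for agents" effect described above.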
-
I’m not yet fully familiar with the entire scope, advantages, and disadvantages of the protocol being proposed here, but I believe similar research has been conducted in the field of classical (not in a negative sense) NLU. It might be worthwhile to look into prior work. If the protocol proves to be useful, those findings could also be valuable when drafting the specification. |
-
Hi all,
First off—thank you for the incredible work on A2A. The protocol is already removing major barriers in agent ecosystems: interoperability, secure communication, and capability discovery across heterogeneous agents.
As I’ve been working with A2A, one pattern keeps emerging: skills alone aren’t enough for multi-agent coordination when intent spans multiple trust boundaries.
1. Skills Aren’t APIs (as raised in #921)
As @ognis1205 highlighted in #921, skills are descriptions, not APIs.
This leads to ambiguity about how a skill is actually invoked, what inputs it expects, and what guarantees the caller gets back.
That ambiguity is manageable in single-agent contexts. But when real-world intents involve multiple agents across organisational boundaries, it quickly becomes a blocker.
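To illustrate the gap, here is a sketch. These are not A2A types, and the field names are assumptions; the point is only the difference between describing a capability and committing to an invocation contract.

```python
from dataclasses import dataclass

# Illustrative only; not A2A's schema. Compare a prose skill
# description with the explicit contract cross-boundary callers need.

@dataclass
class SkillDescription:
    # Roughly what a caller gets today: prose it has to interpret.
    name: str
    description: str

@dataclass
class SkillContract:
    # What cross-boundary coordination tends to need: explicit inputs,
    # outputs, and the conditions and state a call may touch.
    name: str
    input_schema: dict
    output_schema: dict
    preconditions: list[str]
    effects: list[str]

valuation_desc = SkillDescription(
    name="property_valuation",
    description="Can value residential properties in the UK.",
)

valuation_contract = SkillContract(
    name="property_valuation",
    input_schema={"property_id": "string", "purpose": "string"},
    output_schema={"valuation_gbp": "number", "valid_until": "date"},
    preconditions=["offer.status == 'accepted'"],
    effects=["mortgageApplication.valuation is set"],
)
```

The second form is what lets an independent agent decide, without side-channel agreements, whether a call is valid and what state it is allowed to change.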
2. What’s Missing: Shared Semantics and Roles (Example: Move Home)
Let’s ground this in a concrete scenario:
Say I tell my AI agent: “Move my family to Zone 1, London, by 15 August.”
This intent spans multiple independent actors and trust boundaries: the buyer, the seller, a mortgage lender, and the services around them, each behind its own systems and policies.
Today, A2A can connect these agents. But it doesn't define who plays which role, which shared state they coordinate over, or the preconditions and consent rules under which each step may happen.
Without this structure, even with A2A connectivity, coordination devolves into ad hoc messaging and private state silos.
3. Intent Protocols: Structured Coordination on Top of A2A
Here’s how the same "Move Home" intent looks expressed as an Intent Protocol (Buy Property):

In this view:
- Identity verified (buyer)
- Offer accepted (seller)
- Property valuation (for mortgage)
Each role could be filled by an A2A-compatible agent. But instead of exchanging opaque messages, they coordinate by updating shared semantic objects (e.g., mortgageOffer.status), bound by clear preconditions and consent rules.
This transforms “agent chatter” into auditable, structured multi-agent coordination.
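As a minimal sketch of what that could look like in practice (only `mortgageOffer.status` comes from the example above; the roles, objects, and transition shape are assumptions, not a proposed schema):

```python
from dataclasses import dataclass, field
from typing import Callable

# A minimal sketch of an Intent Protocol as shared, precondition-gated
# state. Only mortgageOffer.status comes from the example above; the
# rest of the shape is assumed.

@dataclass
class SharedState:
    objects: dict = field(default_factory=lambda: {
        "identity": {"verified": False},
        "offer": {"status": "draft"},
        "mortgageOffer": {"status": "none"},
    })

@dataclass
class Transition:
    role: str                                   # which role may perform it
    precondition: Callable[[SharedState], bool]
    effect: Callable[[SharedState], None]

issue_mortgage_offer = Transition(
    role="lender",
    # The lender may only issue an offer once the buyer's identity is
    # verified and the seller has accepted the offer.
    precondition=lambda s: (s.objects["identity"]["verified"]
                            and s.objects["offer"]["status"] == "accepted"),
    effect=lambda s: s.objects["mortgageOffer"].update(status="issued"),
)

def apply(state: SharedState, t: Transition, acting_role: str) -> bool:
    """Apply a transition only if the role matches and the precondition
    holds; every attempt, accepted or rejected, can go to an audit log."""
    if acting_role != t.role or not t.precondition(state):
        return False
    t.effect(state)
    return True

s = SharedState()
print(apply(s, issue_mortgage_offer, "lender"))   # False: preconditions unmet
s.objects["identity"]["verified"] = True
s.objects["offer"]["status"] = "accepted"
print(apply(s, issue_mortgage_offer, "lender"))   # True
print(s.objects["mortgageOffer"]["status"])       # "issued"
```

Because each transition only fires when the right role acts and the preconditions hold, and every attempt can be logged, the coordination becomes auditable rather than implicit in message history.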
4. Why We Call Them Intent Protocols
We call these Intent Protocols because they are defined around a shared intent (goal) and specify the roles involved, the shared semantic objects those roles act on, and the preconditions and consent rules that govern each state change.
In enterprise contexts, these can also be framed as Business Protocols.
5. Proposal: Layering Intent Protocols on A2A
We see value in layering Intent Protocols over A2A to provide shared semantics, explicit roles, and auditable, precondition-gated coordination across trust boundaries.
This would complement A2A rather than replace it: A2A keeps handling discovery, secure messaging, and capability exchange, while the Intent Protocol layer defines what the agents are coordinating towards.
6. Questions for the Community
Closing
I see this as complementary: A2A handles the pipes. Intent Protocols define the grammar for coordinated, trustworthy participation.
Thanks for the great work here—excited to hear your thoughts!