Service Dependency Injection for A2A Agents #962
matoushavlena started this conversation in Ideas
Reply: I believe the AgentExecutor is a good fit for this.
Problem
Agents frequently rely on external services. The most ubiquitous is LLM inference, but other common dependencies include MCP servers, vector databases, document text extraction tools (e.g., Docling), and object storage systems (e.g., S3-compatible services).
These external services are often billed, rate-limited, and may impose various usage constraints. While commercial agents can bundle access to specific services as part of their offering, open-source agents typically require user-provided credentials (e.g., API keys or tokens) to function. From a user's perspective, having control over the choice of external services is often necessary.
This pattern of user-controlled service selection is already common: most LLM-based UIs today offer model selection via dropdowns or configuration files. However, A2A currently lacks a standardized mechanism for clients to inject service dependencies into agents in a structured and interoperable way.
Solution
This problem maps directly to the well-established software pattern of dependency injection (DI), or inversion of control. In DI, a component declares its required interfaces, while the actual implementations are provided externally at runtime.
We propose bringing this model to A2A agents by allowing them to declare their required services explicitly and enabling clients to fulfill these dependencies dynamically.
This mechanism is implemented as an A2A extension, which fits well with the existing protocol model. The interaction involves three steps:
1. Declaration: The agent specifies its required services in the AgentCard using an extension (e.g., "LLM inference service").
2. Fulfillment: The client provides service credentials or endpoints (e.g., API URL, access key) in the extension metadata sent with the user message.
3. Usage: The agent interacts with the service through a known interface (e.g., an OpenAI-compatible API, an S3-compatible API).
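Concretely, the three steps above might look like the following sketch. The extension URI, field names, and metadata shape are illustrative assumptions for this proposal, not part of the current A2A specification:

```python
# Hypothetical sketch of the declaration/fulfillment handshake.
# The extension URI and metadata keys below are assumptions for illustration.

LLM_EXTENSION_URI = "https://example.com/extensions/services/llm"  # assumed URI

# 1. Declaration: the agent advertises the dependency in its AgentCard.
agent_card = {
    "name": "summarizer-agent",
    "capabilities": {
        "extensions": [
            {
                "uri": LLM_EXTENSION_URI,
                "required": True,  # agent cannot operate without an LLM
                "params": {"interface": "openai-compatible"},
            }
        ]
    },
}

# 2. Fulfillment: the client supplies credentials/endpoints in the
#    extension metadata sent alongside the user message.
message = {
    "role": "user",
    "parts": [{"kind": "text", "text": "Summarize this document."}],
    "metadata": {
        LLM_EXTENSION_URI: {
            "base_url": "https://llm.internal.example.com/v1",
            "api_key": "sk-short-lived-token",
            "model": "my-org/small-model",
        }
    },
}

# 3. Usage: the agent reads the fulfillment and talks to the service
#    through the declared interface (e.g., an OpenAI-compatible client).
fulfillment = message["metadata"][LLM_EXTENSION_URI]
```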
Extensions can be marked as:
- required=True: The agent cannot operate without this service.
- required=False: A default fallback exists, or the service is optional.
Implementation
A working reference implementation of this mechanism is available in the beeai-sdk:
👉 https://github.com/i-am-bee/beeai-platform/blob/main/apps/beeai-sdk/src/beeai_sdk/a2a/extensions/services/llm.py
This implementation demonstrates LLM service injection, but the pattern generalizes to other service types.
Caveats and Considerations
Correctness and Compatibility:
An agent expecting a high-capacity LLM may behave poorly if provided with a minimal or incompatible model. It is the client's responsibility to ensure the injected services meet the agent's stated requirements.
Security and Data Handling:
Injected services are fully controlled by the client or user. Consequently, agents must assume that all data passed to these services is exposed to the user. Agents should not send sensitive data unless the user is authorized to view it.
Fallbacks and Robustness:
Agents should handle missing or inadequate services gracefully where possible, either by falling back to defaults or by returning an informative error message.
Credential Security:
Clients should generate scoped and/or short-lived service credentials for agents in order to minimize potential misuse. This can be achieved by using a gateway/proxy for the external service.
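One way a gateway might mint such credentials is sketched below; the function names, scope strings, and token format are hypothetical. The agent only ever sees an opaque, expiring token that the gateway maps to the real upstream API key:

```python
import secrets
import time


def mint_scoped_credential(scope: str, ttl_seconds: int = 900) -> dict:
    """Mint an opaque, short-lived credential for an agent. A gateway/proxy
    would map this token to the real upstream API key, so the agent never
    holds a long-lived secret."""
    return {
        "token": secrets.token_urlsafe(32),
        "scope": scope,  # e.g. "llm:inference"
        "expires_at": time.time() + ttl_seconds,
    }


def is_valid(cred: dict, scope: str) -> bool:
    """Check that a credential matches the requested scope and has not expired."""
    return cred["scope"] == scope and time.time() < cred["expires_at"]
```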
Proposal
We propose standardizing this dependency injection mechanism at the A2A protocol level, starting with LLM inference as the first officially supported service type.
This can be done by:
- Defining canonical extension specifications, e.g. https://a2a.google.com/services/llm
- Optionally, evolving the core A2A protocol to include native support for service declarations and fulfillment.
This mechanism would bring modularity, configurability, and interoperability to the agent ecosystem, making it easier for open-source and enterprise users alike to adapt agents to their environments.
This proposal was co-authored by @JanPokorny and the BeeAI team.