AI agent #9273
Replies: 6 comments 19 replies
-
Would this be possible to set up so that the agent runtime (Eliza) can be swapped out easily? i.e. Eliza 2 is coming out soon, supposedly better, less ugly to set up, and lighter (or some other package). Maybe selectable by the user? (But definitely by us?)
-
Stunning draft @0xApotheosis!
My vote would be no, for the sake of simplicity. It feels like the inference APIs should do the thing and we should just be able to stream it; Eliza handles all the heavy lifting AFAIK. Though I think this only applies to the chat UI (to be double-checked), and we may need to deploy Eliza's REST API if we want to integrate it into an app.
Are we set on whether or not we're EVM-only? My understanding is yes, but just double-checking here. If so, we may not even need anything custom that slows things down 10x (cough, hdwallet); we could just use a viem/wagmi connector.
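If we do go EVM-only, the connector layer could be as small as a standard wagmi config. A minimal sketch (chain list and connector choice are illustrative, not decided):

```ts
// Sketch only: chains/connectors are illustrative, not decided.
import { createConfig, http } from 'wagmi'
import { mainnet, arbitrum } from 'wagmi/chains'
import { injected } from 'wagmi/connectors'

export const wagmiConfig = createConfig({
  chains: [mainnet, arbitrum],
  connectors: [injected()], // injected browser wallets (MetaMask etc.)
  transports: {
    [mainnet.id]: http(),
    [arbitrum.id]: http(),
  },
})
```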
Would be amazing to have such a UI, and it's something I've been thinking about for web for quite some time. Both Alchemy and Tenderly provide this, and for an app like this it sounds like a requirement more than a nice-to-have indeed.
Absolutely, as discussed. I think we won't hit limits until there's actual proper usage, which is a later thing (hopefully!)
Looks like this one may actually be a good contender for a back-end?
-
@twblack88 suggested using Privy for wallet infrastructure. They have proper support for ETH, BTC, and even Solana if we want to add that as a stretch. Looks like it also has UI elements for signing which we could use to ease some of the lift: https://docs.privy.io/wallets/using-wallets/ui-library
Though if we go EVM-only, perhaps we want to BYOW instead? I think that's mostly a product question, since there are many ramifications to an integrated wallet that BYOW won't have, which makes me think BYOW is the better option for MVP.
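For reference, the top-level wiring with Privy's React SDK looks pretty light. A sketch assuming `@privy-io/react-auth` and an app ID from the Privy dashboard (the env var name is a placeholder):

```tsx
// Sketch only: assumes Privy's React SDK; the env var name is a placeholder.
import type { ReactNode } from 'react'
import { PrivyProvider } from '@privy-io/react-auth'

export function AppProviders({ children }: { children: ReactNode }) {
  return (
    <PrivyProvider appId={import.meta.env.VITE_PRIVY_APP_ID}>
      {children}
    </PrivyProvider>
  )
}
```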
-
to wit:
-
This is an interesting question, because Eliza uses a PostgreSQL database: we will need to deploy it as a server and either host a database ourselves or use a database service. So we might need to think right now about how we want this database set up, as it would also be consumed by the backend service you are talking about.

I like Eliza because there are tons of plugins that would give us even more features, such as searching the internet (using APIs like SERP).

Have we considered GAME from Virtuals (credit to @gomesalexandre who shared this with me)? It supports any OpenAI-compliant LLM API, so it would work with Venice. The drawback is that it would require way more work and it has far fewer plugins, but maybe we could achieve a PoC on the client side without even a backend component. (I'm personally interested in the pros and cons, if we have considered it.)
-
Notes from first spike on UI elements, stack, and use cases
-
Architecture Overview
The AI agent’s architecture is composed of:
Note
Open question: Do we want to add a backend component here to handle long-running tasks such as a DCA strategy? If yes, do we want to ignore it for now and add it once the initial PoC is built?
Front-end UI web app (Vite?)
Provides the chat interface, wallet connect UI, display of results (including simulation previews and transaction details), and the ability to sign transactions. Runs in the browser.
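As a very rough sketch of the message shape the chat surface might render (all field names here are placeholders, not a decided API):

```ts
// Sketch only: field names are placeholders, not a decided API.
interface ChatMessage {
  id: string
  role: 'user' | 'agent'
  text: string
  // Optional structured payloads the UI can render richly:
  simulation?: { balanceChanges: Record<string, string> } // preview from the simulation engine
  txRequest?: { to: string; data: string; value: string } // handed to the wallet for signing
}
```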
Wallet connector
Facilitates user wallet connectivity and signing.
Simulation engine
Used to simulate transactions off-chain and predict outcomes before execution. Provides a preview of state changes.
E.g. uses Tenderly Simulation API to show token output of a swap and changes in balances, helping users avoid malicious or failing tx.
A limitation here is that we cannot simulate off-chain or multi-step non-atomic actions, e.g. THORChain trades.
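For illustration, a Tenderly simulation call might look roughly like the sketch below (account/project/key are placeholders, and the exact endpoint and body fields should be checked against Tenderly's current docs):

```ts
// Sketch only: verify endpoint and body fields against Tenderly's current docs.
const ACCOUNT = process.env.TENDERLY_ACCOUNT!       // placeholder
const PROJECT = process.env.TENDERLY_PROJECT!       // placeholder
const ACCESS_KEY = process.env.TENDERLY_ACCESS_KEY! // placeholder

export async function simulateTx(tx: { from: string; to: string; input: string; value: string }) {
  const res = await fetch(
    `https://api.tenderly.co/api/v1/account/${ACCOUNT}/project/${PROJECT}/simulate`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-Access-Key': ACCESS_KEY },
      body: JSON.stringify({
        network_id: '1', // chain to simulate against
        ...tx,
        save: false,     // don't persist the simulation to the dashboard
      }),
    },
  )
  return res.json() // predicted status, gas used, and balance/state changes
}
```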
Session Memory Store
Keeps track of conversation context, user preferences (like selected wallet/network, active accounts), and partial results across browser reloads etc.
We'll want to ensure relevant data is passed to Eliza (see the Knowledge Management section of the docs).
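A minimal sketch of a browser-side session store (localStorage-backed; the fields are placeholders):

```ts
// Sketch only: localStorage-backed store; fields are placeholders.
interface SessionState {
  selectedChainId?: number
  activeAccount?: string
  conversationId?: string
}

const STORAGE_KEY = 'agent-session'

export function loadSession(): SessionState {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '{}')
  } catch {
    return {}
  }
}

export function saveSession(patch: Partial<SessionState>): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify({ ...loadSession(), ...patch }))
}
```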
Agent
Core (ElizaOS)
Orchestrates agent behavior, conversation management, and plugin usage. Maintains agent persona and session memory.
Uses a Venice or Chutes API Key to access inference. We might want to start with Venice because it's effectively free (for us), and move to Chutes when we hit limitations. The implementation will be inference-provider agnostic, so there is no concern with changing as we go.
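Since Venice (and, as far as I know, Chutes) exposes an OpenAI-compatible endpoint, staying provider-agnostic can be as simple as driving the base URL, key, and model from env. A sketch (URL, env var names, and model id are placeholders to confirm):

```ts
// Sketch only: URL, env var names, and model id are placeholders to confirm.
export const inference = {
  baseUrl: process.env.INFERENCE_BASE_URL ?? 'https://api.venice.ai/api/v1', // confirm exact URL
  apiKey: process.env.INFERENCE_API_KEY!,
  model: process.env.INFERENCE_MODEL ?? 'llama-3.3-70b', // whatever model we settle on
}
```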
Plugins
https://eliza.how/docs/core/plugins
Each plugin registers a set of actions or tools the agent can use. The LLM is prompted with a description of these actions. When the LLM chooses an action, ElizaOS executes the underlying plugin code, then returns the result (if any) back into the model’s context. For instance, the workflow for a swap might be: the user asks for a swap, the LLM selects the swap action, the plugin builds the transaction and runs it through the simulation engine, the UI shows the preview and asks the user to sign, and the outcome is fed back into the model’s context (see the sketch below).
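For illustration only, an action inside such a plugin might look roughly like this (names are hypothetical, and the exact `Action`/`Plugin` shapes should be checked against the plugin docs linked above):

```ts
// Sketch only: names are hypothetical; check field names against the Eliza plugin docs.
import type { Action, Plugin } from '@elizaos/core'

const swapAction: Action = {
  name: 'SWAP_TOKENS', // hypothetical action name
  similes: ['EXCHANGE_TOKENS', 'TRADE_TOKENS'],
  description: 'Quote, simulate, and build a token swap for the connected wallet',
  validate: async (_runtime, message) => /swap|trade/i.test(message.content.text ?? ''),
  handler: async (_runtime, _message, _state, _options, callback) => {
    // 1. Parse pair/amount from the message (or prior context)
    // 2. Fetch a quote and build the unsigned transaction
    // 3. Run it through the simulation engine, hand the preview + tx to the UI for signing
    callback?.({ text: 'Here is your swap preview…' })
    return true
  },
  examples: [],
}

export const swapPlugin: Plugin = {
  name: 'swap',
  description: 'Token swap actions',
  actions: [swapAction],
}
```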
Example plugins we might use:
Character and persona
https://eliza.how/docs/core/project#bio--style
We can configure a specific persona for the agent (helpful, knowledgeable about DeFi, cautious with security, etc.) using ElizaOS’s character definition. This influences how the LLM responds to the user (e.g. the agent might explain steps verbosely or ask for confirmation in a certain style).
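A rough sketch of what that character definition could contain (values and the name are purely illustrative; see the bio & style docs linked above for the full schema):

```ts
// Sketch only: illustrative values; see the character docs for the full schema.
export const character = {
  name: 'DeFi Copilot', // hypothetical name
  bio: [
    'Helpful DeFi assistant that explains every transaction before asking for a signature.',
    'Cautious about security: never signs or sends anything without explicit confirmation.',
  ],
  style: {
    all: ['be precise about amounts, fees, and risks'],
    chat: ['ask for explicit confirmation before any state-changing action'],
  },
}
```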
LLM Inference Backend
Handles natural language understanding and generation. The agent’s prompts are sent here to produce actions or responses.
Once the user’s message and current context are assembled, the agent sends this to the LLM backend. The agent’s request to the model will include relevant plugin documentation (so the model knows what tools it can use) and the conversation history (for context).
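Eliza assembles this for us, but to make the flow concrete, the request that ultimately goes over the wire could look something like the sketch below (against an OpenAI-compatible chat completions endpoint, reusing the `inference` config sketched above; `personaPrompt` and `actionDocs` are placeholders for the character definition and the rendered plugin action descriptions):

```ts
// Sketch only: Eliza does this assembly in practice; names here are placeholders.
const personaPrompt = '...' // rendered character/persona definition
const actionDocs = '...'    // rendered descriptions of available plugin actions

type Msg = { role: 'system' | 'user' | 'assistant'; content: string }

export async function complete(history: Msg[], userMessage: string): Promise<string> {
  const res = await fetch(`${inference.baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${inference.apiKey}`,
    },
    body: JSON.stringify({
      model: inference.model,
      messages: [
        { role: 'system', content: `${personaPrompt}\n\nAvailable actions:\n${actionDocs}` },
        ...history,
        { role: 'user', content: userMessage },
      ],
    }),
  })
  const json = await res.json()
  return json.choices[0].message.content
}
```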
Repo
We should probably use a monorepo for this project to maximise development velocity and shared TypeScript code (types etc).
We might even want to look into Turborepo or even Nx if we want to be really AI-native.