Replies: 5 comments
-
You could do something like that, but keep in mind that Serena will need access to the code, which thus needs to be mounted into the container. Generally, running the MCP remotely and connecting from a client on another system is an absolutely valid use case. We will focus more on testing and solving Docker and SSE issues in the upcoming weeks, as we have higher-priority topics at the moment. If you want to help out, we'll gladly accept PRs that improve the setup and/or documentation.
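For reference, a rough sketch of what that remote setup could look like. The image name, command, flags, and port below are assumptions for illustration, not the documented invocation, so check the repo's Docker instructions for the real ones:

```sh
# Sketch only: mount the project so Serena can see the code, and expose the
# SSE port so remote MCP clients can connect. Image name, command, flags and
# port are placeholders -- adjust to the actual Serena Docker setup.
docker run --rm \
  -v /path/to/your/project:/workspaces/project \
  -p 9121:9121 \
  serena:latest \
  serena-mcp-server --transport sse --port 9121 --project /workspaces/project
```

Clients on other machines would then connect to that host/port over SSE.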
-
Just getting to know Serena, but I have some thoughts on this... An alternative to requiring mounted volumes would be to expose an 'update' API. A small file-watcher daemon could be started in a project repo to send code updates through that API.
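To make the idea concrete, a hypothetical sketch of such a watcher. The endpoint URL and payload shape are invented for illustration; no such API exists in Serena today:

```python
# Hypothetical sketch: watch a project directory and push changed files to a
# (not yet existing) Serena "update" endpoint. Endpoint and payload are made up.
import time
from pathlib import Path

import requests
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

SERENA_URL = "http://serena-host:9121/api/update"  # hypothetical endpoint
PROJECT_ROOT = Path(".").resolve()


class PushChanges(FileSystemEventHandler):
    def on_modified(self, event):
        # Skip directory events; only push file contents.
        if event.is_directory:
            return
        path = Path(event.src_path)
        requests.post(SERENA_URL, json={
            "path": str(path.relative_to(PROJECT_ROOT)),
            "content": path.read_text(errors="replace"),
        })


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(PushChanges(), str(PROJECT_ROOT), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```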
-
Sure, that would work, but why is that better than mounting? :)
-
Off the top of my head:
-
Converting to a discussion; that's not really an issue.
-
I was thinking I could run a single remote Serena MCP server in a Docker container and point all MCP clients (which are remote connections) to that single instance. This way there is no dependency on a remote GitHub repository, and I don't have to have Docker installed everywhere I want to use Serena. Is this not the intended purpose of the Docker container version, or is it only meant to run locally as a container everywhere an MCP client is configured? I thought I had it partially working with SSE transport, but not everything worked as expected, so I was wondering whether the use case I am suggesting is even possible.
Thank you.
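For context, the client side of what I'm attempting looks roughly like this; the host, port, and path are placeholders, and the exact config keys depend on the MCP client being used:

```json
{
  "mcpServers": {
    "serena": {
      "url": "http://my-serena-host:9121/sse"
    }
  }
}
```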