**Longer-Term Vision: Service Manifest for Full Lifecycle**

This is admittedly ambitious, but if the `harbor.yaml` approach lands, the schema could eventually grow into a full service manifest:

```yaml
# Future harbor.yaml - full service manifest
upstream:
  source: ./upstream/docker-compose.yaml
  prefix: myservice

overlays:
  ollama: { ... }
  nvidia: { ... }

# Lifecycle hooks (beyond Docker's native post_start/pre_stop)
hooks:
  pre_up:
    - script: ./scripts/check-dependencies.sh
  post_up:
    - script: ./scripts/seed-database.sh
  pre_down:
    - script: ./scripts/backup-data.sh

# Secrets management (SOPS/age, Vault, etc.)
secrets:
  provider: sops
  files:
    - .env.secrets.enc   # decrypted at runtime

# Backup & persistence
backup:
  schedule: "0 2 * * *"
  volumes: [postgres-data, ollama-models]
  destination: s3://harbor-backups/{{date}}

# Multi-environment support
environments:
  dev:
    overlays: [nvidia]
    env_file: .env.dev
  prod:
    overlays: [nvidia, monitoring]
    secrets:
      provider: vault
```

This would position Harbor as a compose-based orchestration platform - something I've been searching for across many alternatives.
Harbor is the closest I've found to treating Docker Compose as a first-class orchestration primitive rather than just a deployment artifact. The dynamic file-matching, cross-service config merging, and service-aware CLI are exactly the patterns needed for complex multi-service stacks (especially AI services that need to wire together backends, frontends, and satellites).

(This vision may be too ambitious for Harbor's current scope as an LLM toolkit - happy to keep it focused on the immediate next steps, if accepted.)
---
Thank you so much for using Harbor and spending the time on the PR and the proposal! I promise to leave more substantial feedback later, but didn't want to leave you waiting here. I've been thinking about similar functionality on and off since @ahundt's proposal about native services, and something like this is definitely wanted. My current ideas revolve around a syntax that is a strict superset of Docker Compose and "compiles" to it, including the ability to inherit/merge remote configs, better YAML anchors, and more. However, it was nowhere near actionable at this moment in time. I'd like to know what you think about such an approach to the orchestration layer; if that makes sense, we can start designing what some of the abstractions would look like.
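Purely as a strawman of what such a "compiles to Compose" superset could look like - every non-standard key here (`inherit`, `x-fragments`) is hypothetical, just to make the idea concrete:

```yaml
# hypothetical superset input - "compiles" down to plain docker-compose.yaml
inherit:
  - https://example.com/upstream/docker-compose.yaml   # remote config to inherit/merge (hypothetical key)

x-fragments:            # reusable blocks, resolved at compile time (hypothetical key)
  gpu: &gpu
    deploy:
      resources:
        reservations:
          devices: [{ driver: nvidia, count: all, capabilities: [gpu] }]

services:
  api:
    <<: *gpu            # standard YAML merge, flattened away in the compiled output
```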
---
Disclaimer: I'm not a programmer by trade - more of a data architect. I understand these patterns at a conceptual level, so please correct me if I'm misframing anything technically. I used Claude Opus to help implement and test the code.

Thanks for the fast response! I'm excited to explore this further. I think we're aligned on the need for richer orchestration capabilities. Let me share how I'm thinking about the architecture.

**Two distinct layers of desired state:**

1. **Compose transformation** (like Kustomize) - turning the stock upstream file plus overlays into a valid merged `docker-compose.yaml`. This stays a pure file-to-file transform.
2. **Runtime orchestration** - placement, secrets, backups, lifecycle hooks. This only makes sense with a daemon/controller and lives outside the Compose artifact.

I'd resist calling either of these a "compose superset" because:

- it would mean chasing the Compose spec as it evolves, and
- the output would stop being a plain, portable Compose file that other tooling can consume.
**On using `x-*` extension fields:** one approach would be embedding orchestration hints directly in Compose files via `x-harbor-*` keys.

However, I'd limit them to output annotations - provenance stamped onto the final merged file, not input configuration:

```yaml
# Final merged docker-compose.yaml (output)
services:
  dify2-api:
    image: langgenius/dify-api:1.11.1
    x-harbor-source: ./upstream/docker-compose.yaml
    x-harbor-overlays-applied: [ollama, nvidia]
    x-harbor-prefix: dify2
    environment:
      - OPENAI_API_BASE=http://ollama:11434/v1   # injected by overlay
```

This is similar to how Kustomize adds annotations to the manifests it generates.

**Conditional overlays in the orchestration layer:**

```yaml
# harbor.yaml - conditions live here, not scattered in compose files
overlays:
  ollama:
    when: [ollama]   # apply when ollama service is active
    services:
      api:
        environment:
          - OPENAI_API_BASE=http://ollama:11434/v1
  nvidia:
    when: [nvidia]   # apply when nvidia capability is active
    services:
      api:
        deploy:
          resources:
            reservations:
              devices: [{ driver: nvidia, count: all, capabilities: [gpu] }]
```

**Concrete separation:**

```yaml
# harbor.yaml - Compose transformation (like Kustomize)
upstream:
  source: ./upstream/docker-compose.yaml
  prefix: dify
overlays:
  ollama: { when: [ollama], ... }
# Output: valid docker-compose.yaml with x-harbor-* annotations
```

```yaml
# harbor-runtime.yaml - Orchestration state (only with daemon/controller)
placement:
  dify-api: node-gpu-01
secrets:
  provider: sops
  rotate: weekly
backup:
  volumes: [postgres-data]
  schedule: "0 2 * * *"
# Output: controller instructions
```

This keeps Compose as the portable artifact while allowing arbitrarily rich orchestration logic in a separate layer. The Compose spec can evolve independently - we're not chasing it or extending it. (It's similar to how Elestio deploys stock Compose files with its own management layer on top.)

Does this framing resonate? Happy to sketch out what the abstraction boundaries might look like.

**Current PR status:** the `feature/upstream-compose-integration` branch implements the transformation layer. Tested with Dify's stock compose file:

- services, container names, and volumes renamed with the `dify2` prefix
- `depends_on` and `network_mode: service:X` references rewritten
- `harbor-network` attached to all services
The PR is ready for review on the transformation layer. The runtime orchestration layer (`harbor-runtime.yaml`) would be a separate, later effort.
---
**RFC: Stock Docker Compose Integration via `harbor.yaml`**

**Summary**
I'd like to propose a feature that enables Harbor to use stock Docker Compose files from upstream projects with zero modifications. This would significantly reduce maintenance burden when integrating complex services like Dify, Langflow, or Lobe-Chat that have their own multi-service compose files.
**Problem**
Currently, when adding a new service to Harbor, we need to manually rewrite the upstream's Docker Compose file to:
- rename services (e.g. `api` → `dify-api`)
- set `container_name` with Harbor's prefix convention
- rewrite `depends_on` and `network_mode: service:X` references
- add `harbor-network` to all services

This is error-prone, time-consuming, and creates a maintenance burden whenever upstream updates their compose file.
**Proposed Solution**
Introduce a `harbor.yaml` configuration file per service that declares transformation rules.
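A minimal sketch of what such a per-service `harbor.yaml` might contain - the exact field names are illustrative, pieced together from the examples elsewhere in this thread:

```yaml
# harbor.yaml - per-service transformation rules (illustrative)
upstream:
  source: ./upstream/docker-compose.yaml   # stock upstream compose, left unmodified
  prefix: dify                             # applied to service, container, and volume names
  network: harbor-network                  # attached to every service
  env_file: [.env, override.env]           # appended to each service's env_file list
```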
The CLI would automatically transform the stock compose at runtime:

| Upstream | Transformed |
|---|---|
| service `api` | `{prefix}-api` |
| `container_name: X` | `${HARBOR_CONTAINER_PREFIX}.{prefix}-{original}` |
| `depends_on: [redis]` | `depends_on: [{prefix}-redis]` |
| `network_mode: service:X` | `network_mode: service:{prefix}-X` |
| volume `mydata` | `{prefix}-mydata` |
| networks | `harbor-network` added to all services |
| `env_file` | `.env` and `override.env` appended |
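As a concrete (hypothetical) illustration with `prefix: dify`, an upstream fragment like:

```yaml
services:
  api:
    container_name: api
    depends_on: [redis]
```

would come out of the transform roughly as:

```yaml
services:
  dify-api:
    container_name: ${HARBOR_CONTAINER_PREFIX}.dify-api
    depends_on: [dify-redis]
    networks: [harbor-network]
```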
**Benefits**

- Upstream compose files stay untouched - updating a service is just a `git pull`
- Existing services without a `harbor.yaml` work exactly as before
- The `harbor.yaml` schema can grow to include service metadata for the Harbor App, config merging declarations, etc.

**Implementation Status**
I have a working proof-of-concept implementation:
- Branch: `feature/upstream-compose-integration` (kundeng/harbor)
- Core logic: `routines/upstream.ts` (~390 lines)
- Tested end-to-end with `dify2/` using the stock Dify compose file

The implementation integrates into the existing `mergeComposeFiles.ts` flow - upstream transforms are loaded before regular compose files and merged using the existing deepMerge logic.

**Questions for Discussion**
- Should the `harbor.yaml` file live in the service directory (proposed) or somewhere else?
- Should the schema include `metadata:` and `configs:` sections for Harbor App integration?

**Future Direction: Declarative Cross-Service Overlays**
Currently, Harbor uses file-naming conventions for cross-service integration:
- `compose.x.aider.ollama.yml` - applied when both `aider` AND `ollama` are running
- `compose.litellm.langfuse.postgres.yml` - applied when ANY of those services run

This works well but has limitations: the conditions are locked into filenames, so they can't easily be inspected or extended.
The `harbor.yaml` approach could evolve to make this declarative (see the sketch after this list). This would:

- make the conditions explicit and inspectable instead of encoded in filenames
- keep a service's cross-service wiring in one place with its other transformation rules
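A sketch of the declarative form, mirroring the `when:` syntax from the reply earlier in this thread (service names are just examples):

```yaml
# harbor.yaml - declarative replacement for compose.x.aider.ollama.yml (illustrative)
overlays:
  ollama:
    when: [aider, ollama]   # apply only when both services are running
    services:
      aider:
        environment:
          - OPENAI_API_BASE=http://ollama:11434/v1
```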
The file-naming convention could eventually become syntactic sugar that generates `harbor.yaml` entries, or both systems could coexist.

**Why Not Native Docker Compose Features?**
Docker Compose has `profiles` and `include`, but neither can replace Harbor's dynamic composition:

- `profiles` can only toggle services on and off within a project; it can't react to which other services are running
- `include` pulls in other compose files unconditionally - there is no `include: if: condition` syntax
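For reference, this is roughly what the native features look like (stock Compose syntax, no conditional forms):

```yaml
# docker-compose.yaml using the native features
include:
  - path: ./other/docker-compose.yaml   # always included - no conditional variant exists

services:
  api:
    profiles: ["gpu"]   # enabled via --profile gpu, but unaware of which other services run
```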
Harbor's file-matcher implements a rule engine that neither feature can replicate. The `harbor.yaml` approach formalizes this as a declarative configuration layer.

**Next Steps (if accepted)**
- Add `profiles:` passthrough
- Design the `overlays:` syntax for cross-service declarations

Happy to discuss and iterate on this design. Would love to hear thoughts from the community!