[WIP] SEP: Master topology #73
base: master
Conversation
Obviously WIP, but not clear what the proposed idea actually is yet.
# Motivation
[motivation]: #motivation

There is a well proven want and need of Salt users to be able to group minions
[citation needed]
[motivation]: #motivation

There is a well proven want and need of Salt users to be able to group minions
into a single master logically and/or physically. Historically Synic has been
*Syndic
## Alternatives
[alternatives]: #alternatives

What other designs have been considered? What is the impact of not doing this?
Here would be a good place to show the equivalent Syndic solution for the example problem, highlighting its issues.
The targeting problem can be trivially solved with minion ids that reflect the topology.
What specifically would this proposal add?
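For context, the topology-encoded-id approach this comment describes would look roughly like the sketch below. The minion ids and state names are made up for illustration; the glob targeting on the id itself is standard Salt top-file behaviour.

```yaml
# top.sls -- hypothetical minion ids of the form <region>.<datacenter>.<role><n>
base:
  'region1.*':            # everything in region 1, any datacenter
    - region1.common
  'region1.dc2.web*':     # only the web minions in datacenter 2 of region 1
    - webserver
```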
My two cents... I don't think this should be a problem solved by strictly defining a hierarchy of the components so much as lessening the reliance on what's happening on a bespoke master. If we break down the core of what a master does for us, we get something like:
If we seek to decentralize these functions and make them fault-tolerant, we might solve each like so:
We already have solutions in the code for 2-4, but the current solutions for number 1 are not ideal for having a "cluster" of masters where we don't care which master services a given minion. I'm really trying to get to a point where I can throw masters into a horizontal pod autoscaler or autoscaling group and only care about the configuration being passed to them... not any artifacts or identifying items like master private keys. The architecture below wouldn't work for all transports, but it's certainly feasible given the current work on the websocket transport:

```mermaid
graph BT
    %% Define styles
    classDef defaultStyle fill:#333333, color:#ffffff, stroke:#ffffff, stroke-width:1px;
    classDef nodeStyle fill:#555555, color:#ffffff, stroke:#ffffff, stroke-width:1px;
    classDef specialStyle fill:#444444, color:#ffdd57, stroke:#ffffff, stroke-width:1px;

    %% Subgraph for Region 1
    subgraph region1
        minions1([Minions - Region 1])
        dns1{DNS Resolver 1}
        lb1(Load Balancer 1)
        master11(Master 1.1)
        master12(Master 1.2)
    end

    %% Subgraph for Region 2
    subgraph region2
        minions2([Minions - Region 2])
        dns2{DNS Resolver 2}
        lb2(Load Balancer 2)
        master21(Master 2.1)
        master22(Master 2.2)
    end

    %% Shared Job Queue
    queue[Shared Job Queue]

    %% Connections
    minions1 -->|Uses DNS to resolve 'salt'| dns1
    dns1 -->|Points to| lb1
    lb1 --> master11
    lb1 --> master12
    minions2 -->|Uses DNS to resolve 'salt'| dns2
    dns2 -->|Points to| lb2
    lb2 --> master21
    lb2 --> master22
    master11 --> queue
    master12 --> queue
    master21 --> queue
    master22 --> queue

    %% Apply styles
    class minions1,minions2 nodeStyle;
    class dns1,dns2 specialStyle;
    class lb1,lb2,master11,master12,master21,master22 defaultStyle;
    class queue specialStyle;
```
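To make the "only care about the configuration" point concrete, here is a minimal sketch of what the minion and master sides might look like under that model. The option names (`master`, `pki_dir`, `master_job_cache`, `ext_job_cache`, and the `redis.*` returner settings) are existing Salt settings; the hostnames, the choice of Redis as the shared queue/cache, and the commented-out websocket transport line are assumptions for illustration only.

```yaml
# /etc/salt/minion -- identical on every minion in both regions
master: salt                      # regional DNS resolves 'salt' to the local load balancer

# /etc/salt/master -- identical on every master behind the load balancers
# All masters present the same identity to minions, so the same master key pair
# is distributed to each one (baked into the image or mounted at start-up).
pki_dir: /etc/salt/pki/master
# Point the job cache at an external returner every master can reach, so any
# master can publish jobs and read returns. Redis is just a placeholder here.
master_job_cache: redis
ext_job_cache: redis
redis.host: queue.example.internal   # hypothetical shared endpoint
redis.port: 6379
redis.db: 0
# transport: ws                      # assumption: the in-progress websocket transport
```

With something like this, scaling out is just launching another instance of the same master image behind the load balancer; the shared key pair and the external job cache are the only state it needs.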
No description provided.