This repository has been archived by the owner on Jan 24, 2024. It is now read-only.

[WIP] SEP: Master topology #73

Open · wants to merge 1 commit into base: master
Conversation

@dwoz (Contributor) commented Sep 13, 2023

No description provided.

@dwoz dwoz requested a review from a team as a code owner September 13, 2023 00:35
@dwoz dwoz requested review from twangboy and removed request for a team September 13, 2023 00:35
@dwoz dwoz changed the title Master topology [WIP] SEP: Master topology Sep 13, 2023
@OrangeDog left a comment

Obviously WIP, but not clear what the proposed idea actually is yet.

> # Motivation
> [motivation]: #motivation
>
> There is a well proven want and need of Salt users to be able to group minions


[citation needed]

> [motivation]: #motivation
>
> There is a well proven want and need of Salt users to be able to group minions
> into a single master logically and/or physically. Historically Synic has been


*Syndic

> ## Alternatives
> [alternatives]: #alternatives
>
> What other designs have been considered? What is the impact of not doing this?


Here would be a good place to show the equivalent Syndic solution for the example problem, highlighting its issues.

> ## Alternatives
> [alternatives]: #alternatives
>
> What other designs have been considered? What is the impact of not doing this?


The targeting problem can be trivially solved with minion ids that reflect the topology.

What specifically would this proposal add?

@nicholasmhughes commented

My two cents... I don't think this should be a problem solved by strictly defining a hierarchy of the components so much as lessening the reliance on what's happening on a bespoke master. If we break down the core of what a master does for us, we get something like:

  1. Manage connections to minions (PKI)
  2. Provide access to configuration code and file artifacts (Salt Filesystem)
  3. Provide access to secrets (Pillar)
  4. Provide access to a Job Queue for targeting

If we seek to decentralize these functions and make them fault-tolerant, we might solve each like so:

  1. Use mTLS for connection security. This could allow us to rely less on specific master keys and more on a trusted certificate authority.
  2. Artifacts can be housed on a shared filesystem or in Git.
  3. Secrets are probably best kept in a dedicated system like Vault, but can also be located on a shared filesystem or Git.
  4. A job queue detached from the masters would be ideal.

We already have solutions in the code for 2-4, but the current solutions for number 1 are not ideal for having a "cluster" of masters where we don't care which master services a given minion.
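Point 1 above is the gap. One way to sketch what "trust a CA instead of per-master keys" could look like is with standard mutual TLS: both sides present certificates signed by a shared authority, so any master with any CA-signed certificate can service any minion. This is a minimal sketch using Python's stdlib `ssl` module; the `mtls_context` helper and the certificate paths are hypothetical, not existing Salt APIs.

```python
import ssl

def mtls_context(side, ca_file=None, cert_file=None, key_file=None):
    """Build a TLS context that authenticates BOTH peers against one CA.

    `ca_file`, `cert_file`, and `key_file` are hypothetical paths; any
    certificate pair signed by the shared CA would work, so no master
    needs a special, pre-distributed identity key.
    """
    proto = ssl.PROTOCOL_TLS_SERVER if side == "server" else ssl.PROTOCOL_TLS_CLIENT
    ctx = ssl.SSLContext(proto)
    # The mTLS part: the server also demands a client certificate,
    # so minions are authenticated by the CA, not by master key exchange.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # trust only the shared CA
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

With this model, rotating or adding masters only requires issuing a new certificate from the CA, not redistributing master public keys to every minion.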

I'm really trying to get to a point where I can throw masters in a horizontal pod autoscaler or autoscale group and only care about the configuration being passed to them... not any artifacts or identifying items like master private keys.
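The detached job queue in point 4 could be any shared broker (Redis, SQS, etc.). As an illustration only, here is a tiny in-process stand-in using Python's stdlib `queue`: any master publishes a job, and whichever master currently services the target minion can pick it up. The function names (`publish_job`, `next_job_for`) are hypothetical, not Salt APIs.

```python
import queue

# Stand-in for a shared, master-independent job queue.
job_queue = queue.Queue()

def publish_job(target, fun, **kwargs):
    # Any master can enqueue; no master "owns" the job.
    job_queue.put({"target": target, "fun": fun, "kwargs": kwargs})

def next_job_for(minion_id):
    # Any master servicing `minion_id` pulls matching work.
    # (A real broker would match targets server-side instead.)
    job = job_queue.get()
    if job["target"] in ("*", minion_id):
        return job
    job_queue.put(job)  # not for this minion; requeue it
    return None

publish_job("web-1", "test.ping")
job = next_job_for("web-1")
```

Because the queue holds no master identity, masters become interchangeable workers, which is what makes autoscaling them practical.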

The architecture below wouldn't work for all transports, but it's certainly feasible given the current work on the websocket transport:

```mermaid
graph BT
    %% Define styles
    classDef defaultStyle fill:#333333, color:#ffffff, stroke:#ffffff, stroke-width:1px;
    classDef nodeStyle fill:#555555, color:#ffffff, stroke:#ffffff, stroke-width:1px;
    classDef specialStyle fill:#444444, color:#ffdd57, stroke:#ffffff, stroke-width:1px;

    %% Subgraph for Region 1
    subgraph region1
        minions1([Minions - Region 1])
        dns1{DNS Resolver 1}
        lb1(Load Balancer 1)
        master11(Master 1.1)
        master12(Master 1.2)
    end

    %% Subgraph for Region 2
    subgraph region2
        minions2([Minions - Region 2])
        dns2{DNS Resolver 2}
        lb2(Load Balancer 2)
        master21(Master 2.1)
        master22(Master 2.2)
    end

    %% Shared Job Queue
    queue[Shared Job Queue]

    %% Connections
    minions1 -->|Uses DNS to resolve 'salt'| dns1
    dns1 -->|Points to| lb1
    lb1 --> master11
    lb1 --> master12
    minions2 -->|Uses DNS to resolve 'salt'| dns2
    dns2 -->|Points to| lb2
    lb2 --> master21
    lb2 --> master22
    master11 --> queue
    master12 --> queue
    master21 --> queue
    master22 --> queue

    %% Apply styles
    class minions1,minions2 nodeStyle;
    class dns1,dns2 specialStyle;
    class lb1,lb2,master11,master12,master21,master22 defaultStyle;
    class queue specialStyle;
```
