k3kcli UX enhancement #206

Open
enrichman opened this issue Jan 22, 2025 · 2 comments
Labels: cli, enhancement, priority/1

Comments

@enrichman
Collaborator

enrichman commented Jan 22, 2025

We have now improved the Cluster spec a bit, and we should think about aligning the k3kcli with the new changes.

Some proposed changes:

k3kcli cluster create

Currently the Cluster name is a flag (--name value). Together with the namespace, it is now the only required field, so it would make sense to turn it into the single required positional argument instead:

k3kcli cluster create mycluster

Regarding --namespace: if it is not specified, it looks cleaner to have a dedicated namespace, i.e. the above k3kcli cluster create mycluster command would create the Cluster inside a new k3k-mycluster namespace. If the --namespace flag is set, we will try to create the resource in that namespace, without checking for its existence. We could eventually think about adding a --create-namespace flag (like Helm).
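
For illustration, a rough sketch of the proposed behavior (the k3k-<name> prefix and the --create-namespace flag are proposals from this issue, not implemented behavior):

k3kcli cluster create mycluster                   # Cluster created in a new k3k-mycluster namespace
k3kcli cluster create mycluster --namespace my-ns # Cluster created in the my-ns namespace (existence not checked)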

The --version flag help should be updated to indicate that the k8s host version will be used by default.

By default a NodePort is set. I think we should provide an --expose flag and maybe not expose the Cluster by default, suggesting the use of kubectl port-forward instead. The flag could accept options like --expose nodeport. This needs some thought.
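
A hedged sketch of how this could look; the --expose flag is only a proposal, and the Service name and port in the port-forward example are placeholders, not the actual k3k resources:

k3kcli cluster create mycluster --expose nodeport
kubectl port-forward -n k3k-mycluster svc/<server-service> 6443:<service-port>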

We could provide a --cluster-kubeconfig flag to let the user choose the output file where the kubeconfig is written.
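
For example, with the proposed flag:

k3kcli cluster create mycluster --cluster-kubeconfig ./mycluster-kubeconfig.yaml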

As per another issue, the default kubeconfig should be taken into account if the KUBECONFIG var is empty.
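
A minimal Go sketch of that fallback, assuming we rely on client-go's standard loading rules (which already implement the precedence shown in kubectl config --help: explicit path, then $KUBECONFIG, then $HOME/.kube/config):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// NewDefaultClientConfigLoadingRules honors $KUBECONFIG and falls back to
	// $HOME/.kube/config when the variable is empty.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()

	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		fmt.Println("no kubeconfig found:", err)
		return
	}
	fmt.Println("targeting API server:", cfg.Host)
}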

k3kcli cluster list

A list command would be nice to have.
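
Assuming the Cluster CRD is registered under the k3k.io API group (an assumption, not confirmed here), the command would roughly map to:

k3kcli cluster list
# roughly equivalent to: kubectl get clusters.k3k.io --all-namespaces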

k3kcli cluster delete

We should test and fix what is not working, and align this with the create command.

other

We should document how to enable the autocomplete (I wasn't able to).

Related issues:

enrichman added the enhancement and cli labels on Jan 22, 2025
@VestigeJ

+1 for mirroring how kubectl already works for naming a resource, i.e.

k3k cluster create my_cluster_name

+1 for mirroring the use of a default namespace, as it's also an existing expectation from anyone familiar with kubectl commands against a cluster

k3k cluster create my_cluster_name
-> deploying my_cluster_name to k3k-default...

As it's expected that k3k runs under an existing cluster, it would make the most sense to create a directory adjacent to the existing config:

sudo ls /etc/rancher/k3s/k3k.d/my_cluster_name.yaml

sudo ls /etc/rancher/rke2/k3k.d/my_cluster_name.yaml

ls ~/.kube/k3k.d/my_cluster_name.yaml

If KUBECONFIG isn't set the command should fail the same way it does against a regular cluster.

k3k cluster list == kubectl get clusters (assuming no clusters are created with cluster-api here; maybe check their type)

$ kubectl config --help

Modify kubeconfig files using subcommands like "kubectl config set
current-context my-context".

 The loading order follows these rules:

  1.  If the --kubeconfig flag is set, then only that file is loaded. The flag
may only be set once and no merging takes place.
  2.  If $KUBECONFIG environment variable is set, then it is used as a list of
paths (normal path delimiting rules for your system). These paths are merged.
When a value is modified, it is modified in the file that defines the stanza.
When a value is created, it is created in the first file that exists. If no
files in the chain exist, then it creates the last file in the list.
  3.  Otherwise, ${HOME}/.kube/config is used and no merging takes place.

Available Commands:
  current-context   Display the current-context
  delete-cluster    Delete the specified cluster from the kubeconfig
  delete-context    Delete the specified context from the kubeconfig
  delete-user       Delete the specified user from the kubeconfig
  get-clusters      Display clusters defined in the kubeconfig
  get-contexts      Describe one or many contexts
  get-users         Display users defined in the kubeconfig
  rename-context    Rename a context from the kubeconfig file
  set               Set an individual value in a kubeconfig file
  set-cluster       Set a cluster entry in kubeconfig
  set-context       Set a context entry in kubeconfig
  set-credentials   Set a user entry in kubeconfig
  unset             Unset an individual value in a kubeconfig file
  use-context       Set the current-context in a kubeconfig file
  view              Display merged kubeconfig settings or a specified kubeconfig
file

Usage:
  kubectl config SUBCOMMAND [options]

Use "kubectl config <command> --help" for more information about a given
command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).

@enrichman
Collaborator Author

@VestigeJ thanks for the feedback.

+1 for mirroring the use of a default namespace, as it's also an existing expectation from anyone familiar with kubectl commands against a cluster

k3k cluster create my_cluster_name
-> deploying my_cluster_name to k3k-default...

Here the idea was to create a dedicated namespace "per cluster", something like k3k-my-cluster-name, to avoid having multiple clusters in the same namespace. That is still possible, but it is something the user should be aware of (because of the networking implications). So, by giving every Cluster its own namespace by default, we try to isolate them.


As it's expected that k3k runs under an existing cluster, it would make the most sense to create a directory adjacent to the existing config:

sudo ls /etc/rancher/k3s/k3k.d/my_cluster_name.yaml

sudo ls /etc/rancher/rke2/k3k.d/my_cluster_name.yaml

ls ~/.kube/k3k.d/my_cluster_name.yaml

That would be nice. I would use the XDG specification and have a $HOME/.config/.k3k directory, but I like the idea of having a consistent place where to store/generate the configs, if not specified otherwise.
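
A minimal Go sketch, assuming os.UserConfigDir() is used to resolve the XDG base directory (the k3k subdirectory name and the file pattern are just examples):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// defaultKubeconfigPath returns where a generated kubeconfig could be stored
// when the user does not specify an output file explicitly.
func defaultKubeconfigPath(clusterName string) (string, error) {
	// os.UserConfigDir resolves $XDG_CONFIG_HOME and falls back to $HOME/.config on Linux.
	dir, err := os.UserConfigDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(dir, "k3k", clusterName+"-kubeconfig.yaml"), nil
}

func main() {
	p, _ := defaultKubeconfigPath("mycluster")
	fmt.Println(p) // e.g. /home/user/.config/k3k/mycluster-kubeconfig.yaml
}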


If KUBECONFIG isn't set the command should fail the same way it does against a regular cluster.

As the kubectl config --help output you posted shows, this is not the expected behavior. We are currently failing, but we should first fall back to ${HOME}/.kube/config. If the user is currently using kubectl with the default kubeconfig, I'd expect the same cluster to be targeted.

  1. If the --kubeconfig flag is set, then only that file is loaded. The flag
    may only be set once and no merging takes place.
  2. If $KUBECONFIG environment variable is set, then it is used as a list of
    paths (normal path delimiting rules for your system). These paths are merged.
    When a value is modified, it is modified in the file that defines the stanza.
    When a value is created, it is created in the first file that exists. If no
    files in the chain exist, then it creates the last file in the list.
  3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.

Since the $KUBECONFIG var is a list of config paths, we could even think about having some utilities to manage that and, for example, load all the kubeconfigs in the .k3k folder.
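
A hedged Go sketch of such a utility, assuming the generated kubeconfigs live in a single k3k config directory; it only builds a KUBECONFIG-style value out of whatever files are found there:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// buildKubeconfigEnv joins every kubeconfig found in dir into a value that can
// be exported as KUBECONFIG (paths separated by the OS path list separator).
func buildKubeconfigEnv(dir string) (string, error) {
	files, err := filepath.Glob(filepath.Join(dir, "*-kubeconfig.yaml"))
	if err != nil {
		return "", err
	}
	return strings.Join(files, string(os.PathListSeparator)), nil
}

func main() {
	home, _ := os.UserHomeDir()
	v, _ := buildKubeconfigEnv(filepath.Join(home, ".config", "k3k"))
	fmt.Println("KUBECONFIG=" + v)
}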

We should also remember to fix a small issue I just thought about. It looks like the kubeconfig currently uses the <cluster-name>-kubeconfig.yaml pattern, which means that two clusters with the same name in different namespaces could conflict.
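
One possible fix, sketched only as a suggestion, is to include the namespace in the generated file name:

package main

import "fmt"

// kubeconfigFileName avoids collisions between clusters with the same name in
// different namespaces by prefixing the file name with the namespace,
// e.g. namespace k3k-mycluster and cluster mycluster -> k3k-mycluster-mycluster-kubeconfig.yaml.
func kubeconfigFileName(namespace, clusterName string) string {
	return fmt.Sprintf("%s-%s-kubeconfig.yaml", namespace, clusterName)
}

func main() {
	fmt.Println(kubeconfigFileName("k3k-mycluster", "mycluster"))
}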
