
Adding TLS secret to a namespace that does not exist yet #14

Open
blaziq opened this issue Nov 11, 2024 · 7 comments
blaziq commented Nov 11, 2024

When using manual TLS configuration, the deployment guide says to create a TLS secret in a namespace:

kubectl create secret tls <secret-name> \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  -n <namespace>

But at this point the namespace does not exist yet; it is only created afterwards by Helm, e.g.

helm upgrade -i minio minio/minio \
  --version 5.2.0 \
  --values server/generated-values.yaml \
  --namespace minio \
  --create-namespace

@james-hinton
Contributor

I can see how this would happen. Either a fix could go into the Manual TLS section to make it generic, or the creation of namespaces could become the first step for each Building Block, with --create-namespace removed from the helm upgrade.

I can use this issue to create a branch from.

Using MinIO as an example with creating the namespace as the first step:


Deployment Steps

1. Create the Namespace

Before proceeding with the configuration and deployment, create the minio namespace.

kubectl create namespace minio

2. Configure MinIO

Run the configuration script:

bash configure-minio.sh

During the script execution, you will be prompted for:

  • INGRESS_HOST: Base domain for ingress hosts.
    • Example: example.com
  • CLUSTER_ISSUER (if using cert-manager): Name of the ClusterIssuer.
    • Example: letsencrypt-prod
  • STORAGE_CLASS: Storage class for persistent volumes.
    • Example: default

Important Notes:

  • If you choose not to use cert-manager, you will need to create the TLS secrets manually before deploying.
    • The required TLS secret names are:
      • minio-tls
      • minio-console-tls
    • For instructions on creating TLS secrets manually, please refer to the Manual TLS Certificate Management section in the TLS Certificate Management Guide.
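For completeness, creating those two secrets manually (after step 1 has created the namespace) might look like this; the certificate/key paths are placeholders:

```shell
# Placeholder paths; substitute your actual certificate and key files.
kubectl create secret tls minio-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  -n minio

kubectl create secret tls minio-console-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  -n minio
```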

@rconway , what do you think?

@spinto

spinto commented Nov 12, 2024

We could also take a simpler approach and just say in the "Manual TLS" section that you need to install a wildcard certificate on your ingress for all the *. subdomains, and that is it. We would then remove all the info about creating a certificate for each domain. Issue #15 would ensure everything works with the single subdomain. In the end, Let's Encrypt is the version for production; manual TLS is more for demo, development or internal environments, where security concerns are lower and people may actually be happier to create one certificate for everything, do it once when configuring the ingress, and be done.
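As a concrete sketch of this simpler approach for a demo/internal environment (the domain example.com and file names are placeholders, not prescribed anywhere):

```shell
# Generate a self-signed wildcard certificate for the placeholder
# domain *.example.com, valid for one year (requires OpenSSL 1.1.1+).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout wildcard.key -out wildcard.crt \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com"
```

The resulting pair can then be loaded once as a TLS secret for the ingress, e.g. with `kubectl create secret tls wildcard-tls --cert=wildcard.crt --key=wildcard.key`.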

@rconway
Contributor

rconway commented Nov 13, 2024

@james-hinton The manual TLS can be a post-requisite, so that it is performed after the BB deploy - this is effectively what happens when letsencrypt is used.

@spinto I agree that #15 is a good way to go - but I do not think this will help with the namespace problem - since the secret must be duplicated into the namespace of the ingress.

@spinto

spinto commented Nov 13, 2024

@spinto I agree that #15 is a good way to go - but I do not think this will help with the namespace problem - since the secret must be duplicated into the namespace of the ingress.

But the namespace of the ingress is created only once, during cluster configuration. Rancher actually uses an already existing namespace for this, and also provides means in the GUI to very easily add a certificate to the ingress, either at the beginning or later (see https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-resources-setup/encrypt-http-communication for example). In our "Manual TLS" page we should just point, for non-production and internal environments with no internet access, to the Rancher page mentioned above for the configuration of a wildcard TLS certificate.

@james-hinton james-hinton self-assigned this Nov 13, 2024
@rconway
Contributor

rconway commented Nov 13, 2024

I'm not familiar with the Rancher UI.
But to my understanding (maybe incorrect), the Ingress must be in the same namespace as the Service to which it relates, and the TLS Secret must be in the same namespace as the Ingress.

@spinto

spinto commented Nov 13, 2024

Ah, yes, sorry, I was not clear.

Yes, the Ingress needs to be in the same namespace as the Service to which it relates, but the Ingress Controller does not. In the Manual TLS scenario you would install the certificate in the Ingress Controller directly, for all the Ingresses in all the namespaces served by that controller.

So, what I am proposing is that, in the prerequisites page, we say that EOEPCA recommends, for production, an Ingress Controller with a load balancer providing an external IP (accessible from the internet), to which a wildcard DNS entry *. is mapped, plus cert-manager configured with a Let's Encrypt certificate provider. For development/testing/internal deployments, where there is no external IP, you can instead manually install a wildcard certificate for *. in your default Ingress Controller, and that is it.
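With the NGINX Ingress Controller, installing a wildcard certificate at the controller level maps to its --default-ssl-certificate flag. A sketch, assuming the ingress-nginx Helm chart; the secret name, namespace, and certificate files are illustrative:

```shell
# Create the wildcard TLS secret once, in the controller's own namespace
# (names here are assumptions, not prescribed by EOEPCA).
kubectl create secret tls wildcard-tls \
  --cert=wildcard.crt --key=wildcard.key \
  -n ingress-nginx

# Point the controller at it as the default certificate, so any Ingress
# served by this controller can use TLS without a per-namespace secret.
helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
  -n ingress-nginx \
  --set controller.extraArgs.default-ssl-certificate=ingress-nginx/wildcard-tls
```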

If the CLUSTER_ISSUER variable is then empty, you would simply not set the cert-manager.io/cluster-issuer annotation in the Helm charts. In theory, you could also leave the annotation set to an empty value, although this will generate some errors in the K8s deployment, as K8s will try to find a cert-manager issuer which does not exist...

What we have in the ESA Cloud, for example, is a DNS entry for *. resolving to an internal IP address, which maps to an HAProxy LB on the internal network. This forwards in round-robin to the 4 nodes of the K8s cluster hosting the NGINX Ingress Controller (a K8s DaemonSet), which then forwards over the internal virtual K8s namespace network to the pod, according to the Ingress rules specified in the Helm charts...

@rconway
Contributor

rconway commented Nov 14, 2024

Yes, the Ingress needs to be in the same namespace as the Service to which it relates, but the Ingress Controller does not. In the Manual TLS scenario you would install the certificate in the Ingress Controller directly, for all the Ingresses in all the namespaces served by that controller.

Yes, you are right - I was forgetting about this possibility.

I am currently looking at the Ingress Controller - to take account of the possible use of APISIX for better IAM integration. In doing so I can try to take account of your suggested approach.
