/kind bug
What steps did you take and what happened:
I want to expose the control plane API on port 443 (for security reasons, only traffic on port 443 is allowed in our environment). For the control plane itself this is no problem via the KubeadmControlPlane manifest (see the sketch below).
However, I ran into an issue with the Azure load balancer.
In the AzureCluster I set spec.controlPlaneEndpoint.port: 443, and I would expect the load balancer to follow that and create a matching rule and health probe. Instead, the LB is still created with a 6443:6443 rule using a probe on port 6443, so I have to modify it manually (a workaround sketch follows the manifest below).
I have also used the newly implemented additionalAPIServerLBPorts, but the result is the same: the original 6443 rule with its health probe remains. The only difference is a new 443:443 LB rule, which again uses the original health probe on port 6443.
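
For context, this is roughly how the control plane side itself is bound to 443 via KubeadmControlPlane (a minimal sketch; unrelated fields such as replicas, version, and machineTemplate are omitted, and the name is a placeholder):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane  # placeholder name
spec:
  kubeadmConfigSpec:
    initConfiguration:
      localAPIEndpoint:
        bindPort: 443  # API server port on the node kubeadm init runs on
    joinConfiguration:
      controlPlane:
        localAPIEndpoint:
          bindPort: 443  # API server port on joining control plane nodes
```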
AzureCluster:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
  name: {{ include "cluster.name" .}}
  labels:
    cluster.x-k8s.io/cluster-name: {{ include "cluster.name" .}}
spec:
  identityRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureClusterIdentity
    name: cluster-identity
  location: "{{ .Values.azure.location }}"
  controlPlaneEndpoint:
    port: 443
  networkSpec:
    additionalAPIServerLBPorts:
      - name: test
        port: 443
    privateDNSZoneName: "{{ .Values.global.clusterName }}"
    apiServerLB:
      type: "Internal"
      name: "{{ include "cluster.name" .}}-api-loadbalancer"
      frontendIPs:
        - name: "{{ include "cluster.name" .}}-api-loadbalancer-ip"
          privateIP: {{ .Values.global.controlPlaneVip }}
```
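
As a workaround I currently fix the rule and probe by hand, e.g. via the Azure CLI. A hedged sketch, assuming the default CAPZ-managed LB; the resource group, rule, and probe names are placeholders:

```sh
# Point the existing health probe at 443 instead of 6443 (names are placeholders)
az network lb probe update \
  --resource-group <node-resource-group> \
  --lb-name <cluster>-api-loadbalancer \
  --name <probe-name> \
  --port 443

# Rewrite the original 6443:6443 rule to 443:443
az network lb rule update \
  --resource-group <node-resource-group> \
  --lb-name <cluster>-api-loadbalancer \
  --name <rule-name> \
  --frontend-port 443 \
  --backend-port 443
```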
What did you expect to happen:
I would expect the created load balancer to follow controlPlaneEndpoint.port and create a matching rule and health probe.
Anything else you would like to add:
N/A
Environment:
- cluster-api-provider-azure version: v1.20.2
- Kubernetes version (use `kubectl version`): 1.31.6
- OS (e.g. from `/etc/os-release`): Ubuntu 24.04.1