Hi,
In the past we configured k8gb to communicate with our BIND server to add, update, and delete CNAME records. To give you a little perspective, we have multiple ingress controller instances on a given Kubernetes cluster, and an application on that cluster can sit behind any one of those ingress controllers. At the time we had configured the k8gb externalDNS deployment to write CNAME records into the BIND server. This all worked very well, and we were able to successfully use k8gb to load balance and fail over our services running on multiple GSLB clusters.
We have now switched to using Infoblox as our edge DNS. Based on the documentation we read, the external-dns template in the k8gb Helm chart (v0.14.0) deploys an instance of externalDNS only when using one of a handful of DNS providers, and Infoblox is not listed in the if condition as one of them:
```
{{- if or .Values.ns1.enabled .Values.route53.enabled .Values.rfc2136.enabled .Values.azuredns.enabled .Values.cloudflare.enabled }}
```
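For reference, here is how we are enabling the Infoblox provider in the chart instead. This is a sketch from our values file; the host is hypothetical and the exact key names may differ between chart versions:

```yaml
# values.yaml (k8gb chart) -- with Infoblox, the k8gb controller talks to
# the WAPI directly rather than deploying externalDNS. Key names are our
# assumption based on the v0.14.x chart and may differ in other versions.
infoblox:
  enabled: true
  gridHost: infoblox.example.internal   # hypothetical grid manager host
  wapiPort: 443
  wapiVersion: "2.3.1"
  sslVerify: true
```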
I believe the k8gb controller manages the API interaction with Infoblox itself, and it appears there are some limitations in externalDNS's Infoblox support that led to this decision. Where can I specify the specific external IP I want to assign to this LoadBalancer, rather than have it pick the next available one?
What we see is the k8gb controller writing an NS record delegating our configured zone to the k8gb CoreDNS, and then writing/updating the corresponding A record in Infoblox with the IP address of the k8gb CoreDNS it is delegating to.
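In zone-file notation, the record pair we see k8gb maintain looks roughly like this (zone name, NS name, and IP are made up for illustration):

```
; hypothetical example -- parent zone delegating cloud.example.com to k8gb
cloud.example.com.              IN NS  gslb-ns-eu-cloud.example.com.
gslb-ns-eu-cloud.example.com.   IN A   10.1.2.3   ; k8gb CoreDNS IP
```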
This would all be fine; however, we notice that each time we configure an application in the cluster to be globally load balanced, the k8gb controller tries to update that same A record to reflect the ingress controller IP associated with the ingress class defined in the corresponding Ingress resource for that GSLB application. This is obviously not desirable in a scenario where our cluster applications can sit behind any one of the multiple ingress controllers configured on that cluster.
There is an option we had been pondering, but we are not sure exactly how to configure k8gb to do it:
Essentially, not have k8gb update Infoblox at all; we would manually add an NS record and an A record in Infoblox for the k8gb CoreDNS to delegate to. The k8gb CoreDNS would then return the correct ingress controller IP associated with the healthy service for the app. Keep in mind that we have several clusters that we would globally load balance, so we would have a k8gb instance running on each one of those clusters.
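For illustration, the manually maintained records we have in mind would look roughly like this for two clusters (zone name, NS names, and IPs are all hypothetical; since k8gb derives NS names from the configured geotags, the actual names would depend on our configuration):

```
; hypothetical manual delegation for two clusters (eu and us geotags)
cloud.example.com.              IN NS  gslb-ns-eu-cloud.example.com.
cloud.example.com.              IN NS  gslb-ns-us-cloud.example.com.
gslb-ns-eu-cloud.example.com.   IN A   10.1.2.3   ; eu cluster k8gb CoreDNS
gslb-ns-us-cloud.example.com.   IN A   10.4.5.6   ; us cluster k8gb CoreDNS
```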
However, since my last posting of this issue in the original Discussion back in May 2025, we've moved from k8gb v0.14.0 to v0.15.0-rc2.
What we discovered is that there is a new label, "k8gb.io/ip-source", that we can set to "true". This does what we want (adding a single A record in Infoblox that won't keep changing each time we add a new globally load-balanced application to the cluster); however, from what I've been able to determine, it forces us to already have an Ingress resource defined in our Kubernetes cluster carrying the "k8gb.io/ip-source" label, which may not be the case (especially in our use case).
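If we understand the feature correctly, it would require a placeholder Ingress along these lines. Everything except the label name (which we took from the v0.15.0-rc2 release) is hypothetical on our part:

```yaml
# hypothetical placeholder Ingress whose status IP k8gb would use as the
# stable delegation A record; only the k8gb.io/ip-source label is taken
# from v0.15.0-rc2, the rest is our guess
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8gb-ip-source
  namespace: k8gb
  labels:
    k8gb.io/ip-source: "true"
spec:
  ingressClassName: nginx-primary   # hypothetical ingress class
  defaultBackend:
    service:
      name: k8gb-coredns            # hypothetical backend service
      port:
        number: 53
```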
Based on the documentation, it's true that we can also define the k8gb CoreDNS service to be of type LoadBalancer, but that would require us to allocate a separate external IP for k8gb.
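For completeness, the LoadBalancer variant we understood from the docs would look roughly like the following. The key names are our assumption based on the upstream coredns subchart, and whether a pre-allocated IP can be honored depends on the load-balancer implementation in our environment:

```yaml
# values.yaml (k8gb chart) -- expose the embedded CoreDNS via a dedicated
# external IP instead of deriving one from an Ingress. Keys assumed from
# the upstream coredns subchart.
coredns:
  serviceType: LoadBalancer
  service:
    loadBalancerIP: 10.1.2.53   # hypothetical pre-allocated external IP
```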
Assuming our understanding is correct, what do you suggest is the best approach for this issue? We would really appreciate your input and guidance. Please let me know if we can provide any additional information to clarify anything. Also, I should add that our clusters run on an isolated network (not connected to the public internet).
Thanks