
After upgrading to version 4.12, a strange error appears: "annotation group StreamSnippet contains risky annotation based on ingress configuration" #12656

Closed
yaroslav-nakonechnikov opened this issue Jan 10, 2025 · 3 comments
Labels
kind/bug (Categorizes issue or PR as related to a bug.), needs-priority, needs-triage (Indicates an issue or PR lacks a `triage/foo` label and requires one.)

Comments


yaroslav-nakonechnikov commented Jan 10, 2025

What happened:

After upgrading from 4.11 to 4.12, with the stream-snippet config left unchanged, a strange error appeared:

Error: Failed to create Ingress 'namespace/eks-44459-ingress' because: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: annotation group StreamSnippet contains risky annotation based on ingress configuration

but the release notes for the Helm chart and the controller say nothing about this.
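
For reference, a quick way to see which risk level the controller is actually running with (a minimal sketch, assuming the release name and namespace used elsewhere in this report, i.e. the ingress-nginx-controller ConfigMap in the ingress namespace):

# assumes the Helm chart's default ConfigMap name and the "ingress" namespace from this report
kubectl -n ingress get configmap ingress-nginx-controller -o yaml | grep annotations-risk-level

If this prints nothing, the key is unset and the controller falls back to its built-in default, which in v1.12 no longer permits Critical-risk annotations such as the snippet group.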

What you expected to happen:
It should keep working as it did before the upgrade.

NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):

[user@ip-100-65-11-122 /]$ kubectl exec -itn ingress ingress-nginx-controller-798d4c8fc8-lpfwr -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.12.0
  Build:         ba73b2c24d355f1cdcf4b31ef7c5574059f12118
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.5

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

[user@ip-100-65-11-122 /]$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"31", GitVersion:"v1.31.4-eks-2d5f260", GitCommit:"fc7cb496f47c4d5687532c5d1850bf20bdabeecb", GitTreeState:"clean", BuildDate:"2024-12-12T20:56:32Z", GoVersion:"go1.22.9", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.25) and server (1.31) exceeds the supported minor version skew of +/-1

Environment:

  • Cloud provider or hardware configuration: AWS EKS with managed nodes

  • OS: bottlerocket

  • How was the ingress-nginx-controller installed:

  • helm:

[user@ip-100-65-11-122 /]$ helm ls -A | grep -i ingress
ingress-nginx                   ingress         1               2025-01-09 16:08:43.809926403 +0000 UTC deployed        ingress-nginx-4.12.0                    1.12.0

How to reproduce this issue:

Install ingress-nginx with Helm via Terraform:

resource "helm_release" "ingress_nginx" {
  name              = "ingress-nginx"
  chart             = "https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-${local.ingress_nginx_chart}/ingress-nginx-${local.ingress_nginx_chart}.tgz"
  namespace         = kubernetes_namespace.ingress.id
  wait_for_jobs     = true
  cleanup_on_fail   = true
  force_update      = true
  replace           = true
  lint              = true
  dependency_update = true

  dynamic "set" {
    for_each = {
      "controller.config.retry-non-idempotent"      = "true"
      "controller.config.allow-snippet-annotations" = "true"
      "controller.config.log-format-escape-none"    = "true"
      "controller.config.log-format-escape-json"    = "true"
      "controller.config.log-format-upstream"       = "\\{\"module\":\"upstreamlog\"\\,\"src_ip\":\"$remote_addr\"\\, \"username\":\"$remote_user\"\\,\"timestamp\":\"$time_local\"\\, \"request\":\"$request\"\\, \"status\":\"$status\"\\, \"bytes_sent\":\"$body_bytes_sent\"\\, \"http_referer\":\"$http_referer\"\\, \"http_user_agent\":\"$http_user_agent\"\\, \"req_len\":$request_length\\, \"req_time\":\"$request_time\"\\,\"proxy_upstream_name\":\"$proxy_upstream_name\"\\, \"proxy_alternative_upstream_name\":\"$proxy_alternative_upstream_name\"\\, \"upstream_addr\":\"$upstream_addr\"\\, \"upstream_response_length\":\"$upstream_response_length\"\\,\"upstream_response_time\":\"$upstream_response_time\"\\, \"upstream_status\":\"$upstream_status\"\\, \"req_id\":\"$req_id\"\\, \"service_name\":\"$service_name\"\\}"
      "controller.config.log-format-stream"         = "\\{\"module\":\"log_stream\"\\,\"src_ip\":\"$remote_addr\"\\,\"timestamp\":\"$time_local\"\\,\"protocol\":\"$protocol\"\\,\"status\":\"$status\"\\,\"bytes_out\":$bytes_sent\\,\"bytes_in\":$bytes_received\\,\"session_time\":\"$session_time\"\\,\"upstream_addr\":\"$upstream_addr\"\\,\"upstream_bytes_out\":\"$upstream_bytes_sent\"\\,\"upstream_bytes_in\":\"$upstream_bytes_received\"\\,\"upstream_connect_time\":\"$upstream_connect_time\"\\,\"proxy_upstream_name\":\"$proxy_upstream_name\"\\}"
      "controller.kind"                             = "Deployment"

      "controller.service.type"       = "ClusterIP"
      "controller.service.enableHttp" = "false"

      "controller.resources.requests.cpu"       = local.private_env ? "128m" : "2000m"
      "controller.resources.requests.memory"    = local.private_env ? "512Mi" : "1024Mi"
      "controller.resources.limits.cpu"         = local.private_env ? "128m" : "2000m"
      "controller.resources.limits.memory"      = local.private_env ? "512Mi" : "1024Mi"
      "controller.ingressClassResource.default" = "true"

      # doesn't work with the current version of the ingress-nginx chart
      # so we have to patch it
      # "controller.podSecurityContext.sysctls[1].name"  = "net.core.somaxconn"
      # "controller.podSecurityContext.sysctls[1].value" = "\"32768\""
      "controller.podSecurityContext.sysctls[0].name"  = "net.ipv4.ip_local_port_range"
      "controller.podSecurityContext.sysctls[0].value" = "1024 60000"
      "sysctls.net\\.core\\.somaxconn"                 = "32768"
      "sysctls.net\\.ipv4\\.ip_local_port_range"       = "1024 60000"

      "controller.extraVolumeMounts[0].name"         = "indexer"
      "controller.extraVolumeMounts[0].mountPath"    = "/mnt/indexer"
      "controller.extraVolumeMounts[0].readOnly"     = "true"
      "controller.extraVolumes[0].name"              = "indexer"
      "controller.extraVolumes[0].secret.secretName" = "indexer"

      "controller.extraVolumeMounts[1].name"         = "ingress"
      "controller.extraVolumeMounts[1].mountPath"    = "/mnt/ingress"
      "controller.extraVolumeMounts[1].readOnly"     = "true"
      "controller.extraVolumes[1].name"              = "ingress"
      "controller.extraVolumes[1].secret.secretName" = "ingress"

      "controller.tolerations[0].key"      = "function"
      "controller.tolerations[0].operator" = "Equal"
      "controller.tolerations[0].value"    = "ingress"
      "controller.tolerations[0].effect"   = "NoSchedule"

      "controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key"       = "karpenter.sh/nodepool"
      "controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator"  = "In"
      "controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]" = "${local.cluster_name}-ingress"

      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"                                                      = "100"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key"       = "app.kubernetes.io/component"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator"  = "In"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]" = "controller"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey"                                 = "topology.kubernetes.io/zone"

      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].weight"                                                      = "99"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].podAffinityTerm.labelSelector.matchExpressions[0].key"       = "app.kubernetes.io/component"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].podAffinityTerm.labelSelector.matchExpressions[0].operator"  = "In"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].podAffinityTerm.labelSelector.matchExpressions[0].values[0]" = "controller"
      "controller.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].podAffinityTerm.topologyKey"                                 = "topology.kubernetes.io/hostname"

      "controller.topologySpreadConstraints[0].maxSkew"                                     = "1"
      "controller.topologySpreadConstraints[0].topologyKey"                                 = "topology.kubernetes.io/zone"
      "controller.topologySpreadConstraints[0].whenUnsatisfiable"                           = "ScheduleAnyway"
      "controller.topologySpreadConstraints[0].labelSelector.matchExpressions[0].key"       = "app.kubernetes.io/component"
      "controller.topologySpreadConstraints[0].labelSelector.matchExpressions[0].operator"  = "In"
      "controller.topologySpreadConstraints[0].labelSelector.matchExpressions[0].values[0]" = "controller"

      "controller.keda.enabled" = "false"

      "tcp.8089" = "ingress/ingress-nginx-controller:443"
      "tcp.9999" = "ingress/ingress-nginx-controller:443"
    }
    content {
      name  = set.key
      value = set.value
    }
  }
}

Install the Ingress resource:

resource "kubernetes_ingress_v1" "ingress" {
  metadata {
    name      = "${local.cluster_name}-ingress"
    namespace = local.splunk_operator_namespace
    annotations = {
      #"kubernetes.io/ingress.class"                    = "nginx"
      "ingress.kubernetes.io/rewrite-target"                = "/"
      "nginx.ingress.kubernetes.io/default-backend"         = "splunk-${local.environment}-license-manager-service"
      "nginx.ingress.kubernetes.io/proxy-body-size"         = "0"
      "nginx.ingress.kubernetes.io/proxy-read-timeout"      = "600"
      "nginx.ingress.kubernetes.io/proxy-send-timeout"      = "600"
      "nginx.ingress.kubernetes.io/affinity"                = "cookie"
      "nginx.ingress.kubernetes.io/affinity-mode"           = "persistent"
      "nginx.ingress.kubernetes.io/session-cookie-name"     = "route"
      "nginx.ingress.kubernetes.io/session-cookie-expires"  = "172800"
      "nginx.ingress.kubernetes.io/session-cookie-max-age"  = "172800"
      "nginx.ingress.kubernetes.io/client-body-buffer-size" = "100M"
      "nginx.ingress.kubernetes.io/backend-protocol"        = "HTTPS"
      "nginx.ingress.kubernetes.io/session-cookie-samesite" = "true"
      "nginx.ingress.kubernetes.io/session-cookie-path"     = "/en-US"
      "nginx.ingress.kubernetes.io/stream-snippet" = tostring(
        templatefile("files/stream-snippet.conf", {
          tls_crt              = "/mnt/indexer/tls.crt",
          tls_key              = "/mnt/indexer/tls.key",
          ingress_tls_crt      = "/mnt/ingress/tls.crt",
          ingress_tls_key      = "/mnt/ingress/tls.key",
          splunk_lm            = "splunk-${local.environment}-license-manager-service.splunk-operator.svc.cluster.local:8089",
          splunk_cm            = "splunk-${local.environment}-cluster-manager-service.splunk-operator.svc.cluster.local:8089"
          }
        )
      )
    }
  }
  spec {
    ingress_class_name = "nginx"
    rule {
      host = "lm.${local.splunk_domain}"
      http {
        path {
          backend {
            service {
              name = "splunk-${local.environment}-license-manager-service"
              port { number = 8000 }
            }
          }
          path      = "/"
          path_type = "Prefix"
        }
        path {
          backend {
            service {
              name = "splunk-${local.environment}-license-manager-service"
              port { number = 8089 }
            }
          }
          path      = "/services"
          path_type = "Prefix"
        }
        path {
          backend {
            service {
              name = "splunk-${local.environment}-license-manager-service"
              port { number = 8089 }
            }
          }
          path      = "/servicesNS"
          path_type = "Prefix"
        }
      }
    }

    rule {
      host = "cm.${local.splunk_domain}"
      http {
        path {
          backend {
            service {
              name = "splunk-${local.environment}-cluster-manager-service"
              port { number = 8000 }
            }
          }
          path      = "/"
          path_type = "Prefix"
        }
        path {
          backend {
            service {
              name = "splunk-${local.environment}-cluster-manager-service"
              port { number = 8089 }
            }
          }
          path      = "/services"
          path_type = "Prefix"
        }
        path {
          backend {
            service {
              name = "splunk-${local.environment}-cluster-manager-service"
              port { number = 8089 }
            }
          }
          path      = "/servicesNS"
          path_type = "Prefix"
        }
      }
    }
    tls {
      secret_name = local.splunk_domain
      hosts       = ["${local.splunk_domain}"]
    }
  }
}

stream-snippet.conf:

ssl_certificate ${tls_crt};
ssl_certificate_key ${tls_key};

log_format prj_splunk_operator_proxy '$remote_addr [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr" "$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
access_log /var/log/nginx/stream.log prj_splunk_operator_proxy;

server { listen 10000 ssl; set $lm ${splunk_lm}; proxy_pass $lm; ssl_certificate ${ingress_tls_crt}; ssl_certificate_key ${ingress_tls_key}; proxy_ssl on; }
server { listen 10001 ssl; set $cm ${splunk_cm}; proxy_pass $cm; ssl_certificate ${ingress_tls_crt}; ssl_certificate_key ${ingress_tls_key}; proxy_ssl on; }

@yaroslav-nakonechnikov added the kind/bug label Jan 10, 2025
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-triage and needs-priority labels Jan 10, 2025
Gacko (Member) commented Jan 10, 2025

#12648
#12635
#12634

@yaroslav-nakonechnikov (Author)

Yes, thanks!

So I've added

      "controller.config.annotations-risk-level"    = "Critical"

to the resource "helm_release" "ingress_nginx" and it works.
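
For anyone applying the same fix without Terraform, an equivalent one-off Helm command (a sketch reusing the release name, namespace, and chart version from this report; note that raising the risk level to Critical re-enables the riskiest annotations for all Ingresses served by this controller):

# assumes the existing release "ingress-nginx" in namespace "ingress" and chart 4.12.0
helm upgrade ingress-nginx \
  https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.12.0/ingress-nginx-4.12.0.tgz \
  --namespace ingress \
  --reuse-values \
  --set controller.config.annotations-risk-level=Critical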
