
How to use SSL passthrough in nginx ingress controller without changing the listening port 443 #12262

Closed
kmarimuthu90 opened this issue Oct 29, 2024 · 8 comments
Labels
needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@kmarimuthu90

No description provided.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 29, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority labels Oct 29, 2024
@longwuyuan
Contributor

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

Example https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#kubernetesingress-nginx

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@kmarimuthu90
Author

kmarimuthu90 commented Oct 29, 2024

Change background:
We have to support IPv6 clients, but APIM is IPv4-only. So we plan to use a dual-stack NGINX ingress controller to accept both IPv4 and IPv6 traffic and forward it to the IPv4 backend (APIM).
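As a sketch of what such a dual-stack exposure can look like, here is a minimal LoadBalancer Service for the controller using the standard Kubernetes dual-stack fields (`ipFamilyPolicy`, `ipFamilies`). The name, namespace, and selector are illustrative, not taken from this issue:

```yaml
# Illustrative only: a dual-stack Service fronting the ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical name; adjust to your install
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack  # request IPv4 + IPv6 where the platform supports it
  ipFamilies: [IPv4, IPv6]
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
```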

How it usually works:
Devices connect to APIM by sending requests together with client certificates for the TLS handshake. We have configured a policy in APIM to verify the client's certificate on each request.

TLS termination is handled at the APIM custom domain. The flow is: client device -> APIM DNS (Cloudflare) -> APIM.

New dual-stack setup:
We have deployed a dual-stack ingress controller on the AKS cluster, exposed on port 443, with an Ingress that routes traffic to the backend external service (APIM), like below:
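Note that the `ssl-passthrough` annotation only takes effect if the controller itself was started with the `--enable-ssl-passthrough` command-line flag; it is disabled by default. A quick check (the deployment name and namespace are assumptions, adjust to your install):

```shell
kubectl -n ingress-nginx get deploy ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' \
  | grep -o 'enable-ssl-passthrough'
```

If the grep prints nothing, the annotation is silently ignored and 443 is served by nginx's own TLS termination instead of being passed through.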

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apim-ingress
  namespace: apim-proxy
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  #tls:
  #- hosts:
  #  - APIM-endpoint.com
  #  secretName: tls-secret-apim-sectigo
  ingressClassName: nginx
  rules:
  - host: APIM-endpoint.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apim-external
            port:
              name: https

Backend APIM service:

apiVersion: v1
kind: Service
metadata:
  name: apim-external
  namespace: apim-proxy
spec:
  type: ExternalName
  externalName:

Here is my nginx.conf:

# Configuration checksum: 15284697795965044210

# setup custom paths that do not require root access

pid /tmp/nginx/nginx.pid;

daemon off;

worker_processes 2;

worker_rlimit_nofile 1047552;

worker_shutdown_timeout 240s ;

events {
multi_accept on;
worker_connections 16384;
use epoll;

}

http {

lua_package_path "/etc/nginx/lua/?.lua;;";

lua_shared_dict balancer_ewma 10M;
lua_shared_dict balancer_ewma_last_touched_at 10M;
lua_shared_dict balancer_ewma_locks 1M;
lua_shared_dict certificate_data 20M;
lua_shared_dict certificate_servers 5M;
lua_shared_dict configuration_data 20M;
lua_shared_dict global_throttle_cache 10M;
lua_shared_dict ocsp_response_cache 5M;

init_by_lua_block {
    collectgarbage("collect")
   
    -- init modules
    local ok, res
   
    ok, res = pcall(require, "lua_ingress")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    lua_ingress = res
    lua_ingress.set_config({
        use_forwarded_headers = true,
        use_proxy_protocol = false,
        is_ssl_passthrough_enabled = true,
        http_redirect_code = 308,
        listen_ports = { ssl_proxy = "442", https = "443" },
       
        hsts = true,
        hsts_max_age = 31536000,
        hsts_include_subdomains = true,
        hsts_preload = false,
       
        global_throttle = {
            memcached = {
                host = "", port = 11211, connect_timeout = 50, max_idle_timeout = 10000, pool_size = 50,
            },
            status_code = 429,
        }
    })
    end
   
    ok, res = pcall(require, "configuration")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    configuration = res
    configuration.prohibited_localhost_port = '10246'
    end
   
    ok, res = pcall(require, "balancer")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    balancer = res
    end
   
    ok, res = pcall(require, "certificate")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    certificate = res
    certificate.is_ocsp_stapling_enabled = false
    end
   
    ok, res = pcall(require, "plugins")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    plugins = res
    end
    -- load all plugins that'll be used here
    plugins.init({  })
}

init_worker_by_lua_block {
    lua_ingress.init_worker()
    balancer.init_worker()
   
    plugins.run()
}

real_ip_header      X-Forwarded-For;

real_ip_recursive   on;

set_real_ip_from    0.0.0.0/0;

aio                 threads;

aio_write           on;

tcp_nopush          on;
tcp_nodelay         on;

log_subrequest      on;

reset_timedout_connection on;

keepalive_timeout  75s;
keepalive_requests 1000;

client_body_temp_path           /tmp/nginx/client-body;
fastcgi_temp_path               /tmp/nginx/fastcgi-temp;
proxy_temp_path                 /tmp/nginx/proxy-temp;

client_header_buffer_size       1k;
client_header_timeout           60s;
large_client_header_buffers     4 8k;
client_body_buffer_size         8k;
client_body_timeout             60s;

http2_max_concurrent_streams    128;

types_hash_max_size             2048;
server_names_hash_max_size      1024;
server_names_hash_bucket_size   64;
map_hash_bucket_size            64;

proxy_headers_hash_max_size     512;
proxy_headers_hash_bucket_size  64;

variables_hash_bucket_size      256;
variables_hash_max_size         2048;

underscores_in_headers          off;
ignore_invalid_headers          on;

limit_req_status                503;
limit_conn_status               503;

include /etc/nginx/mime.types;
default_type text/html;

# Custom headers for response

server_tokens off;

more_clear_headers Server;

# disable warnings
uninitialized_variable_warn off;

# Additional available variables:
# $namespace
# $ingress_name
# $service_name
# $service_port
log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

map $request_uri $loggable {
   
    default 1;
}

access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;

error_log  /var/log/nginx/error.log notice;

resolver 10.0.0.10 valid=30s;

# See https://www.nginx.com/blog/websocket-nginx
map $http_upgrade $connection_upgrade {
    default          upgrade;
   
    # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
    ''               '';
   
}

# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
# If no such header is provided, it can provide a random value.
map $http_x_request_id $req_id {
    default   $http_x_request_id;
   
    ""        $request_id;
   
}

# Create a variable that contains the literal $ character.
# This works because the geo module will not resolve variables.
geo $literal_dollar {
    default "$";
}

server_name_in_redirect off;
port_in_redirect        off;

ssl_protocols TLSv1.2 TLSv1.3;

ssl_early_data off;

# turn on session caching to drastically improve performance

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

# allow configuring ssl session tickets
ssl_session_tickets off;

# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;

# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;

ssl_ecdh_curve auto;

# PEM sha: daceca1de4866f338854becec11d2dfe45f25171
ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;

proxy_ssl_session_reuse on;

upstream upstream_balancer {
    ### Attention!!!
    #
    # We no longer create "upstream" section for every backend.
    # Backends are handled dynamically using Lua. If you would like to debug
    # and see what backends ingress-nginx has in its memory you can
    # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
    # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
    # inspect current backends.
    #
    ###
   
    server 0.0.0.1; # placeholder
   
    balancer_by_lua_block {
        balancer.balance()
    }
   
    keepalive 320;
    keepalive_time 1h;
    keepalive_timeout  60s;
    keepalive_requests 10000;
   
}

# Cache for internal auth checks
proxy_cache_path /tmp/nginx/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;

# Global filters

## start server _
server {
    server_name _ ;
   
    http2 on;
   
    listen 80 default_server reuseport backlog=4096 ;
    listen [::]:80 default_server reuseport backlog=4096 ;
    listen 442 proxy_protocol default_server reuseport backlog=4096 ssl;
    listen [::]:442 proxy_protocol default_server reuseport backlog=4096 ssl;
   
    set $proxy_upstream_name "-";
   
    ssl_reject_handshake off;
   
    ssl_certificate_by_lua_block {
        certificate.call()
    }
   
    location / {
       
        set $namespace      "";
        set $ingress_name   "";
        set $service_name   "";
        set $service_port   "";
        set $location_path  "";
        set $global_rate_limit_exceeding n;
       
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = false,
                force_no_ssl_redirect = false,
                preserve_trailing_slash = false,
                use_port_in_redirects = false,
                global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
            })
            balancer.rewrite()
            plugins.run()
        }
       
        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
        #access_by_lua_block {
        #}
       
        header_filter_by_lua_block {
            lua_ingress.header()
            plugins.run()
        }
       
        body_filter_by_lua_block {
            plugins.run()
        }
       
        log_by_lua_block {
            balancer.log()
           
            plugins.run()
        }
       
        access_log off;
       
        port_in_redirect off;
       
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "upstream-default-backend";
        set $proxy_host          $proxy_upstream_name;
        set $pass_access_scheme  $scheme;
       
        set $pass_server_port    $server_port;
       
        set $best_http_host      $http_host;
        set $pass_port           $pass_server_port;
       
        set $proxy_alternative_upstream_name "";
       
        client_max_body_size                    1m;
       
        proxy_set_header Host                   $best_http_host;
       
        # Pass the extracted client certificate to the backend
       
        # Allow websocket connections
        proxy_set_header                        Upgrade           $http_upgrade;
       
        proxy_set_header                        Connection        $connection_upgrade;
       
        proxy_set_header X-Request-ID           $req_id;
        proxy_set_header X-Real-IP              $remote_addr;
       
        proxy_set_header X-Forwarded-For        $remote_addr;
       
        proxy_set_header X-Forwarded-Host       $best_http_host;
        proxy_set_header X-Forwarded-Port       $pass_port;
        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
        proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
       
        proxy_set_header X-Scheme               $pass_access_scheme;
       
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
       
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy                  "";
       
        # Custom headers to proxied server
       
        proxy_connect_timeout                   5s;
        proxy_send_timeout                      60s;
        proxy_read_timeout                      60s;
       
        proxy_buffering                         off;
        proxy_buffer_size                       4k;
        proxy_buffers                           4 4k;
       
        proxy_max_temp_file_size                1024m;
       
        proxy_request_buffering                 on;
        proxy_http_version                      1.1;
       
        proxy_cookie_domain                     off;
        proxy_cookie_path                       off;
       
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream                     error timeout;
        proxy_next_upstream_timeout             0;
        proxy_next_upstream_tries               3;
       
        # Custom Response Headers
       
        proxy_pass http://upstream_balancer/;
       
        proxy_redirect                          off;
       
    }
   
    # health checks in cloud providers require the use of port 80
    location /healthz {
       
        access_log off;
        return 200;
    }
   
    # this is required to avoid error if nginx is being monitored
    # with an external software (like sysdig)
    location /nginx_status {
       
        allow 127.0.0.1;
       
        allow ::1;
       
        deny all;
       
        access_log off;
        stub_status on;
    }
   
}
## end server _

## start server APIM-endpoint.com
server {
    server_name APIM-endpoint.com ;
   
    http2 on;
   
    listen 80  ;
    listen [::]:80  ;
    listen 442 proxy_protocol  ssl;
    listen [::]:442 proxy_protocol  ssl;
   
    set $proxy_upstream_name "-";
   
    ssl_certificate_by_lua_block {
        certificate.call()
    }
   
    location / {
       
        set $namespace      "apim-proxy";
        set $ingress_name   "apim-ingress";
        set $service_name   "apim-external";
        set $service_port   "https";
        set $location_path  "/";
        set $global_rate_limit_exceeding n;
       
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                preserve_trailing_slash = false,
                use_port_in_redirects = false,
                global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
            })
            balancer.rewrite()
            plugins.run()
        }
       
        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
        #access_by_lua_block {
        #}
       
        header_filter_by_lua_block {
            lua_ingress.header()
            plugins.run()
        }
       
        body_filter_by_lua_block {
            plugins.run()
        }
       
        log_by_lua_block {
            balancer.log()
           
            plugins.run()
        }
       
        port_in_redirect off;
       
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "apim-proxy-apim-external-https";
        set $proxy_host          $proxy_upstream_name;
        set $pass_access_scheme  $scheme;
       
        set $pass_server_port    $server_port;
       
        set $best_http_host      $http_host;
        set $pass_port           $pass_server_port;
       
        set $proxy_alternative_upstream_name "";
       
        client_max_body_size                    1m;
       
        proxy_set_header Host                   $best_http_host;
       
        # Pass the extracted client certificate to the backend
       
        # Allow websocket connections
        proxy_set_header                        Upgrade           $http_upgrade;
       
        proxy_set_header                        Connection        $connection_upgrade;
       
        proxy_set_header X-Request-ID           $req_id;
        proxy_set_header X-Real-IP              $remote_addr;
       
        proxy_set_header X-Forwarded-For        $remote_addr;
       
        proxy_set_header X-Forwarded-Host       $best_http_host;
        proxy_set_header X-Forwarded-Port       $pass_port;
        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
        proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
       
        proxy_set_header X-Scheme               $pass_access_scheme;
       
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
       
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy                  "";
       
        # Custom headers to proxied server
       
        proxy_connect_timeout                   5s;
        proxy_send_timeout                      60s;
        proxy_read_timeout                      60s;
       
        proxy_buffering                         off;
        proxy_buffer_size                       4k;
        proxy_buffers                           4 4k;
       
        proxy_max_temp_file_size                1024m;
       
        proxy_request_buffering                 on;
        proxy_http_version                      1.1;
       
        proxy_cookie_domain                     off;
        proxy_cookie_path                       off;
       
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream                     error timeout;
        proxy_next_upstream_timeout             0;
        proxy_next_upstream_tries               3;
       
        # Custom Response Headers
       
        proxy_pass https://upstream_balancer/;
       
        proxy_redirect                          off;
       
    }
   
}
## end serverAPIM-endpoint.com

# backend for when default-backend-service is not configured or it does not have endpoints
server {
    listen 8181 default_server reuseport backlog=4096;
    listen [::]:8181 default_server reuseport backlog=4096;
    set $proxy_upstream_name "internal";
   
    access_log off;
   
    location / {
        return 404;
    }
}

# default server, used for NGINX healthcheck and access to nginx stats
server {
    # Ensure that modsecurity will not run on an internal location as this is not accessible from outside
   
    listen 127.0.0.1:10246;
    set $proxy_upstream_name "internal";
   
    keepalive_timeout 0;
    gzip off;
   
    access_log off;
   
    location /healthz {
        return 200;
    }
   
    location /is-dynamic-lb-initialized {
        content_by_lua_block {
            local configuration = require("configuration")
            local backend_data = configuration.get_backends_data()
            if not backend_data then
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
            return
            end
           
            ngx.say("OK")
            ngx.exit(ngx.HTTP_OK)
        }
    }
   
    location /nginx_status {
        stub_status on;
    }
   
    location /configuration {
        client_max_body_size                    21M;
        client_body_buffer_size                 21M;
        proxy_buffering                         off;
       
        content_by_lua_block {
            configuration.call()
        }
    }
   
    location / {
        content_by_lua_block {
            ngx.exit(ngx.HTTP_NOT_FOUND)
        }
    }
}

}

stream {
lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";

lua_shared_dict tcp_udp_configuration_data 5M;

resolver 10.0.0.10 valid=30s;

init_by_lua_block {
    collectgarbage("collect")
   
    -- init modules
    local ok, res
   
    ok, res = pcall(require, "configuration")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    configuration = res
    end
   
    ok, res = pcall(require, "tcp_udp_configuration")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    tcp_udp_configuration = res
    tcp_udp_configuration.prohibited_localhost_port = '10246'
   
    end
   
    ok, res = pcall(require, "tcp_udp_balancer")
    if not ok then
    error("require failed: " .. tostring(res))
    else
    tcp_udp_balancer = res
    end
}

init_worker_by_lua_block {
    tcp_udp_balancer.init_worker()
}

lua_add_variable $proxy_upstream_name;

log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';

access_log /var/log/nginx/access.log log_stream ;

error_log  /var/log/nginx/error.log notice;

upstream upstream_balancer {
    server 0.0.0.1:1234; # placeholder
   
    balancer_by_lua_block {
        tcp_udp_balancer.balance()
    }
}

server {
    listen 127.0.0.1:10247;
   
    access_log off;
   
    content_by_lua_block {
        tcp_udp_configuration.call()
    }
}

# TCP services

# UDP services

# Stream Snippets

}

Before enabling SSL passthrough, it threw "403 Invalid client certificate". After adding SSL passthrough, I get the error below:
curl -4 -k -v --tlsv1.2 --location --request POST 'https://APIM-endpoint.com/api/v2/handshake?' --cert cert.pem --key cert.key --header "Content-Length: 0"

  • Trying ....:443...
  • Connected to APIM-endpoint.com (x.x.x.x) port 443 (#0)
  • ALPN, offering h2
  • ALPN, offering http/1.1
  • TLSv1.0 (OUT), TLS header, Certificate Status (22):
  • TLSv1.3 (OUT), TLS handshake, Client hello (1):
  • TLSv1.0 (OUT), TLS header, Unknown (21):
  • TLSv1.3 (OUT), TLS alert, decode error (562):
  • error:0A000126:SSL routines::unexpected eof while reading
  • Closing connection 0
    curl: (35) error:0A000126:SSL routines::unexpected eof while reading
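The "unexpected eof while reading" means the peer closed the connection in the middle of the handshake. To see whether the controller's passthrough proxy is actually matching on SNI, a direct handshake test can help (`<ingress-ip>` is a placeholder for the controller Service's external IP):

```shell
# Placeholder IP; the -servername value must match the Ingress host
# so the passthrough proxy can route the connection by SNI.
openssl s_client -connect <ingress-ip>:443 \
  -servername APIM-endpoint.com -tls1_2 </dev/null
```

If the handshake completes here, the certificate shown should be APIM's, not the controller's default fake certificate; if the connection is reset instead, the passthrough proxy never forwarded it to the backend.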

@kmarimuthu90
Author

Please help with this.

@longwuyuan
Contributor

@kmarimuthu90 It's unfortunate that you are facing problems using the ssl-passthrough feature. I hope it can be solved soon.

  • GitHub issues here require that you look at the template of a new bug report
  • Then you are required to answer the questions asked in the new bug report template
  • That provides data that readers here can use to reproduce the problem. That raw data is also analyzed for obvious known problems

So I suggest that you:

  • look at the template of a new bug report
  • edit this issue's description as per that template
  • answer the questions asked in the bug report template
  • format everything in markdown syntax
  • add steps to reproduce the problem
  • add any additional data that you think is related to the problem

This will help readers a lot. There are not many resources here, as it is a community project, so you can help others to help you by considering the suggestions above.

Once you have provided all the answers to the questions asked in the bug report template, you can reopen this issue.

@kmarimuthu90
Author

ok thanks

@kmarimuthu90
Author

etc/nginx $ netstat -nlp
netstat: can't scan /proc - are you root?
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10245 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10246 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10247 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:442 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:442 0.0.0.0:* LISTEN -
tcp 0 0 :::10254 :::* LISTEN -
tcp 0 0 :::8181 :::* LISTEN -
tcp 0 0 :::8181 :::* LISTEN -
tcp 0 0 :::80 :::* LISTEN -
tcp 0 0 :::80 :::* LISTEN -
tcp 0 0 :::8443 :::* LISTEN -
tcp 0 0 :::442 :::* LISTEN -
tcp 0 0 :::442 :::* LISTEN -
tcp 9 0 :::443 :::* LISTEN -

I see only 442 listening for nginx.
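That layout matches `listen_ports = { ssl_proxy = "442", https = "443" }` in the dumped config: with passthrough enabled, the controller binary runs an internal TLS proxy on 443 and hands non-passthrough connections to nginx on 442, so from inside the nginx container only 442 is owned by nginx while 443 is bound by the controller process. A sketch to confirm which process owns each port (run inside the controller pod; needs root for process names, and assumes `ss` is available in the image):

```shell
# 442 should belong to nginx, 443 to the controller binary
# when --enable-ssl-passthrough is active.
ss -ltnp | grep -E ':(442|443) '
```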
