Spark UI in Azure Kubernetes Service #22
Do you have a default deny-all network policy like this?

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

If so, could you try adding a new network policy in the webserver namespace to permit the webserver pod to route the traffic to the driver pods? Something like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-redirect-traffic
spec:
  podSelector:
    matchLabels:
      app: spark-webserver
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: <your namespace>
      ports:
        - protocol: TCP
          port: 8000
  egress:
    - to:
        - podSelector:
            matchLabels:
              spark-role: driver
      ports:
        - protocol: TCP
          port: 4040
```
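If it helps, here is a minimal sketch of applying and verifying such a policy (the file name and namespace are placeholders, not from this thread):

```shell
# Save the manifest above as allow-redirect-traffic.yaml, then apply it
# in the namespace where the webserver runs:
kubectl -n <your-namespace> apply -f allow-redirect-traffic.yaml

# Inspect the resulting policy to confirm the ingress/egress rules:
kubectl -n <your-namespace> describe networkpolicy allow-redirect-traffic
```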
Hi Hussein, I start the API with:

```shell
spark-on-k8s api start --host 127.0.0.1 --port 8080 --workers 4 --log-level debug --limit-concurrency 100
```

It looks like there is an error in the API. Does this API in AKS need an ingress controller? (kubernetes-sigs/kind#2953)
I believe it's related to your CoreDNS configuration, since the reverse proxy routes the traffic to the driver pods by their cluster DNS names. Could you share the output of:

```shell
kubectl -n kube-system get configmaps coredns -o yaml
```

You should have the default configuration; we need to check if this URL is the default one.
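As a quick sanity check (my suggestion, not from the thread; the busybox image tag is an assumption), you can first confirm that in-cluster DNS resolution works at all:

```shell
# Spin up a throwaway pod and resolve the API server's service name;
# a failure here points at a cluster-wide DNS problem rather than the app.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```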
Hi Hussein, how are you? Here is the output of `kubectl -n kube-system get configmaps coredns -o yaml`:

```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        ready
        health {
            lameduck 5s
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        import custom/*.override
    }
    import custom/*.server
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n ready\n health {\n lameduck 5s\n }\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n import custom/*.override\n}\nimport custom/*.server\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernetes.io/cluster-service":"true"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2024-02-21T12:57:42Z"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
  name: coredns
  namespace: kube-system
  resourceVersion: "454"
  uid: de1b0cf3-444b-4aea-8f2b-4e341010a05e
```
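For what it's worth, this looks like the stock AKS Corefile. With `pods insecure` enabled, CoreDNS also serves pod records of the form `<ip-with-dashes>.<namespace>.pod.cluster.local`, which is one way a reverse proxy can reach a driver pod. A quick check (the IP and namespace are illustrative placeholders, not from this thread):

```shell
# If a driver pod had IP 10.244.1.25 in namespace "spark", its pod record
# under "pods insecure" should resolve like this:
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup 10-244-1-25.spark.pod.cluster.local
```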
Since you have `import custom/*.override` and `import custom/*.server` in your Corefile, your cluster may be loading a custom DNS configuration on top of the defaults.
@quinnsp06 any update on this issue? Unfortunately, I don't have an Azure cloud setup to test it with AKS, but I have had the opportunity to use the project in production on the AWS cloud.

As I mentioned before, I believe it is related to some custom DNS configuration on your cluster, especially since you have the `import custom/*.override` line in your Corefile.
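On AKS, such customizations typically live in the `coredns-custom` ConfigMap, which feeds those `import custom/*.override` lines (the ConfigMap name is the AKS convention, mentioned here as a suggestion, not something from this thread):

```shell
# List any CoreDNS overrides applied on top of the default Corefile:
kubectl -n kube-system get configmap coredns-custom -o yaml
```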
Hi Hussein,

When executing the steps on an AKS cluster, I receive the following errors: when I run the API start command, I get the error below, and the browser shows an Internal Server Error. Note that to access the Spark UI pod on port 4040 in AKS, I have to port-forward it to localhost on port 8080.
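For reference, the port-forward described above would look something like this (the pod name and namespace are placeholders):

```shell
# Forward local port 8080 to the Spark driver UI listening on 4040:
kubectl -n <namespace> port-forward pod/<driver-pod-name> 8080:4040
```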