How can we give Master URL while submitting the app? #116
Replies: 2 comments
-
Hi @Premkumar7402,

For the deploy mode: the package is designed to create a new pod for the driver, similar to what the Spark CLI does in cluster mode, so only cluster mode is supported by the package.

For the master URL: the package uses the official Python Kubernetes client to create and manage the Spark application resources. If you have any advanced configuration to pass, you can use the client's arguments, for example by selecting a kubeconfig context:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://new-cluster-url:6443
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: my-user
  name: my-context
- context:
    ...
  name: another-context
current-context: another-context
kind: Config
preferences: {}
users:
- name: my-user
  user:
    token: my-access-token
```

Python code:

```python
client_obj = SparkOnK8S(
    k8s_client_manager=KubernetesClientManager(context="my-context"),
)
client_obj.submit_app(
    image=config.get("spark.kubernetes.container.image", "spark-image"),
    app_path=config.get("spark.kubernetes.app_path", "/opt/spark/examples/jars/spark-examples_2.12-3.4.3.jar"),
    app_name=config.get("spark.kubernetes.app_name", "test-pi"),
    namespace=config.get("spark.kubernetes.namespace", "spark"),
    spark_conf=config.get("spark.properties", {}),
    # ui_reverse_proxy=True,
)
```
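Since the package derives the target cluster (i.e. the "master" endpoint) from the kubeconfig context rather than from a `--master` flag, it can help to check which API server a given context resolves to. Below is a minimal, self-contained sketch of that lookup; the kubeconfig is expressed as a plain dict mirroring the kubeconfig schema, and all names and the server URL are hypothetical examples, not values from the library:

```python
# Sketch: resolve the API server URL that a kubeconfig context points to.
# The dict mirrors the kubeconfig structure (clusters/contexts); the
# names and server URL here are hypothetical examples.
kubeconfig = {
    "clusters": [
        {"name": "my-cluster",
         "cluster": {"server": "https://new-cluster-url:6443"}},
    ],
    "contexts": [
        {"name": "my-context",
         "context": {"cluster": "my-cluster", "user": "my-user"}},
    ],
    "current-context": "my-context",
}

def server_for_context(cfg: dict, context_name: str) -> str:
    """Return the API server URL behind a named kubeconfig context."""
    ctx = next(c for c in cfg["contexts"] if c["name"] == context_name)
    cluster_name = ctx["context"]["cluster"]
    cluster = next(c for c in cfg["clusters"] if c["name"] == cluster_name)
    return cluster["cluster"]["server"]

print(server_for_context(kubeconfig, "my-context"))
# → https://new-cluster-url:6443
```

In practice the Kubernetes Python client performs this resolution itself when you pass `context="my-context"`; the sketch only makes the mapping from context name to server URL explicit.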
-
Thank you for responding @hussein-awala.

In my case, I already have a Spark cluster running on a Kubernetes setup, and I want to submit my Spark app to that existing cluster; the Spark cluster will manage the apps/resources itself. Would it be efficient to use this library for this case? As I understand it, spark_on_k8s is used to submit the app, which is then managed by Kubernetes.

If that is supported, another question: I have a load balancer configured in the Kubernetes setup, and I want to use the LB IP as my master URL. If I put that IP in the kubeconfig, the connection is refused, as it is required to use the Kubernetes instance IP. How can I configure my master URL in this case?

FYI, I'm trying to submit the app from a Docker container to a Kubernetes instance (running in the same network).
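As background for the standalone-cluster case described above: an already-running Spark cluster is normally addressed with a `spark://` master URL (the standalone master endpoint, e.g. a load-balancer address on the master's port), not with the Kubernetes API server URL. A hypothetical sketch that only assembles the classic spark-submit command line; the host, port, and jar path are placeholders, not values from this thread:

```python
import subprocess  # would be used to actually run the command

# Hypothetical: a classic spark-submit against a standalone Spark master.
# "spark://master-host:7077" is a placeholder for the standalone master
# endpoint (e.g. a load-balancer address), not the k8s API server URL.
cmd = [
    "./bin/spark-submit",
    "--master", "spark://master-host:7077",
    "--deploy-mode", "cluster",
    "--name", "spark-pi",
    "--class", "org.apache.spark.examples.SparkPi",
    "/opt/spark/examples/jars/spark-examples_2.12-3.4.3.jar",
]
print(" ".join(cmd))
# To actually submit: subprocess.run(cmd, check=True)
```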
-
Hi,

I'm trying to submit the Spark app from one server to another server where the Spark Kubernetes cluster is present, using the code below:

```python
client_obj.submit_app(
    image=config.get("spark.kubernetes.container.image", "spark-image"),
    app_path=config.get("spark.kubernetes.app_path", "/opt/spark/examples/jars/spark-examples_2.12-3.4.3.jar"),
    app_name=config.get("spark.kubernetes.app_name", "test-pi"),
    namespace=config.get("spark.kubernetes.namespace", "spark"),
    spark_conf=config.get("spark.properties", {}),
    # ui_reverse_proxy=True,
)
```

I see the driver pod gets created in the Kubernetes setup. My question is: where can we give the `--master` URL and `--deploy-mode` arguments while submitting the app? Like below:

```shell
./bin/spark-submit \
  --master k8s://https://: \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
```