Error running Spark recipes on Kubernetes - Initial job has not accepted any resources

Sameh Registered Posts: 1

I am using DSS v13 to push execution of visual recipes to containerized execution on a Kubernetes (k8s) cluster, with Spark as the execution engine. I pushed two images to the registry: dku-exec-base and dku-spark-base. However, when I run a recipe it runs forever (creating and deleting pods in k8s), and I found this line in the job logs:

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
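In my experience this Spark message usually means the executor pods either never become ready or request more CPU/memory than any node can offer, so the driver waits indefinitely. A possible mitigation, assuming the cluster is simply short on resources, is to shrink the executor requests in the recipe's Spark configuration; the property names below are standard Spark / Spark-on-Kubernetes settings, but the values are purely illustrative and should be tuned to your nodes:

```
# Illustrative values only; size these to fit on your k8s nodes
spark.executor.instances                 1
spark.executor.cores                     1
spark.executor.memory                    1g
spark.kubernetes.executor.request.cores  500m
```

If pods are created and deleted in a loop even with small requests, the cause is more likely the driver-URL/connectivity problem described below than a resource shortage.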

I found this in the pod log:

Environment:
  SPARK_USER: dssuser2
  SPARK_DRIVER_URL: spark://
  SPARK_EXECUTOR_CORES: 1

I have already set DKU_BACKEND_EXT_HOST to the correct IP of the DSS machine, restarted DSS, and pushed the base images again, but it is still not working.
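For reference, this is how I applied the setting. The file path and the variable name follow the env-site mechanism described in Dataiku community threads; treat both as assumptions and adjust to your installation, and the IP below is a placeholder:

```
# In DATADIR/bin/env-site.sh (path assumed; adjust to your install)
export DKU_BACKEND_EXT_HOST=10.0.0.5   # illustrative: IP of the DSS machine
```

After editing the file I restarted DSS so the executor pods would pick up the new value.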

The SPARK_DRIVER_URL in the log still shows the host name, not the IP. How can I solve this?
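Since the driver URL still carries a host name, one quick check, as a sketch, is whether that name is resolvable at all from where the executors run. The helper below is generic Python; "dss-host.example.com" stands in for whatever host name appears in SPARK_DRIVER_URL, and running the same check from inside an executor pod would show whether the pods can reach the DSS machine by name:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Replace with the host name shown in SPARK_DRIVER_URL,
# e.g. resolves("dss-host.example.com") -- placeholder name.
print(resolves("localhost"))  # sanity check on the local resolver
```

If the name does not resolve from inside the pods, the executors can never connect back to the driver, which would produce exactly the create/delete pod loop described above.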

Operating system used: Debian 11


