Kubernetes and ACR and Container Image

Kman
Level 2

Hi All,

1) Does the Design node need to point to a Kubernetes cluster with Spark, and the Automation/API Production nodes to another Kubernetes cluster which will have the Spark and API Docker containers?

2) Where does the Azure Container Registry (ACR) fit into the ecosystem? Do the containers come from ACR? My understanding is that the base image comes from the Designer node. Is that correct?

Thanks. 

 

Clément_Stenac

Hi,

All nodes can target a single cluster, whether it is managed by DSS or created externally. If you are using clusters managed by DSS, one of the nodes creates the cluster and the other nodes use "Attach AKS cluster" in order to attach to it without creating it.

For the sake of isolation, we'd recommend that you use separate namespaces for the nodes.
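
If you want to pre-create those namespaces on the shared cluster yourself, here is a minimal sketch using the official Kubernetes Python client. It assumes a kubeconfig that points at the AKS cluster; the namespace names "dss-design" and "dss-automation" are only placeholders, not names DSS requires.

    # Sketch: create one namespace per DSS node on the shared cluster.
    # Assumes the current kubeconfig context targets the AKS cluster.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    for ns in ("dss-design", "dss-automation"):   # placeholder names
        core.create_namespace(
            client.V1Namespace(metadata=client.V1ObjectMeta(name=ns))
        )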

Using multiple clusters is another possibility. It gives you some more flexibility, for example to upgrade the cluster of the design node(s) without risk to production. In that case, design node and automation node would each use "Create AKS cluster".

 

About ACR: each node will build its own base image and push it to ACR. Each node tags its base image with a node-specific tag in order to avoid any possible conflict and give you maximal isolation. Note that thanks to Docker layering, the common layers will not be duplicated on ACR.

In addition to the base image, each node dynamically builds new images for code envs and for API packages and pushes them to ACR.
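
To make the tagging scheme concrete, here is an illustrative sketch of pushing a base image to ACR with a node-specific tag, using the Docker SDK for Python. DSS builds and pushes its images for you; this only mirrors the idea, and the registry name, image name and tag are made up for the example.

    # Illustration of node-specific tagging; DSS does this automatically.
    import docker

    registry = "myregistry.azurecr.io"   # placeholder ACR name
    node_tag = "design-node"             # e.g. "automation-node" on the other node

    d = docker.from_env()
    base = d.images.get("dss-base")                           # locally built base image (placeholder name)
    base.tag(f"{registry}/dss-base", tag=node_tag)            # node-specific tag avoids conflicts
    d.images.push(f"{registry}/dss-base", tag=node_tag)       # shared layers are not re-uploaded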

Kman
Level 2
Author

What about Spark in the ecosystem? For the Designer node to get Spark working, does it need to leverage Spark in Kubernetes, and does Kubernetes need a container with Spark?

Clément_Stenac

Hi,

The way it works is that Spark is installed only on the DSS node itself (through the procedure here: https://doc.dataiku.com/dss/latest/spark/kubernetes/managed.html#initial-setup).

Spark is not "installed in the cluster". Instead, DSS leverages the cluster's ability to run arbitrary workloads.

The DSS node is connected to the K8S cluster, so when you start a Spark job, the Spark driver, running on the DSS machine, sends commands to K8S to start containers that run the Spark executors. The computation runs on K8S, and the pods are stopped once it completes.
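
For intuition, this is roughly what a Spark-on-K8S session configuration looks like. In DSS you set this in the Spark configuration rather than writing it by hand, and the API server URL, namespace and image name below are placeholders.

    # Sketch of Spark-on-K8S: the driver runs where this code runs (the DSS machine),
    # and the executors are started as pods on the cluster.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("k8s://https://<aks-api-server>:443")                 # placeholder API server URL
        .config("spark.submit.deployMode", "client")                  # driver stays local, executors on K8S
        .config("spark.kubernetes.namespace", "dss-design")           # placeholder namespace
        .config("spark.kubernetes.container.image",
                "myregistry.azurecr.io/dss-spark-base:design-node")   # image pushed to ACR
        .config("spark.executor.instances", "4")
        .getOrCreate()
    )

    spark.range(1000).count()   # executor pods spin up, run the job, then are torn down
    spark.stop()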

This is the biggest advantage of using K8S as the underlying elastic AI engine in Dataiku: it is a unified engine that can run all of the following without dedicated support or additional setup beyond the K8S setup itself:

  • In-memory workloads for recipes (Python, R), and notebooks (Python, R)
  • Visual ML
  • Webapps (Flask, Shiny, Bokeh)
  • Spark through Spark-on-K8S workloads (SparkSQL, Pyspark, SparkR, Spark-Scala)
  • API services (prediction, Python functions, R functions, lookups, SQL queries)