Deploying Model into the Azure Container Registry

Solved!
Kman
Level 2

Hi There,

I just want to clarify my understanding: when we deploy a model from the Designer node, we select a Kubernetes cluster from the Designer node, the API deployer pushes the base image (a Docker image) into the Azure Container Registry, and this image is then deployed into the chosen Azure Kubernetes cluster (through the kubectl command). I just wanted to know if this is correct.

I have attached a diagram. I just want to make sure I am on the right path regarding my understanding.

Also, does Docker need to be installed on the Designer node VM and the Automation node VM?

Thank You.

3 Replies
Clément_Stenac

Hi,

Your understanding is correct.

In order to be able to build and push images, the Docker daemon needs to be installed and set up on all nodes that will push images. We recommend installing it on the Design, Automation, and API Deployer nodes.
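Under the hood, the push step boils down to logging the node's Docker daemon into ACR and pushing the image. A minimal sketch, assuming a hypothetical registry name `myregistry` and image name `dss-api-deployer-base` (the actual names are generated by DSS):

```shell
set -euo pipefail

ACR_NAME="myregistry"                                      # hypothetical registry name
IMAGE="$ACR_NAME.azurecr.io/dss-api-deployer-base:latest"  # hypothetical image name

# The commands the push step amounts to (shown for illustration, not run
# here; they require the Azure CLI, Docker, and credentials for the registry):
#   az acr login --name "$ACR_NAME"
#   docker push "$IMAGE"

echo "$IMAGE"
```

This is why the Docker daemon must be present on every node that pushes: the push is performed by that node's local Docker.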

Kman
Level 2
Author

I just had another question: if we use Spark with the ML model and deploy it to the ACR, will the ML model and Spark be in one container?

Clément_Stenac

Hi,

It's important to separate images from containers.

DSS will create and push to ACR three "base images":

  • One for deploying model APIs ("API deployer")
  • One for Spark
  • One for other Kubernetes-able tasks (Python and R recipes, notebooks, webapps, visual ML training)

In addition, DSS will create and push to ACR additional images based on these three images, depending on your code environments.

So there will be at least 3 base images in ACR, and a number of more specific images built on top of the base images.
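One way to check this for yourself, assuming the Azure CLI is installed and a hypothetical registry name `myregistry`, is to list the repositories (image names) in the registry:

```shell
set -euo pipefail

ACR_NAME="myregistry"   # hypothetical registry name

# Shown for illustration, not run here (requires the Azure CLI and
# access to the registry). You should see the 3 base images plus one
# additional image per code environment:
#   az acr repository list --name "$ACR_NAME" --output table

LIST_CMD="az acr repository list --name $ACR_NAME --output table"
echo "$LIST_CMD"
```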

Then, each time you do something, DSS will create one or more "pods" (which are more or less containers) within Kubernetes.

  • A Model API will run multiple pods for high availability and scalability
  • A Spark job will run multiple pods for the distributed computation
  • Python and R recipes, notebooks, webapps, and visual ML training will each run 1 pod
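The pods themselves can be inspected with standard kubectl commands. A sketch, assuming a hypothetical namespace `dss` and Model API deployment name `my-model-api`:

```shell
set -euo pipefail

NS="dss"                # hypothetical namespace
DEPLOY="my-model-api"   # hypothetical Model API deployment name

# Shown for illustration, not run here (requires access to the AKS cluster):
#   kubectl get pods --namespace "$NS"           # one line per running pod
#   kubectl scale deployment "$DEPLOY" --namespace "$NS" --replicas=3
# Scaling the deployment adds pods, which is how a Model API gets its
# high availability.

echo "$NS/$DEPLOY"
```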

Hope this helps
