Deploy a Python/R custom regression model on api node

rafael_rosado97
Level 4

Hi, everyone.

I want to deploy a custom regression model on the API node so that I can consume it from an external source. I'm following this tutorial for Python and this one for R, but I don't understand how it works. That is, do I train a model in a Python/R recipe, serialize it to a folder, and then load it in the code section of the endpoint? Or do I train the model in the code section of the endpoint itself?

Do you have an example, or could you explain it to me?

Another question: I followed this tutorial to build an endpoint from the ML tool. How does the model interpret the JSON if a variable used by the model is not sent? I tested it with a query and it did not return an error.

Thank you very much!


Operating system used: CentOS

JordanB
Dataiker

Hi @rafael_rosado97,

You can deploy a custom model endpoint using a couple of different approaches:

Write your custom model within a Python function endpoint. Keep the model-loading code separate from (outside) the Python function, so that the model is not reloaded each time the endpoint is called. See the example below.

Screen Shot 2023-06-15 at 1.17.39 PM.png
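The screenshot shows this layout; as a minimal, self-contained sketch of the same pattern (all names here — `TinyRegressor`, `api_py_function`, the pickle path — are illustrative stand-ins, not the DSS API): a recipe trains and serializes the model, and the endpoint's module-level code deserializes it once at startup, so each query only runs `predict()`.

```python
import os
import pickle
import tempfile

# --- 1. In a Python recipe: train the model and serialize it to a folder.
# TinyRegressor is a hand-rolled stand-in for any sklearn-style regressor.
class TinyRegressor:
    def __init__(self, coef, intercept):
        self.coef, self.intercept = coef, intercept

    def predict(self, rows):
        return [sum(c * x for c, x in zip(self.coef, row)) + self.intercept
                for row in rows]

model_path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(TinyRegressor([2.0, 1.0], 0.5), f)

# --- 2. In the endpoint's code section: module-level code runs once when
# the API node loads the endpoint, so the model is deserialized a single time.
with open(model_path, "rb") as f:
    model = pickle.load(f)

# The function itself is called once per API query; it only runs predict().
def api_py_function(x1, x2):
    return {"prediction": model.predict([[x1, x2]])[0]}

print(api_py_function(1.0, 3.0))  # {'prediction': 5.5}
```

The key point is that only the function body is executed per query; anything above it (imports, deserialization) runs once at endpoint startup.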

You can import MLflow models into DSS as DSS saved models. This allows you to benefit from all of the ML management capabilities of DSS on your existing MLflow models, such as deploying your custom model as a prediction endpoint. For more information, please see the MLflow documentation: https://doc.dataiku.com/dss/latest/mlops/mlflow-models/index.html

Thanks!

Jordan

rafael_rosado97
Level 4
Author

Thank you for your answer, @JordanB!!

If I want to test queries, is this example correct?

Captura.PNG

I applied the same code as the iris example, and when I tested this query, it returned an error:

Captura.PNG
