Deploy a Python/R custom regression model on the API node
Hi, everyone.
I want to deploy a custom regression model on the API node to consume it from an external source. I'm following this tutorial for Python and this one for R, but I don't understand how it works. That is, do I train a model using a Python/R recipe and then serialize it to a folder so it can be consumed in the code section of the endpoint? Or do I train the model in the code section of the endpoint itself?
Do you have an example, or could you explain it to me?
Another question: I followed this tutorial to build an endpoint from the ML tool. How does the model interpret the JSON if a variable used in the model is not sent? I tested it with a query and it does not return an error.
Thank you very much!
Operating system used: CentOS
Answers
-
JordanB Dataiker, Dataiku DSS Core Designer, Dataiku DSS Adv Designer, Registered Posts: 296
Hi @rafael_rosado97,
You can deploy a custom model endpoint using a couple of different approaches:
Write your custom model within a Python function endpoint. You must load the custom model separately from the Python function itself, so that the model-loading code does not run each time the endpoint is called. See below for an example.
You can import MLflow models into DSS as DSS saved models. This lets you benefit from all of the ML management capabilities of DSS on your existing MLflow models, such as deploying your custom model as a prediction endpoint. For more information, please see the MLflow documentation: https://doc.dataiku.com/dss/latest/mlops/mlflow-models/index.html
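For the first approach, here is a minimal, self-contained sketch of the pattern. The `TinyRegressor` class, the model path, and the function/argument names are all illustrative stand-ins (in a real project you would pickle a trained scikit-learn regressor from a Python recipe into a managed folder); the key point is only that the `pickle.load` sits at module level, outside the endpoint function:

```python
import os
import pickle
import tempfile

# --- Training side (e.g. in a Python recipe): train and serialize a model.
# A trivial linear model stands in here for a real scikit-learn regressor;
# any picklable object with a predict() method follows the same pattern.
class TinyRegressor:
    def __init__(self, coef, intercept):
        self.coef = coef
        self.intercept = intercept

    def predict(self, rows):
        return [sum(c * x for c, x in zip(self.coef, row)) + self.intercept
                for row in rows]

model_path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(TinyRegressor(coef=[2.0, 0.5], intercept=1.0), f)

# --- Endpoint side (code section of the endpoint): load the model ONCE,
# at module level, outside the function, so deserialization does not
# happen again on every query.
with open(model_path, "rb") as f:
    _model = pickle.load(f)

def api_py_function(x1, x2):
    # Runs per query; it only calls predict() on the already-loaded model.
    return _model.predict([[x1, x2]])[0]

print(api_py_function(3.0, 4.0))  # 2.0*3.0 + 0.5*4.0 + 1.0 = 9.0
```

In the real endpoint you would replace `TinyRegressor` with the regressor trained in your recipe, and `model_path` with wherever the recipe serialized it.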
Thanks!
Jordan
-
rafael_rosado97 Partner, Dataiku DSS Core Designer, Dataiku DSS ML Practitioner, Dataiku DSS Adv Designer, Registered Posts: 61 Partner
Thank you for your answer, @JordanB!
If I want to test queries, is this example correct?
I applied the same code as the iris example, and when I tested this query, it returned an error.
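For reference, a test query against a Python function endpoint nests the inputs under a `params` key, and each key must match an argument name of the endpoint's function (the field names below are just the iris example's; substitute your own). A sketch of the JSON body:

```json
{
  "params": {
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2
  }
}
```

If the function signature was copied from the iris example but the query sends different parameter names, the call fails with a missing or unexpected-argument error, which is a common cause of errors when adapting the tutorial code.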