I want to deploy a custom regression model on the API node so it can be consumed from an external source. I'm following this tutorial for Python and this one for R, but I don't understand how it works. Will I train a model using a Python/R recipe, serialize it to a folder, and then load it in the code section of the endpoint? Or will I train the model in the code section of the endpoint itself?
Do you have an example, or could you explain it to me?
Another question: I followed this tutorial to build an endpoint from the ML tool. How does the model interpret the JSON if a variable used by the model is not sent? I tested it with a query and it did not return an error.
Thank you very much!
Operating system used: CentOS
You can deploy a custom model endpoint using a couple different approaches:
Write your custom model within a Python function endpoint. Load the model outside of the Python function itself, so that the loading code runs only once at startup rather than each time the endpoint is called. See below for an example.
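A minimal sketch of this pattern, with assumptions clearly flagged: the "model" here is just a toy dict of coefficients standing in for whatever you serialize from your training recipe (e.g. a pickled scikit-learn estimator), the file path is hypothetical, and `api_py_function` with query parameters as function arguments follows the usual DSS Python function endpoint convention.

```python
import os
import pickle
import tempfile

# For illustration only: serialize a toy "model" (just linear coefficients).
# In practice this file would be produced by your training recipe and stored
# in a folder the API node can read.
coefs = {"feature_a": 2.0, "feature_b": -1.0, "intercept": 0.5}
model_path = os.path.join(tempfile.gettempdir(), "regressor.pkl")
with open(model_path, "wb") as f:
    pickle.dump(coefs, f)

# --- code section of the endpoint ---
# Module-level code runs once when the endpoint starts up,
# so the model is deserialized a single time, not on every request.
with open(model_path, "rb") as f:
    model = pickle.load(f)

def api_py_function(feature_a, feature_b):
    # Runs on every API call: score the incoming features
    # with the already-loaded model.
    prediction = (model["feature_a"] * feature_a
                  + model["feature_b"] * feature_b
                  + model["intercept"])
    return {"prediction": prediction}
```

The key point is the split: training and serialization happen in a recipe in the Flow, while the endpoint's code section only deserializes and scores.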
You can import MLflow models into DSS as DSS saved models. This allows you to benefit from all of the ML management capabilities of DSS on your existing MLflow models, such as deploying your custom model as a prediction endpoint. For more information, please see the MLflow models documentation: https://doc.dataiku.com/dss/latest/mlops/mlflow-models/index.html