Deployment of a predictive model
Hi
I read that: "you can compute predictions for all records of an unlabeled dataset. Using the REST API of the DSS API deployer, you can request predictions for new previously-unseen records in real time"
Does this mean that the input to the model in my API must be a dataframe?
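For example, I imagine the call would send a single JSON record rather than a whole dataframe, something like this (the URL, service/endpoint names, and feature names are just made up to show the shape I have in mind):

```python
import requests

# Made-up service/endpoint names and feature values, only to illustrate
# the shape of a single-record, real-time scoring request.
response = requests.post(
    "https://my-api-node/public/api/v1/my_service/my_endpoint/predict",
    json={"features": {"age": 42, "income": 55000, "country": "FR"}},
)
print(response.json())
```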
thank you
Answers
Following up on @marirapellini's question, I was wondering how to create an API service from an MLflow-tracked model that takes images as input. I thought of using a custom Python function in the API endpoint designer, but I am wondering how the folder storing the model will be preserved when the API is deployed to an API infrastructure through the Deployer node.
To load the model from the custom Python function in the API, one needs access to the managed folder containing the model, which has an ID tied to the Design node.
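To make the question concrete, here is roughly the kind of custom Python function I had in mind. This is only a sketch: the MODEL_DIR path is a placeholder, it assumes the MLflow model artifacts end up in some local directory on the API node (how that directory gets there is exactly what I am asking about), and that the model can be loaded with mlflow.pyfunc:

```python
import base64
import io

import mlflow.pyfunc
import numpy as np
from PIL import Image

# Placeholder: local path to the MLflow model artifacts on the API node.
# On the design node this lives in a managed folder; my question is how
# that folder is made available once the service is deployed.
MODEL_DIR = "/path/to/mlflow_model"

# Load the model once, when the endpoint starts.
model = mlflow.pyfunc.load_model(MODEL_DIR)

def predict_image(image_b64):
    """Decode a base64-encoded image and return the model's prediction."""
    img = Image.open(io.BytesIO(base64.b64decode(image_b64))).convert("RGB")
    # The expected input shape/type depends on how the model was logged;
    # here I assume a batched float array of shape (1, H, W, 3).
    arr = np.asarray(img, dtype=np.float32)[np.newaxis, ...]
    preds = model.predict(arr)
    return np.asarray(preds).tolist()
```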