How to Create a Batch Inference API for a Model?

Suhail · Registered Posts: 18 ✭✭✭✭

Hello Dataiku Community,

I'm looking for guidance on how to set up a batch inference API for a machine learning model. Specifically, I want to create an API endpoint that can take a batch of data and return predictions from my model.

Here are a few details about my setup:

- I have a trained model.

- I want to provide it with a batch of input data.

- I need the API to return predictions for each data point in the batch.

Could someone please provide step-by-step instructions, best practices, or point me to relevant documentation or tutorials on achieving this in Dataiku?

Any help or insights would be greatly appreciated.

Thank you in advance!

Best Answer

  • AdrienL · Dataiker, Alpha Tester Posts: 196
    Answer ✓

    Hi,

    This is handled natively by the API node, in which you can deploy a model prediction endpoint. You can call this endpoint with a batch of records (of a reasonable size), and it will return a prediction for each record.

    See the “First API” sections of the API node documentation for a step-by-step guide.
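
    As a rough illustration, a call to the batch prediction endpoint could look like the sketch below. The base URL, service ID, endpoint ID, and feature names are placeholders, and the exact payload shape can vary by version, so verify it against the documentation for your setup:

        # Hypothetical sketch of a batch call to a Dataiku API node
        # prediction endpoint. All IDs and feature names below are
        # placeholders; authentication may also be required depending
        # on how your API node is configured (see the docs).
        import requests

        API_NODE_URL = "https://my-api-node:12000"  # placeholder
        SERVICE_ID = "my_service"                   # placeholder
        ENDPOINT_ID = "my_model_endpoint"           # placeholder

        # One "features" dict per record in the batch
        payload = {
            "items": [
                {"features": {"age": 34, "income": 52000}},
                {"features": {"age": 51, "income": 87000}},
            ]
        }

        url = f"{API_NODE_URL}/public/api/v1/{SERVICE_ID}/{ENDPOINT_ID}/predict-multi"
        resp = requests.post(url, json=payload, timeout=30)
        resp.raise_for_status()

        # One result per input record, in the same order as the input
        for result in resp.json().get("results", []):
            print(result)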

Answers

  • Suhail · Registered Posts: 18 ✭✭✭✭

    Hi @AdrienL,

    Thanks for the reply.

    I had already followed the guide you shared.

    However, I had missed the part mentioning that for batch inference the endpoint changes from /predict to /predict-multi.

    I am able to run batch inferences now.
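
    For anyone else who lands here, the sketch below shows the difference that tripped me up. The payload shapes are my reading of the docs, so verify them against your version:

        # Single record -> POST .../<service>/<endpoint>/predict
        single_payload = {"features": {"age": 34, "income": 52000}}

        # Batch of records -> POST .../<service>/<endpoint>/predict-multi
        batch_payload = {
            "items": [
                {"features": {"age": 34, "income": 52000}},
                {"features": {"age": 51, "income": 87000}},
            ]
        }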

    Thanks
