Does Dataiku support multi-node GPU environment for LLM features (finetune, serving, etc.)?

성식 Registered Posts: 6

Q1
Does Dataiku support multi-node GPU environment for LLM features (finetune, serving, etc.)?
If it is supported, how do I set it up?

I would like to know whether this is supported both in a containerized execution environment and with a local GPU.

Q2
Also, when I run a local LLM via containerized execution, the pod starts, serves responses, and then terminates after a certain amount of time. Is there a way to keep it running?

Q3
How do I configure high availability (HA) for LLM serving on Dataiku Automation nodes?

I would appreciate an answer to any of these three questions.
