Does Dataiku support multi-node GPU environment for LLM features (finetune, serving, etc.)?


Q1
Does Dataiku support a multi-node GPU environment for LLM features (fine-tuning, serving, etc.)?
If so, how do I set it up?

I would like to know whether this is supported both in a containerized execution environment and with a local GPU.

Q2
Also, when I run a local LLM via containerized execution, the pod starts and responds, but then disappears after a certain amount of time. Is there a way to keep it running?

Q3
How do I configure high availability (HA) for an LLM on Dataiku automation nodes?

I would appreciate any answers to these three questions.
