Added on March 31, 2025 1:10AM
Q1
Does Dataiku support multi-node GPU environments for LLM features (fine-tuning, serving, etc.)? If so, how is this configured?
I would like to know whether this is supported in a containerized execution environment, and whether it also works with a local GPU.
Q2
Also, when I run a local LLM via containerized execution, the pod starts, responds, and then terminates after a certain amount of time. Is there a way to keep it running?
Q3
How can I configure high availability (HA) for an LLM on Dataiku Automation nodes?
I would appreciate answers to any of these three questions.