Clean up stopped notebook kernels running as Kubernetes jobs

Registered Posts: 60 ✭✭✭✭✭

When I launch a notebook kernel in DSS, it creates a Kubernetes job. When that kernel is stopped, the job remains. After a lot of notebook executions, I have far too many jobs/pods on Kubernetes. Is there any way to clean up these pods/jobs, just as the pods for regular Python or Spark jobs are cleaned up?


Best Answer

  • Registered Posts: 60 ✭✭✭✭✭
    Answer ✓

Yeah, the workaround would be a cron job that cleans up finished Dataiku jobs (using their labels). But I was wondering whether there was any way Dataiku would take care of that for me.
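For reference, the cron-based workaround described above could look something like the sketch below. The label selector is an assumption (DSS does label the Kubernetes objects it creates, but the exact keys/values should be checked with `kubectl get jobs --show-labels` on your cluster and substituted in):

```shell
#!/bin/sh
# Sketch of a cleanup command to run on a cron schedule.
# NOTE: the label selector is hypothetical -- inspect the labels DSS
# actually puts on its Jobs and substitute them here.
cleanup_finished_dss_jobs() {
  kubectl delete jobs \
    --field-selector=status.successful=1 \
    -l app=dss-notebook-kernel \
    "$@"
}
```

`status.successful` is one of the field selectors Kubernetes supports for Jobs, so this only deletes Jobs that have completed successfully; deleting a Job also removes its finished pods.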

Answers

  • Dataiker Posts: 355 Dataiker

    Hi,

When a notebook kernel is stopped, the pod running it should eventually die: there is a heartbeat that makes sure only pods backing actively running notebooks stay up. The pods could be lingering in Completed state (and eventually get garbage-collected by the K8S cluster), but they shouldn't be in Running state. If you have pods in Running state that shouldn't be running anymore, can you check the logs of those pods?

  • Registered Posts: 60 ✭✭✭✭✭

The pods are in Completed/Failed state, not Running. I have over 10 jobs created by Dataiku with 10 completed pods, and K8S hasn't cleaned them up. When is it supposed to clean them?
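For context: Kubernetes does not garbage-collect finished Jobs on its own unless `spec.ttlSecondsAfterFinished` is set, in which case the TTL-after-finished controller deletes the Job (and its pods) once the TTL expires. A minimal sketch, assuming a hypothetical job name:

```shell
# Kubernetes only auto-deletes a finished Job when ttlSecondsAfterFinished
# is set on its spec. The job name passed in is hypothetical.
set_job_ttl() {
  kubectl patch job "$1" --type=merge \
    -p '{"spec":{"ttlSecondsAfterFinished":3600}}'
}

# Example: set_job_ttl my-dss-job
```

Whether DSS exposes a way to set this field on the Jobs it creates is not confirmed here; it may have to be patched externally as above.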

  • Dataiker Posts: 355 Dataiker

That goes beyond my knowledge, I'm afraid.

    When pods linger, I usually clean them up manually, for example with

    kubectl delete pod --field-selector=status.phase==Succeeded
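The command above only covers Succeeded pods. Since comma-separated `--field-selector` terms are ANDed (there is no OR across values of the same field), Failed pods need a second pass:

```shell
# Clean up both Succeeded and Failed leftover pods. Two invocations,
# because --field-selector cannot OR two values of the same field.
cleanup_finished_pods() {
  kubectl delete pods --field-selector=status.phase==Succeeded
  kubectl delete pods --field-selector=status.phase==Failed
}
```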
