Clean up stopped notebook kernels running as Kubernetes jobs

DrissiReda · Registered Posts: 57

When I launch a notebook kernel in DSS, it creates a Kubernetes job. When that kernel is stopped, the job remains. After many notebook executions, I end up with far too many jobs/pods on Kubernetes. Is there any way to clean up these pods/jobs, just like the Kubernetes pods of normal Python or Spark jobs are cleaned up?

Best Answer

  • DrissiReda · Registered Posts: 57

    Yeah, the workaround would be a CronJob that cleans up the finished Dataiku jobs (using their labels), but I was wondering whether there was any way Dataiku would take care of that for me.
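    For what it's worth, a minimal sketch of that kind of cleanup, assuming the notebook Jobs carry a Dataiku-specific label. The label key/value below is a placeholder, not a documented DSS label; check what your Jobs actually carry with kubectl get jobs --show-labels:

    # Placeholder label: substitute whatever label DSS really puts on notebook kernel Jobs
    LABEL="dss-notebook-kernel=true"

    # Delete Jobs that finished successfully; deleting a Job also removes its pods
    kubectl get jobs -l "$LABEL" \
        -o jsonpath='{range .items[?(@.status.succeeded==1)]}{.metadata.name}{"\n"}{end}' \
        | xargs -r kubectl delete job

    # Pods left behind by kernels that ended in error
    kubectl delete pod -l "$LABEL" --field-selector=status.phase==Failed

    Run from a crontab on a machine where kubectl is already configured, or wrapped in an in-cluster CronJob whose service account is allowed to list and delete Jobs, this keeps the finished kernels from piling up.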

Answers

  • fchataigner2 · Dataiker Posts: 355

    Hi,

    When a notebook kernel is stopped, the pod running it should eventually die, because there is a heartbeat that makes sure only pods actually running notebooks stay up. So the pods could be sitting there in Completed state (and eventually get garbage-collected by the K8s cluster), but they shouldn't be in Running state. If you do have pods in Running state that shouldn't be running anymore, can you check the logs of those pods?
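    If it helps, plain kubectl is enough to see whether anything is genuinely stuck; the namespace and pod name below are placeholders for wherever DSS schedules its kernel pods:

    # Pods still in the Running phase
    kubectl get pods -n <dss-namespace> --field-selector=status.phase==Running

    # Logs of a pod that should no longer be running
    kubectl logs -n <dss-namespace> <pod-name>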

  • DrissiReda · Registered Posts: 57

    The pods are in Completed/Failed state, not Running. I have over 10 jobs created by Dataiku, with 10 completed pods, and K8s didn't clean them up. When is it supposed to clean them?

  • fchataigner2 · Dataiker Posts: 355

    That goes beyond my knowledge, I'm afraid.

    When pods linger, I usually clean them up manually, for example with:

    kubectl delete pod --field-selector=status.phase==Succeeded
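
    Two related sketches that may help here: pods from kernels that ended in error sit in the Failed phase rather than Succeeded, and Kubernetes only removes finished Job objects on its own when a TTL is set on the Job spec. The job name below is a placeholder, and whether DSS lets you influence the Job spec it generates is a separate question:

    # Same manual cleanup for pods that ended in error
    kubectl delete pod --field-selector=status.phase==Failed

    # If the Job spec can be adjusted, a TTL lets Kubernetes delete the finished Job itself
    kubectl patch job <job-name> --type=merge -p '{"spec":{"ttlSecondsAfterFinished":3600}}'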