Clean stopped notebook kernels running as Kubernetes jobs

Solved!
DrissiReda
Level 4

When I launch a notebook kernel in DSS, it creates a Kubernetes job. When that kernel is stopped, the job remains. After a lot of notebook executions, I have far too many jobs/pods on Kubernetes. Is there any way to clean up these pods/jobs, just like the pods of normal Python or Spark jobs are cleaned up?

4 Replies
fchataigner2
Dataiker

Hi,

When a notebook kernel is stopped, the pod running it should eventually die: there is a heartbeat that makes sure only pods backing actively running notebooks stay up. So the pods may linger in Completed state (and eventually get garbage-collected by the K8S cluster), but they shouldn't be in Running state. If you have pods in Running state that shouldn't be running anymore, can you check the logs of those pods?
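
In case it helps, a quick way to check (standard kubectl only, the pod name below is just a placeholder):

# list pods that are still in Running state in the current namespace
kubectl get pods --field-selector=status.phase==Running

# inspect the logs of a suspect pod (replace the placeholder)
kubectl logs <pod-name>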

DrissiReda
Level 4
Author

The pods are in Completed/Failed state, not Running. I have over 10 jobs created by Dataiku, with 10 completed pods, and K8S didn't clean them up. When is it supposed to clean them?

fchataigner2
Dataiker

That goes beyond my knowledge, I'm afraid.

When pods linger, I usually clean them up manually, for example with:

kubectl delete pod --field-selector=status.phase==Succeeded
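
If the Failed pods and the leftover Job objects also need cleaning, the same idea can be extended; this is just a sketch with plain kubectl, no Dataiku-specific labels assumed:

# also remove pods that ended up in Failed state
kubectl delete pod --field-selector=status.phase==Failed

# remove the Job objects whose pods have completed
# (a jsonpath filter is used because Jobs only support a few field selectors)
kubectl delete job $(kubectl get jobs -o jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')

If nothing matches, the last command simply complains that no job name was given.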
DrissiReda
Level 4
Author

Yeah, the workaround would be a CronJob that cleans up the Dataiku jobs (matching them by their labels) once they are done, but I was wondering if there was any way Dataiku would take care of that for me.
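
As a rough sketch of such a CronJob: the label dataiku.com/kernel=notebook is purely hypothetical (substitute whatever labels DSS actually puts on the kernel jobs), and the job-cleaner service account is assumed to exist with RBAC rights to list and delete jobs.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: clean-notebook-kernel-jobs
spec:
  schedule: "0 * * * *"  # run hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: job-cleaner  # assumed to have list/delete rights on jobs
          restartPolicy: Never
          containers:
          - name: cleaner
            image: bitnami/kubectl:latest  # any image that ships kubectl works
            command:
            - /bin/sh
            - -c
            - |
              # find finished jobs carrying the (hypothetical) Dataiku kernel label
              jobs=$(kubectl get jobs -l dataiku.com/kernel=notebook \
                -o jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
              # deleting a Job also removes its completed pods
              if [ -n "$jobs" ]; then kubectl delete job $jobs; fi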
