In Dataiku, SHAP-based interpretability is generated automatically for classification and regression models.
For Isolation Forest (anomaly detection), however, there is no built-in model interpretability. Do I need to write my own SHAP tree explainer in Python?
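For context, here is a minimal sketch of what that could look like outside of Dataiku, assuming the model can be rebuilt as a scikit-learn IsolationForest (the shap library's TreeExplainer does accept that model type; the data and parameters here are synthetic placeholders):

```python
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for the real feature matrix
rng = np.random.RandomState(42)
X = rng.normal(size=(500, 4))

# Fit an Isolation Forest; contamination is set explicitly for illustration
iso = IsolationForest(n_estimators=100, contamination=0.05, random_state=42).fit(X)

# shap.TreeExplainer supports scikit-learn's IsolationForest directly,
# so a fully hand-written explainer may not be necessary
explainer = shap.TreeExplainer(iso)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Each value is feature j's contribution to sample i's anomaly score
shap.summary_plot(shap_values, X)
```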
Initially I was thinking of training a supervised model on the anomaly/cluster labels, and since the labels are imbalanced, I would have to oversample the data to do so.
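A rough sketch of that surrogate idea, assuming the Isolation Forest's -1/1 predictions are used as the labels and imbalanced-learn's SMOTE handles the oversampling (all names and parameters here are illustrative, not a recommended setup):

```python
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import IsolationForest, RandomForestClassifier

# Synthetic stand-in for the real feature matrix
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 4))

# Use the Isolation Forest's -1/1 predictions as labels for a surrogate
iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
y = iso.predict(X)

# Oversample the minority (anomaly) class before fitting the surrogate
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)

# Explain the surrogate with SHAP; note the explanations describe the
# surrogate classifier, not the Isolation Forest itself
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
```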
Any thoughts would be appreciated.