
How is the variable importance computed?

I am looking for information about variable importance for the Random Forest and XGBoost models. I get very different outputs from the two.

I would like to know which method you use to compute them. Is it not the same one for all models?



For Random Forest, visual ML uses the standard attribute from sklearn:
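The original snippet did not survive; a minimal sketch of reading that attribute from scikit-learn (the dataset and model names are illustrative):

```python
# Minimal sketch: scikit-learn's built-in impurity-based importances.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Mean decrease in impurity, normalized to sum to 1 across features.
print(model.feature_importances_)
```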

The same goes for XGBoost, via its standard attribute:

Note that the importances shown are for the preprocessed features, according to your Design screen settings (e.g. if you apply standard rescaling in the Features Handling tab, importances are shown for the rescaled features).




Hi, thanks! Just a follow-up question: which "importance type" is used for XGBoost in Dataiku? The official documentation lists several types, and the default is "weight", so I guess it's "weight"?

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.