
How is the variable importance computed?

Ju
Level 1

Hello,

I am looking for information about how variable importance is computed for the Random Forest and XGBoost models. I get very different outputs from the two.

I would like to know what kind of method you use to compute them. Is it not the same one for all models?

2 Replies
pmasiphelps
Dataiker

Hi,

For Random Forest, visual ML uses the standard attribute from sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklea...

Same thing for XGBoost, the standard attribute: https://xgboost.readthedocs.io/en/stable/python/python_api.html

Note that the importances shown are for the preprocessed features according to your Design screen settings (e.g. if you do standard rescaling in the Features Handling tab, importances are shown for the rescaled features).
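As a minimal sketch (using a toy dataset, not Dataiku's internal code), this is how that standard attribute is read in plain scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for the preprocessed features
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Impurity-based (mean decrease in impurity) importances,
# normalized so they sum to 1
importances = rf.feature_importances_
print(importances)
```

Because the sklearn values are impurity-based while XGBoost computes its own statistics, the two models reporting different importances for the same data is expected.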

Best,

Pat

Tao_Z999
Level 1

Hi, thanks! Just a follow-up question: what "importance type" is used for XGBoost in Dataiku? In the official documentation there are several types, and the default is "weight", so I guess it's "weight"?

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.
