I am looking for information about variable importance for the Random Forest and XGBoost models: the two give me very different outputs.
I would like to know which method is used to compute them. Is it not the same one for all models?
For Random Forest, visual ML uses the standard feature_importances_ attribute from sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklea...
Same thing for XGBoost, the standard attribute from the xgboost package: https://xgboost.readthedocs.io/en/stable/python/python_api.html
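Different outputs between the two models are expected: scikit-learn's RandomForestClassifier computes feature_importances_ as mean decrease in impurity (MDI), while xgboost's feature_importances_ depends on its importance_type setting (e.g. "gain" or "weight"), so the numbers are not directly comparable. As a sketch of why the method matters, here is a scikit-learn-only example (the dataset and parameters are illustrative) comparing MDI importances with permutation importances on the same fitted forest; the two methods generally produce different values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative synthetic dataset: 5 features, 3 of them informative
X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Method 1: impurity-based (MDI) importances, sklearn's feature_importances_
print("MDI:        ", np.round(clf.feature_importances_, 3))

# Method 2: permutation importance, a different computation that
# typically yields different values and sometimes different rankings
perm = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print("Permutation:", np.round(perm.importances_mean, 3))
```

The same caveat applies when comparing Random Forest to XGBoost: each library's importance is defined by its own method, so comparing rankings is usually safer than comparing raw values.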
Note that the importances shown are for the preprocessed features, according to your Design screen settings (e.g. if you apply standard rescaling in the Features Handling tab, importances are shown for the rescaled features).