L1, L2 regularization values for XGBoost model

liap · Dataiku DSS Core Designer, Dataiku DSS & SQL, Dataiku DSS ML Practitioner, Dataiku DSS Core Concepts, Registered · Posts: 1

Hi,

I am currently training an XGBoost model in the DSS Lab. In the process, I have run into a limitation on the values I can select for hyperparameter tuning, specifically for L1 and L2 regularization. The allowed range for these values is restricted to [0, 1], but I would like to choose values higher than 1; the warning message I receive states that the value must be less than or equal to 1. I have reviewed the official XGBoost documentation (XGBoost Parameters — xgboost 1.7.6 documentation) and did not find any reference to such a limitation for L1 (reg_alpha) or L2 (reg_lambda) regularization.

Do you have any suggestions for overcoming this limitation?

Answers

  • Turribeach · Dataiku DSS Core Designer, Neuron, Dataiku DSS Adv Designer, Registered, Neuron 2023 · Posts: 1,757

    This doesn't answer your question directly, but note that you can train XGBoost models manually with the scikit-learn API in a Python code recipe, just as you would when modelling in pure Python; see the sketch below.
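
    As a rough illustration of that workaround, here is a minimal sketch of a grid search over reg_alpha / reg_lambda values above 1 using the xgboost scikit-learn wrapper. The toy dataset, grid values, and scoring metric are only placeholders; inside DSS you would load your own dataset in the recipe instead.

    # Minimal sketch (not DSS-specific): tuning XGBoost with reg_alpha / reg_lambda
    # above 1 via the scikit-learn API. In a DSS Python recipe you would replace
    # the toy data below with your own input dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import GridSearchCV, train_test_split
    from xgboost import XGBClassifier

    X, y = load_breast_cancer(return_X_y=True)  # placeholder data
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Values above 1 are valid for XGBoost itself; only the visual ML UI
    # restricts the range to [0, 1].
    param_grid = {
        "reg_alpha":  [0, 1, 5, 10],    # L1 regularization
        "reg_lambda": [1, 5, 10, 50],   # L2 regularization
    }

    search = GridSearchCV(
        XGBClassifier(n_estimators=200),
        param_grid,
        cv=3,
        scoring="roc_auc",
    )
    search.fit(X_train, y_train)

    print("Best params:", search.best_params_)
    print("Test accuracy:", search.best_estimator_.score(X_test, y_test))

    The fitted estimator can then be used to score your data and the results written back to an output dataset from the same recipe.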
