Thank you for reaching out.
1. Can you check whether a pod count limit (for example a ResourceQuota) is set on your Kubernetes cluster?
2. Do you have only one Spark configuration? If not, make sure the job you are running uses the one where you set spark.executor.instances to 20.
3. Did you make sure to save the settings after modifying the number of instances?
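For points 1 and 2, a quick way to check from the command line, assuming you have kubectl access to the cluster and submit jobs via spark-submit (the namespace name below is a placeholder):

```shell
# Check for pod count limits (ResourceQuota / LimitRange) in the job's namespace.
# Replace "spark-jobs" with your actual namespace.
kubectl describe resourcequota -n spark-jobs

# Count the pods currently running, to compare against any quota found above.
kubectl get pods -n spark-jobs --no-headers | wc -l

# Verify the executor count is actually passed to the job: with spark-submit,
# the setting would look like this (20 executors, as in your configuration).
spark-submit \
  --master k8s://https://<your-api-server>:6443 \
  --conf spark.executor.instances=20 \
  ...
```

If the quota's pod limit is lower than 20 executors plus the driver, Kubernetes will silently cap the number of executor pods that can be scheduled.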
Without logs, however, we cannot say more about why the job is not using more executors.
If you need further assistance, please open a support ticket and attach a job diagnosis (Job page > Actions > Download diagnosis).