I found this blog post from Chris Walker to be an excellent explainer on the differences between white-box and black-box machine learning models.
It does a great job of succinctly summarizing the inherent tradeoff between accuracy and explainability in these two broad categories of models.
In particular, I like how it communicates the pros and cons of these two categories in terms of familiar use cases. It makes it clear under what circumstances you might be comfortable trading accuracy for greater interpretability.
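For anyone who wants to see the tradeoff concretely, here's a minimal sketch (my own example, not from the post) using scikit-learn: a shallow decision tree is a classic white-box model whose full decision logic you can print and read, while a random forest is a typical black-box model that often scores higher but resists simple explanation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# White-box: a shallow tree whose rules can be read directly
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Black-box: an ensemble of 200 trees -- typically more accurate, harder to explain
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", tree.score(X_test, y_test))
print("forest accuracy:", forest.score(X_test, y_test))

# The tree's entire decision logic fits in a few lines of if/else rules;
# there is no comparably compact summary of the forest.
print(export_text(tree, max_depth=2))
```

Whether the forest's accuracy edge is worth giving up that readable rule set is exactly the kind of judgment call the post walks through.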
And of course, it's useful to see a quick list of how DSS can help find the right balance.
Hope others find it useful. Thanks!