Monitoring and Feedback

celiomesquita Dataiku DSS Core Designer, Dataiku DSS ML Practitioner, Dataiku DSS Adv Designer, Registered Posts: 3

I believe the most fitting alternative is "Champion/challenger", but the checkpoint answer is "Model evaluation store". What do you think?

Which component of Monitoring and Feedback helps identify a candidate model that outperforms the deployed model?

  • Logging system
  • Model evaluation store
  • A/B testing
  • Champion/challenger
  • Online system

The Champion/Challenger approach involves maintaining two or more models simultaneously in production. The current deployed model (Champion) is compared to alternative models (Challengers) to evaluate their performance. If a Challenger model consistently outperforms the Champion, it can be selected as the new deployed model, ensuring that the best-performing model is always in production.
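The "consistently outperforms" check above can be sketched in a few lines of plain Python. This is an illustrative example only, not a Dataiku API: the function name, the metric lists, and the promotion rule (challenger must win on every one of the last few evaluation runs) are all assumptions standing in for whatever metrics a model evaluation store would supply.

```python
# Hypothetical champion/challenger promotion rule (not a Dataiku API).
# Each list holds one metric value (e.g. AUC) per evaluation run,
# as might be read from a model evaluation store.

def pick_champion(champion_scores, challenger_scores, min_runs=3):
    """Promote the challenger only if it beats the champion on each of
    the last `min_runs` evaluation runs ("consistently outperforms")."""
    if len(champion_scores) < min_runs or len(challenger_scores) < min_runs:
        return "champion"  # not enough evidence to switch
    recent = zip(champion_scores[-min_runs:], challenger_scores[-min_runs:])
    return "challenger" if all(c < ch for c, ch in recent) else "champion"

# Example: AUC logged over five evaluation runs
champion_auc = [0.81, 0.80, 0.79, 0.78, 0.78]
challenger_auc = [0.79, 0.81, 0.82, 0.83, 0.84]
print(pick_champion(champion_auc, challenger_auc))  # → challenger
```

A real deployment would add guardrails (minimum sample sizes, statistical significance, rollback), but the core decision is just this comparison over recent evaluations.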

Answers

  • Sean — Dataiker, Alpha Tester, Dataiku DSS Core Designer, Dataiku DSS ML Practitioner, Dataiku DSS Adv Designer | Posts: 168

    Hi @celiomesquita, I think the question here is going for a "lower level" answer. Within an approach like Champion/Challenger, or any other approach, you'd use the data collected in the model evaluation store to make those decisions. Thanks for bringing this to our attention. It does have some ambiguity, and we'll look to improve or replace it soon.
