
Comparing the most recently created model to the second most recently created model

Solved!
ka
Level 1

Hello,

I am trying to set up a scenario step so that, after a new version of a model is trained, it is compared to the previous version using the "Model Comparisons" functionality. I then want to display the comparison results on a dashboard. I am having a hard time achieving this and would appreciate any help. Thanks!


Operating system used: Windows


2 Replies
pmasiphelps
Dataiker (accepted solution)

Hi,

 

One way to do this is to use the Evaluation Store (detailed doc: https://doc.dataiku.com/dss/latest/mlops/model-evaluations/dss-models.html).

 

Once you've deployed your model to the flow (the green diamond icon), select a dataset to evaluate it on (say, a validation set) and create an Evaluate recipe:

 

[Screenshot: creating an Evaluate recipe from the deployed model and a validation dataset]

Make sure to create all three outputs: a metrics dataset, a dataset of the input rows plus predictions, and a model evaluation store:

[Screenshot: Evaluate recipe outputs, with the metrics dataset, scored dataset, and evaluation store]

Whenever your model is retrained and redeployed to the flow, run this Evaluate recipe to recompute the performance metrics for the new version. You'll then see metrics for each version side by side in the evaluation store, which you can publish to a dashboard:

[Screenshot: evaluation store listing metrics for successive model versions]


[Screenshot: evaluation store metrics published to a dashboard]
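If you want this to happen automatically, a scenario with a Python step can retrain the model and then re-run the Evaluate recipe. A minimal sketch, assuming a saved model id "MY_MODEL" and an Evaluate recipe whose metrics output dataset is named "validation_metrics" (both are placeholders for your own ids):

from dataiku.scenario import Scenario

scenario = Scenario()

# Retrain the saved model - the new version becomes the active one
# if automatic activation is enabled in the model's settings
scenario.train_model("MY_MODEL")

# Building the Evaluate recipe's metrics output re-runs the recipe,
# which adds a new evaluation to the model evaluation store
scenario.build_dataset("validation_metrics")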

 
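And if you also want to compare the two most recent versions programmatically (say, to fail the scenario when a metric degrades), here's a rough sketch using the public API. The project key, the store id "MES_ID", and the "auc" metric are placeholders, and the evaluation-store methods are from the dataikuapi reference, so please double-check them against the API docs for your DSS version:

import dataiku

client = dataiku.api_client()
project = client.get_project("MY_PROJECT")

# Fetch all evaluations accumulated in the model evaluation store
mes = project.get_model_evaluation_store("MES_ID")
evaluations = mes.list_model_evaluations()

# Take the two most recent evaluations (check the list ordering
# returned on your instance before relying on it)
latest, previous = evaluations[-1], evaluations[-2]

# get_full_info() exposes the computed metrics for an evaluation
latest_auc = latest.get_full_info().metrics.get("auc")
previous_auc = previous.get_full_info().metrics.get("auc")
print("AUC moved from", previous_auc, "to", latest_auc)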

Hope this helps!

 

Best,

Pat

ka
Level 1
Author

That actually helps quite a bit, thank you so very much!