Why are accuracy metrics different when retraining on the same data with the same parameters?

maryas
Level 2

Hello Community,

I'm quite new to DSS, and I have a question: why are the accuracy metrics different when the model is retrained in the Flow on the same data with the same design? If you retrain with the same design in the Lab, it gives the same result. So why does it show a different result in the Flow? Can anybody help?


Thanks & Regards!


3 Replies
tgb417

@maryas 

Welcome to the Dataiku Community.  I've enjoyed your questions so far.

Can you share some more detail about the flow you are using? 

One of the typical reasons that model results differ between the Lab and the Flow is that you are actually running the model on a different dataset. (In the Lab this would typically be the training set.) In the Flow, you are often scoring a separate validation set that the model has not yet "seen". Might that distinction explain what you are seeing?

[Image: typical Train & Validate split diagram]
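
For illustration, here is a minimal sketch with scikit-learn on synthetic data (not DSS internals, just the general idea) of why a model usually scores lower on a held-out validation set than on the data it was trained on:

```python
# A minimal sketch (not DSS-specific) showing why scores on a held-out
# validation set usually differ from scores on the training data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for your project's dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Split into a training set (what the Lab trains on) and a validation
# set the model has not "seen" (what a Flow evaluation might score).
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on training set:  ", model.score(X_train, y_train))
print("accuracy on validation set:", model.score(X_valid, y_valid))
# The training-set accuracy is typically higher; the two numbers answer
# different questions, so a gap between them is expected, not a bug.
```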


--Tom
maryas
Level 2
Author

Hello Tom,

Of course the accuracy will be different when the model is evaluated on the validation set rather than on the test set in the Lab. But when you retrain the model with the same design in the Lab, you get the same result. When I retrain the model in the Flow, with no changes at all, it shows different accuracy metrics.

Regards!

tgb417

Many models involve random selection based on a random seed. (I don't know whether these seeds are held constant between the Lab and the Flow.) This can change the results from one run to the next.
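
As a rough illustration of the seed effect, here's a sketch with scikit-learn on synthetic data (DSS's own seed handling may differ):

```python
# A sketch of how an unpinned random seed changes accuracy between
# otherwise identical retrains (scikit-learn, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Same data, same hyperparameters, different seeds: accuracies differ.
for seed in (1, 2, 3):
    model = RandomForestClassifier(n_estimators=50, random_state=seed)
    model.fit(X_train, y_train)
    print(f"seed={seed}: accuracy={model.score(X_valid, y_valid):.4f}")

# Pinning the seed makes retraining reproducible: both runs match.
a = RandomForestClassifier(n_estimators=50, random_state=42).fit(X_train, y_train)
b = RandomForestClassifier(n_estimators=50, random_state=42).fit(X_train, y_train)
print(a.score(X_valid, y_valid) == b.score(X_valid, y_valid))  # True
```

If the seed is pinned, two retrains on the same data with the same hyperparameters produce identical metrics; if it isn't, each retrain can land on a slightly different model.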

You haven't mentioned how different the results are between the Lab and the Flow. Some more detail on the specifics of your model flow, and on the amount of variability you are seeing, would help folks assist you.

cc: @CoreyS 

--Tom
