When training a new model, there is an evaluation report on the test data (ROC curve, confusion matrix, metrics, etc.). Is there a built-in way to get this report with new data?
For example, by just passing two columns: the previously scored predictions and the true labels?
You could compute these insights and metrics yourself, but the generated report has some nice interactive features.
The Evaluate recipe will give you the metrics but unfortunately will not give you all of the other visualizations that are available at train time.
We are however currently working on adding this feature and expect to have it available by the end of the year.
Hey @Clément_Stenac ,
unfortunately, the Evaluation Recipe is quite limited. As I understand it, it takes the current model, predicts new probabilities, and then evaluates them. This also means the input data must be complete. If the predictions come from an old model, or you don't want to reload and re-predict all the data, you have to calculate the metrics yourself. So if a feature is developed in this direction in the future, it would be great if it offered more flexibility than the current Evaluation Recipe.
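In the meantime, calculating the metrics yourself from just the two columns (previously scored probabilities and true labels) is straightforward. A minimal sketch, assuming scikit-learn is available and using made-up example data (the column values here are hypothetical):

```python
# Compute evaluation metrics from two columns (scored probability + true
# label) without reloading the model or re-predicting anything.
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = [0, 0, 1, 1]            # the "real true data" column
y_score = [0.1, 0.4, 0.35, 0.8]  # the previously scored probabilities

# ROC AUC works directly on the probabilities
auc = roc_auc_score(y_true, y_score)

# Confusion matrix needs hard predictions, so apply a threshold (0.5 here)
y_pred = [1 if p >= 0.5 else 0 for p in y_score]
cm = confusion_matrix(y_true, y_pred)

print(auc)          # 0.75
print(cm.tolist())  # [[2, 0], [1, 1]]  -> rows: true 0/1, cols: predicted 0/1
```

This reproduces the numbers, but not the interactive visualizations from the train-time report.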
Thanks for your response and best regards!