Hi, we happen to have multiple projects working on the same task on the same dataset. So it turns out we now have several models. Is it possible to benchmark all these models using Dataiku quickly and view it from an intuitive interface? Or do we have to rely on 3rd party software like DVC (https://dvc.org/)?
There are multiple ways in which Dataiku helps you benchmark your machine learning models:
* Each time you train a model using the visual machine learning interface, a complete report is produced with multiple performance metrics and charts (all standard metrics, ROC curve, lift curves, density chart, confusion matrix, decision chart, regression scatter plot, regression error distribution, trees, partial dependencies, subpopulation analysis, ...). Each of these report elements can be put on a dashboard in Dataiku, letting you quickly visualize the performance of multiple models at once
* If you want to perform the benchmarking on other labeled datasets, the "Evaluate" recipe allows you to run an evaluation and output all metrics in tabular format, as a Dataiku dataset. You can then use the visual ETL capabilities of Dataiku to combine the metrics for all models, and use its native charting capabilities to compare them
* To track evolution over time (for example, when retraining models), you can use metrics in Dataiku, which historize a model's performance (on the test set at training time) and allow you to visualize the evolution
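As an illustration of the second point: once each project's Evaluate recipe has written its metrics to a dataset, stacking those rows yields a single comparison table. Here is a minimal sketch of that idea in plain Python (the model names and metric values are made up for illustration; in Dataiku the rows would come from the Evaluate recipe outputs, typically combined with a visual Stack recipe):

```python
# Sketch: stack one metrics row per model (as an Evaluate recipe would emit)
# into a single leaderboard. Model names and numbers are illustrative only.

def stack_metrics(per_model_rows):
    """Combine one metrics row per model into a single list of rows,
    sorted by AUC descending so the best model comes first."""
    combined = list(per_model_rows)
    combined.sort(key=lambda row: row["auc"], reverse=True)
    return combined

# Hypothetical metrics rows, one per project/model.
rows = [
    {"model": "project_a_rf",     "auc": 0.87, "accuracy": 0.81},
    {"model": "project_b_xgb",    "auc": 0.91, "accuracy": 0.84},
    {"model": "project_c_logreg", "auc": 0.79, "accuracy": 0.76},
]

leaderboard = stack_metrics(rows)
for row in leaderboard:
    print(f"{row['model']}: AUC={row['auc']:.2f}, accuracy={row['accuracy']:.2f}")
```

The resulting table is exactly what you would chart in Dataiku to compare the models side by side.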
In any case, the data produced during ML is open: it is available either as datasets (for example, the output of an Evaluate recipe) or through APIs (the data underlying all report items), and can easily be fetched through code.
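To sketch the code route: data fetched through an API can be reduced to whatever leaderboard you need with a few lines of Python. The JSON payload below is purely illustrative (it is not the actual shape of a Dataiku API response); in practice you would obtain the report data through the Dataiku public API or Python client:

```python
import json

# Illustrative only: an assumed JSON payload of per-model metrics, standing in
# for data fetched through the Dataiku APIs (the real response shape differs).
payload = json.loads("""
{
  "models": [
    {"name": "project_a_rf",  "metrics": {"auc": 0.87, "logloss": 0.43}},
    {"name": "project_b_xgb", "metrics": {"auc": 0.91, "logloss": 0.38}}
  ]
}
""")

# Pick the model with the highest AUC from the fetched report data.
best = max(payload["models"], key=lambda m: m["metrics"]["auc"])
print("Best model by AUC:", best["name"])
```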
Hope this helps clarify,