I created a logistic regression model in DSS.
After deploying it, I scored a new set of data with the Score (Predict) recipe and received a prediction as well as the probability for each class (a column named "proba_1").
Then I ran the Evaluate recipe on the same dataset and received close but definitely different probabilities (hence, some predictions were also different).
I could not find an explanation in the documentation. Both recipes score and return probabilities, so why were the probabilities different?
This is likely due to the scoring engines used by the respective recipes. The Evaluate recipe uses the Python backend, whereas the Score/Predict recipe can use Optimized Scoring, and the two engines can produce slightly different probabilities.
Within the settings of your Predict recipe, try checking the box "Force original backend," rerun, and see if it yields the same result as the Evaluate recipe.
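To see why two engines can disagree, here is a minimal sketch (not DSS code; the coefficients and feature values are made up) of how a logistic regression probability computed at different floating-point precisions can differ by a tiny amount, which is enough to flip the predicted class for rows sitting near the 0.5 threshold:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical model coefficients and a feature vector chosen so the
# row lands very close to the decision boundary.
coefs = np.array([0.8, -1.2, 0.5])
intercept = 0.05
x = np.array([1.1, 0.9, 0.31])

# Full float64 computation (analogous to one backend).
z64 = np.dot(x, coefs) + intercept
p64 = sigmoid(z64)

# Same computation in float32 (analogous to an optimized engine that
# trades precision for speed).
z32 = np.dot(x.astype(np.float32), coefs.astype(np.float32)) + np.float32(intercept)
p32 = sigmoid(float(z32))

print(p64, p32, p64 - p32)
# The probabilities agree to several decimal places but are not identical;
# for a row this close to 0.5, even a small shift can change the prediction.
```

The exact mechanism inside DSS may differ (feature preprocessing order, summation order, and precision all contribute), but the general point is the same: two numerically different pipelines can yield "close but different" probabilities, which is why forcing both recipes onto the same backend makes the outputs match.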