We are working on a custom prepare recipe step that appends a user-input row to the dataset. It works correctly on the local DSS engine. However, when run on Spark, it appends one row per file the dataset is partitioned into. For example, if the dataset is stored as 10 HDFS files, the recipe step adds 10 duplicate rows instead of 1. Is there a way to avoid this other than converting the step into a visual recipe?
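To illustrate what we think is happening, here is a minimal, hypothetical sketch (plain Python, not the actual DSS processor API): a step that appends one row runs once over the whole stream on the local engine, but once per partition on Spark. The function name `add_summary_row` and the sample data are our own illustration, not part of our real recipe.

```python
def add_summary_row(rows):
    # Prepare-step logic: append a single user-input row (illustrative).
    return rows + [{"id": -1, "note": "user-input row"}]

# Local engine: the whole dataset is processed as one stream,
# so the step runs exactly once.
dataset = [{"id": i} for i in range(6)]
local_result = add_summary_row(dataset)
print(sum(r["id"] == -1 for r in local_result))  # 1 appended row

# Spark: the dataset is split into partitions (here, 3 chunks of 2 rows)
# and the step runs independently inside each partition.
partitions = [dataset[i:i + 2] for i in range(0, len(dataset), 2)]
spark_result = [row for part in partitions for row in add_summary_row(part)]
print(sum(r["id"] == -1 for r in spark_result))  # 3 appended rows, one per partition
```

This matches the behavior we see: the number of duplicate rows equals the number of underlying files/partitions.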