Hi,
A recipe reads from its input (upstream) dataset and writes into its output dataset every time you run it, including when you run several recipes in sequence to build a final dataset. Depending on where your data is stored and the engine you set for a recipe, some recipes can be executed directly in the SQL database, for instance (see Execution Engines).

Depending on the engine and the recipe, the data may be streamed (needing little memory) or loaded fully into memory. DSS will try to help by selecting the best available engine for each of your recipes by default.
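If you prefer to check this programmatically rather than in the recipe's settings screen, here is a minimal sketch using the public dataikuapi Python client. The host URL, API key, project key, and recipe name are placeholders, and the exact location of the engine in the recipe definition (a params key such as "engineType") is an assumption that can vary by recipe type and DSS version.

```python
# Sketch only: the placeholders and the "engineType" key below are assumptions,
# not taken from this answer.
import dataikuapi

client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
project = client.get_project("MY_PROJECT")

# Look up a recipe and read its definition
recipe = project.get_recipe("compute_final_dataset")
settings = recipe.get_settings()
definition = settings.get_recipe_raw_definition()

# For visual recipes, the selected engine is typically stored in the recipe
# params; when it is not set explicitly, DSS picks the best available engine.
engine = definition.get("params", {}).get("engineType", "chosen by DSS")
print(f"Recipe engine: {engine}")
```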
Under certain conditions, you can skip writing the intermediate datasets you don't need by using Spark pipelines.